diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cadmas 11 46 A Snow Sport Helmet with Advanced Features.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cadmas 11 46 A Snow Sport Helmet with Advanced Features.md deleted file mode 100644 index ffe94c456771c3e23275af8498a0e2d85eef4487..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cadmas 11 46 A Snow Sport Helmet with Advanced Features.md +++ /dev/null @@ -1,121 +0,0 @@ - -
If you are interested in online assessment and comic books, you might have heard of Cadmas 11 46. But what exactly is it? Is it software, a comic book, or something else? In this article, we will explore what Cadmus and 11 46 are, how they are related, and how they can be used for educational purposes.
-Cadmus is an online assessment platform that helps higher education providers achieve institutional goals through better assessment experiences. It is a secure online environment that facilitates an end-to-end assessment workflow, simplifying the process of implementing best-practice assessment at scale. By empowering academics and supporting students, Cadmus helps solve some of the biggest challenges universities face today, such as academic integrity, student retention, remote learning, and online exams.
Cadmus has several features and benefits for both learners and educators. For learners, Cadmus provides a supportive and scaffolded assessment experience that helps them develop their academic skills and achieve better outcomes. For example, Cadmus offers:
-For educators, Cadmus simplifies the process of designing and delivering high-quality digital assessment, consistently and at scale. For example, Cadmus offers:
-Cadmus can be used for a range of formative and summative, open-book written assessments and alternatives to exams. Some examples of how Cadmus can be used are:
11 46 is a comic book series by Castle Comics that was published between November 2020 and June 2021. It is a crime thriller that follows the lives of four strangers who are connected by a mysterious murder that took place at exactly 11:46 pm.
-The plot of 11 46 revolves around four main characters who have different backgrounds and motivations. They are:
The story unfolds through multiple perspectives and timelines, revealing how each character is related to the murder and how their actions affect one another. It also explores themes such as corruption, justice, revenge, and loyalty.
-One of the main themes of 11 46 is the idea of fate versus free will. The title of the series refers to the exact time when the murder happened, suggesting that it was predetermined by some higher power or force. However, the series also shows how each character has some degree of choice and agency in their actions. The series asks questions such as:
-At first glance, Cadmus and 11 46 seem to have nothing in common. One is an online assessment platform for higher education, while the other is a comic book series for entertainment. However, upon closer examination, we can find some possible connections and similarities between them. For example:
One way to use Cadmus to assess 11 46 is to design and deliver a Cadmus assignment based on the comic book series. For example, an educator can create an assignment that requires students to:
-The assignment can be aligned with the learning outcomes and assessment criteria of the course or subject. The assignment can also be tailored to suit different levels of difficulty and complexity, depending on the students' needs and abilities.
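As a purely hypothetical sketch of how such an assignment might be configured (Cadmus does not expose a public settings API, so every field name and value below is invented for illustration):

```python
# Hypothetical configuration for a Cadmus assignment on 11 46.
assignment = {
    "title": "Fate versus free will in 11 46",
    "task": "Analyse how the series balances predetermination and agency.",
    "learning_outcomes": ["critical analysis", "use of textual evidence"],
    "word_limit": 1500,
    "weighting": 0.3,            # 30% of the final grade
    "difficulty": "intermediate",
}
```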
-Using Cadmus for 11 46 can have some benefits and challenges for both learners and educators. Some of the benefits are:
-Some of the challenges are:
In conclusion, Cadmas 11 46 is a combination of an online assessment platform and a comic book series that can be used for educational purposes. Cadmus is a platform that helps higher education providers achieve institutional goals through better assessment experiences. 11 46 is a series that follows the lives of four strangers who are connected by a mysterious murder. By using Cadmus to assess 11 46, learners and educators can enjoy benefits such as developing critical thinking skills, engaging with a creative text, and ensuring academic integrity. However, they may also face challenges, such as accessing or reading the text, finding or creating suitable assessment tasks, and dealing with plagiarism or cheating. It is therefore important to weigh these factors before using Cadmas 11 46 for assessment.
-Here are some frequently asked questions and answers about Cadmus and 11 46:
-If you are a fan of flight simulation games, you might have heard of FSX - Maddog 2008 Professional, a popular add-on for Microsoft Flight Simulator X that lets you fly the Leonardo Maddog, a realistic and complex simulation of the McDonnell Douglas MD-80 aircraft. But did you know that there is a way to get this add-on for free, thanks to a crack made by a user named Komu? In this article, we will review FSX - Maddog 2008 Professional cracked by Komu, a download that claims to unlock all the features and benefits of the original add-on without paying a dime. We will also show you how to install and use it, as well as the pros and cons of using this crack. Finally, we will suggest some alternatives to this crack in case you are looking for other options.
-FSX - Maddog 2008 Professional is an add-on for Microsoft Flight Simulator X that was released in 2008 by Leonardo Software House, a company that specializes in developing flight simulation software. This add-on is a highly detailed and accurate simulation of the McDonnell Douglas MD-80 aircraft, also known as the Maddog, a twin-engine, medium-range jet airliner that was widely used by many airlines around the world from the 1980s to the 2000s.
This add-on offers many features and benefits for flight simulation enthusiasts, such as:
-FSX - Maddog 2008 Professional is widely regarded as one of the best add-ons for FSX in terms of realism, complexity, and immersion. However, it also comes with a price tag of $59.99 USD (as of May 2023), which might be too expensive for some users who want to enjoy this add-on without breaking the bank.
Komu's crack is a download that claims to bypass the activation process of FSX - Maddog 2008 Professional and allow users to use it for free. It was created by a user named Komu who uploaded it on various torrent sites in 2010. According to Komu's description, his crack does not modify any files or registry entries of the original add-on, but simply replaces the original .dll file with a cracked one that disables the activation check. He also claims that his crack does not affect any features or functions of the add-on, and that it works with any version of FSX.
-Komu's crack has been downloaded by thousands of users who wanted to try FSX - Maddog 2008 Professional without paying for it. Some users have reported that the crack works as advertised and that they have not encountered any problems or issues with it. However, other users have reported that the crack does not work at all or that it causes various errors or crashes during their flights. Moreover, some users have expressed ethical concerns about using this crack, as it violates the intellectual property rights of Leonardo Software House and deprives them of their deserved revenue.
-If you want to install and use FSX - Maddog 2008 Professional cracked by Komu, you will need to follow these steps:
-Note: These steps are based on Komu's instructions and user feedback. We do not endorse or recommend using this crack or any other illegal downloads. Use them at your own risk.
-FSX - Maddog 2008 Professional cracked by Komu has some pros and cons that you should consider before using it:
-If you are looking for alternatives to <
If you are not certain which budget hotel is best for you, take into account your initial budget as well as the purpose of your trip. The more affordable hotels may not suit your needs, so you should also consider what other options are available in the area. Hotels in less populated areas tend to be less expensive, but they are also farther from popular attractions.
You'll be very satisfied with the hotel's service. The Clarion Express was very nice: the lobby had free wireless internet, and the rooms had a fridge and a coffeemaker. The walk to the hotel from downtown was fast and easy, even though I had to use the train to get to Clarion. The hotel was very easy to get into, and the staff were friendly. A very nice choice.
-The Clarion Express hotel is a great choice for a budget hotel in an excellent location. Enjoy our complimentary cooked-to-order breakfast each morning before you head out exploring. We offer free wireless internet, free local calls, and 32" LCD HD TVs with free cable in every room, and Clarion University is less than three miles away by train. Our city welcomes business travelers, so Clarion Express is an ideal choice for travelers seeking a modern downtown hotel with big-city amenities and location at a reasonable price.
-Clarion City Inn & Suites in downtown Harrisburg offers 100 rooms with complimentary internet access. Rooms have microwaves, refrigerators, hair dryers, and coffee/tea makers, and non-smoking rooms are available. This Harrisburg hotel has both seasonal and indoor pools. Parking is free, and a complimentary breakfast is served daily.
-If you are a fan of billiards games, you might have heard of 8 Ball Pool, one of the most popular and addictive online pool games in the world. But did you know that there is a way to enhance your gaming experience and improve your skills with a simple tool? In this article, we will introduce you to 8 Ball Pool Long Line Tool APK, a modded version of the game that allows you to have longer aiming lines and more accurate shots. We will also show you how to download and install it on your Android device, and share some tips and tricks to win in 8 Ball Pool.
-8 Ball Pool is a game developed by Miniclip that simulates the real-life pool game of the same name. You can play it online with millions of players from around the world, or offline with your friends. You can also participate in tournaments, win trophies, and collect coins and cash to buy better cues and enter higher-stakes tables.
8 Ball Pool is played with a cue ball and fifteen object balls, numbered 1 through 15. Balls 1–7 are solid colors and commonly referred to as “low balls”, and balls 9–15 are striped and commonly referred to as “high balls.” One player must pocket balls of solid colors, while the other player must pocket the striped balls. The player who pockets their entire group and then legally pockets the 8-ball wins the game.
-For the break shot to be legal, the breaker (with the cue ball placed anywhere behind the head string) must either pocket a numbered ball or drive at least four (4) numbered balls to one or more rails, as sketched below. No ball is called, and the cue ball is not required to hit any particular object ball first. If the breaker fails to meet the legal break requirement, the balls are re-racked and the opponent has the option of breaking or requesting the offending player to break again.
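To make the legal-break condition concrete, here is a minimal Python sketch; the function and parameter names are illustrative, not part of any real 8 Ball Pool API:

```python
def is_legal_break(object_balls_pocketed: int, object_balls_to_rails: int) -> bool:
    """A break is legal if at least one object ball is pocketed,
    or at least four object balls are driven to one or more rails."""
    return object_balls_pocketed >= 1 or object_balls_to_rails >= 4
```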
-If any numbered ball is pocketed on a legal break, the breaking player continues their inning. If the breaker makes a legal break but commits a foul, the game continues with the opponent having ball in hand anywhere behind the head string; they must then shoot at an object ball beyond the head string (outside of the "kitchen") or it is a foul.
-If the breaker pockets the 8-ball on a legal break shot, they win the game, unless they also scratch (pocket or drive off the table) the cue ball, in which case they lose. If any other object ball leaves the table on a legal break shot, it is spotted on its original position before the shooting player plays their next shot.
-During normal play, each player remains at the table until they fail to legally pocket a ball of their group or commit a foul. If a player legally pockets a ball of their own group (or, while the table is open, any numbered ball), they continue their inning. If they pocket balls from both groups on one shot while the table is open, they continue their inning but must declare which group they are playing before their next shot.
-If a player pockets any ball on a foul shot, it remains pocketed, except for the cue ball, which is returned behind the head string (or spotted if it leaves the table). If a player pockets the 8-ball on a legal shot, they win the game, unless they also scratch, in which case they lose. If a player pockets the 8-ball on an illegal shot, they lose the game. The sketch below summarizes this.
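Taken together, the 8-ball rules above reduce to a simple decision, shown here in illustrative Python (not code from the game itself):

```python
def eight_ball_outcome(shot_was_legal: bool, scratched: bool) -> str:
    """Result for the shooter when the 8-ball is pocketed:
    win only on a legal shot without a scratch; lose otherwise."""
    if shot_was_legal and not scratched:
        return "win"
    return "loss"
```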
A foul occurs when a player fails to hit their own group of balls first, fails to hit any ball at all, scratches the cue ball, drives any ball off the table, touches any ball with their hand or cue, or violates any other rule of the game. When a foul is committed, the opponent gets ball in hand anywhere on the table. However, if the cue ball is behind the head string and an object ball is outside of the head string, the player must shoot an object ball outside of the head string or it is a foul.
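The placement restriction after a foul can also be expressed as a small rule; again, this is an illustrative sketch rather than the game's actual logic:

```python
def required_first_contact(cue_behind_head_string: bool,
                           object_ball_outside_kitchen: bool) -> str:
    """After a foul, ball in hand applies anywhere on the table, but a cue
    ball placed behind the head string must first contact an object ball
    outside the kitchen when one is available."""
    if cue_behind_head_string and object_ball_outside_kitchen:
        return "object ball outside the kitchen"
    return "any legal object ball"
```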
-8 Ball Pool Long Line Tool APK is a modified version of the original 8 Ball Pool game that gives you some extra advantages over your opponents. It is not an official app from Miniclip, but a third-party app that you can download and install on your Android device for free.
-Some of the features that 8 Ball Pool Long Line Tool APK offers are:
-Some of the benefits that 8 Ball Pool Long Line Tool APK provides are:
-To install 8 Ball Pool Long Line Tool APK on your Android device, you need to follow these steps:
-Besides using 8 Ball Pool Long Line Tool APK, there are some other tips and tricks that you can apply to win in 8 Ball Pool. Here are some of them:
-When you play online, you can choose from different tables with different entry fees and prizes. The higher the entry fee, the higher the prize, but also the higher the risk. If you are a beginner, you should start with lower-level tables and work your way up gradually. Don't play on tables that are too expensive for your budget or skill level, as you might lose more than you gain.
-A cue is one of the most important factors that affect your performance in 8 Ball Pool. A better cue can give you more power, spin, aim, and time. You can buy cues with coins or cash in the game shop, or win them in tournaments or surprise boxes. You can also upgrade your cues with coins to improve their attributes. A good cue can make a big difference in your game, so don't hesitate to invest in one.
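As a rough illustration of how a cue's attributes might be modeled (the attribute names below are assumptions, not Miniclip's actual data model):

```python
from dataclasses import dataclass

@dataclass
class Cue:
    name: str
    power: int  # maximum shot strength
    spin: int   # how much English the cue can apply
    aim: int    # length of the aiming guideline
    time: int   # extra seconds on the shot timer

    def upgrade(self, attribute: str, points: int = 1) -> None:
        # Spend coins to raise one attribute (illustrative only).
        setattr(self, attribute, getattr(self, attribute) + points)

starter = Cue("Beginner Cue", power=2, spin=1, aim=2, time=3)
starter.upgrade("aim")  # e.g. a longer guideline after an upgrade
```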
-English is a term that refers to the amount of spin you put on the cue ball when you hit it. By using English, you can control the direction and speed of the cue ball after it hits an object ball or a rail. You can use English to avoid scratches, make difficult shots, or set up your next shot. To use English, you need to hit the cue ball on the left or right side, rather than the center. You can also adjust the power and angle of your shot to achieve the desired effect.
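As a toy illustration of the idea (this is not the game's physics engine; the linear model and coefficient are assumptions):

```python
def rebound_angle(incidence_deg: float, side_spin: float, k: float = 10.0) -> float:
    """Very rough model of how side spin ('English') shifts the cue ball's
    rebound off a rail: positive spin opens the angle, negative closes it.
    `side_spin` ranges from -1.0 (full left) to 1.0 (full right)."""
    return incidence_deg + k * side_spin

print(rebound_angle(30.0, 0.5))   # right English widens the rebound to 35 degrees
print(rebound_angle(30.0, -0.5))  # left English narrows it to 25 degrees
```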
-One of the challenges of playing online is that you have a limited time to make your shot. If you take too long, you might lose your turn or even the game. To avoid this, you should try to shoot faster and more confidently. You can do this by planning your shots ahead, using 8 Ball Pool Long Line Tool APK to aim better, and practicing your skills offline. Shooting faster can also put pressure on your opponent and make them nervous or impatient.
-Another way to improve your accuracy and precision in 8 Ball Pool is to extend your aim beyond the object ball. This means that you should visualize where you want the cue ball to go after it hits the object ball, and align your cue accordingly. This can help you to avoid scratches, position your cue ball better, and make more complex shots. You can also use 8 Ball Pool Long Line Tool APK to see the extended aiming lines and adjust your shots accordingly.
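One classic way to "extend your aim" is the ghost-ball method: aim the cue ball at the point one ball diameter behind the object ball, on the line from the pocket through the object ball. A toy geometric sketch follows; positions are in metres, and the 57 mm ball diameter and all coordinates are assumptions:

```python
import math

def ghost_ball_target(obj, pocket, ball_diameter=0.057):
    """Point the cue ball's centre must reach so that the object ball
    is sent toward the pocket (the 'ghost ball' aiming method)."""
    dx, dy = obj[0] - pocket[0], obj[1] - pocket[1]
    dist = math.hypot(dx, dy)
    # One ball diameter behind the object ball, on the far side from the pocket.
    return (obj[0] + dx / dist * ball_diameter,
            obj[1] + dy / dist * ball_diameter)

# Example: object ball at (1.0, 0.5), corner pocket at (2.0, 1.0).
print(ghost_ball_target((1.0, 0.5), (2.0, 1.0)))
```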
-8 Ball Pool is a fun and exciting game that can keep you entertained for hours. However, if you want to take your game to the next level, you might want to try 8 Ball Pool Long Line Tool APK, a modded version of the game that gives you longer aiming lines and more accurate shots. You can download and install it on your Android device for free and enjoy playing 8 Ball Pool with an edge over your opponents. You can also use some tips and tricks to win in 8 Ball Pool, such as choosing your tables wisely, buying a better cue, using a little English, shooting faster, and extending your aim. With these tools and techniques, you can become a master of 8 Ball Pool in no time.
-Here are some frequently asked questions about 8 Ball Pool Long Line Tool APK:
-Yes, 8 Ball Pool Long Line Tool APK is safe to use as long as you download it from a trusted source and follow the installation instructions carefully. It has an anti-ban system that prevents detection by Miniclip, so you don't have to worry about getting banned or losing your account.
-No, 8 Ball Pool Long Line Tool APK is only compatible with Android devices that have Android 4.1 or higher versions. It is not compatible with iOS devices or other platforms.
-Yes, you can play online with 8 Ball Pool Long Line Tool APK as long as you have a stable internet connection and a valid Miniclip account. You can play with other players who are using the same app or the original game.
-No, you cannot update 8 Ball Pool Long Line Tool APK as it is not an official app from Miniclip. If you update it, you might lose the modded features or encounter errors. You should always check for new versions of the app from the source where you downloaded it.
-No, you should not use 8 Ball Pool Long Line Tool APK with other mods or hacks as they might interfere with each other or cause problems. You should only use one mod or hack at a time for optimal performance and safety.
-If you are a fan of soccer games, you have probably heard of FIFA 20, the latest installment of the popular FIFA series by Electronic Arts. FIFA 20 is a realistic and immersive soccer simulation game that lets you experience the thrill of playing with your favorite teams and players in various modes and competitions. Whether you want to play solo or with friends, offline or online, FIFA 20 has something for everyone.
But what if you don't have a console or a PC to play FIFA 20 on? Don't worry, you can still enjoy this amazing game on your Android device. All you need to do is download and install the FIFA 20 APK and OBB data files, which are modified versions of the original game that run on Android devices without issues. In this article, we will show you how to do that, as well as give you some tips and tricks to play FIFA 20 like a pro.
-FIFA 20 is not just another soccer game. It is a game that offers you a lot of features and benefits that make it stand out from other games in the genre. Here are some of them:
-Now that you know the features and benefits of FIFA 20, you might be wondering how to download and install it on your Android device. Well, it's not as hard as you might think. Just follow these simple steps:
-Before you can install any APK file on your device, you need to enable the option to allow unknown sources. This will let you install apps that are not from the Google Play Store. To do this, go to your device's settings, then security, then unknown sources. Toggle the switch to enable it.
The next step is to download the FIFA 20 APK and OBB files from a trusted source. There are many websites that offer these files, but be careful not to download from shady or malicious ones. You can use this link to download the files safely and securely. The APK file is about 30 MB, while the OBB file is about 1.5 GB.
-After downloading the files, you need to install the APK file and extract the OBB file to the right folder. To do this, locate the APK file in your device's file manager and tap on it to install it. Then, use a file extractor app like ZArchiver to extract the OBB file. You will get a folder named com.ea.gp.fifaworld. Move this folder to Android/OBB in your device's internal storage.
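If you extract the archive on a PC instead, the folder can be pushed to the device over adb. A rough sketch, where the local path is a placeholder for wherever you extracted the folder:

```python
import subprocess

# Placeholder local path to the extracted OBB folder.
local_obb = "com.ea.gp.fifaworld"
# Standard OBB location on the device (note the lowercase 'obb').
remote_obb = "/sdcard/Android/obb/com.ea.gp.fifaworld"

# Copies the folder to the expected location on a connected device.
subprocess.run(["adb", "push", local_obb, remote_obb], check=True)
```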
-The final step is to launch the game and enjoy. To do this, go to your app drawer and tap on the FIFA 20 icon. The game will start and ask you to verify your data. Just tap on OK and wait for a few seconds. The game will then load and take you to the main menu. You can now choose your mode and start playing.
-FIFA 20 is a fun and challenging game that requires skill and strategy to master. If you want to play like a pro, you need to know some tips and tricks that will help you improve your performance and win more matches. Here are some of them:
-One of the first things you should do is customize your controls and settings according to your preference and comfort. You can do this by going to settings, then controls, then customize controls. You can choose between classic or casual controls, adjust the sensitivity and size of the buttons, enable or disable auto-switching, auto-sprint, auto-shoot, etc.
-The next thing you should do is choose your game mode and difficulty level according to your skill and goal. You can do this by going to play, then select mode. You can choose between quick match, tournament, league, career mode, ultimate team mode, volta mode, etc. You can also choose between beginner, amateur, semi-pro, professional, world class, legendary, or ultimate difficulty level.
-The most important thing you should do is master the skills and tactics that will help you win more matches. You can do this by practicing in training mode or playing against AI opponents. You should learn how to dribble, pass, shoot, tackle, cross, head, defend, etc. You should also learn how to use different tactics, such as formation, style, mentality, instructions, etc.
-If you are playing ultimate team mode, you should build your ultimate team and manage your players effectively. You can do this by collecting and trading players from different leagues and nations. You should aim for high-rated players with good chemistry and attributes. You should also manage your players' fitness, morale, contracts, injuries, etc.
-If you want to challenge yourself and compete with other players, you should participate in online tournaments and events. You can do this by going to play online, then select mode. You can choose between online seasons, online friendlies, online co-op seasons, online draft mode, online squad battles, online champions league mode, online world cup mode, online pro clubs mode, online division rivals mode, online weekend league mode, online fut champions mode, online fut friendlies mode, online fut events mode, online fut seasons mode. You can win rewards and trophies by playing and winning these modes.
FIFA 20 is a fantastic soccer game that you can download and play on your Android device. It offers many features and benefits that make it one of the best games in the genre, and the tips and tricks above will help you play like a pro. So what are you waiting for? Download the FIFA 20 APK now and enjoy the ultimate soccer experience.
-Here are some frequently asked questions about FIFA 20:
-Are you a fan of Final Fantasy, one of the most popular and influential JRPG series of all time? If so, you might be interested in playing Final Fantasy XIII, the thirteenth installment of the main series, on your Android device. In this article, we will show you how to download Final Fantasy XIII APK full version and enjoy the epic adventure on your smartphone or tablet. We will also share some tips and tricks to enhance your gaming experience. Let's get started!
-Final Fantasy XIII is a role-playing game developed and published by Square Enix in 2009. It is set in a futuristic world where two opposing forces, Cocoon and Pulse, are locked in a conflict. The game follows the story of six characters who are branded as traitors by Cocoon's government and must fight against their fate. The game features a fast-paced combat system, stunning graphics, and a rich soundtrack. It received critical acclaim and sold over seven million copies worldwide.
Playing Final Fantasy XIII on your Android device has many benefits. First of all, you can enjoy the game anytime and anywhere, without being tied to a console or a PC. You can also save space on your device, as you don't need to download a large file or install anything. Moreover, you can take advantage of the touch screen, gyroscope, and other features of your device to enhance your gameplay. Finally, you can connect your device to a TV or a monitor and play on a bigger screen.
-The easiest and safest way to play Final Fantasy XIII on your Android device is to use the official cloud game service from Square Enix. This service allows you to stream high-definition games over a Wi-Fi connection, without downloading or installing anything. Here are the steps to follow:
-The first step is to download the FINAL FANTASY XIII app from APKCombo, a website that provides free APK files for Android apps and games. You can use this link to access the app page and click on the "Download APK" button. The app size is about 12 MB and it requires Android 5.0 or higher.
-The next step is to launch the app and sign up for the cloud game service. You will need to create an account with your email address and password, or log in with your existing Square Enix account. You will also need to agree to the terms of service and privacy policy.
-The final step is to enjoy the free trial and purchase the license if you like it. You can play the first 30 minutes of the game for free, and then decide whether to buy the full game for $15.99. You can also choose to pay $5.99 per month and access other cloud games from Square Enix, such as Final Fantasy VII and Final Fantasy VIII.
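For a quick back-of-the-envelope comparison of the two prices (ignoring the extra games the subscription includes), the subscription overtakes the one-off purchase during the third month:

```python
one_time = 15.99  # full game purchase
monthly = 5.99    # monthly cloud subscription

months = 1
while months * monthly <= one_time:
    months += 1
print(months)  # 3: from the third month on, subscribing costs more
```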
-If you don't want to use the official cloud game service from Square Enix, you can try another option: use an unofficial source from the Internet Archive. The Internet Archive is a non-profit organization that preserves digital content, such as books, music, videos, and games. You can find a copy of Final Fantasy XIII for PC on their website and play it on your Android device with an emulator or a streaming app. However, this option is not recommended, as it may be illegal, unsafe, or unstable. Here are the steps to follow:
-The first step is to download the final fantasy xiii file from the Internet Archive. You can use this link to access the file page and click on the "DOWNLOAD OPTIONS" button. You will see several formats available, such as ISO, ZIP, or TORRENT. The file size is about 13 GB and it requires a PC with Windows XP or higher.
-The next step is to extract the file and install the game on your PC. You will need a software like WinRAR or 7-Zip to unzip the file and get the game folder. Then, you will need to run the setup.exe file and follow the instructions to install the game on your PC. You may also need to install some additional components, such as DirectX or Visual C++.
The final step is to use an emulator or a streaming app to play the game on your Android device. An emulator is a piece of software that mimics the behavior of another device, such as a PC or a console, while a streaming app lets you stream games from your PC to your Android device over a Wi-Fi connection. Examples of emulators include ExaGear RPG and Wine; examples of streaming apps include Steam Link and Moonlight. You will need to configure these apps according to your preferences and requirements.
-One of the challenges of playing Final Fantasy XIII on your Android device is optimizing your device's performance and battery life. Depending on your device's model and specifications, you may experience lag, crashes, overheating, or rapid battery drain. To avoid these problems, you can adjust some settings on your device or in your app. For example, you can lower the resolution, brightness, volume, or frame rate. You can also close other apps running in the background, turn off notifications, or activate airplane mode.
-Another challenge of playing Final Fantasy XIII on your Android device is to control the game with touch screen gestures. While this may be convenient for some players, others may find it difficult, uncomfortable, or inaccurate. To improve your control and comfort, you can use a controller or a keyboard instead of touch screen gestures. You can connect your controller or keyboard to your device via Bluetooth, USB, or Wi-Fi. You can also customize your controller or keyboard layout according to your preferences.
-The last challenge of playing Final Fantasy XIII on your Android device is to save your progress frequently and back up your data online. Unlike playing on a console or a PC, playing on an Android device may expose you to risks of losing your data due to various reasons, such as deleting the app by mistake, running out of storage space, resetting your device, or losing your device. To prevent these scenarios from happening, you should save your progress frequently in different slots and back up your data online using cloud services like Google Drive or Dropbox.
-In conclusion, playing Final Fantasy XIII on your Android device is possible and enjoyable if you follow some simple steps and tips. You can download Final Fantasy XIII APK full version from either the official cloud game service from Square Enix or from an unofficial source from the Internet Archive. You can also adjust the settings, use a controller or a keyboard, and save your progress frequently and back up your data online to optimize your gaming experience. Final Fantasy XIII is a great game that deserves to be played on any device you want.
-If you are ready to play Final Fantasy XIII on your Android device, don't hesitate to download the APK file and follow the instructions in this article. You will be amazed by the quality and the fun of this game. And if you have any questions, comments, or feedback, feel free to leave them below. We would love to hear from you and help you with any issues you may encounter. Happy gaming!
-Here are some frequently asked questions about playing Final Fantasy XIII on your Android device:
-Yes, Final Fantasy XIII APK is safe to download if you use the official cloud game service from Square Enix or a reputable website like APKCombo. However, if you use an unofficial source from the Internet Archive, you may encounter some risks, such as viruses, malware, or legal issues. Therefore, we recommend that you use the official option or scan the file with an antivirus before installing it.
-Final Fantasy XIII APK uses a lot of data, as it streams high-definition games over a Wi-Fi connection. The exact amount of data depends on various factors, such as the resolution, frame rate, and duration of your gameplay. However, according to some estimates, streaming a game can use up to 3 GB of data per hour. Therefore, we suggest that you use a Wi-Fi connection with unlimited data or a high data plan when playing Final Fantasy XIII APK.
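At the quoted rate, a data plan translates directly into hours of play; a back-of-the-envelope sketch:

```python
GB_PER_HOUR = 3  # rough streaming estimate quoted above

def hours_of_play(data_plan_gb: float) -> float:
    """Hours of streaming a monthly data allowance supports at that rate."""
    return data_plan_gb / GB_PER_HOUR

print(hours_of_play(50))  # a 50 GB plan allows about 16.7 hours
```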
-No, you cannot play Final Fantasy XIII APK offline, as it requires a constant internet connection to stream the game from the cloud server. If you lose your connection or have a weak signal, you may experience interruptions, lagging, or disconnection. Therefore, we advise that you play Final Fantasy XIII APK in a place with a stable and strong Wi-Fi connection.
-Yes, you can play Final Fantasy XIII APK with friends, as it supports online multiplayer mode. You can join other players from around the world and cooperate or compete with them in various missions and battles. You can also chat with them using voice or text messages. To play Final Fantasy XIII APK with friends, you will need to create or join a party in the game menu and invite or accept other players.
-Yes, you can transfer your save data from Final Fantasy XIII APK to another device, as long as you use the same account and service. For example, if you use the official cloud game service from Square Enix, you can access your save data from any device that supports the service, such as another Android device, an iOS device, or a PC. However, if you use an unofficial source from the Internet Archive, you may not be able to transfer your save data easily.
This is still a work in progress; models will be exchanged for better ones as soon as they are ready. More diverse training data can help with more exact cloning. For example, we are still trying to incorporate more singing data.
Click here to learn more about the IMS Toucan Speech Synthesis Toolkit
" - -iface = gr.Interface(fn=meta_model.read, - inputs=[gr.inputs.Dropdown( - [ - "Betty Botter bought some butter, but she said the butters bitter. If I put it in my batter, it will make my batter bitter. But a bit of better butter will make my batter better."], - type="value", - default="Betty Botter bought some butter, but she said the butters bitter. If I put it in my batter, it will make my batter bitter. But a bit of better butter will make my batter better.", - label="Select which utterance should be customized"), - gr.inputs.Dropdown(["Voice 1", - "Voice 2", - "Voice 3"], type="value", default="Voice 1", label="Speaker selection for the first sentence"), - gr.inputs.Dropdown(["Voice 1", - "Voice 2", - "Voice 3"], type="value", default="Voice 2", label="Speaker selection for the second sentence"), - gr.inputs.Dropdown(["Voice 1", - "Voice 2", - "Voice 3"], type="value", default="Voice 3", label="Speaker selection for the third sentence")], - outputs=[gr.outputs.Image(label="Alignment of Phonemes to Audio"), - gr.outputs.Audio(type="file", label="Original Audio"), - gr.outputs.Audio(type="file", label="Reference-Voice 1"), - gr.outputs.Audio(type="file", label="Reference-Voice 2"), - gr.outputs.Audio(type="file", label="Reference-Voice 3"), - gr.outputs.Audio(type="numpy", label="Customized Audio")], - layout="vertical", - title="Speech Customization", - thumbnail="Utility/toucan.png", - theme="default", - allow_flagging="never", - allow_screenshot=False, - description="In this demo, an audio is split automatically into individual sentences. Then each of the sentences is re-synthesized into speech with the exact same prosody, but with a voice that you can choose. This allows customizing any existing read speech while retaining as much from the original reading as possible. 
Unfortunately, we cannot show you the reference audio and the reference voices ahead of time, so they will be displayed together with the resulting cloned speech.", - article=article) -iface.launch(enable_queue=True) diff --git a/spaces/FridaZuley/RVC_HFKawaii/lib/uvr5_pack/lib_v5/model_param_init.py b/spaces/FridaZuley/RVC_HFKawaii/lib/uvr5_pack/lib_v5/model_param_init.py deleted file mode 100644 index b995c0bfb1194746187692e2ab1c2a6dbaaaec6c..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/lib/uvr5_pack/lib_v5/model_param_init.py +++ /dev/null @@ -1,69 +0,0 @@ -import json -import os -import pathlib - -default_param = {} -default_param["bins"] = 768 -default_param["unstable_bins"] = 9 # training only -default_param["reduction_bins"] = 762 # training only -default_param["sr"] = 44100 -default_param["pre_filter_start"] = 757 -default_param["pre_filter_stop"] = 768 -default_param["band"] = {} - - -default_param["band"][1] = { - "sr": 11025, - "hl": 128, - "n_fft": 960, - "crop_start": 0, - "crop_stop": 245, - "lpf_start": 61, # inference only - "res_type": "polyphase", -} - -default_param["band"][2] = { - "sr": 44100, - "hl": 512, - "n_fft": 1536, - "crop_start": 24, - "crop_stop": 547, - "hpf_start": 81, # inference only - "res_type": "sinc_best", -} - - -def int_keys(d): - r = {} - for k, v in d: - if k.isdigit(): - k = int(k) - r[k] = v - return r - - -class ModelParameters(object): - def __init__(self, config_path=""): - if ".pth" == pathlib.Path(config_path).suffix: - import zipfile - - with zipfile.ZipFile(config_path, "r") as zip: - self.param = json.loads( - zip.read("param.json"), object_pairs_hook=int_keys - ) - elif ".json" == pathlib.Path(config_path).suffix: - with open(config_path, "r") as f: - self.param = json.loads(f.read(), object_pairs_hook=int_keys) - else: - self.param = default_param - - for k in [ - "mid_side", - "mid_side_b", - "mid_side_b2", - "stereo_w", - "stereo_n", - "reverse", - ]: - if not k in self.param: - self.param[k] = False diff --git a/spaces/GT4SD/paccmann_rl/model_cards/description.md b/spaces/GT4SD/paccmann_rl/model_cards/description.md deleted file mode 100644 index 3f9274435cfecaf27564c2ea5ca65065ff78de2d..0000000000000000000000000000000000000000 --- a/spaces/GT4SD/paccmann_rl/model_cards/description.md +++ /dev/null @@ -1,9 +0,0 @@ -- | Name | -- - | -- - | -- - | -- - | -
-        else:
-            return ""
-
-
-class ChuanhuCallbackHandler(BaseCallbackHandler):
-
-    def __init__(self, callback) -> None:
-        """Initialize callback handler."""
-        self.callback = callback
-
-    def on_agent_action(
-        self, action: AgentAction, color: Optional[str] = None, **kwargs: Any
-    ) -> Any:
-        self.callback(get_action_description(action.log))
-
-    def on_tool_end(
-        self,
-        output: str,
-        color: Optional[str] = None,
-        observation_prefix: Optional[str] = None,
-        llm_prefix: Optional[str] = None,
-        **kwargs: Any,
-    ) -> None:
-        """If not the final action, print out observation."""
-        # if observation_prefix is not None:
-        #     self.callback(f"\n\n{observation_prefix}")
-        # self.callback(output)
-        # if llm_prefix is not None:
-        #     self.callback(f"\n\n{llm_prefix}")
-        if observation_prefix is not None:
-            logging.info(observation_prefix)
-        self.callback(output)
-        if llm_prefix is not None:
-            logging.info(llm_prefix)
-
-    def on_agent_finish(
-        self, finish: AgentFinish, color: Optional[str] = None, **kwargs: Any
-    ) -> None:
-        # self.callback(f"{finish.log}\n\n")
-        logging.info(finish.log)
-
-    def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
-        """Run on new LLM token. Only available when streaming is enabled."""
-        self.callback(token)
-
-    def on_chat_model_start(self, serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs: Any) -> Any:
-        """Run when a chat model starts running."""
-        pass
-
-
-class ModelType(Enum):
-    Unknown = -1
-    OpenAI = 0
-    ChatGLM = 1
-    LLaMA = 2
-    XMChat = 3
-    StableLM = 4
-    MOSS = 5
-    YuanAI = 6
-    Minimax = 7
-    ChuanhuAgent = 8
-    GooglePaLM = 9
-    LangchainChat = 10
-    Midjourney = 11
-
-    @classmethod
-    def get_type(cls, model_name: str):
-        model_type = None
-        model_name_lower = model_name.lower()
-        if "gpt" in model_name_lower:
-            model_type = ModelType.OpenAI
-        elif "chatglm" in model_name_lower:
-            model_type = ModelType.ChatGLM
-        elif "llama" in model_name_lower or "alpaca" in model_name_lower:
-            model_type = ModelType.LLaMA
-        elif "xmchat" in model_name_lower:
-            model_type = ModelType.XMChat
-        elif "stablelm" in model_name_lower:
-            model_type = ModelType.StableLM
-        elif "moss" in model_name_lower:
-            model_type = ModelType.MOSS
-        elif "yuanai" in model_name_lower:
-            model_type = ModelType.YuanAI
-        elif "minimax" in model_name_lower:
-            model_type = ModelType.Minimax
-        elif "川虎助理" in model_name_lower:  # "Chuanhu Assistant"
-            model_type = ModelType.ChuanhuAgent
-        elif "palm" in model_name_lower:
-            model_type = ModelType.GooglePaLM
-        elif "midjourney" in model_name_lower:
-            model_type = ModelType.Midjourney
-        elif "azure" in model_name_lower or "api" in model_name_lower:
-            model_type = ModelType.LangchainChat
-        else:
-            model_type = ModelType.Unknown
-        return model_type
-
-
-class BaseLLMModel:
-    def __init__(
-        self,
-        model_name,
-        system_prompt=INITIAL_SYSTEM_PROMPT,
-        temperature=1.0,
-        top_p=1.0,
-        n_choices=1,
-        stop=None,
-        max_generation_token=None,
-        presence_penalty=0,
-        frequency_penalty=0,
-        logit_bias=None,
-        user="",
-    ) -> None:
-        self.history = []
-        self.all_token_counts = []
-        self.model_name = model_name
-        self.model_type = ModelType.get_type(model_name)
-        try:
-            self.token_upper_limit = MODEL_TOKEN_LIMIT[model_name]
-        except KeyError:
-            self.token_upper_limit = DEFAULT_TOKEN_LIMIT
-        self.interrupted = False
-        self.system_prompt = system_prompt
-        self.api_key = None
-        self.need_api_key = False
-        self.single_turn = False
-
-        self.temperature = temperature
-        self.top_p = top_p
-        self.n_choices = n_choices
-        self.stop_sequence = stop
-        self.max_generation_token = None
-        self.presence_penalty = presence_penalty
-        self.frequency_penalty = frequency_penalty
-        self.logit_bias = logit_bias
-        self.user_identifier = user
-
-    def get_answer_stream_iter(self):
-        """stream predict, need to be implemented
-        conversations are stored in self.history, with the most recent question, in OpenAI format
-        should return a generator, each time give the next word (str) in the answer
-        """
-        logging.warning(
-            "stream predict not implemented, using at once predict instead")
-        response, _ = self.get_answer_at_once()
-        yield response
-
-    def get_answer_at_once(self):
-        """predict at once, need to be implemented
-        conversations are stored in self.history, with the most recent question, in OpenAI format
-        Should return:
-        the answer (str)
-        total token count (int)
-        """
-        logging.warning(
-            "at once predict not implemented, using stream predict instead")
-        response_iter = self.get_answer_stream_iter()
-        count = 0
-        for response in response_iter:
-            count += 1
-        return response, sum(self.all_token_counts) + count
-
-    def billing_info(self):
-        """get billing information, implement if needed"""
-        logging.warning("billing info not implemented, using default")
-        return BILLING_NOT_APPLICABLE_MSG
-
-    def count_token(self, user_input):
-        """get token count from input, implement if needed"""
-        # logging.warning("token count not implemented, using default")
-        return len(user_input)
-
-    def stream_next_chatbot(self, inputs, chatbot, fake_input=None, display_append=""):
-        def get_return_value():
-            return chatbot, status_text
-
-        status_text = i18n("开始实时传输回答……")  # i18n key; roughly "Starting to stream the answer..."
-        if fake_input:
-            chatbot.append((fake_input, ""))
-        else:
-            chatbot.append((inputs, ""))
-
-        user_token_count = self.count_token(inputs)
-        self.all_token_counts.append(user_token_count)
-        logging.debug(f"Input token count: {user_token_count}")
-
-        stream_iter = self.get_answer_stream_iter()
-
-        if display_append:
-            display_append = '\n\n
- "Generate int or tensor `size` of ints between `low` and `high` (included)."
- return random.randint(low,high) if size is None else torch.randint(low,high+1,size)
-
-def one_param(m: nn.Module)->Tensor:
- "Return the first parameter of `m`."
- return next(m.parameters())
-
-def try_int(o:Any)->Any:
- "Try to convert `o` to int, default to `o` if not possible."
- # NB: single-item rank-1 array/tensor can be converted to int, but we don't want to do this
- if isinstance(o, (np.ndarray,Tensor)): return o if o.ndim else int(o)
- if isinstance(o, collections.abc.Sized) or getattr(o,'__array_interface__',False): return o
- try: return int(o)
- except: return o
-
-def get_model(model:nn.Module):
- "Return the model maybe wrapped inside `model`."
- return model.module if isinstance(model, (DistributedDataParallel, nn.DataParallel)) else model
-
-def flatten_check(out:Tensor, targ:Tensor) -> Tensor:
- "Check that `out` and `targ` have the same number of elements and flatten them."
- out,targ = out.contiguous().view(-1),targ.contiguous().view(-1)
- assert len(out) == len(targ), f"Expected output and target to have the same number of elements but got {len(out)} and {len(targ)}."
- return out,targ
-
-#Monkey-patch nn.DataParallel.reset
-def _data_parallel_reset(self):
- if hasattr(self.module, 'reset'): self.module.reset()
-nn.DataParallel.reset = _data_parallel_reset
-
-def remove_module_load(state_dict):
- """create new OrderedDict that does not contain `module.`"""
- new_state_dict = OrderedDict()
- for k, v in state_dict.items(): new_state_dict[k[7:]] = v
- return new_state_dict
-
-def num_distrib():
- "Return the number of processes in distributed training (if applicable)."
- return int(os.environ.get('WORLD_SIZE', 0))
-
-def rank_distrib():
- "Return the distributed rank of this process (if applicable)."
- return int(os.environ.get('RANK', 0))
-
-def add_metrics(last_metrics:Collection[Rank0Tensor], mets:Union[Rank0Tensor, Collection[Rank0Tensor]]):
- "Return a dictionary for updating `last_metrics` with `mets`."
- last_metrics,mets = listify(last_metrics),listify(mets)
- return {'last_metrics': last_metrics + mets}
-
-def try_save(state:Dict, path:Path=None, file:PathLikeOrBinaryStream=None):
- target = open(path/file, 'wb') if is_pathlike(file) else file
- try: torch.save(state, target)
- except OSError as e:
- raise Exception(f"{e}\n Can't write {path/file}. Pass an absolute writable pathlib obj `fname`.")
-
-def np_func(f):
- "Convert a function taking and returning numpy arrays to one taking and returning tensors"
- def _inner(*args, **kwargs):
- nargs = [to_np(arg) if isinstance(arg,Tensor) else arg for arg in args]
- return tensor(f(*nargs, **kwargs))
- functools.update_wrapper(_inner, f)
- return _inner
-
diff --git a/spaces/alistairmcleay/cambridge-masters-project/scripts/UBAR_code/interaction/UBAR_interact.py b/spaces/alistairmcleay/cambridge-masters-project/scripts/UBAR_code/interaction/UBAR_interact.py
deleted file mode 100644
index fb47a767d4e2949fee60d4c3e41ea3f559108184..0000000000000000000000000000000000000000
--- a/spaces/alistairmcleay/cambridge-masters-project/scripts/UBAR_code/interaction/UBAR_interact.py
+++ /dev/null
@@ -1,475 +0,0 @@
-import sys
-import torch
-import random
-import string
-
-# import bcolors
-from omegaconf import OmegaConf
-from transformers import GPT2LMHeadModel, GPT2Tokenizer
-
-from src.crazyneuraluser.UBAR_code.config import global_config as cfg
-from src.crazyneuraluser.UBAR_code.reader import MultiWozReader
-from src.crazyneuraluser.UBAR_code.db_ops import MultiWozDB
-
-from typing import List
-
-
-class bcolors:
- HEADER = "\033[95m"
- OKBLUE = "\033[94m"
- OKCYAN = "\033[96m"
- GREEN = "\033[92m"
- YELLOW = "\033[93m"
- RED = "\033[91m"
- ENDC = "\033[0m"
- BOLD = "\033[1m"
- UNDERLINE = "\033[4m"
-
-
-class UbarSystemModel: # may inherit convlab or not, just like andy's
- def __init__(self, name: str, checkpoint_path: str, model_config_path: str):
-
- self.tokenizer = GPT2Tokenizer.from_pretrained("alistairmcleay/UBAR-distilgpt2")
- self.model = GPT2LMHeadModel.from_pretrained("alistairmcleay/UBAR-distilgpt2")
- self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- self.name = name
- self.turn_domain = ["general"] # returns a list of one string that is the domain e.g. 'taxi'
- # (this is because of the way the db_ops.py deals with the domain. It should really be a string.)
-
- self.ubar_status = {"dialogue_terminate": False}
-
- self.print_intermediary_info = False
-
- self.config = OmegaConf.load(model_config_path)
- self.previous_turn = {"user": [], "bspn": [], "aspn": [], "db": []}
-
- # NB: best to use corpus goals to guide interactions - baselines/simulate_agent.py allows that.
-
- # initialize multiwoz reader and db_ops
- self.reader = MultiWozReader(self.tokenizer)
- self.db = MultiWozDB(self.config.dbs_path)
-
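-    # Instantiation sketch (illustrative; both paths are hypothetical):
-    #   sys_model = UbarSystemModel('ubar_sys', 'checkpoints/ubar', 'configs/ubar.yaml')
-    # Note that `checkpoint_path` is not used above: weights come from the HF hub.
-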
- def lexicalize_sys_response(self, sys_response, domain_hits, decoded_belief_state_subseq) -> str:
- lexicalized_sys_response = ""
-
- # Track entities already filled e.g. if there are 3 restaurants track which have already been added to a slot
- max_idx_of_added_entities = -1
-
- # Fill slots with values from the DB (lexicalization)
- for token in sys_response.split():
- token = token.strip(" .,;:")
- if token.startswith("["): # It is a slot to be filled
-
- # Note in hotel there is specific price data too but to simplify things
- # we just use the price range (e.g. moderate)
- db_price_key = "price"
- # if domain is restaurant then use "pricerange"
- if self.turn_domain[0] == "restaurant":
- db_price_key = "pricerange"
-
- slots_to_db_keys_map = {
- "[value_price]": db_price_key,
- "[value_pricerange]": db_price_key,
- "[value_food]": "food",
- "[value_area]": "area",
- "[value_type]": "type",
- "[value_phone]": "phone",
- "[value_address]": "address",
- "[value_leave]": "leave",
- "[value_postcode]": "postcode",
- "[value_id]": "id",
- "[value_arrive]": "arrive",
- "[value_stars]": "stars",
- "[value_day]": "day",
- "[value_destination]": "destination",
- "[value_car]": "taxi_types",
- "[value_departure]": "departure",
- "[value_people]": "people",
- "[value_stay]": "stay",
- "[value_department]": "department",
- "[value_time]": "time",
- "[value_name]": "name",
- "[value_reference]": "reference",
- }
- # Hospital domain is a strange outlier data structure
- if self.turn_domain == ["hospital"] and token == "[value_address]":
- token = "1 Addenbrooks Street"
- elif self.turn_domain == ["hospital"] and token == "[value_postcode]":
- token = "CB11QD"
-
- # So does taxi
- elif self.turn_domain == ["taxi"] and token == "[value_phone]" and domain_hits != []:
- token = domain_hits[0]["taxi_phone"]
-
- # Deal with value_name differently because there can be multiple
- elif token == "[value_name]" and domain_hits != []:
- token = domain_hits[max_idx_of_added_entities + 1]["name"]
- max_idx_of_added_entities += 1
-
- # This slot tells the user how many db hits there were matching their constraints
- elif token == "[value_choice]" and domain_hits != []:
-                    token = str(len(domain_hits))  # cast to str so it can be concatenated into the response text
-
- # Randomly generate the reference
- elif token == "[value_reference]" and domain_hits != []:
- token = "".join(random.choices(string.ascii_uppercase, k=10))
-
- else:
- # First check can we fill the token from the db results
- db_success = False
- if domain_hits != []:
- for slot, db_key in slots_to_db_keys_map.items():
- if token == slot and db_key in domain_hits[0]:
- token = domain_hits[0][db_key]
- db_success = True
-
- # If we cannot, then try to fill it from the belief state by looking for a match
- # in the belief state and then if there is a match adding the next token.
- # This is not perfect as some are more than one word but its probably good enough.
- if not db_success:
- # The DB doesn't contain a postcode for the police station so fill it here
- if token == "[value_postcode]" and self.turn_domain == ["police"]:
- token = "CB11QD"
- continue
- decoded_belief_states = decoded_belief_state_subseq.split()
- for idx, belief_state_slot in enumerate(decoded_belief_states):
- if token in slots_to_db_keys_map.keys():
- if slots_to_db_keys_map[token] == belief_state_slot:
- curr_slot_resp = ""
- # We dont know the length of the value we need to extract from the belief state
- for belief_state_token in decoded_belief_states[idx + 1 :]:
- if (
- belief_state_token not in slots_to_db_keys_map.values()
-                                        and belief_state_token != "<eos_b>"  # assumed end-of-belief marker; the original line is cut off here
-                                    ):
-                                        ...  # the remainder of this 475-line file is not preserved in the source diff
 DOWNLOAD • https://urloso.com/2uyPR9

Figure 1. Partitioning and parametrization of the human body surface. Figure 2. DensePose chart-based architecture based on Faster R-CNN with a Feature Pyramid Network (FPN). Figure 3. Domain adaptation: a master model is trained on data from the source and supporting domains to produce predictions in the target domain; a student model combines data from the source and supporting domains, as well as sampled predictions from the master model on the target domain, to improve the quality of target-domain predictions.

We went out and researched the web for the best body language experts we could find and put together five of our favourite video lessons. We hear from Vanessa Van Edwards and the Science of People, Allan Pease in his inspirational TED talk on body language, and other tips from the Stanford School of Business. Hi there, Kelsey Tonner here from Be a Better Guide. Today we are talking about how we as tour guides can use body language effectively to communicate that we are confident and capable, and that your guests have nothing to worry about with you in charge.

DOWNLOAD >>>>> https://tinurli.com/2uwi88

Download 🌟 https://tinurli.com/2uwiSN

Download Zip ··· https://tinurli.com/2uwiQg

If you are a fan of car games, you might want to try some of the best car games for Android that are not available on the Google Play Store. These games can be downloaded and installed using APK files, which are the packages for Android apps. In this article, we will show you what APK files are, how to find and download them, and how to install them on your Android device.

Download » https://urlca.com/2uOdbL

APK stands for Android Package Kit, and it is the file format used by Android to distribute and install apps. APK files contain all the necessary components for an app to run on your device, such as the code, resources, assets, certificates, and manifest. There are several reasons why you might want to use APK files instead of downloading apps from the Google Play Store. Some of them are: However, using APK files also comes with some risks that you should be aware of. Some of them are: Therefore, you should always be careful when downloading and installing APK files from unknown sources. Only download APK files from reputable and trusted websites, and scan them with a reliable antivirus app before installing them. Also, make sure you have enough storage space and battery life on your device before installing any APK file.

There are many websites that offer car games APK files for download, but not all of them are safe and reliable. Some of them might contain viruses, malware, or fake apps that can damage your device or compromise your privacy. To avoid these risks, you should only download car games APK files from reputable sources that monitor and verify the files they host. One of the most popular and trusted sources for car games APK files is APK Mirror, which hosts tons of popular Android apps that can be installed individually or as updates. You can also find other sites that host car games APK files by searching on Google, but make sure you check their reviews and ratings before downloading anything.

asphalt 8 car racing game apk download

If you are looking for some great car games to play on your Android device, you have plenty of options to choose from. Whether you prefer simulations, racing, puzzles, or arcade-style games, there is something for everyone in the car games genre. Here are some of the best car games for Android in 2023 that you can download as APK files:

Once you have downloaded the car games APK files that you want to play, you need to install them on your Android device. There are different ways to do this, depending on your device settings and preferences. Here are some of the most common methods: Before you can install any APK file on your device, you need to enable the option to allow installation from unknown sources.
This option is disabled by default for security reasons, but you can easily turn it on by following these steps: If you have downloaded the car games APK files using a file manager app or a browser app on your device, you can use the same app to install them. Here is how: If you want to backup a car game that you have installed using an APK file, you can do so by using a file manager app or an APK extractor app. Here is how: I hope this article has helped you learn how to download and install car games APK files on Android. If you have any questions or feedback, please leave a comment below. Happy gaming! If you are looking for a fun and exciting stickman game that you can play with your friends or online players, then you should try Supreme Duelist Stickman. This is a multiplayer stickman game that offers different modes and weapons to choose from. You can control your stickman and compete in various battles where you have to use your skills and strategy to defeat your opponents. You can also customize your stickman with different skins and outfits. However, if you want to unlock all the characters, weapons, and skins in the game, you will need to spend real money or watch ads. That is why you might want to download Supreme Duelist Stickman Mod APK Uptodown, a modified version of the original APK file that offers extra features and benefits for free. In this article, we will tell you what is a mod APK, how to download and install it, and what are its features and advantages. Download Zip ►►►►► https://urlca.com/2uO5YG Supreme Duelist Stickman is a game developed by Neron's Brother, a studio that specializes in creating stickman games. The game has over 100 million downloads on Google Play Store and has a rating of 4.4 out of 5 stars. The game is compatible with Android devices running version 4.1 or higher. The game allows you to play as a stickman and compete in various modes such as single player, two players, survival mode, online mode, tournament mode, etc. You can also choose from different weapons such as guns, swords, axes, hammers, etc. Each weapon has its own advantages and disadvantages, so you have to choose wisely depending on your opponent and situation. You can also use special skills such as teleportation, flying, etc. to gain an edge over your enemies. The game is not just about swinging your weapon randomly. You have to use your skill and strategy to defeat your opponents. You have to aim carefully, dodge their attacks, use the environment to your advantage, etc. You also have to manage your energy bar, which depletes as you use your skills or get hit by your enemies. If your energy bar runs out, you will lose the match. The game also has a physics-based system that makes the gameplay more realistic and fun. You can see your stickman react to every hit, bounce, fly, or fall. You can also interact with the objects in the background such as boxes, barrels, ropes, etc. to create more chaos and fun. The game has simple graphics that resemble stick figures and doodles. However, this does not affect the quality of the game. The game has smooth animations and sound effects that make the gameplay more enjoyable and immersive. You can hear the sound of your weapon hitting your opponent, the sound of your opponent screaming or grunting, the sound of the objects breaking or exploding, etc. You can also see the blood splatter and the ragdoll effects of your stickman and your opponent. 
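As a brief technical aside to the APK descriptions above: at the container level an APK file is a ZIP archive, so you can inspect one before installing it. Here is a minimal sketch (Python 3 standard library; the file name `example.apk` is a hypothetical stand-in for whatever you downloaded):

```python
import zipfile

# List the first few entries of an APK to see its manifest, code, and resources.
with zipfile.ZipFile("example.apk") as apk:  # hypothetical local file
    for name in apk.namelist()[:10]:
        print(name)  # typically AndroidManifest.xml, classes.dex, resources.arsc, ...
```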
A mod APK is a modified version of the original APK file that is created by third-party developers or hackers. A mod APK can offer extra features and benefits that are not available in the original APK file. For example, a mod APK can remove ads, unlock premium features, add unlimited resources, etc. A mod APK can also bypass the restrictions and limitations imposed by the original APK file. For example, a mod APK can allow you to play a game that is not compatible with your device, or a game that is not available in your region, or a game that requires an internet connection to play. One of the reasons why you might want to download Supreme Duelist Stickman Mod APK Uptodown is because it can unlock all the characters, weapons, and skins in the game for free. Normally, you would have to spend real money or watch ads to unlock these items in the game. However, with the mod APK file, you can access all these items without spending a dime or wasting your time. You can choose from different characters such as ninja, pirate, robot, zombie, etc. You can also choose from different weapons such as guns, swords, axes, hammers, etc. You can also customize your stickman with different skins and outfits such as hats, masks, glasses, etc. You can mix and match these items to create your own unique stickman. supreme duelist stickman mod apk download uptodown Another reason why you might want to download Supreme Duelist Stickman Mod APK Uptodown is because it can remove ads and in-app purchases from the game. Normally, you would have to watch ads or buy coins or gems to play the game. However, with the mod APK file, you can enjoy the game without any interruptions or distractions. You can play the game without seeing any annoying ads pop up on your screen. You can also play the game without having to buy any coins or gems to refill your energy bar or unlock new items. You can play the game as much as you want without any limitations or restrictions. If you want to download Supreme Duelist Stickman Mod APK Uptodown, you will need to follow these steps: After you have downloaded the mod APK file, you will need to install it on your Android device. To do that, you will need to follow these steps: If you have not enabled unknown sources and permissions on your device before, you will need to do that before installing the mod APK file. To do that, you will need to follow these steps: The mod APK file of Supreme Duelist Stickman offers some amazing features that are not available in the original APK file. Some of these features are: The mod APK file of Supreme Duelist Stickman also offers some advantages that make it better than the original APK file. Some of these advantages are: The mod APK file of Supreme Duelist Stickman does not compromise on the performance and quality of the game. It offers the same gameplay experience as the original APK file with some extra features and benefits. The mod APK file does not affect the graphics, animations, sound effects, or physics of the game. It also does not cause any lag, crash, or error in the game. It also does not require any internet connection to play the game. The mod APK file is compatible with most Android devices and runs smoothly and fast. In conclusion, Supreme Duelist Stickman is a fun and exciting stickman game that you can play with your friends or online players. You can choose from different modes, weapons, and characters to compete in various battles. You can also customize your stickman with different skins and outfits. 
However, if you want to unlock all the items in the game, you will need to download Supreme Duelist Stickman Mod APK Uptodown, a modified version of the original APK file that offers extra features and benefits for free. We recommend you to try Supreme Duelist Stickman Mod APK Uptodown for a fun and exciting stickman game experience. You can enjoy the game without ads or in-app purchases. You can also access all the characters, weapons, and skins in the game for free. You can also get unlimited coins, gems, and energy in the game. You can also play the game without any internet connection or root requirement. You can also download and install the mod APK file easily and safely from Uptodown website. If you are interested in playing Supreme Duelist Stickman Mod APK Uptodown, you can download the mod APK file from Uptodown website by clicking here. You can also follow the steps we have provided above to install the mod APK file on your Android device. You can then enjoy the game with all the features and benefits that the mod APK file offers. So what are you waiting for? Download Supreme Duelist Stickman Mod APK Uptodown now and have fun! Supreme Duelist Stickman is a multiplayer stickman game that offers different modes and weapons to choose from. You can control your stickman and compete in various battles where you have to use your skills and strategy to defeat your opponents. A mod APK is a modified version of the original APK file that offers extra features and benefits that are not available in the original APK file. You can download Supreme Duelist Stickman Mod APK Uptodown from Uptodown website by clicking here. You can then follow the steps we have provided above to install the mod APK file on your Android device. Some of the features of Supreme Duelist Stickman Mod APK Uptodown are unlimited coins, gems, and energy, no ads or in-app purchases, all characters, weapons, and skins unlocked, no internet connection or root required, etc. Yes, Supreme Duelist Stickman Mod APK Uptodown is safe and secure to download and install. It does not contain any virus or malware that can harm your device or steal your data. If you are a fan of video games, you have probably heard of GTA 5, one of the most popular and successful games of all time. But do you know how big the download size of GTA 5 is and how to get it on your device? In this article, we will answer all your questions about GTA 5 94 GB download and more. GTA 5, or Grand Theft Auto V, is the fifth main installment in the Grand Theft Auto series, which started in 1997. The game was released in 2013 for PlayStation 3 and Xbox 360, and later for PlayStation 4, Xbox One, and PC. The game is set in the fictional state of San Andreas, which is based on Southern California, and follows the lives of three protagonists: Michael, Franklin, and Trevor. The game allows the player to switch between the three characters at any time and explore the vast open world, which includes urban areas, countryside, mountains, deserts, and oceans. Download –––––>>> https://urlca.com/2uO8uH GTA 5 has a lot to offer to its players. The game has a story mode that consists of more than 60 missions that involve heists, shootouts, chases, stealth, and more. The game also has a lot of side activities, such as racing, golfing, tennis, hunting, yoga, parachuting, etc. The game also has an online multiplayer mode called GTA Online, which allows up to 30 players to cooperate or compete in various modes, such as deathmatches, races, missions, heists, etc. 
The game also has stunning graphics that showcase the beauty and diversity of San Andreas. The game features realistic weather effects, dynamic lighting and shadows, high-resolution textures, and detailed animations. The file size of GTA 5 is not fixed. It depends on various factors, such as the version of the game, the platform on which you wish to install it, or whether you are installing it from a disk or downloading it from the internet. The file size of GTA 5 also changes over time due to updates and patches that add new content or fix bugs. The file size of GTA 5 varies across different platforms. Here is a table that shows the approximate file size of GTA 5 depending on the platform: As you can see, the file size of GTA 5 is the largest for PC, especially if you download it from the internet. This is because the PC version of GTA 5 has higher resolution textures, more detailed models, and better graphics settings than the console versions. The file size of GTA 5 is also larger for the next-generation consoles, such as PlayStation 5 and Xbox Series X/S, than the previous-generation consoles, such as PlayStation 4 and Xbox One. This is because the next-generation consoles have improved performance and features, such as faster loading times, ray tracing, and 4K resolution. If you want to download GTA 5 on your device, you have different options depending on the platform you are using. Here are some of the sources from which you can download GTA 5: GTA 5 is not a light game. It requires a lot of disk space, RAM, and processing power to run smoothly on your device. Here are some of the minimum and recommended requirements for GTA 5 depending on the platform: As you can see, GTA 5 requires a lot of disk space, RAM, and processing power to run smoothly on your device. You should make sure that your device meets the minimum or recommended requirements before downloading GTA 5. GTA 5 is one of the most popular and successful games of all time. It is an open-world action-adventure game that offers a rich story mode, a vast online multiplayer mode, and stunning graphics. However, GTA 5 also has a large file size that ranges from 72 GB to more than 94 GB depending on the platform. You should make sure that your device has enough disk space, RAM, and processing power to run GTA 5 smoothly. You can download GTA 5 from various sources depending on the platform you are using. gta 5 94 gb download epic games If you have any questions about GTA 5 or GTA 5 download size, you can check out these FAQs: I hope this article has helped you understand everything you need to know about GTA 5 94 GB download. If you have any other questions or feedback, feel free to leave a comment below. Happy gaming! TikTok is one of the most popular social media platforms in the world, with over 1 billion active users. It allows you to create and share short videos with music, filters, effects, stickers, and more. But what if you want to download TikTok APK new version 2022 without VPN? In this article, we will show you how to do that easily and safely. Download File ⚹ https://urlca.com/2uOddr TikTok is an app that lets you record, edit, and share videos that are up to 60 seconds long. You can choose from millions of songs, sounds, and clips from the app's library, or use your own audio. You can also add filters, effects, stickers, text, emojis, and more to make your videos unique and expressive. 
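Picking up the GTA 5 storage requirements discussed above: before committing to a download of this size, it is worth checking your free disk space. A minimal sketch (Python 3 standard library; the path and the 94 GB figure from the article are illustrative):

```python
import shutil

# Report free space on the drive where the game would be installed.
free_gb = shutil.disk_usage("/").free / 1e9  # use a drive-letter path on Windows
print(f"{free_gb:.0f} GB free; GTA 5 can need roughly 94 GB")
```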
TikTok is not only a platform for watching videos, but also a community for connecting with people who share your interests, passions, and talents. You can follow your favorite creators, discover new ones, and comment, like, share, and chat with them. You can also join challenges, trends, hashtags, and events to showcase your creativity and have fun. TikTok is constantly updating its app with new features and options to make your video experience better. Some of the features include:

tiktok app download latest version 2022 free no vpn

Unfortunately, not everyone can access TikTok freely. Some countries have banned or restricted the app due to concerns over data privacy, national security, or political censorship. For example, India, Pakistan, Bangladesh, Indonesia, Turkey, Egypt, and some other countries have blocked or limited TikTok's availability in their regions.

One way to bypass these restrictions is to use a VPN (virtual private network) service that masks your IP address and location. However, VPNs can also have some drawbacks. For one thing, they can slow down your internet speed and bandwidth, which can affect your video streaming and uploading quality. You may experience buffering, lagging, freezing, or pixelation.
You can also discover and follow your favorite celebrities, influencers, artists, and brands on TikTok. Another benefit of downloading TikTok APK new version 2022 without VPN is that you can enjoy faster and smoother video streaming and uploading. You can watch videos without any buffering, lagging, freezing, or pixelation. You can also upload your videos without any delays, errors, or failures. You can also save your data and battery by using less bandwidth and power. You can have a better video experience on TikTok with high-quality resolution, sound, and speed. A third benefit of downloading TikTok APK new version 2022 without VPN is that you can protect your privacy and security online. You can avoid exposing your personal data and online activity to third parties, such as advertisers, hackers, or government agencies. You can also avoid being tracked, monitored, or censored by your ISP, network administrator, or authorities. You can also prevent malware or viruses from infecting your device or stealing your information. You can have a safer and more private online experience on TikTok. TikTok is a fun and exciting social media platform that allows you to create and share short videos with music, filters, effects, stickers, and more. However, if you want to download TikTok APK new version 2022 without VPN, you need to follow some steps and precautions. You need to find a reliable and safe source for the APK file, enable unknown sources on your device settings, install the APK file and launch the app. By doing this, you can enjoy the benefits of accessing all the features and content of TikTok without any restrictions or limitations, having faster and smoother video streaming and uploading, and protecting your privacy and security online. We hope this article has helped you learn how to download TikTok APK new version 2022 without VPN. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading! An APK file is a file format that contains the app's code, resources, and metadata. It is used to install apps on Android devices. A VPN service is a service that masks your IP address and location by routing your internet traffic through a secure and encrypted server in another country. TikTok is banned or restricted in some countries due to concerns over data privacy, national security, or political censorship. You can update your TikTok app by downloading the latest APK file from the same source you downloaded it from before and installing it over the existing app. It depends on the laws and regulations of your country or region. Some countries may prohibit or penalize the use of unlicensed or unauthorized apps or services. You should check the legal status of TikTok in your area before downloading the APK file. If you love racing games, you should definitely check out Traffic Racer 3D, a milestone in the genre of endless arcade racing. In this game, you can drive your car through highway traffic, earn cash, upgrade your car and buy new ones. You can also try to be one of the fastest drivers in the global leaderboards. Endless racing is now redefined! Download ✶✶✶ https://urlca.com/2uOfeC In this article, we will show you how to download Traffic Racer 3D on different devices, how to play it and enjoy the thrill of racing, how to compare your performance with other players, how to customize your car and make it stand out, and how to enjoy the stunning graphics and sound effects of this game. 
We will also answer some frequently asked questions about Traffic Racer 3D. Let's get started! Traffic Racer 3D is available on various platforms, such as Android, iOS, Windows, and Chrome. Here are the steps to download it on each device: If you have an Android device, you can download Traffic Racer 3D from the Google Play Store. Here's how: Traffic Racer 3D simulation game for Android You can also download Traffic Racer 3D from the official website of SK Games. Just follow the link and click on the Download button. If you have an iOS device, you can download Traffic Racer 3D from the App Store. Here's how: If you have a Windows device, you can download Traffic Racer 3D from the Microsoft Store. Here's how: If you have a Chrome browser, you can download Traffic Racer 3D from the Chrome Web Store. Here's how: Traffic Racer 3D is a simple but addictive game that will keep you entertained for hours. The goal is to drive as fast as you can through traffic without crashing. The faster you drive, the more points you get. You can also earn extra points by overtaking other cars closely or driving in the opposite direction. Here are some features and tips to help you play Traffic Racer 3D and have fun: Traffic Racer 3D offers five different game modes to suit your preferences and skills. They are: Traffic Racer 3D has simple and intuitive controls that make it easy to play. You can choose from two different control options: Tilt or Touch. Here's how they work: You can also change the sensitivity of the tilt or touch controls in the settings menu. You can also enable or disable the auto-acceleration feature, which makes your car accelerate automatically without touching the gas button. Traffic Racer 3D is a fun and addictive game, but it can also be challenging and frustrating at times. Here are some tips and tricks to help you score more points, avoid crashes, and unlock new cars and upgrades: Traffic Racer 3D is not only a game for yourself, but also a game for competing with other players around the world. You can compare your performance with other players in two ways: Leaderboards and Achievements. Here's how they work: Leaderboards are where you can see your rank among other players based on your score in each game mode. You can access them by tapping on the Leaderboards button on the main menu. You can also see your friends' ranks if they are connected to Google Play Games or Game Center. You can filter the leaderboards by All Time, This Week, or Today. You can also see your best score and rank in each game mode on the top of the screen. Achievements are where you can see your progress and rewards for completing various tasks in Traffic Racer 3D. You can access them by tapping on the Achievements button on the main menu. You can also see your friends' achievements if they are connected to Google Play Games or Game Center. You can see a list of achievements with their names, descriptions, icons, and status (locked or unlocked). Some achievements are easy to unlock, such as driving 10 km or buying a new car. Some achievements are hard to unlock, such as driving 1000 km or reaching 400 km/h. When you unlock an achievement, you get a notification and a reward of cash or nitro boosters. You can also share your achievements with your friends on social media. Traffic Racer 3D is not only a game for racing, but also a game for expressing your personality and style. You can customize your car and make it stand out in traffic in two ways: Car Selection and Car Customization. 
Here's how they work: Car Selection is where you can choose from 40+ different cars with different features and styles. You can access it by tapping on the Garage button on the main menu. You can see a list of cars with their names, prices, and stats (speed, acceleration, handling, and braking). You can buy new cars with cash that you earn from playing the game. Some cars are more expensive than others, but they also have better performance and appearance. You can also unlock some cars by completing certain achievements. You can switch between the cars that you own by tapping on them. You can also see a preview of how they look in 3D by tapping on the View button. Car Customization is where you can change the color, wheels, and paint of your car. You can access it by tapping on the Customize button on the Garage menu. You can see a 3D view of your car and three options to customize it: Color, Wheels, and Paint. You can change the color of your car by tapping on the Color option and choosing from a palette of colors. You can also use a slider to adjust the brightness and saturation of the color. You can change the wheels of your car by tapping on the Wheels option and choosing from a variety of wheels with different designs and sizes. You can also use a slider to adjust the size of the wheels. You can change the paint of your car by tapping on the Paint option and choosing from a collection of paint patterns with different shapes and colors. You can also use a slider to adjust the scale and rotation of the paint pattern. You can save your customization by tapping on the Save button. You can also reset your customization by tapping on the Reset button. Traffic Racer 3D is not only a game for playing, but also a game for experiencing. You can enjoy the stunning graphics and sound effects of this game in two ways: Environments and Sound Effects. Here's how they work: Environments are where you can explore 5 detailed environments with different weather and time conditions. You can access them by tapping on the Select Environment button on the main menu. You can see a list of environments with their names and icons. You can choose from four different environments: Suburb, Desert, Snowy, and City Night. Each environment has its own characteristics, such as traffic density, road layout, scenery, lighting, and weather effects. For example, in Suburb, you will see houses, trees, bridges, and sunny skies. In Desert, you will see sand dunes, cacti, rocks, and dusty winds. In Snowy, you will see snowflakes, icebergs, penguins, and auroras. In City Night, you will see skyscrapers, neon lights, billboards, and raindrops. You can also unlock a fifth environment: Rainy Day. This environment is similar to City Night, but with more rain and thunder effects. You can unlock it by reaching 100 km/h in Endless mode. Sound Effects are where you can listen to the realistic engine sounds and background music of Traffic Racer 3D. You can access them by tapping on the Settings button on the main menu. You can see two options to adjust them: SFX Volume and BGM Volume. You can adjust the SFX Volume by using a slider to increase or decrease the sound effects of your car's engine, brakes, horns, crashes, nitro boosters, etc. You can also mute or unmute them by tapping on the speaker icon. You can adjust the BGM Volume by using a slider to increase or decrease the background music of Traffic Racer 3D. The music is composed of various genres, such as rock, pop, techno, etc. 
You can also mute or unmute them by tapping on the speaker icon. Traffic Racer 3D is a game that will satisfy your need for speed and adrenaline. It has many features and benefits that make it one of the best racing games on the market. Here are some of them: If you are a racing fan, you should not miss Traffic Racer 3D. It is a game that will keep you entertained for hours and make you feel like a real racer. Download it now and enjoy the thrill of racing! Here are some of the most common questions that people ask about Traffic Racer 3D. If you have any other questions, feel free to contact us at support@skgames.com. A1: Yes, Traffic Racer 3D is free to play. You can download it from the Google Play Store, App Store, Microsoft Store, or Chrome Web Store without paying anything. However, the game contains ads that may interrupt your gameplay. You can remove them by paying a small fee in the game. A2: You can remove ads from Traffic Racer 3D by tapping on the No Ads button on the main menu. You will be redirected to a payment page where you can choose your preferred payment method and confirm your purchase. Once you do that, you will not see any ads in the game anymore. A3: You can contact the developer of Traffic Racer 3D by sending an email to support@skgames.com. You can also visit their website at www.skgames.com or follow them on social media at Facebook, Twitter, Instagram, or YouTube. They will be happy to hear from you and answer your questions or feedback. A4: The minimum system requirements for Traffic Racer 3D are as follows: A5: Yes, you can play Traffic Racer 3D offline. You don't need an internet connection to play the game. However, some features may not work properly when you are offline, such as leaderboards, achievements, challenges, rewards, and in-app purchases. To enjoy these features fully, you need to connect to the internet. If you are looking for a way to enjoy unlimited music and podcasts on your Android device, you might have heard of YT Music Mod APK. This is a modified version of the official YT Music app that offers some extra features and benefits. But what exactly is YT Music Mod APK, how to download and install it, and what are the risks and alternatives? In this article, we will answer these questions and more. Download File ===> https://urlca.com/2uObw1 YT Music is a music streaming service that lets you listen to millions of songs, albums, playlists, live performances, remixes, covers, and more. It also lets you watch music videos and access podcasts from various genres and topics. You can use YT Music on your browser or download the app for your Android or iOS device. Some of the features and benefits of using YT Music are: YT Music offers two versions: a free version that is supported by ads, and a premium version that costs $9.99 per month. The premium version gives you some exclusive benefits, such as: YT Music Mod APK is a modified version of the official YT Music app that bypasses some of the limitations and restrictions of the original app. It is not available on the Google Play Store or the official YT Music website, but you can find it on various third-party websites that offer APK downloads. Some of the features and advantages of using YT Music Mod APK are: To install YT Music Mod APK on your Android device, you need to follow these steps: y t music mod apk download latest version Congratulations, you have successfully installed YT Music Mod APK on your device. You can now enjoy unlimited music and podcasts without any ads or interruptions. 
While YT Music Mod APK may seem like a tempting option to enjoy YT Music for free, it is not without its risks and drawbacks. Here are some of the things you should be aware of before using YT Music Mod APK: Some of the risks of using YT Music Mod APK are: If you are looking for a safer and more reliable way to enjoy music and podcasts on your Android device, you may want to consider some of the alternatives to YT Music Mod APK. Here are some of the best ones: YT Music Mod APK is a modified version of the official YT Music app that offers some extra features and benefits. However, it also comes with some risks and drawbacks that you should be aware of before using it. If you want to enjoy music and podcasts on your Android device without any hassle or worry, you may want to consider some of the alternatives to YT Music Mod APK that we have listed above. Here are some of the frequently asked questions about YT Music Mod APK: No, YT Music Mod APK is not legal, as it violates the terms and conditions of YT Music and Google. It also infringes on the intellectual property rights of the artists and creators whose content is available on YT Music. No, YT Music Mod APK is not safe, as it may expose your device and data to malware, viruses, spyware, or other harmful software that could compromise your security and privacy. It may also cause bugs, errors, crashes, or performance issues with the app. No, YT Music Mod APK is not updated by YT Music or Google. It depends on the third-party developers who create and distribute it. Therefore, it may not have the latest features, updates, and improvements that are available on the official YT Music app. To uninstall YT Music Mod APK from your device, you need to follow these steps: There is no official way to contact the developers of YT Music Mod APK, as they are not affiliated with YT Music or Google. However, you may try to find their contact information on the website where you downloaded the app or on their social media accounts if they have any. There is no official way to report a problem with YT Music Mod APK, as it is not supported or updated by YT Music or Google. However, you may try to leave a comment or feedback on the website where you downloaded the app or on their social media accounts if they have any. Download File ☆ https://gohhs.com/2uFURC Download Zip ··· https://gohhs.com/2uFV9J MDSolids is a software for topics taught in the Mechanics of Materials course, such as beams, trusses, Mohr's circle transformations, section properties, torsion, and more[^1^]. It is designed to assist engineering students and professionals in solving a wide variety of engineering problems[^3^]. Download File ✶✶✶ https://gohhs.com/2uFUqR If you want to download MDSolids 4.0 full crack IDM, you will need to follow these steps: Note: IDM stands for Internet Download Manager, a tool that can speed up and manage your downloads. You can download IDM from https://www.internetdownloadmanager.com/ and use it to download MDSolids faster. MDSolids has many benefits for engineering students and professionals who want to learn and apply the concepts of Mechanics of Materials. Some of the benefits are: MDSolids is a powerful and versatile software that can help you learn and master the concepts of Mechanics of Materials. Whether you are a student or a professional, you will find MDSolids useful and beneficial for your engineering education and career. If you need to print plain text documents without any formatting, you might want to use the Generic Text Only driver. 
This driver is a built-in option in Windows that allows you to send raw text commands to your printer. It can be useful for printing receipts, labels, tickets, or other simple text documents. In this article, we will show you how to download and install the Generic Text Only driver for Windows 7. We will also provide some tips on how to use it effectively. Download ✪ https://gohhs.com/2uFTSb The Generic Text Only driver is included in Windows 7, but you might need to update it to the latest version. To do this, you can use a reliable driver update tool like DriverGuide. DriverGuide is a free service that scans your computer and finds the best drivers for your devices. It also lets you download and install drivers with one click. To download the Generic Text Only driver with DriverGuide, follow these steps: After downloading the Generic Text Only driver, you need to install it on your computer. To do this, follow these steps: Now that you have installed the Generic Text Only driver, you can use it to print plain text documents. To do this, follow these steps: The Generic Text Only driver can be useful for printing simple text documents, but it has some limitations. Here are some tips for using it effectively: Download File → https://gohhs.com/2uFTxx Download ✶ https://urlca.com/2uDcrX Download ––– https://urlca.com/2uDdzy Download Zip ✑ ✑ ✑ https://urlca.com/2uDd0L download fantastic beasts 3 english audio as source file.flv in 480p, 720p & 1080p. this is a hollywood movie and available in 720p,480p & 1080p qualities. this is one of the best movies based on drama, thriller. this movie is not available in hindi or dual audio.this is web-dlprint with dd5.1 english audio & esubs. Download Zip ✸✸✸ https://urlca.com/2uDdYC download gravity english audio as source file.flv in 480p, 720p & 1080p. this is a hollywood movie and available in 720p,480p & 1080p qualities. this is one of the best movies based on drama. this movie is not available in hindi or dual audio.this is web-dlprint with dd5.1 english audio & esubs. this is a hollywood movie and available in 720p,480p & 1080p qualities. this is one of the best movie based on drama. this movie is not available in hindi or dual audio.this is web-dlprint with dd5.1 english audio & esubs. english subtitles added with english download brilliant: the operative code-uhd 720p web-dl.download brilliant: the operative code movie uhd web-dl original.this is an english movie & available in 720p,480p & 1080p. this is one of the best movie based on action. this movie is not available in hindi or dual audio.this is web-dlprint with dd5.1 english audio & esubs. english subtitles added with english download civilization: beyond earth trailer 2022 english audio (720p 1080p ) online. this is a hollywood movie and available in 720p,480p & 1080p qualities. this is one of the best movies based on comedy. this movie is not available in hindi or dual audio.this is web-dlprint with dd5.1 english audio & esubs. download age of glory movie 2022 english audio in 720p,480p &1080p. this is a hollywood movie and available in 720p,480p & 1080p qualities. this is one of the best movies based on fantasy. this movie is not available in hindi or dual audio.this is web-dlprint with dd5.1 english audio & esubs. Do you love playing tank games on your Android device? Do you want to have unlimited money and gold to upgrade your tanks and weapons? Do you want to enjoy various game modes, stunning graphics, and sound effects? 
If you answered yes to any of these questions, then you should download Tank Hero Mod APK right now!

Tank Hero Mod APK is a modified version of the original Tank Hero game, a fast-paced 3D tank action game. In this game, you control your own tank and shoot your enemies with different weapons. You can also customize your tank with various skins and decals. You can play in different game modes, such as campaign, survival, and multiplayer, and challenge yourself with different levels of difficulty and achievements.

However, the original Tank Hero game has some limitations that may affect your gaming experience. For example, you need to earn money and gold by completing missions and defeating enemies, and use them to buy and upgrade your tanks and weapons. You may also encounter ads and in-app purchases that interrupt your gameplay.

That's why you need Tank Hero Mod APK, a hacked version of the original game that gives you unlimited money and gold. With this mod, you can buy and upgrade any tank and weapon you want without worrying about the cost. You can also enjoy the game without any ads or in-app purchases, and access all the features and content of the game without restrictions. To download and install Tank Hero Mod APK, you need to follow these simple steps:

Tank Hero Mod APK has many amazing features that make it one of the best tank games for Android. Here are some of them:

The most obvious feature of Tank Hero Mod APK is that it gives you unlimited money and gold. Money and gold are the main currencies in the game, used to buy and upgrade your tanks and weapons. Normally, you have to earn them by completing missions and defeating enemies. However, with Tank Hero Mod APK, you get unlimited money and gold as soon as you start the game. You can use them to buy any tank or weapon you want and upgrade it to the maximum level without any hassle. This feature gives you a lot of advantages: more powerful tanks and weapons that destroy your enemies faster, more variety and fun in choosing your loadout, and less time spent grinding for money and gold.

Tank Hero Mod APK has a lot of tanks and weapons for you to choose from. There are over 50 tanks and over 100 weapons in the game, each with different stats, abilities, and effects. You can find tanks and weapons of different types, such as light, medium, heavy, artillery, rocket, laser, and plasma, and customize your tanks with various skins and decals to make them look cooler and more unique.

The differences between tanks and weapons are not only cosmetic but also functional. Some tanks have more speed, armor, or firepower than others. Some weapons have more range, accuracy, or damage than others. Some tanks and weapons also have special features, such as stealth, shields, or EMP. You have to consider these factors when choosing your tank and weapon for your play style. To choose the best tank and weapon for your play style, experiment with different combinations and see what works best for you.
You can also check the stats and descriptions of each tank and weapon in the shop or inventory menu, read reviews and tips from other players online, or watch videos of gameplay demonstrations.

Tank Hero Mod APK has multiple game modes: campaign, survival, and multiplayer. Each has its own rules, objectives, and challenges, and you can choose the difficulty level and the number of enemies. Campaign is the main mode, where you complete missions and stages against enemies such as tanks, helicopters, and turrets, destroying them all to reach the end of each stage; completing missions with high scores and achievements earns stars and medals, which unlock new tanks and weapons. Survival is an endless mode where you last as long as possible against waves of enemies, shooting and dodging incoming bullets and missiles, collecting power-ups and bonuses, and competing with other players on the global leaderboard. Multiplayer is the online mode, where you play with or against players from around the world: join or create rooms, choose the game mode, map, and settings, chat with the in-game chat feature, and play team deathmatch, capture the flag, king of the hill, and more, cooperating or competing to show off your skills and strategies.

The game's graphics and sound make it realistic and immersive: well-designed, detailed 3D graphics with dynamic lighting and shadows, and smooth animations and transitions that keep the gameplay fluid and responsive. The sound effects are clear and crisp, the background music is catchy and fits each mode and situation, and the voice-overs are expressive and humorous; sound settings let you adjust the volume and quality of each. To enjoy all of this, your device must meet the game's minimum requirements, and the online mode needs a stable internet connection; you can adjust the graphics and sound settings in the game menu to suit your preference and device performance.

Tank Hero Mod APK is a great tank game for Android that offers unlimited money and gold, a wide range of tanks and weapons, multiple game modes, stunning graphics and sound effects, and more. It is a fun and addictive game that will keep you entertained for hours, and it is easy to download and install. If you are looking for a tank game that will challenge your skills and strategies, it is one of the best you will ever play: a modded version of the original Tank Hero with more features and content than ever before. So what are you waiting for?
Download Tank Hero Mod APK now and enjoy unlimited money and gold, various tanks and weapons, multiple game modes, stunning graphics and sound effects, and more!

Do you love the nostalgic look of old photos and videos? Do you want to capture your memories in a retro style? If so, you might be interested in APK Mod Old Roll, a photography app that lets you take and edit photos and videos in a vintage, classic style. In this article, we will tell you what APK Mod Old Roll is, what features and benefits it offers, how to install it on your Android device, and how to find the best APK mod sites to download this app and other modded apps for free.

APK Mod Old Roll is a modified version of the original Old Roll app, a photography app with filters, effects, stickers, frames, and fonts that you can apply to make your photos and videos look like they were taken decades ago; you can also adjust brightness, contrast, saturation, exposure, and other settings. The modded version adds extra features not available in the original, such as all cameras unlocked, the premium version unlocked, no ads or watermarks, and no root required.

Before installing it, it helps to understand what an APK mod is and how it works. An APK mod is a modified version of an original Android app, altered by a third party to provide new or improved features not present in the original. An Android app is packaged into a file with the .APK extension, which contains all the elements of the app and can be installed on an Android device. Using APK mods has advantages and disadvantages, and it involves risks and precautions you should take into account, as discussed in the FAQ below.

Installing APK Mod Old Roll is simple and takes only a few minutes. First, download the APK file from a trusted and reliable source, such as one of the APK mod sites recommended later in this article, making sure you get the latest version, which is 1.0.9 as of June 2023. Second, enable unknown sources in your device settings (Settings > Security > Unknown sources) so you can install apps from outside the official app store; you may see a warning that such apps can harm your device, but a file from a trusted source should be fine. Third, locate the downloaded file with a file manager app or from the download-complete notification, tap it, and follow the on-screen instructions; if a pop-up warns that this type of file can harm your device, you can still tap install. Fourth, launch the app from your home screen or app drawer, grant any permissions it asks for, and start taking and editing photos and videos in a vintage, classic style.
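If you prefer working from a computer, the same install and uninstall steps can also be done over USB with Android's adb tool. This is only an illustrative sketch: the file name and package name below are made up, and your device must have USB debugging enabled.

```bash
# Illustrative only: the file and package names are placeholders.
# Install the downloaded APK onto a USB-connected device:
adb install old-roll-mod-1.0.9.apk

# Later, remove the app by its package name:
adb uninstall com.oldroll.mod
```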
If you are looking for the best APK mod sites to download APK Mod Old Roll and other apps for free, here are some of our recommendations. APKPure is one of the most popular and trusted APK mod sites, offering a wide range of original and modded apps and games, exclusive apps not available on the official app store, one-click updates, region-locked apps, and the option to request new apps or mods. HappyMod specializes in modded apps and games, with thousands of mods across categories such as action, adventure, arcade, casual, simulation, and sports; you can download multiple mods for the same app and pick the one that suits you, rate and review mods, and request new mods or updates. ReXdl offers a huge collection of original and modded apps and games, including premium apps that are normally paid and mods with unlimited money, coins, gems, lives, and other resources, browsable by category, genre, or tag, with fast and secure downloads. Apkmody likewise provides high-quality original and modded apps and games, exclusive apps, and unlimited-resource mods, searchable by keyword, category, or popularity, with fast and safe downloads.

In conclusion, APK Mod Old Roll is a photography app that lets you take and edit photos and videos in a vintage, classic style, with filters, effects, stickers, frames, fonts, and adjustable brightness, contrast, saturation, and exposure. The modded version adds all cameras unlocked, the premium version unlocked, no ads or watermarks, and no root required. To install it, download the APK file from a trusted source, enable unknown sources in your device settings, install the file, and launch the app. You can use one of the APK mod sites recommended above to download it and other apps for free.

Here are some frequently asked questions about APK Mod Old Roll:

Q: Is APK Mod Old Roll safe to use? A: It is safe as long as you download it from a trusted source and follow the precautions we have mentioned in this article.
However, there is always a risk of malware or virus infection with APK mod files, so you should always scan the file before installing it on your device.

Q: Is APK Mod Old Roll legal to use? A: No. It violates the intellectual property rights of Old Roll's original developers, and using it may also breach the terms and conditions of the official app store or the developers. Use it at your own risk and responsibility.

Q: How do I update APK Mod Old Roll? A: Download the latest version of the APK file from a trusted source and install it on your device; the APK mod sites recommended in this article may carry updated versions.

Q: How do I uninstall APK Mod Old Roll? A: Go to your device's settings, then apps, then APK Mod Old Roll, and tap uninstall; you can also use a file manager app to find and delete the APK file. Back up your photos and videos first, as they may be deleted along with the app.

Q: Can I use APK Mod Old Roll on other devices? A: It is designed for Android, so it will not run on iOS, Windows, or Mac, but you can try an Android emulator or a virtual machine to run it on your PC or laptop.

Do you like anime, dress-up, and battle games? Then you will love Gacha Club, the latest casual strategy RPG from Lunime, the company that has released a wide variety of gacha games since 2015. Gacha Club is the sequel to Gacha Life, one of Lunime's most popular games with more than 10 million downloads. In this article, we will show you how to download and install Gacha Club on your Android device, along with some of the features and tips the game offers.

Gacha Club is a free game that you can easily download from the Google Play Store. Note that it may run slowly or lag on older devices or 4K screens, and you may see errors or glitches if your phone is low on storage; if that happens, try restarting the game or freeing up space on your device.

Gacha Club has plenty of content and features to keep you entertained for hours. You can create your own anime characters and dress them in your favorite outfits: customize up to 10 main characters and 90 extra characters with hundreds of clothing, hairstyle, accessory, and weapon options, change the colors of almost every element, choose from 600 different poses, and adjust hair, eyes, and items to fit your characters. After designing your characters, you can enter Studio mode and create any scene you can imagine: place up to 10 characters anywhere on screen along with your favorite pets and objects, choose from a wide variety of backgrounds and foregrounds, make characters talk with custom text boxes, add a narrator for story scenes, save and load up to 15 scenes, and use face presets to change expressions quickly. If you want more action, you can gacha more than 180 units to use in battle.
You can also gacha 150 pets to boost your stats, collect super-rare Corrupted and DJ characters, and use materials to upgrade your units.

Another way to have fun and earn resources in Gacha Club is to play its mini-games. Four are available: Lemo & Yumi Dance, Mascot Whack!, Memory Match, and Usagi vs. Neko. Each has its own mechanics and difficulty, but all of them reward you with gold and other prizes if you score well.

Gacha Club is a very complete and entertaining game that lets you create your own anime characters and play with them in different modes: customize them to the fullest, create incredible scenes, gacha units and pets, fight monsters and bosses, and play varied mini-games. It is also free and playable offline, which makes it ideal for passing the time without worrying about money or an internet connection. If you like anime, dress-up, and battle games, do not hesitate to download Gacha Club on your Android device and enjoy everything it offers. You can also join the Gacha Club community on Facebook or on its official website to share your creations, meet other players, and keep up with news about the game. What are you waiting for? Jump into Gacha Club and start your adventure today!

Counter-Strike: Global Offensive (CS:GO) is one of the most popular and competitive first-person shooter games in the world. It features team-based action gameplay with various maps and game modes, as well as weapon customization and a vibrant multiplayer community.
If you want to experience the thrill of CS:GO on your PC, you need to download and install the latest version of the game, which is V1.33.1.0 AutoUpdate. In this article, we will show you how to download and install CS:GO V1.33.1.0 AutoUpdate for PC using two methods: torrent or launcher. Both methods are easy and fast and will let you enjoy the game in no time.

A torrent is a file that contains information about other files distributed over a peer-to-peer network. With a torrent client, such as uTorrent or BitTorrent, you can download the files you want from other users who have them, which makes large downloads faster and more efficient. A launcher, by contrast, is a program that downloads and installs games directly from their official sources; you can use a launcher such as Steam or SE7EN.ws to download and install CS:GO V1.33.1.0 AutoUpdate for PC without using a torrent. Counter-Strike: Global Offensive V1.33.1.0 AutoUpdate is the latest version of the game and offers improved performance, stability, and security.

The Odyssey is a 1997 TV miniseries based on Homer's epic poem of the same name. It stars Armand Assante as Odysseus, the Greek hero who faces many trials and adventures on his way back home after the Trojan War, and also features Greta Scacchi, Isabella Rossellini, Vanessa Williams, and Christopher Lee.

If you want to watch this classic adaptation of The Odyssey, you might be wondering how to download it online. Many websites claim to offer free downloads of The Odyssey full movie 1997 176, but not all of them are safe or legal: some may contain viruses, malware, or spyware that can harm your device or compromise your privacy, while others require you to sign up for a subscription or pay a fee to access the download link. To avoid these risks and enjoy the miniseries without hassle, we recommend using a reliable and reputable streaming service that has the rights to stream or download it, such as Netflix, which offers a wide range of movies and shows for a monthly fee. Remember to respect copyright law and only download or stream content from authorized sources.

The Odyssey full movie 1997 176 is a faithful adaptation of Homer's epic poem, considered one of the greatest works of literature in history. The miniseries covers the entire story of Odysseus and his journey home, from the fall of Troy to the reunion with his wife Penelope and son Telemachus. Along the way, he encounters many mythical creatures and gods, such as the Cyclops, the Sirens, the Lotus-eaters, Circe, Calypso, Poseidon, Athena, and Zeus. It is not only a thrilling adventure but also a profound exploration of human nature, morality, loyalty, courage, love, and fate, showing how Odysseus struggles to overcome his flaws and temptations and how he grows as a leader and a hero.
It also depicts the challenges and hardships that his family and friends face in his absence, and how they cope with the uncertainty and danger surrounding his return. The Odyssey full movie 1997 176 is a masterpiece of storytelling that will captivate you from start to finish. It received critical acclaim and numerous awards, including four Emmys and a Golden Globe, and has been praised for its stunning cinematography, costumes, music, and special effects. The cast delivers outstanding performances that bring the characters to life with emotion and authenticity.
-
-### Narrator evaluation
-```bash
-torchrun --nproc_per_node=1 \
- eval_narrator.py \
- --caption-top-p 0.95 --caption-temperature 0.7 \
- --eval-freq 10000 \
- --resume $CHECKPOINT
-```
-
-
-### Zero-shot evaluation: EK100 MIR
-```bash
-python eval_zeroshot.py --dataset ek100_mir --root datasets/EK100/video_ht256px/ --clip-length 4 --resume $PATH
-```
-By increasing the number of frames per clip, e.g. `--clip-length 16`, you can expect better performance, as shown below.
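-
-For example, a sketch of the same zero-shot evaluation with denser frame sampling (identical to the command above except for `--clip-length`):
-
-```bash
-python eval_zeroshot.py --dataset ek100_mir --root datasets/EK100/video_ht256px/ --clip-length 16 --resume $PATH
-```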
-
-
-### Zero-shot evaluation: EK100 CLS
-```bash
-python eval_zeroshot.py --dataset ek100_cls --metadata-val datasets/EK100/epic-kitchens-100-annotations/EPIC_100_validation.csv --resume $PATH
-```
-
-
-### Zero-shot evaluation: CharadesEgo
-```bash
-python eval_zeroshot.py --dataset charades_ego --metadata-val datasets/CharadesEgo/CharadesEgo/CharadesEgo_v1_test_only1st.csv --root datasets/CharadesEgo/CharadesEgo_v1_480/ --clip-length 16 --sparse-sample --resume $PATH
-```
-
-
-### Zero-shot evaluation: EGTEA
-```bash
-python eval_zeroshot.py --dataset egtea --metadata-val datasets/EGTEA/test_split1.txt --root datasets/EGTEA/cropped_clips/ --clip-length 16 --clip-stride 2 --num-crops 3 --num-clips 10 --resume $PATH
-```
-
-
-### Zero-shot evaluation: Ego4D MCQ
-```bash
-python eval_zeroshot.py --dataset ego4d_mcq --metadata-val datasets/Ego4D/egomcq.json --root datasets/Ego4D/video_5min_chunks_288px/ --clip-length 4 --resume $PATH --use-half -j 4
-```
-
-
-
-### Multi-node training (Slurm)
-```bash
-# TimeSformer-Base
-python run_with_submitit_finetune_retrieval.py \
- --pretrain-model $PATH \
- --use-checkpoint --nodes 4
-
-# TimeSformer-Large
-python run_with_submitit_finetune_retrieval.py \
- --pretrain-model $PATH \
- --batch-size 4 \
- --use-checkpoint --nodes 4
-```
-
-### Single-machine training
-```bash
-torchrun --nproc_per_node=8 \
- main_finetune_retrieval.py \
- --output-dir $OUT_DIR \
- --pretrain-model $PATH \
- --use-checkpoint
-```
-
-Note that you might see a slight drop in performance when training on a single node compared to multiple nodes (everything else being the same) because of the smaller total batch size.
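-
-If GPU memory allows, one way to partially close that gap is to raise the per-GPU batch size. A sketch, assuming `--batch-size` sets the per-GPU batch size as in the commands above (the value 32 is illustrative and may not fit in memory on your hardware):
-
-```bash
-torchrun --nproc_per_node=8 \
-    main_finetune_retrieval.py \
-    --output-dir $OUT_DIR \
-    --pretrain-model $PATH \
-    --batch-size 32 \
-    --use-checkpoint
-```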
-
-### Evaluation
-
-Evaluation is done every 5 epochs by default (`--eval-freq 5`) during fine-tuning.
-If you want to evaluate a checkpoint after fine-tuning, switch to `--evaluate` mode and specify the path to the checkpoint with `--resume $FINETUNED_CHECKPOINT`.
-```bash
-torchrun --nproc_per_node=1 \
- main_finetune_retrieval.py \
- --output-dir $OUT_DIR \
- --pretrain-model $PATH \
- --use-checkpoint \
- --evaluate \
- --resume $FINETUNED_CHECKPOINT
-```
-
-
-
-
-### Multi-node training on CharadesEgo (Slurm)
-
-```bash
-# TimeSformer-Base
-python run_with_submitit_finetune_retrieval.py \
- --dataset charades_ego \
- --metadata datasets/CharadesEgo/CharadesEgo/metadata_filtered_train.pkl \
- --metadata-val datasets/CharadesEgo/CharadesEgo/CharadesEgo_v1_test_only1st.csv \
- --root datasets/CharadesEgo/CharadesEgo_v1_480/ \
- --epochs 10 \
- --save-freq 1 --eval-freq 1 \
- --sparse-sample \
- --pretrain-model $PATH \
- --use-checkpoint --nodes 4
-
-# TimeSformer-Large
-python run_with_submitit_finetune_retrieval.py \
- --dataset charades_ego \
- --metadata datasets/CharadesEgo/CharadesEgo/metadata_filtered_train.pkl \
- --metadata-val datasets/CharadesEgo/CharadesEgo/CharadesEgo_v1_test_only1st.csv \
- --root datasets/CharadesEgo/CharadesEgo_v1_480/ \
- --epochs 10 \
- --save-freq 1 --eval-freq 1 \
- --sparse-sample \
- --pretrain-model $PATH \
- --batch-size 4 \
- --use-checkpoint --nodes 4
-```
-
-### Evaluation
-```bash
-torchrun --nproc_per_node=1 \
- main_finetune_retrieval.py \
- --dataset charades_ego \
- --metadata datasets/CharadesEgo/CharadesEgo/metadata_filtered_train.pkl \
- --metadata-val datasets/CharadesEgo/CharadesEgo/CharadesEgo_v1_test_only1st.csv \
- --root datasets/CharadesEgo/CharadesEgo_v1_480/ \
- --output-dir $OUT_DIR \
- --sparse-sample \
- --pretrain-model $PATH \
- --evaluate \
- --resume $FINETUNED_CHECKPOINT
-```
-
-
-
-### Multi-node training on EK100 CLS (Slurm)
-
-```bash
-# TimeSformer-Base
-python run_with_submitit_finetune_classification.py \
- --pretrain-model $PATH \
- --use-vn-classifier --num-classes 97 300 3806 \
- --use-sgd --wd 4e-5 --lr-multiplier-on-backbone 0.1 \
- --use-checkpoint --nodes 1
-
-# TimeSformer-Large
-python run_with_submitit_finetune_classification.py \
- --pretrain-model $PATH \
- --use-vn-classifier --num-classes 97 300 3806 \
- --use-sgd --wd 4e-5 --lr-multiplier-on-backbone 0.1 \
- --use-checkpoint --nodes 4
-```
-
-
-### Multi-node training on EGTEA (Slurm)
-```bash
-# TimeSformer-Base
-python run_with_submitit_finetune_classification.py \
- --dataset egtea \
- --metadata-train datasets/EGTEA/train_split1.txt \
- --metadata-val datasets/EGTEA/test_split1.txt \
- --root datasets/EGTEA/cropped_clips/ \
- --pretrain-model $PATH \
- --num-classes 106 \
- --use-sgd --wd 4e-5 \
- --use-checkpoint --nodes 1
-
-# TimeSformer-Large
-python run_with_submitit_finetune_classification.py \
- --dataset egtea \
- --metadata-train datasets/EGTEA/train_split1.txt \
- --metadata-val datasets/EGTEA/test_split1.txt \
- --root datasets/EGTEA/cropped_clips/ \
- --pretrain-model $PATH \
- --num-classes 106 \
- --use-sgd --wd 4e-5 \
- --batch-size 4 \
- --use-checkpoint --nodes 4
-```
-### Evaluation
-```bash
-torchrun --nproc_per_node=1 \
- main_finetune_classification.py \
- --dataset egtea \
- --metadata-train datasets/EGTEA/train_split1.txt \
- --metadata-val datasets/EGTEA/test_split1.txt \
- --root datasets/EGTEA/cropped_clips/ \
- --output-dir $OUT_DIR \
- --pretrain-model $PATH \
- --num-classes 106 \
- --use-sgd --wd 4e-5 \
- --evaluate \
- --resume $FINETUNED_CHECKPOINT \
- --num-crops 3 --num-clips 10 \
- --use-half
-```
If you work with SQL Server databases, you may have heard of Red Gate Sql Compare 11, a powerful tool that helps you compare and deploy SQL Server schemas quickly and accurately. You may also have come across websites or files claiming to offer a crack for this software, which supposedly lets you use it for free or without a license. In this article, we will explain what a crack is, why using one is risky and illegal, and how to avoid it. We will also show you some alternatives to using a crack for Red Gate Sql Compare 11, so you can enjoy its features and benefits without compromising your security or integrity.

Before we dive into what a crack is, let's first understand what Red Gate Sql Compare 11 does. It is a software product developed by Red Gate Software, a leading provider of tools for working with databases and data, and is part of the SQL Toolbelt, a suite covering every aspect of SQL Server development and administration. Red Gate Sql Compare 11 compares and deploys SQL Server database schemas: the structures that define the tables, views, stored procedures, functions, triggers, indexes, constraints, and other objects in your database. By comparing two schemas, you can see what changed between them, down to individual lines of SQL code, and generate scripts to synchronize them, either applying the changes directly to the target database or saving them as files for later use. You can also compare backups, scripts folders, snapshots, or source control projects with live databases, or vice versa, which makes the tool useful for scenarios such as deploying schema changes between development, test, and production environments.

A crack is a type of software that modifies or bypasses the security features of another program, such as a license key, a serial number, a trial period, or a digital signature. A crack is usually created by hackers or crackers who reverse-engineer the original software and alter its code or behavior, and it is often distributed as a file or program downloaded from the internet or shared through peer-to-peer networks. People use cracks to avoid paying for software, but doing so is not only unethical and illegal, it is also risky and dangerous: it can expose you to legal troubles, security breaches, quality problems, and ethical dilemmas, and it deprives you of the features and benefits of the legitimate product. In the rest of this article, we focus on the positive side: how Red Gate Sql Compare 11 helps you compare and deploy SQL Server schemas easily and efficiently, what makes it stand out from similar tools, and how to get a free trial or buy a license.
Red Gate Sql Red Gate Sql Compare 11 helps you compare and deploy SQL Server schemas in a few simple steps: With Red Gate Sql Compare 11, you can compare and deploy SQL Server schemas in minutes, instead of hours or days. You can also automate the process by using the command line interface or the API of Red Gate Sql Compare 11. You can also integrate Red Gate Sql Compare 11 with your existing tools and workflows, such as Visual Studio, SSMS, PowerShell, or TeamCity. Red Gate Sql Compare 11 has many features and benefits that make it a superior tool for comparing and deploying SQL Server schemas. Here are some of them: With Red Gate Sql Compare 11, you can enjoy all these features and benefits without worrying about cracks or malware. You can also get a free trial or buy a license for Red Gate Sql Compare 11 in the next section. If you want to try Red Gate Sql Compare 11 for yourself, you can download a free trial from here. The free trial lasts for 14 days and gives you access to all the features and functions of Red Gate Sql Compare 11. You can also extend the trial period by contacting Red Gate Software. If you want to buy a license for Red Gate Sql Compare 11, you can do so from here. The price of a license depends on the number of users and servers that you need. You can also get discounts for bulk purchases or renewals. A license for Red Gate Sql Compare 11 includes: By buying a license for Red Gate Sql Compare 11, you are supporting the development and improvement of this software, as well as the software industry in general. You are also ensuring that you get the best quality and service from Red Gate Software. In case you have already installed or used a crack for Red Gate Sql Compare 11, or you suspect that someone else has done so on your computer, you need to take immediate action to detect and remove it. In this section, we will show you how to do that. There are some signs that can indicate that you have a crack for Red Gate Sql Compare 11 installed on your computer, such as: If you notice any of these signs, you should assume that you have a crack for Red Gate Sql Compare 11 installed on your computer and proceed to remove it as soon as possible. To remove a crack for Red Gate Sql Compare 11 from your computer, you need to follow these steps: By following these steps, you can remove a crack for Red Gate Sql Compare 11 from your computer and restore its normal functioning. You can also prevent future infections by following some best practices in the next section. To protect your computer from cracks and malware in the future, you need to follow some best practices, such as: By following these best practices, you can protect your computer from cracks and malware in the future. You can also enjoy the benefits of using legitimate software, such as Red Gate Sql Compare 11. If you are looking for alternatives to using a crack for Red Gate Sql Compare 11, you have some options to choose from. In this section, we will show you some free or low-cost alternatives to using a crack for Red Gate Sql Compare 11. We will also compare their advantages and disadvantages with Red Gate Sql Compare 11. Finally, we will help you choose the best alternative for your needs and budget. Some of the free or low-cost alternatives to using a crack for Red Gate Sql Compare 11 are: These are some of the free or low-cost alternatives to using a crack for Red Gate Sql Compare 11 that you can consider. However, they are not necessarily equivalent or superior to Red Gate Sql Compare 11. 
Each of these alternatives has its own advantages and disadvantages when compared to Red Gate Sql Compare 11: they tend to have fewer features, higher prices, lower reliability, or higher risks, and none of them matches its quality and performance. We therefore recommend sticking with Red Gate Sql Compare 11 as your tool of choice for comparing and deploying SQL Server schemas. If you still want to explore other options, consider factors such as the features you need, the size of your environment, the support you expect, and your budget; weighing them will help you narrow down your choices, but we still believe Red Gate Sql Compare 11 is the best option for most users who want to compare and deploy SQL Server schemas easily and efficiently.

In this article, we have discussed what a crack is and how to avoid it, shown how Red Gate Sql Compare 11 helps you compare and deploy SQL Server schemas quickly and accurately, highlighted the main features and benefits that make it stand out from similar tools, and explained how to get a free trial or buy a license. We hope it has helped you understand why using a crack for Red Gate Sql Compare 11 is not worth it: you put yourself at risk of legal troubles, security breaches, quality problems, and ethical dilemmas; you miss out on the features and benefits that make the tool valuable; and you harm the software vendor and the industry that provide you with valuable tools and services.

Instead of using a crack, use legitimate software that is licensed, updated, and supported by the vendor. You can also try free or low-cost alternatives, though they may not be as good or as reliable as Red Gate Sql Compare 11, and you should follow best practices to protect your computer from cracks and malware. By doing so, you can enjoy the advantages of Red Gate Sql Compare 11 without compromising your security or integrity, support the development and improvement of the software, and build trust and respect between yourself, the software vendor, and other users. To learn more about Red Gate Sql Compare 11, download a free trial, buy a license, or send questions or feedback, visit Red Gate Software's website. Thank you for reading this article; we hope you have found it useful and informative. Please share it with friends and colleagues who may be interested, and leave us a comment below to let us know what you think.
Here are some frequently asked questions about Red Gate Sql Compare 11 and cracks:

Q: What is the difference between a crack and a keygen? A: A crack is software that modifies or bypasses the security features of another program, such as a license key, a serial number, a trial period, or a digital signature. A keygen is software that generates valid license keys or serial numbers for another program. Both are illegal and risky to use, as they violate the terms and conditions of the software license agreement and may contain malware.

Q: Is it illegal to use a crack for Red Gate Sql Compare 11? A: Yes. It violates the software license agreement, a legally binding contract between you and the vendor; it infringes the vendor's intellectual property rights; and it potentially constitutes software piracy, a criminal offense in many countries. You could face legal action, such as lawsuits, fines, or even imprisonment, from the vendor or the authorities.

Q: Can a crack for Red Gate Sql Compare 11 damage my database or data? A: Yes. A crack can introduce errors, bugs, glitches, or conflicts that cause the software to malfunction, crash, freeze, or slow down, and it prevents you from receiving updates, patches, fixes, or support from the vendor. It can interfere with other programs or processes on your computer and cause compatibility or stability issues. It can also contain malware, such as viruses, worms, trojans, spyware, ransomware, or keyloggers, that can delete or encrypt your files, steal your personal or financial information, log your keystrokes, monitor your online activity, or take control of your computer.

Q: How can I contact Red Gate Software for support or feedback? A: Through their website, which also offers documentation, help articles, videos, forums, webinars, and events. You can also follow them on Twitter, Facebook, LinkedIn, and YouTube.

Q: Where can I find more information about Red Gate Sql Compare 11? A: On the product page of Red Gate's website, where you can also download a free trial or buy a license, and through Redgate University and the SQL Compare Learning Hub.
Learn the basic skills you need to generate outputs, build your own diffusion system, and train a diffusion model. If you are new to 🤗 Diffusers, this is the recommended place to start! Practical guides help you load pipelines, models, and schedulers, use pipelines for specific tasks, control how outputs are generated, optimize for inference speed, and apply different training techniques. You can also read about why the library is designed the way it is, its ethical guidelines and safety implementations, and technical descriptions of how 🤗 Diffusers classes and methods work.

If you are a fan of real-time strategy games, you might have heard of Command and Conquer Red Alert 3, a real-time strategy game developed by EA Los Angeles and published by Electronic Arts in 2008. It is the third installment in the Red Alert series, a spin-off of the Command and Conquer franchise, and is set in an alternate history in which World War III rages between three factions: the Allies, led by US President Howard T. Ackerman; the Soviet Union, led by Premier Anatoly Cherdenko; and the Empire of the Rising Sun, led by Emperor Yoshiro. Each faction has its own storyline, units, buildings, and special abilities. The game features a co-operative campaign mode, in which you team up with another player or an AI-controlled commander to complete missions across different locations around the world, and a competitive multiplayer mode, in which you battle other players online as any of the three factions. It received generally positive reviews from critics and players alike, who praised its gameplay, graphics, humor, and co-op mode.

To play the game, however, you need a registration code: a unique serial number that verifies your ownership. It is normally provided when you purchase the game from an official source, such as EA's website or Steam, must be entered when you install the game or launch it for the first time, and is also required for online features such as multiplayer mode and updates. Its purpose is to prevent piracy and unauthorized distribution of the game. But what if you don't have a registration code, lost it, or want to play without paying? Is there a way to crack the registration code and play for free? The answer is yes, but it comes with risks. In this article, we will describe two methods people use to get a registration code crack for Command and Conquer Red Alert 3, explain the pros and cons of each, and answer some frequently asked questions about the game and its registration code. Let's get started!
By requiring a registration code, EA hopes to ensure that only legitimate customers can play the game and enjoy its full features. However, some people may not have a registration code for various reasons, such as losing it, buying a second-hand copy of the game, downloading it from an unofficial source, or borrowing it from a friend. In that case, you might want to use a crack to bypass the registration code and play the game for free.

A crack is a modified version of the game's executable file, or a program that alters the game's behavior, to bypass the registration code. Cracks can be downloaded from various websites that offer pirated games or software, but using one is not without risks: malware infections, legal consequences, and broken or outdated game files among them. Using a crack is therefore not recommended and could have negative consequences for you and your computer; you should always use a legitimate registration code and purchase the game from an official source if you want to enjoy it safely and legally.

If you still want to use a crack to play Command and Conquer Red Alert 3 for free, despite the risks involved, there are two methods you can try. We do not endorse or support these methods and are not responsible for any damage or loss that may result from using them; use them at your own risk and discretion.

The first method uses a serial number generator, a program that generates random serial numbers that can be used as registration codes for various games or software. You can search for one on Google or other search engines and will find many websites claiming to offer such a program, but be careful: avoid suspicious links and untrusted sources, as some of these sites are scams or carry malware that could harm your computer. One site that offers a serial number generator for Command and Conquer Red Alert 3 is Smart Serials; after verifying that you are human by completing a captcha, you can download the generator as a ZIP file. Extract it with a program like WinRAR or 7-Zip, then run the executable named "Command & Conquer Red Alert 3 Keygen.exe". Click the "Generate" button, wait a few seconds, and the program will produce a random serial number of the form "XXXX-XXXX-XXXX-XXXX-XXXX"; select it and press Ctrl+C to copy it.

Now you can use the copied serial number to install or launch Command and Conquer Red Alert 3. If the game is already installed, launch it from its desktop or Start menu icon; if not, insert the game disc into your CD/DVD drive or mount the game image using a program like Daemon Tools or PowerISO.
When you install or launch the game, you will be prompted to enter your registration code. Paste the serial number you copied by pressing Ctrl+V, click "Done", and wait for the game to verify it. If the serial number is valid, you will be able to play without problems.

This method has drawbacks, however. The generator may not work for all versions of the game or all regions, so you may have to try different generators or serial numbers until one works. The generator itself may contain malware or viruses, so scan the file with an antivirus program before running it. It may also be incompatible with the latest updates or patches of the game, so you may have to disable your internet connection or firewall to stop the game from checking for updates or verifying your registration code online.

The second method uses a pre-cracked version of the game: a modified version that has already been cracked by someone else and does not require a registration code to play. You can find one on websites that offer pirated games or software, but again, avoid suspicious links and untrusted sources. One site that offers a pre-cracked version is MrPcGamer: click the "Go To Download" button, then choose a download link from one of the hosts (MegaUp.net or Mega.nz are faster and safer choices). You will have to download three parts of the game, each around 2.5 GB in size.

After downloading all three parts, extract them with WinRAR or 7-Zip. Inside you will find an ISO file named "Command.and.Conquer.Red.Alert.3.MULTi12-PROPHET", an image containing all the game's data; mount it with a program like Daemon Tools or PowerISO. A virtual CD/DVD drive will appear on your computer with the game's setup files: open it, run "Setup.exe", follow the on-screen instructions, and choose your preferred language and installation directory (the installation may take some time depending on your computer's speed). After installing, open the image again, go to its "PROPHET" folder, copy the files "ra3_1.12.game" and "ra3_1.12.game.bak", and paste them into your installation directory, usually "C:\Program Files (x86)\Electronic Arts\Command & Conquer Red Alert 3". When prompted, choose to replace or overwrite the existing files.
Now that you have installed and cracked the game, launch it from its desktop or Start menu icon; you will not be asked for a registration code and will be able to play without problems. This method also has drawbacks, however. Downloading a pre-cracked version is illegal, violates the game's end user license agreement (EULA), and could bring legal consequences if you are caught using or distributing it. It could expose your computer to malware or viruses, so scan the files with an antivirus program before installing. It can also prevent you from updating the game to the latest version or accessing online features such as multiplayer mode or patches, so you may miss out on new content, bug fixes, and improvements released by the developers.

In this article, we have shown you two methods to get a registration code crack for Command and Conquer Red Alert 3 and explained the pros and cons of each. We do not endorse or support these methods and are not responsible for any damage or loss that may result from using them; use them at your own risk and discretion. The best way to play is with a legitimate registration code and a copy purchased from an official source, such as EA's website or Steam. That way you can enjoy the game safely and legally, support the developers who worked hard to create it, access online features such as multiplayer mode and updates, and receive new content, bug fixes, and improvements.

Command and Conquer Red Alert 3 is a fun and exciting real-time strategy game that will keep you entertained for hours. Whether you play as the Allies, the Soviet Union, or the Empire of the Rising Sun, you will experience a unique storyline, units, buildings, and special abilities; you can team up with another player or an AI-controlled commander in the co-operative campaign or battle other players online in competitive multiplayer, all with stunning graphics and plenty of humor. We hope you enjoyed this article; if you have any questions or comments, feel free to leave them below. Thank you for reading!

You can also access more advanced options in DFX Audio Enhancer by clicking the Advanced button in the bottom right corner of the Processing tab, including options such as High Frequency Restoration, Harmonic Fidelity Restoration, and Spectrum Analyzer. DFX Audio Enhancer is very reliable and stable software that works smoothly with most audio devices and applications, but you may occasionally encounter issues or problems that affect your audio quality or performance.
Here are some tips on how to troubleshoot and optimize DFX Audio Enhancer. If you run into common issues such as distorted or missing audio, check your audio device settings and connections, make sure you are running the latest version of the software and your audio drivers, and try restarting the application or your PC; as a general best practice, keep the software updated and adjust the effects gradually rather than all at once.

In conclusion, DFX Audio Enhancer is a powerful piece of software that enhances the sound quality of your PC by applying various effects and features. It works with any media player, browser, or application that plays audio on your PC, and it can optimize your headphones, speakers, or other audio devices for the best sound experience. If you want to enjoy the best sound quality on your PC, you can download and install DFX Audio Enhancer 10.113.0.0 MASTER PACK [32 64] CORE setup free from this link: https://www.dfxaudioenhancersetupfree.com, a trusted and verified source that offers the latest version. You can also improve your audio by selecting from different modes and presets, adjusting the effects manually, and troubleshooting and optimizing the software as described above. I hope this article has been helpful and informative; if you have any questions or feedback, please leave a comment below. Thank you for reading!

Here are some frequently asked questions about DFX Audio Enhancer. Is it safe to use? Yes: it contains no viruses, malware, spyware, or adware, does not harm your PC or audio files in any way, and only enhances your sound quality. Is it free? Yes: you can download and install it for free from the link above. It has modest system requirements, can be uninstalled like any other Windows program through your system's app settings, and more information is available on its official websites.

If you are a fan of romantic Bollywood movies, you might want to download Lagu India Mann Mp3, a collection of songs from the 1999 film Mann. Mann is a remake of the 1957 classic An Affair to Remember, starring Aamir Khan and Manisha Koirala as two lovers who meet on a cruise and fall in love despite being engaged to other people. The film features some of the most popular songs of the late 90s, composed by Sanjeev Darshan and sung by Udit Narayan, Alka Yagnik, Anuradha Paudwal, and others.

You can download Lagu India Mann Mp3 from various websites that offer free or paid downloads of Bollywood songs. Be careful about the quality and legality of the downloads, however, as some websites may contain viruses or malware or violate copyright law. You can also stream the songs online on platforms like YouTube or Spotify. Lagu India Mann Mp3 is a great way to enjoy the music and emotions of Mann, one of the most successful and acclaimed films of Aamir Khan and Manisha Koirala. Download Lagu India Mann Mp3 today and relive the magic of Mann!

If you want to know more about the film, you can also watch it online or on DVD. Mann explores the themes of love, fate, sacrifice, and destiny.
It is a film that will make you laugh, cry and feel the emotions of the characters. Mann is a film that will stay with you for a long time. Mann is also a film that showcases the acting talents of Aamir Khan and Manisha Koirala, who have given some of their best performances in this film. Aamir Khan is known for his versatility and perfectionism, and he brings out the charm and charisma of his character Dev Karan Singh, a playboy who falls in love for the first time. Manisha Koirala is known for her beauty and grace, and she portrays the innocence and dignity of her character Priya Verma, a painter who is engaged to a wealthy businessman. The chemistry between the two actors is palpable and realistic, and they make you root for their love story.

Mann also has some memorable supporting characters, such as Rajeev (Anil Kapoor), Priya's fiancé who loves her unconditionally and respects her decision; Kamini (Sharmila Tagore), Dev's grandmother who encourages him to follow his heart; and Pooja (Rani Mukerji), Dev's friend who helps him reunite with Priya. The film also has some comic relief from Neeraj Vora, who plays Dev's friend and manager. Mann is a film that you should not miss if you are a fan of Bollywood romance. It is a film that will touch your heart and soul. Download Lagu India Mann Mp3 and watch Mann today!

If you are a fan of Telugu movies, you might have heard of Dammu, a 2012 action drama film starring Jr. NTR, Trisha and Karthika Nair. The film was a hit at the box office and received positive reviews from critics and audiences alike. But what if you missed watching it on the big screen or want to watch it again at your convenience? Well, you are in luck, because you can download Dammu Telugu movie in HD quality using an online player. In this article, we will tell you everything you need to know about Dammu Telugu movie: how to download it in dvdrip format using an online player, where to watch it online legally and for free or with a subscription, and some frequently asked questions related to the topic. So, without further ado, let's get started.

Download Zip ››››› https://tinourl.com/2uL4mq

Dammu (also known as Dhammu) is a 2012 Indian Telugu-language action film written and directed by Boyapati Srinu. The film follows Rama Chandra (Jr. NTR), an orphan who agrees to pose as the heir to a rich and powerful royal family called Suryavamsi, who are looking for an heir after their son dies in an accident. However, Rama Chandra soon finds himself in a conflict with another royal family called Chandravamsi, who have been their rivals for generations.

The film features Jr. NTR as Rama Chandra / Raja Vasireddy Vijayadwaja Srisimha, Trisha as Sathya, Karthika Nair as Neelaveni, Nassar as the Chandravamsi King, Brahmanandam as Jaanaki, Bhanupriya as Rama Chandra's mother, Venu Thottempudi as Rama Chandra's brother-in-law, Abhinaya, Hari Teja and Chitralekha as Rama Chandra's elder sisters, Suman as the Suryavamsi King / Raja Surya Pratap Singh, Kota Srinivasa Rao as the Raja of Veera Durgam Fort, Ahuti Prasad as Neelaveni's father, Ali as Rama Chandra's friend, Rahul Dev as a police officer, and Sampath Raj, Kishore and Venu Madhav as the Chandravamsi King's elder, middle and youngest sons.

The film was released theatrically on 27 April 2012 along with a Tamil dubbed version titled Singamagan. The film received mostly positive reviews from critics who praised Jr.
NTR's performance, the action sequences, the music by M.M. Keeravani and the cinematography by Arthur A. Wilson. The film was also a commercial success, grossing over ₹55 crore worldwide.

There are many reasons why you should watch Dammu Telugu movie if you haven't already or want to watch it again. Here are some of them:

If you want to download Dammu Telugu movie in HD quality using an online player, then follow these simple steps:

An online player is a software or website that allows you to stream or download movies online without having to install any additional software or plugins on your device. An online player usually has a large collection of movies from different genres, languages, countries and years that you can choose from. An online player also offers various features such as subtitles, multiple audio tracks, resume playback, speed control, etc.

Dvdrip format is a video file format that is ripped from a DVD source. Dvdrip format usually has a high-quality resolution (720p or 1080p), low file size (700 MB or 1 GB), compatibility with most devices (PCs, smartphones, tablets, etc.) and convenience (no need for DVD players or discs). There are many benefits of downloading movies in dvdrip format, such as:

However, there are also some drawbacks of downloading movies in dvdrip format, such as:

If you want to download movies safely and legally, then follow these tips:

If you don't want to download Dammu Telugu movie, then you can also watch it online legally and for free or with a subscription. Here are some of the websites and platforms where you can watch Dammu Telugu movie online:

ZEE5 is a video-on-demand platform that offers a variety of content in different languages, genres and formats. You can watch Dammu Telugu movie on ZEE5 for free with ads or with a subscription that starts from ₹99 per month. ZEE5 also offers features such as offline download, subtitles, multiple audio tracks, resume playback, etc. You can watch ZEE5 on your PC, smartphone, tablet, smart TV, etc.

YouTube is a video-sharing platform that allows users to upload, watch, share and comment on videos. You can watch Dammu Telugu movie on YouTube for free with ads or with a YouTube Premium subscription that costs ₹129 per month. YouTube also offers features such as offline download, subtitles, multiple audio tracks, resume playback, etc. You can watch YouTube on your PC, smartphone, tablet, smart TV, etc.

There are also other options to watch Dammu Telugu movie online, such as Netflix, Amazon Prime Video and Hotstar. However, these platforms require a subscription that ranges from ₹199 to ₹999 per month. These platforms also offer features such as offline download, subtitles, multiple audio tracks, resume playback, etc. You can watch these platforms on your PC, smartphone, tablet, smart TV, etc.

Dammu Telugu movie is a 2012 action drama film starring Jr. NTR, Trisha and Karthika Nair. The film is about an orphan who poses as the heir to a royal family and gets involved in a feud with another royal family. The film was a hit at the box office and received positive reviews from critics and audiences alike. You can download Dammu Telugu movie in HD quality using an online player or watch it online legally and for free or with a subscription. We hope this article has helped you to know more about Dammu Telugu movie and how to download or watch it online. If you have any questions or feedback, please feel free to leave them in the comments section below.
Here are some frequently asked questions and their answers related to Dammu Telugu movie and how to download or watch it online:

Call of Duty Modern Warfare 3 is a popular first-person shooter game that was released in 2011. However, some players may encounter an error message when they try to launch the game. The error message says that the game cannot run because it cannot see the Steam API, and that this is usually caused by not having a steam_appid.txt file, or by Steam itself not running. This error can be frustrating and prevent you from enjoying the game. Fortunately, there are some possible solutions that you can try to fix this issue. Here are some steps that you can follow:

Download File ► https://urlgoal.com/2uCJm7

If none of these steps work, you may need to contact Steam support or Activision support for further assistance. Hopefully, one of these solutions will help you fix the error and enjoy Call of Duty Modern Warfare 3 without any problems.

Call of Duty Modern Warfare 3 is a thrilling game that offers a variety of modes and features. You can play the single-player campaign, which follows the events of the previous games and takes you to different locations around the world. You can also play the multiplayer mode, which lets you compete with other players online in various modes such as Team Deathmatch, Domination, Search and Destroy, and more. You can also play the Spec Ops mode, which lets you team up with a friend or a bot and complete various missions and challenges. Call of Duty Modern Warfare 3 is a game that will keep you entertained for hours with its fast-paced action and immersive graphics. However, if you encounter the error "unable to create steam_appid.txt", you may not be able to enjoy the game at all. That's why it is important to follow the steps above and try to fix the issue as soon as possible. Once you do that, you can launch the game and have fun.

Call of Duty Modern Warfare 3 is one of the best-selling games of all time, and for good reason. It offers a thrilling and immersive experience that will make you feel like you are in the middle of a war. However, some players may face a frustrating error that prevents them from launching the game. The error says that the game cannot see the Steam API and that it is usually caused by not having a steam_appid.txt file or by Steam not running. If you are one of those players, don't worry. There are some simple solutions that you can try to fix this error and play the game without any issues. In this article, we will show you how to create a steam_appid.txt file, verify the integrity of the game files, run the game as an administrator, and contact support if none of these steps work. By following these steps, you should be able to fix the error and enjoy Call of Duty Modern Warfare 3 to the fullest.

I would like to keep it short... Remove all the songs. Of course, not the one starring Mahi Gill. The movie could have been shorter. Seriously, after the interval I had no motivation to see the other half. It was quite predictable that Raja Mishra would have his revenge in the second half and the movie would end. But I did not quit, and finally I was presented with a nice ending. Good thought has gone into the movie, and it has been seasoned with quite a bit of masala. I would say, on a Saturday when you are exhausted after working for the last five days, just go and watch the movie. The star cast has been chosen nicely and I totally appreciate the acting of Saif, Jimmy, Raj Babbar and Gulshan Grover... Just take the movie out of this world.
At the end of the movie, when Raja and Rudra are riding in a pickup, the truck fails. It suddenly turns into a rocket and they travel at a very high speed. The movie has the elements of good acting. Saif and Jimmy are simply fantastic. The chemistry between the two is brilliant. It's the third time I have seen Jimmy Shergill and Saif Ali Khan together. This time the movie is more commercial. Mythril has been playing his kind of character. I am quite satisfied with the movie. It gives one a feel of an average Indian crime plot in the US. As expected, nothing great has been done. I feel quite disappointed with the second half. It is a nice movie, but one is not compelled to go watch it.

Download File ✺✺✺ https://urlgoal.com/2uCMBn

This is another disappointing movie by Nana Patekar. He is back as Raja Mishra from the last movie, Bullet Raja (2013). So what makes him come back to play another Raja Mishra?

Frontschweine is a strategy game from 1999 that puts players in the role of anthropomorphic pigs fighting a war against other animal species. The game stands out for its black humor, its varied missions and its original weapons.

Download File ✔ https://urlgoal.com/2uCLIU

Now fans of this cult game can experience the adventure of the Frontschweine in an e-book that contains all the details of the story, the characters and the game mechanics. The e-book is available in EPUB format and can be read on any compatible device. Best of all, the e-book is completely free. Fans only need to visit the Chip website and click on the download link. There they will also find further information about the game and its developers. Frontschweine is a must for all lovers of strategy games and offbeat humor. The e-book offers the perfect opportunity to rediscover this masterpiece or to enjoy it for the first time.

The Frontschweine e-book contains not only the complete game manual but also plenty of background information about the world and the story of the game. Readers learn more about the different factions of pigs, their motives and their personalities. The other animal species that appear as enemies or allies of the pigs are also introduced. The e-book also offers an insight into the development of the game, its inspirations and its challenges. Readers can look at some concept drawings, sketches and screenshots that show how the game came into being. In addition, the e-book contains several interviews with the developers, who share their experiences and anecdotes. Frontschweine is an e-book that is both entertaining and informative. It is a homage to a game that has delighted many fans around the world. It is also a chance to learn more about the art and craft of video game design. The e-book is a gift for all Frontschweine lovers and for everyone who wants to become one.

The Frontschweine e-book is intended not only for fans of the game but also for anyone interested in strategy games in general. It explains the basics of the genre, its history and its evolution. It also shows how Frontschweine differs from other strategy games and what makes it so unique. The e-book also offers some tips and tricks for players who want to master the game.
It contains a detailed description of all the weapons, vehicles and items available in the game. There is also some advice on how to complete the various missions successfully. The e-book is a valuable resource for everyone who wants to get the most out of the game. The Frontschweine e-book is a comprehensive and fascinating work that illuminates the game in all its aspects. It is a read that will delight every strategy game fan. It is also a way to explore and understand the world of the Frontschweine. The e-book is a must for everyone who loves this legendary game or wants to get to know it.

Yeh Jawaani Hai Deewani is a 2013 Bollywood romantic comedy film starring Ranbir Kapoor, Deepika Padukone, Aditya Roy Kapur and Kalki Koechlin. The film follows the lives of four friends who go on a trip to Manali and then reunite after eight years. The film was a huge box office hit and received positive reviews from critics and audiences alike.

Download File ··· https://urlgoal.com/2uCKQ2

If you want to watch Yeh Jawaani Hai Deewani full movie online in HD quality, you have several options. You can either stream it on a legal platform like Netflix, Amazon Prime Video, Hotstar or Zee5, or you can download it from a torrent site like Filmywap, Tamilrockers or Movierulz. However, we do not recommend downloading from illegal sources, as it may harm your device and violate copyright laws. The best way to watch Yeh Jawaani Hai Deewani full movie online in HD quality is to use an HD online player that can play any video format without any hassle. One such player is VLC Media Player, which is free and easy to use. The steps to watch Yeh Jawaani Hai Deewani full movie online in HD quality using VLC Media Player are, in outline: open VLC, choose Media > Open Network Stream, paste the video's URL, and click Play.

We hope this article helped you watch Yeh Jawaani Hai Deewani full movie online in HD quality. If you liked this article, please share it with your friends and family who love Bollywood movies. Thank you for reading!

Yeh Jawaani Hai Deewani is a film that celebrates friendship, love and life. The film has many memorable scenes and dialogues that have become iconic in Bollywood. Some of the most popular scenes are the trekking scene where Bunny and Naina bond over their dreams, the wedding scene where Bunny and Naina dance to the song "Badtameez Dil", the airport scene where Bunny confesses his love to Naina and the final scene where they reunite in Udaipur. The film also has a stellar soundtrack composed by Pritam and sung by various artists like Arijit Singh, Benny Dayal, Shalmali Kholgade and Mohit Chauhan. The songs are catchy, romantic and energetic and suit the mood of the film. Some of the most popular songs are "Badtameez Dil", "Balam Pichkari", "Kabira", "Ilahi" and "Subhanallah". Yeh Jawaani Hai Deewani is a film that will make you laugh, cry and fall in love. It is a film that you can watch with your friends, family or partner and enjoy every moment of it. It is a film that will stay with you for a long time and make you nostalgic for your own adventures. It is a film that you should not miss.

Download Zip ····· https://tinurll.com/2uzmsB

Download Zip ===== https://gohhs.com/2uEzDr

Movavi Slideshow Maker Crack is an amazing tool for creating professional slideshow videos on your computer. Moreover, its advanced features make it easy and fast to produce a ready-to-play slideshow in the shape of a video, which we may then circulate on the web or among friends.
The program allows you to improve the quality of the inserted images, changing the lighting, colour, contrast and other available options. You may also crop and rotate the photographs as needed so that they fit better into your slideshow. A preview section is provided, so that you can keep an eye on the result at every step of the way. Photos, videos and audio files you load are stored in a panel which you can set to show just specific types.

DOWNLOAD ✏ ✏ ✏ https://gohhs.com/2uEzp4

Movavi Slideshow Maker Serial Key is a tool that makes creating a slideshow fast and straightforward. We obtain the slideshow in no time, and we may create it in standard or custom formats. You may make slideshows of extensive collections of photos, and you can crop and rotate pictures. The program additionally has flexible designing and editing tools. Movavi Slideshow Maker Serial Key places no limits on the number of photos that we can trim, cut, combine and manipulate into new videos. A very simple interface permits us to carry on quickly. A variety of video and photo editing tools are accessible, allowing us to make a gallery of the final results or to export them to the hard drive. Movavi Slideshow Maker Serial Key can also add synchronised music and modify brightness, contrast, colour and other options.

If you are a fan of first-person shooter games, you have probably heard of Counter Strike 1.6, one of the most popular and influential online games of all time. But how can you download it and play it online in 2023? In this article, we will show you how to do that in a few simple steps.

DOWNLOAD ===== https://ssurll.com/2uNZOE

Counter Strike 1.6 is a team-based game that pits two teams against each other: terrorists and counter-terrorists. The game was originally a mod for Half-Life, a game developed by Valve in 1998. The mod was created by two fans, Minh Le and Jess Cliffe, who released the first version in 1999. The mod became so popular that Valve hired them and acquired the rights to the game. Valve then released several updates and versions of the game, including Counter Strike 1.6 in 2003, which is considered the final and most stable version of the original game.

Counter Strike 1.6 is a game that focuses on realism, teamwork, and strategy. The game has several modes, such as bomb defusal, hostage rescue, assassination, and deathmatch. The game also has dozens of maps, each with its own layout, objectives, and tactics. The game also has a variety of weapons, ranging from pistols, rifles, shotguns, submachine guns, sniper rifles, and grenades to knives. Each weapon has its own characteristics, such as accuracy, recoil, damage, and price. The game also has a money system, where players earn money by killing enemies, completing objectives, or winning rounds. The money can be used to buy weapons, armor, or equipment at the beginning of each round.

The easiest and safest way to download Counter Strike 1.6 is through Steam, the digital distribution platform created by Valve. Steam allows you to buy, download, install, update, and play games on your computer. To download Counter Strike 1.6 through Steam, you need to do the following:

If you don't want to pay for the game or use Steam, you can also download Counter Strike 1.6 from other websites that offer free downloads of the game. However, you need to be careful when doing this, as some websites may contain viruses or malware that can harm your computer or steal your personal information.
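One basic precaution, whatever the source, is to compare the downloaded file's hash against a checksum published by the site you trust. Below is a minimal Python sketch; the filename and the expected hash are placeholders you would replace with your own values.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical values: use your real filename and the hash the site publishes.
downloaded_file = "cs16_setup.exe"
expected_hash = "0" * 64

actual = sha256_of(downloaded_file)
print("OK" if actual == expected_hash else f"MISMATCH: got {actual}")
```

A matching hash only proves the file is the one the publisher described, so it complements, rather than replaces, the antivirus scan recommended below.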
To download Counter Strike 1.6 from a website safely, you need to do the following:

However, you should be aware that downloading Counter Strike 1.6 from unofficial websites may have some risks, such as compatibility issues, outdated versions, or unwanted modifications. Therefore, we recommend that you always scan the file with an antivirus program before opening it and check the reviews and ratings of the website before downloading it. Alternatively, you can also try playing Counter Strike 1.6 in your browser, without downloading anything, by visiting CS-ONLINE.CLUB. This website allows you to play Counter Strike 1.6 online with other players, using only your browser and an internet connection. You don't need to register or pay anything to play on this website, but you may need to install some plugins or extensions to make it work properly.

Once you have downloaded and installed Counter Strike 1.6 on your computer, you can start playing it online by joining a server from the game menu. To do this, you need to do the following:

Another way to play Counter Strike 1.6 online is to find a server from a website that lists them. There are many websites that offer this service, such as Gametracker, Game-monitor, or CS-Servers. To find a server from a website, you need to do the following:

If you want to play Counter Strike 1.6 online with your friends or customize your own game settings, you can also create your own server. To do this, you need to do the following:

One of the most important skills for playing Counter Strike 1.6 online is to know the maps and the weapons well. Each map has its own layout, objectives, and strategies, so you need to learn them by playing them often or watching other players play them. You also need to know which weapons are best suited for each map, situation, and play style. For example, some weapons are more effective at long range, while others are better at close quarters. Some weapons are more accurate, while others have more recoil. Some weapons are more expensive, while others are more affordable. You need to experiment with different weapons and find the ones that work best for you.

Another essential skill for playing Counter Strike 1.6 online is to communicate with your teammates. Communication is key for teamwork, coordination, and strategy. You can communicate with your teammates by using voice chat or text chat in the game. You can also use commands or gestures to convey information or instructions. For example, you can use commands like "follow me", "cover me", "enemy spotted", or "need backup" to communicate with your teammates quickly and easily. You can also use gestures like pointing, nodding, or waving to communicate with your teammates without words. Communication can help you win more rounds and have more fun playing with your teammates.

The final skill for playing Counter Strike 1.6 online is to practice your aim and reflexes. Aim and reflexes are crucial for winning gunfights, eliminating enemies, and surviving. You can practice your aim and reflexes by playing the game regularly, training on aim maps, or using aim trainers. You can also improve your aim and reflexes by adjusting your mouse sensitivity, crosshair settings, and resolution to suit your preferences; a scripted example of such settings follows this paragraph. You can also watch professional players or streamers play the game and learn from their techniques and tips. Practicing your aim and reflexes can help you become a better player and enjoy the game more.
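Those client settings live in plain-text config files, so they are easy to script. The sketch below writes a minimal autoexec.cfg for Counter Strike 1.6; the cvar values shown are common community starting points rather than official recommendations, and the install path is a placeholder for wherever your cstrike folder actually lives.

```python
from pathlib import Path

# Placeholder path: point this at your actual cstrike directory.
cstrike_dir = Path(r"C:\Games\Half-Life\cstrike")

# Commonly tuned GoldSrc cvars; adjust the values to taste.
settings = {
    "fps_max": "100",        # frame rate cap
    "rate": "25000",         # bytes/second the client may receive
    "cl_cmdrate": "101",     # packets/second sent to the server
    "cl_updaterate": "101",  # updates/second requested from the server
    "sensitivity": "2.5",    # mouse sensitivity
}

lines = [f'{name} "{value}"' for name, value in settings.items()]
(cstrike_dir / "autoexec.cfg").write_text("\n".join(lines) + "\n")
print("Wrote", cstrike_dir / "autoexec.cfg")
```

The game executes autoexec.cfg on startup, so keeping your tuned values there means you can reinstall or move the game without redoing them by hand.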
Counter Strike 1.6 is a classic game that has been loved by millions of players for over two decades. It is a game that offers realism, teamwork, and strategy in a fast-paced and exciting way. If you want to download Counter Strike 1.6 and play it online in 2023, you can do so by following the steps we have outlined in this article. You can also improve your skills and have more fun by learning the maps and the weapons, communicating with your teammates, and practicing your aim and reflexes. We hope you found this article helpful and informative. Now go ahead and download Counter Strike 1.6 and play it online with your friends or other players around the world. Counter Strike 1.6 is still popular in 2023, despite being an old game. According to Steam Charts, Counter Strike 1.6 had an average of 16,000 players online in June 2023, making it one of the top 100 most played games on Steam. The game also has a loyal fan base and a vibrant community that creates and shares new content, such as maps, mods, skins, or servers. The system requirements for Counter Strike 1.6 are very low compared to modern games. According to Steam, the minimum system requirements are: The recommended system requirements are: Counter Strike: Global Offensive (CS:GO) is the latest version of Counter Strike, released by Valve in 2012. It is a modernized and improved version of the game, with new graphics, gameplay, features, modes, maps, weapons, skins, and more. Some of the main differences between Counter Strike 1.6 and CS:GO are: If you want to improve your performance and FPS (frames per second) in Counter Strike 1.6,you can try the following tips: If you want to find more information and resources about Counter Strike 1.6,you can visit the following websites: These websites can help you learn more about Counter Strike 1.6, its history, its features, its community, and its legacy. You can also find other websites that offer news, guides, forums, videos, and more about the game. If you are a fan of soccer games, you might have heard of Score Hero 2, a unique and immersive game that lets you control the action and become a soccer superstar. Unlike other soccer games, Score Hero 2 does not require you to play an entire match, but instead puts you in various situations where you need to score a goal or assist a teammate. You can also customize your hero, choose from over 90 real clubs, and take part in regular events for medals and glory. However, as fun as Score Hero 2 is, it also has some limitations that might frustrate you. For example, you need to spend money to buy items, upgrade your skills, or change your appearance. You also need energy to play each level, which can run out quickly if you fail or retry too often. And if you want to unlock all the levels and stories, you need to complete a lot of challenges and earn stars. DOWNLOAD ↔ https://ssurll.com/2uO02x Fortunately, there is a way to overcome these limitations and enjoy Score Hero 2 without any restrictions. And that is by using the Score Hero APK hack, a modified version of the game that gives you unlimited money and energy, as well as unlocks all the levels and stories. With this hack, you can buy anything you want, play as long as you want, and explore the game at your own pace. In this article, we will show you how to download and install the Score Hero APK hack, how to use its features, and some tips and tricks for playing Score Hero 2. So read on and get ready to score some amazing goals! 
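One practical pre-check before the sideloading steps below: confirm your device's Android version from your PC. This sketch assumes the Android platform tools are installed (so `adb` is on your PATH) and USB debugging is enabled on the device; it is a convenient option, not a required part of the process.

```python
import subprocess

def android_version() -> str:
    """Ask the connected device for its Android release via adb."""
    result = subprocess.run(
        ["adb", "shell", "getprop", "ro.build.version.release"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

print("Android version:", android_version())
```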
The first step to use the Score Hero APK hack is to download it from a reliable source. There are many websites that claim to offer the hack, but not all of them are safe or trustworthy. Some of them might contain viruses, malware, or spyware that can harm your device or steal your personal information. Therefore, you need to be careful and do some research before downloading anything.

One of the websites that we recommend is [AN1.com], which offers a variety of hacked games for Android devices. You can find the Score Hero APK hack by searching for "score hero mod" on their website. The latest version of the hack is 2.75, which was updated on February 15, 2023. The file size is 95.6 MB, which is not too large for most devices. Before downloading the file, make sure that you have enough storage space on your device. You can also check the comments section of the website to see what other users have said about the hack. If there are any complaints or issues reported, you might want to look for another source.

Once you have downloaded the file, you need to install it on your device. But before that, you need to enable unknown sources in your device settings. This is because the Score Hero APK hack is not from the official Google Play Store, and your device might block it by default. To enable unknown sources, follow these steps (the exact wording varies by Android version): open Settings, go to Security (or Apps on newer versions), and allow installs from unknown sources for your browser or file manager.

Now you are ready to install the Score Hero APK hack. To do that, locate the downloaded APK in your file manager, tap it, and confirm the installation. Congratulations, you have successfully installed the Score Hero APK hack on your device. Now you can launch the game and enjoy the hack features.

The Score Hero APK hack has three main features that will make your gaming experience more fun and easy. These are unlimited money, unlimited energy, and unlocked levels and stories. Let's see how to use each of them.

Money is the currency of Score Hero 2, which you can use to buy items, upgrade your skills, or change your appearance. You can earn money by playing levels, completing challenges, or watching ads. However, these methods are slow and tedious, and you might not have enough money to buy what you want. With the Score Hero APK hack, you don't have to worry about money anymore. You will have unlimited money from the start of the game, which means you can buy anything you want without any restrictions. You can access the shop by tapping on the cart icon in the top right corner of the screen. There you will find various categories of items, such as balls, boots, shirts, pants, hair, accessories, and more. You can also upgrade your skills by tapping on the star icon in the top left corner of the screen. There you can improve your shooting, passing, dribbling, speed, stamina, and more. To use unlimited money, just tap on any item or skill that you want to buy or upgrade and confirm your purchase. You will see that your money balance will not decrease at all, no matter how much you spend. This way, you can customize your hero and make him look and play like a pro.

Energy is another resource that you need to play Score Hero 2. Each level requires a certain amount of energy to play, which varies depending on the difficulty and length of the level. You can see how much energy a level requires by looking at the lightning icon below it. You start with a maximum of 10 energy points, which regenerate over time or by watching ads. However, energy can run out quickly if you play too many levels in a row or if you fail or retry too often.
When that happens, you have to wait for your energy to refill or watch ads to get more energy. This can be annoying and interrupt your gaming flow. With the Score Hero APK hack, you don't have to worry about energy anymore either. You will have unlimited energy from the start of the game, which means you can play as many levels as you want without any interruptions. You will see that your energy bar will always be full, no matter how much you play.

The last feature of the Score Hero APK hack is unlocked levels and stories. Score Hero 2 has over 600 levels and 20 stories that follow your hero's career from a rookie to a legend. However, not all of them are available from the beginning. You need to complete previous levels and earn stars to unlock new ones. This can be challenging and time-consuming, especially if some levels are too hard or require specific goals or conditions. You might get stuck on a level for a long time or miss out on some stories that interest you. With the Score Hero APK hack, you don't have to worry about unlocking levels and stories anymore either. You will have all of them unlocked from the start of the game, which means you can play any level or story that you want without any restrictions. You can access them by tapping on the map icon in the bottom left corner of the screen. There you will see a list of stories, each with a number of levels. You can tap on any story or level that you want to play and start the action. This way, you can explore the game at your own pace and enjoy the different scenarios and challenges that Score Hero 2 offers.

Now that you know how to use the Score Hero APK hack features, you might wonder how to play Score Hero 2 like a pro. Well, we have some tips and tricks for you that will help you score amazing goals and win awards. Here they are:

Scoring goals is the main objective of Score Hero 2, and it is also the most fun part. However, not all goals are equal. Some are more spectacular and rewarding than others. For example, you can score goals by using different techniques, such as curling, chipping, volleying, or heading. You can also score goals from different distances, angles, or positions. And you can score goals in different situations, such as free kicks, penalties, corners, or counterattacks. The game will reward you for scoring amazing goals by giving you awards, such as gold balls, trophies, medals, or stars. These awards will also help you unlock new items, skills, or stories. Therefore, you should always try to score amazing goals and win awards whenever possible. But how do you score amazing goals? Well, here are some tips:

Passing is another important aspect of Score Hero 2, as it allows you to create opportunities and assist your teammates. However, passing is not always easy, as you have to deal with defenders who will try to intercept your passes or tackle your hero. Therefore, you need to pass wisely and avoid defenders whenever possible. But how do you pass wisely and avoid defenders? Well, here are some tips:

Customizing your hero and improving your skills are two ways to make your hero stand out and perform better in Score Hero 2. You can change your hero's appearance, such as his hair, face, skin, clothes, and accessories. You can also upgrade your hero's skills, such as his shooting, passing, dribbling, speed, stamina, and more. But how do you customize your hero and improve your skills?
Well, here are some tips:

Score Hero 2 is a great game for soccer lovers who want to control the action and become a soccer superstar. However, it also has some limitations that might hinder your gaming experience. That's why we recommend using the Score Hero APK hack, a modified version of the game that gives you unlimited money and energy, as well as unlocks all levels and stories. With this hack, you can enjoy Score Hero 2 without any restrictions and have more fun and freedom. You can buy anything you want, play as long as you want, and explore the game at your own pace. You can also score amazing goals, pass wisely, and customize your hero.

In this article, we showed you how to download and install the Score Hero APK hack, how to use its features, and some tips and tricks for playing Score Hero 2. We hope you found this article helpful and informative. Now it's time for you to try the Score Hero APK hack and see for yourself how awesome it is. Download it now and start scoring some amazing goals!

Q1: Is the Score Hero APK hack safe to use? A1: Yes, the Score Hero APK hack is safe to use as long as you download it from a reliable source like [AN1.com]. However, you should always be careful when downloading anything from unknown sources and scan it with an antivirus app before installing it.

Q2: Do I need to root my device? A2: No, you don't need to root your device to use the Score Hero APK hack. The hack works on any Android device without requiring any special permissions or modifications.

Q3: Can I play online with the hack? A3: Yes, you can play online with the Score Hero APK hack. The hack does not affect your online connectivity or compatibility with other players. However, you should be careful not to abuse the hack features or cheat in online matches, as this might ruin the fun for others or get you reported.

Q4: Will I get banned for using the hack? A4: No, you will not get banned for using the Score Hero APK hack. The hack is undetectable by the game servers and does not interfere with your account data or progress. However, you should always use the hack responsibly and not brag about it or share it with others.

Q5: How do I update the hack? A5: You can update the Score Hero APK hack by visiting the same website where you downloaded it and looking for the latest version. You can also check the comments section of the website to see if there are any updates or news about the hack. To install the update, you need to follow the same steps as before, but make sure to uninstall the previous version of the hack first.

Arturia Brass VSTi RTAS V2.0.5 I is a software instrument that uses physical and acoustic modelling techniques to create realistic and versatile sounds of trumpet, trombone and saxophone. It was developed in collaboration with the IRCAM institute in Paris, and offers a range of features and benefits that make it stand out from other sample-based or phrase-based brass libraries. Some of the advantages of Arturia Brass VSTi RTAS V2.0.5 I are:

Download File » https://urlgoal.com/2uI9Xw

If you are looking for a virtual brass instrument that can deliver realistic, expressive and versatile sounds, Arturia Brass VSTi RTAS V2.0.5 I might be the perfect choice for you. You can download a free demo version from Arturia's website or buy the full version for $249. Arturia Brass VSTi RTAS V2.0.5 I is not only a great tool for creating realistic brass sounds, but also a fun and creative instrument that can inspire you to make new music. You can use it to play melodies, harmonies, riffs, loops or even entire songs with the help of the built-in sequencer and mixer.
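Because Brass responds to standard MIDI messages (the next paragraphs describe playing it from a keyboard, breath controller or mod wheel), it can also be driven from code. Below is a hedged Python sketch using the mido library with the python-rtmidi backend; the port name "Brass" is an assumption, so list your ports first and substitute whatever name your system actually exposes. CC2 is the conventional MIDI breath-controller number, CC1 the mod wheel.

```python
import time
import mido  # pip install mido python-rtmidi

# See which MIDI outputs exist on this system, then pick the one
# your Brass instance listens on -- "Brass" here is a placeholder.
print(mido.get_output_names())
out = mido.open_output("Brass")

out.send(mido.Message("note_on", note=60, velocity=96))  # middle C
for value in range(20, 120, 10):
    # Ramp CC2 (breath controller) to swell the note's dynamics.
    out.send(mido.Message("control_change", control=2, value=value))
    time.sleep(0.05)
out.send(mido.Message("note_off", note=60))
out.close()
```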
You can also use it to add some spice and flavor to your existing tracks by layering or replacing the brass parts with Arturia's unique sounds. One of the most impressive features of Arturia Brass VSTi RTAS V2.0.5 I is its ability to emulate the human expression and articulation of the brass players. You can control the dynamics, pitch, tone and effects of each instrument with your MIDI keyboard, breath controller, modulation wheel or other MIDI devices. You can also choose from different playing styles such as legato, staccato, portamento, glissando, vibrato, mute and more. You can even create your own custom playing styles by editing the parameters of each articulation. Another feature that sets Arturia Brass VSTi RTAS V2.0.5 I apart from other virtual brass instruments is its realistic and flexible sound engine. It uses physical modelling to simulate the physics and acoustics of the brass instruments, such as the shape and size of the mouthpiece, bore, bell and valves. It also uses acoustic modelling to simulate the sound of the room and the microphone placement. You can adjust these parameters to create different sounds and effects, such as changing the material of the instrument, adding a mute or a wah-wah pedal, changing the position of the microphone or adding some reverb or delay. Mohabbat is a 1997 Hindi romantic film starring Sanjay Kapoor, Madhuri Dixit and Akshaye Khanna. The film tells the story of Gaurav, a wealthy businessman who falls in love with Shweta, a singer and dancer. However, their relationship faces many obstacles, such as Shweta's past, Gaurav's family and a rival named Rohit. Download ☆☆☆☆☆ https://urlgoal.com/2uI9fg If you are looking for a film that combines romance, drama and music, Mohabbat is a great choice. The film has beautiful songs composed by Nadeem-Shravan and sung by Kumar Sanu, Alka Yagnik and Udit Narayan. The film also showcases Madhuri Dixit's stunning dance skills in songs like "Pyar Kiya Hai Chori Chori" and "O Baby Don't Break My Heart". Mohabbat is also one of the few Hindi films that have Somali subtitles. You can watch the film on YouTube[^1^] with Af Somali subtitles and enjoy the story of love and passion. Mohabbat is a film that will touch your heart and make you feel the emotions of the characters. The film has a twisty plot that keeps the audience engaged till the end. The film explores the themes of friendship, sacrifice, betrayal and destiny. The film also has some comic scenes that lighten the mood and provide relief from the intense drama. The film has a star-studded cast that delivers powerful performances. Madhuri Dixit shines as Shweta, the woman who is torn between two men who love her. Sanjay Kapoor plays Gaurav, the rich and generous man who is ready to do anything for his friend and his love. Akshaye Khanna plays Rohit, the loyal and brave man who faces many challenges in his life. The film was directed by Reema Rakesh Nath, who also wrote the story, screenplay and dialogues. The film was produced by Rakesh Nath, who is Madhuri Dixit's manager and close friend. The film was released on 19 September 1997 and received mixed reviews from critics and audiences. The film was praised for its music, cinematography and performances, but criticized for its slow pace, weak direction and predictable plot. The film was a moderate success at the box office, but failed to live up to the expectations of Madhuri Dixit's fans. Mohabbat is a film that will appeal to those who love romantic films with a touch of drama and suspense. 
The film has some memorable scenes and songs that will stay with you for a long time. The film is a tribute to the bond of friendship and the power of love. Mohabbat is a film that will make you cry, laugh and feel. Mohabbat is not only a film for Hindi speakers, but also for Somali speakers. The film has Somali subtitles that make it easy to follow the dialogues and songs. The film also has some aspects that Somali viewers can relate to, such as the importance of family, friendship and faith. The film also shows the diversity and beauty of Indian culture, which has some similarities and differences with Somali culture. Somali culture is rich and diverse, with influences from various regions and religions. Somalia is a country located in the Horn of Africa, bordered by Ethiopia, Djibouti, Kenya, the Gulf of Aden and the Indian Ocean. Somalia has a population of about 15 million people, most of whom are ethnic Somalis who speak Somali as their mother tongue. Somali is a Cushitic language that belongs to the Afro-Asiatic language family. Somali also has many loanwords from Arabic, Persian, English and Italian. Somalis are predominantly Sunni Muslims who follow the ShafiÊ¿i school of Islamic law. Islam plays a central role in Somali society and culture, influencing their values, norms and traditions. Somalis have a strong sense of community and hospitality, as well as a respect for elders and guests. Somalis also have a long history of poetry, oral literature and storytelling, which reflect their creativity and wisdom. Somalis are known for their love of music and dance, which are often performed at weddings, festivals and other social occasions. If you are looking for a powerful and easy-to-use software for CNC machining, you might want to check out Delcam Featurecam 2014 20 1 0 24 Torrent. This software is designed to help you create high-quality parts with minimal programming time and effort. In this article, we will review the features, benefits, and drawbacks of Delcam Featurecam 2014 20 1 0 24 Torrent, as well as provide you with a link to download it for free. Delcam Featurecam 2014 20 1 0 24 Torrent is a software package that allows you to create CNC programs for various types of machines, such as mills, lathes, turn-mills, wire EDMs, and multi-axis machines. It uses a feature-based approach that automatically recognizes the features of your CAD model and generates the optimal toolpaths for them. You can also customize the toolpaths using various options and parameters. Delcam Featurecam 2014 20 1 0 24 Torrent supports a wide range of file formats, such as IGES, STEP, DXF, DWG, STL, Parasolid, SolidWorks, Solid Edge, Inventor, and CATIA. Download ✫✫✫ https://cinurl.com/2uEZfT Some of the benefits of Delcam Featurecam 2014 20 1 0 24 Torrent are: Some of the drawbacks of Delcam Featurecam 2014 20 1 0 24 Torrent are: If you want to download Delcam Featurecam 2014 20 1
0 24 Torrent for free, you can follow the link below. However, please be aware that downloading and using pirated software is illegal and unethical, and may expose you to viruses, malware, or other security risks. We do not endorse or recommend downloading or using Delcam Featurecam 2014 20 1 0 24 Torrent without a valid license or permission from the developer. Use it at your own risk and responsibility.

Download Delcam Featurecam 2014 20 1 0 24 Torrent
Delcam Featurecam 2014 20 1 0 24 Torrent is a software package that can help you create CNC programs for various types of machines and operations. It has many features and benefits that can improve your machining quality and efficiency. However, it also has some drawbacks and limitations that you should be aware of before using it. Moreover, downloading and using pirated software is illegal and unethical, and may harm your computer or data. Therefore, we suggest that you purchase a legitimate copy of Delcam Featurecam 2014 from the official website or a trusted reseller. We hope that this article has given you some useful information and insights about Delcam Featurecam 2014 20 1 0 24 Torrent. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

In this course, Professor R. K. Shevgaonkar, Department of Electrical Engineering, IIT Bombay (NPTEL), gives 42 video lectures on the concepts of transmission lines and electromagnetic waves.

Download ❤ https://cinurl.com/2uEYoE

Download Zip ✪ https://cinurl.com/2uEXRg

Download Zip ✒ https://cinurl.com/2uEX8m

AutoCAD LT is a professional CAD software that allows you to create 2D and 3D designs with precision and detail. It is widely used by designers, engineers, architects, and other professionals who need to create technical drawings and plans.

Download ✔ https://bytlly.com/2uGkHA

If you are looking for a way to download AutoCAD LT 2010 32 bit for free, you may have some options depending on your situation. Here are some possible methods:

Therefore, we recommend that you only download AutoCAD LT 2010 32 bit from official or trusted sources. If you need more information or assistance, you can contact Autodesk support or visit their forums. Alternatively, you can consider upgrading to the latest version of AutoCAD LT that offers more features, compatibility, and security.

AutoCAD LT 2010 32 bit is a version of AutoCAD LT that was released in 2009. It is compatible with Windows XP, Vista, and 7 operating systems.
It has some features that make it a good choice for 2D drafting and design, such as: These are some of the reasons why you may want to download AutoCAD LT 2010 32 bit for free. However, you should also be aware of some of the limitations and drawbacks of this version, such as: Therefore, you should weigh the pros and cons of downloading AutoCAD LT 2010 32 bit for free before making your decision. You may also want to consider upgrading to the latest version of AutoCAD LT that offers more features, compatibility, and security. Download Zip ••• https://bytlly.com/2uGlxl DOWNLOAD ☆☆☆ https://bytlly.com/2uGkF1 iREB is a tool that allows you to bypass iTunes errors when restoring custom firmware on your iOS device. It uses the usb control msg exploit from 3.1.2 and the limera1n/steaks4uce exploit to put your device into a pwned DFU mode, which enables you to restore custom firmware without any errors. iH8sn0w is a well-known hacker and developer who created iREB, as well as other tools such as iBoot32Patcher, sn0wbreeze, and f0recast. He is also one of the members of the evad3rs team that released the evasi0n jailbreak for iOS 6.x. Download File ✦ https://bytlly.com/2uGm9G In this article, we will show you how to use iH8sn0w's iREB v3.1.2 for Windows to restore custom firmware on your iOS device. Custom firmware is not the same as the official firmware that comes with your device. Custom firmware is modified by third-party developers to add features, improve performance, and customize the user interface. There are many benefits of using custom firmware on your iOS device, such as: However, custom firmware also comes with some risks and drawbacks, such as: Therefore, before you decide to install custom firmware on your device, you should weigh the pros and cons carefully and do some research on the best custom firmware for your device and needs. You should also backup your data and follow the steps in this article precisely to avoid any problems. If you are a fan of dark fantasy and action games, you might have heard of Demon Hunter: Shadow World, a popular mobile game developed by EA Publishing. This game lets you unleash your inner warrior in a world invaded by demons, undead, and other creatures of the night. You can choose from different classes of hunters, each with their own unique skills and weapons, and fight against hordes of enemies in various modes and locations. You can also customize your hunter with various outfits, accessories, and upgrades to suit your style and preferences. However, as much as this game is fun and exciting, it also has some limitations that might hinder your enjoyment. For example, some features are locked behind a paywall, such as premium outfits, weapons, and skills. You also have to deal with ads, in-game purchases, and internet connection requirements. These factors can make the game less satisfying and more frustrating for some players. DOWNLOAD ••• https://bltlly.com/2uOkvt Fortunately, there is a way to overcome these limitations and enjoy the game to its fullest potential. You can download Demon Hunter: Shadow World Premium Mod APK, a modified version of the original game that offers additional benefits and enhancements that are not present in the official release. In this article, we will tell you more about the features and benefits of this mod apk, as well as how to download and install it on your device. Read on to find out more! Before we dive into the benefits of the mod apk version, let us first review the features of the original game. 
Demon Hunter: Shadow World is an action-packed mobile game that offers a super satisfying combat system, diverse content, and thrilling PvP battles. Here are some of the main features of this game: Demon Hunter: Shadow World features a super satisfying combat system that lets you slash, shoot, and cast skills with intuitive controls and responsive feedback. You can use combos, dodges, counters, and special moves to defeat your enemies and earn rewards. The game also has a dynamic camera that adjusts to your movements and actions, creating a cinematic experience. Demon Hunter: Shadow World allows you to choose from different classes of hunters, each with their own unique skills and weapons. You can play as a swordmaster, a gunslinger, a mage, or a priest, depending on your preference and playstyle. You can also customize your hunter with various outfits, accessories, and upgrades to suit your style and preferences. You can collect and craft different items and equipment to enhance your power and abilities. Demon Hunter: Shadow World offers a variety of modes and locations to explore and enjoy. You can play solo or co-op missions, challenge bosses and dungeons, or join guilds and events. You can also travel to different locations, such as forests, deserts, cities, and castles, each with their own enemies and secrets. The game has a rich and immersive story that unfolds as you progress through the game. * download demon hunter shadow world mod menu apk Demon Hunter: Shadow World also features a thrilling PvP mode that lets you test your skills and strategies against other players from around the world. You can join ranked matches, tournaments, or custom games, and compete for glory and rewards. You can also chat with other players, form alliances, or challenge rivals. The game has a fair and balanced matchmaking system that ensures you have a fun and competitive experience. Now that we have covered the features of the original game, let us move on to the benefits of the mod apk version. Demon Hunter: Shadow World Premium Mod APK is a modified version of the original game that offers additional benefits and enhancements that are not present in the official release. Here are some of the main benefits of this mod apk: One of the biggest benefits of Demon Hunter: Shadow World Premium Mod APK is that it gives you access to premium features for free. This means that you can unlock and use all the premium outfits, weapons, and skills that are otherwise locked behind a paywall. You can also enjoy unlimited VIP privileges, such as faster leveling, more rewards, and exclusive events. You can experience the game without any limitations or restrictions. Another benefit of Demon Hunter: Shadow World Premium Mod APK is that it removes all the ads and other annoyances that might interrupt your gameplay. This means that you can play the game without any pop-ups, banners, or videos that might distract you or slow down your device. You can also skip the loading screens, tutorials, and other unnecessary elements that might waste your time or resources. You can enjoy the game without any interruptions or frustrations. A third benefit of Demon Hunter: Shadow World Premium Mod APK is that it gives you unlimited resources and in-app purchases. This means that you can have unlimited coins, gems, energy, and other currencies that you can use to buy or upgrade anything in the game. 
You can also have unlimited access to all the items and equipment in the shop, as well as all the in-app purchases that might enhance your gameplay. You can enjoy the game without any worries or limitations.

A fourth benefit of Demon Hunter: Shadow World Premium Mod APK is that it allows you to play the game offline and on any device. This means that you can play the game without an internet connection, which is useful if you have a poor or unstable connection, or if you want to save your data or battery. You can also play the game on any device, regardless of its specifications or operating system. The mod apk is optimized to run smoothly and efficiently on any device.

Now that we have discussed the benefits of Demon Hunter: Shadow World Premium Mod APK, let us show you how to download and install it on your device. The process is simple and easy, but you need to follow some steps carefully to avoid any errors or issues. Here are the steps to download and install Demon Hunter: Shadow World Premium Mod APK:

The first step is to find a reliable source where you can download Demon Hunter: Shadow World Premium Mod APK. There are many websites that offer mod apk files for various games, but not all of them are trustworthy or safe. Some of them might contain viruses, malware, or spyware that might harm your device or steal your personal information. Therefore, you need to be careful and do some research before downloading anything from an unknown source. One of the best sources where you can download Demon Hunter: Shadow World Premium Mod APK is [text], a website that provides high-quality mod apk files for various games. This website is trusted by millions of users worldwide and has a reputation for being safe and secure. You can download Demon Hunter: Shadow World Premium Mod APK from this website without any worries or risks.

The second step is to download the mod apk file from [text]. To do this, you need to visit the website and search for Demon Hunter: Shadow World Premium Mod APK. You will see a list of results that match your query. You need to select the one that has the latest version of the mod apk file, which is usually the first or second result. You can also check the details and reviews of the mod apk file to make sure it is what you are looking for. Once you have selected the result, you will be redirected to a download page where you can see a download button. You need to click on the download button and wait for a few seconds until the download starts. The mod apk file will be saved in your device's download folder or any other location that you have specified.

The third step is to enable unknown sources on your device. This is a security setting that prevents you from installing apps from sources other than the official app store. However, since you are installing a mod apk file from an external source, you need to enable this setting to allow the installation. To do this, you typically open Settings, go to Security (or Apps on newer Android versions), and allow installs from unknown sources for your browser or file manager; the exact wording varies by device. Once you have enabled unknown sources on your device, you can proceed to the next step.

The fourth step is to install the mod apk file on your device. To do this, locate the downloaded file in your file manager, tap it, and confirm the installation when prompted. Once you have installed the mod apk file on your device, you can proceed to the final step.

The fifth and final step is to enjoy the game: simply launch it from your home screen or app drawer and start playing.
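As an aside for readers comfortable with the command line, steps three through five can also be done from a PC with adb instead of tapping through a file manager. A minimal sketch, assuming the Android platform tools are installed, USB debugging is enabled, and the APK filename is a placeholder:

```python
import subprocess

apk_path = "demon-hunter-mod.apk"  # hypothetical filename

# Confirm a device is attached before attempting the install.
subprocess.run(["adb", "devices"], check=True)

# -r reinstalls over an existing copy, keeping its data.
subprocess.run(["adb", "install", "-r", apk_path], check=True)
print("Installed", apk_path)
```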
However, if you want to enjoy the game to its fullest potential, you should download Demon Hunter: Shadow World Premium Mod APK, a modified version of the original game that offers additional benefits and enhancements that are not present in the official release. You can access premium features for free, remove ads and other annoyances, have unlimited resources and in-app purchases, and play offline and on any device. All you need to do is follow the steps we have outlined above and install the mod apk file on your device. You will be able to enjoy the game without any limitations or restrictions. If you are ready to unleash your inner warrior in a world invaded by demons, undead, and other creatures of the night, download Demon Hunter: Shadow World Premium Mod APK today and start playing!

A mod apk is a modified version of an original app or game that offers additional benefits and enhancements that are not present in the official release. A mod apk can unlock premium features, remove ads, provide unlimited resources, or add new content or functionality to an app or game.

Demon Hunter: Shadow World mod apk is safe to use as long as you download it from a reliable source, such as [text]. This website provides high-quality mod apk files for various games that are tested and verified by millions of users worldwide. You can download Demon Hunter: Shadow World mod apk from this website without any worries or risks.

The minimum requirements to play Demon Hunter: Shadow World mod apk are as follows:

To update Demon Hunter: Shadow World mod apk, you need to follow the same steps as you did to download and install it. Visit [text] and look for the latest version of the mod apk file, then download and install it on your device, replacing the previous version. You might also need to enable unknown sources on your device again if you disabled it after installing the mod apk file. You will then be able to enjoy the updated features and content of the game.

If you have any questions, feedback, or issues regarding Demon Hunter: Shadow World mod apk, you can contact the developer of the mod apk file through their email address, which is [text]. You can also visit their website, which is [text], to find more information and support for the mod apk file. The developer of the mod apk file is not affiliated with EA Publishing, the developer of the original game, so you should not contact them for any matters related to the mod apk file.

Do you love playing GTA San Andreas on your PC or console? Do you wish you could play it online with other players from around the world? If yes, then you should try GTA San Andreas Multiplayer (SAMP) on your Android device.

Download Zip ===> https://bltlly.com/2uOir6

GTA San Andreas Multiplayer (SAMP) is a mod for GTA San Andreas that allows you to play online with other players on dedicated servers. It was originally released for PC in 2006 and has since become one of the most popular multiplayer mods for GTA fans.

Why is GTA SAMP so popular among gamers? Because it gives you the freedom to play GTA San Andreas in any way you want. You can choose from hundreds of servers that offer different game modes, such as deathmatch, roleplay, racing, zombie survival, and more. You can also customize your character's appearance, clothes, weapons, and vehicles, and even create your own server with your own rules and settings.

What are the benefits of playing GTA SAMP on Android devices? Well, for one thing, you can play it anywhere and anytime you want.
You don't need a PC or a console to enjoy GTA SAMP online. You just need a compatible Android device, the original GTA San Andreas game installed, and an internet connection. You can also use touchscreen controls or external controllers to play GTA SAMP on your Android device. And don't worry about performance issues or battery drain, because GTA SAMP is optimized for low-end devices and battery saving mode.

If you are interested in playing GTA SAMP on your Android device, you will need to download the GTA SAMP Android APK file from a trusted source. This is a modified version of the original GTA San Andreas game that allows you to connect to GTA SAMP servers and play online with other players. Here are the requirements and steps to download and install GTA SAMP Android APK 2021.

Now that you know how to download and install GTA SAMP Android APK 2021, you might be wondering what features make it so appealing. Here are some of the main features of GTA SAMP Android APK 2021 that you can enjoy on your Android device.

The most obvious and exciting feature of GTA SAMP Android APK 2021 is the multiplayer mode. You can play with up to 1000 players on a single server, which is far more than the original GTA San Andreas game supports. You can also choose from different game modes, such as deathmatch, roleplay, racing, zombie survival, and more. Each game mode has its own rules, objectives, and challenges that you can try out. You can also chat with other players using text or voice messages, and make friends or enemies along the way.

Another feature of GTA SAMP Android APK 2021 is the customization. You can customize your character's appearance, clothes, weapons, and vehicles, and even create your own server with your own rules and settings. You can change your character's skin, hair, face, tattoos, accessories, and more. You can buy or steal different weapons, vehicles, and items from the game world or other players, and add mods and scripts to enhance your gameplay experience, such as new maps, vehicles, weapons, skins, and missions.

The last feature of GTA SAMP Android APK 2021 is the compatibility. You don't need a high-end device to play GTA SAMP: it is compatible with most Android devices running Android 4.0 or higher. It supports both touchscreen and external controllers, so you can choose the control scheme that suits you best, and it is optimized for low-end devices and battery saving mode, so you don't have to worry about performance issues or battery drain.

If you are looking for a digital audio workstation (DAW) software that can handle every aspect of music production, from recording to mixing to mastering, you might want to check out Cakewalk Sonar X2 Producer. This software is designed for professional musicians, producers, engineers, and composers who want to create high-quality music in any genre. In this article, we will review the features and benefits of Cakewalk Sonar X2 Producer, as well as its pros and cons, comparison with other DAWs, and pricing and availability. By the end of this article, you will have a better idea of whether or not Cakewalk Sonar X2 Producer is the right DAW for you.

Cakewalk Sonar X2 Producer is packed with features and tools that can help you create amazing music.
Here are some of the main features and benefits of this software:

Download ✵ https://urlcod.com/2uHyfr

The Skylight interface is one of the most distinctive features of Cakewalk Sonar X2 Producer. It is a user-friendly and flexible workspace that allows you to move seamlessly among different elements of music production, such as recording, editing, mixing, and mastering. You can customize the interface according to your preferences and workflow by docking, undocking, resizing, collapsing, or expanding any window or pane. You can also switch between different views, such as Track View, Piano Roll View, Staff View, and Console View, with a single click. The Skylight interface also features a Smart Grid that automatically adapts to your zoom level and snap settings, making it easier to align your clips and events.

The ProChannel is a modular and expandable channel strip that gives you complete control over your sound. It is available on every track and bus in Cakewalk Sonar X2 Producer, and it includes a gate, a compressor, an equalizer, a tube saturation module, and a reverb module. You can also add more modules, such as the Console Emulator, the Softube Saturation Knob, the BREVERB SONAR, and the R-MIX SONAR, to expand your sonic possibilities. The ProChannel also allows you to change the order of the modules, save and load presets, and copy and paste settings across tracks and buses.

Cakewalk Sonar X2 Producer comes with 20 virtual instruments that can help you create and edit sounds for any genre of music. It also comes with 59 audio and MIDI effects processors that can help you enhance your tracks.

The Matrix View is a feature that allows you to trigger and remix loops and clips in real time. You can drag and drop audio or MIDI clips from the Track View or the Browser into the Matrix cells, and then trigger them individually or in groups using your mouse, keyboard, or MIDI controller. You can also record your performance and capture it as a new track in the Track View. The Matrix View is ideal for creating variations, transitions, mash-ups, or live performances of your music.

The Smart Tool is a feature that allows you to perform multiple editing tasks with a single tool. Depending on where you position the cursor on the clip or event, the Smart Tool changes its function accordingly. For example, you can use the Smart Tool to select, move, split, trim, fade, slip-stretch, or draw envelopes on your clips or events. The Smart Tool also works with the Smart Grid to snap your edits to the grid lines.

The Console Emulator is a feature that allows you to add analog warmth and character to your mix. It is a ProChannel module that emulates the sound of three legendary analog consoles: the SSL 4000 G+, the Neve 88RS, and the API 3124. You can choose a different console model for each track or bus, and adjust the drive and crosstalk parameters to achieve the desired sonic flavor. The Console Emulator also features a trim control that lets you adjust the input gain of each channel.

The R-MIX SONAR is a feature that allows you to manipulate the frequency and panning of any audio source. It is a ProChannel module that uses Roland's proprietary V-Remastering technology to analyze and isolate different elements of an audio file. You can use R-MIX SONAR to adjust the volume, pan, pitch, or reverb of any frequency band or region of an audio file. You can also use it to remove or extract vocals, instruments, or noises from an audio file.

The FX Chains are a feature that allows you to create and save complex effect routings. You can insert multiple effects processors into a single FX Chain, and then adjust their parameters using a custom interface. You can also assign knobs, sliders, buttons, or switches to control multiple parameters at once. You can save your FX Chains as presets and recall them later for different tracks or projects, and even share them with other users online.

The Automation and Take Lanes are features that allow you to record and edit parameter changes and multiple takes. You can use automation to automate any parameter of any track or effect processor, such as volume, pan, mute, solo, send level, or effect bypass. You can record automation in real time using your mouse, keyboard, or MIDI controller, or draw automation curves using the Smart Tool. You can edit automation data using the Automation Lanes, which show the automation envelopes for each track or bus. You can use Take Lanes to record multiple takes of the same track, and then comp them together using the Smart Tool; the Take Lanes show the take clips for each track.

Cakewalk Sonar X2 Producer also allows you to share your music with the world. You can upload your tracks directly to SoundCloud, a popular online platform for music distribution and collaboration. You can also export your tracks as MusicXML files, a standard format for exchanging musical notation data, as audio files in various formats (WAV, MP3, WMA, OGG, FLAC, etc.), or as video files in various formats (AVI, WMV, MOV, etc.).

Cakewalk Sonar X2 Producer is a powerful and versatile DAW software that has many advantages, but also some disadvantages. Here are some of the pros and cons of this software:

Cakewalk Sonar X2 Producer is one of the many DAWs available in the market today. Each DAW has its own strengths and weaknesses, and different users may prefer different DAWs depending on their needs and preferences. Here are some of the main differences between Cakewalk Sonar X2 Producer and other popular DAWs:

Cakewalk Sonar X2 Producer is available for purchase online from the official Cakewalk website or from authorized dealers. The price is $499 USD, which is a reasonable price for a professional DAW software that offers so many features and tools. However, you can also find discounts and promotions from time to time, and you can download a free trial version from the official Cakewalk website that lets you use the software for 30 days with no limitations.

Cakewalk Sonar X2 Producer can handle every aspect of music production, from recording to mixing to mastering. It offers a user-friendly and flexible interface, a modular and expandable channel strip, a rich and diverse collection of virtual instruments and effects, real-time loop and clip triggering, multi-purpose editing with a single tool, frequency and panning manipulation, complex effect routings, automation and take lanes, and easy music sharing. However, it also has a steep learning curve, high system requirements, and limited compatibility.
It also differs from other popular DAWs in terms of audio recording and editing capabilities, virtual instruments and effects, interface and workflow, etc.

In conclusion, Cakewalk Sonar X2 Producer is a great DAW software for professional musicians, producers, engineers, and composers who want to create high-quality music in any genre. It is especially suitable for those who value sound quality, flexibility, diversity, creativity, and performance. However, it may not be the best DAW software for beginners or hobbyists who prefer simplicity, affordability, compatibility, or familiarity. Ultimately, the choice of DAW software depends on your personal needs and preferences.

If you are interested in buying Cakewalk Sonar X2 Producer or trying it out for free for 30 days, you can visit the official Cakewalk website or follow the links below:

We hope you enjoyed this article and learned something new about Cakewalk Sonar X2 Producer. If you have any questions or comments, feel free to leave them below. Thank you for reading!

Here are some of the frequently asked questions about Cakewalk Sonar X2 Producer:

The minimum system requirements for Cakewalk Sonar X2 Producer are:

The recommended system requirements for Cakewalk Sonar X2 Producer are:

If you own a previous version of Cakewalk Sonar, such as Sonar X1, Sonar 8.5, Sonar 8, etc., you can upgrade to Cakewalk Sonar X2 Producer at a discounted price. You can visit the official Cakewalk website or follow the link below to check the upgrade options and prices:

Cakewalk Sonar X2 Producer comes with a comprehensive user manual that explains all the features and functions of the software. You can access the user manual from the Help menu in the software, or download it from the official Cakewalk website via the link below:

Cakewalk Sonar X2 Producer also comes with a series of video tutorials that show you how to use the software in various scenarios. You can access the video tutorials from the Help menu in the software, or watch them online on the official Cakewalk website via the link below:

Cakewalk Sonar X2 Producer also has a large and active online community of users who can help you with any questions or problems you may have. You can join the official Cakewalk forums, where you can ask questions, share tips, exchange ideas, and get feedback from other users. You can also browse through the existing topics and posts to find answers to your questions. You can reach the official Cakewalk forums from the official Cakewalk website or via the link below:

If you are not satisfied with Cakewalk Sonar X2 Producer, or you want to try other DAWs, there are many alternatives available in the market. Some of the most popular alternatives are:

These are just some of the alternatives to Cakewalk Sonar X2 Producer. You can find more alternatives online or by asking other users for recommendations.

If you are a fan of Bollywood action movies, you might want to download 720p Maidan-E-Jung movie, a 1995 film directed by K.C. Bokadia and starring Dharmendra, Akshay Kumar, Karisma Kapoor and Jaya Prada. The movie is about a village head named Daata Guru (Amrish Puri) who rules over his people with an iron fist and exploits them for his own benefit.
He faces opposition from a young man named Karan (Akshay Kumar) who falls in love with his daughter Tulsi (Karisma Kapoor) and vows to free the villagers from his tyranny.

Download ☆ https://urlcod.com/2uHxd9

Download 720p Maidan-E-Jung movie to watch how Karan fights against Daata Guru and his henchmen with the help of his father Shankar (Dharmendra) and his mother Parvati (Jaya Prada). The movie is full of thrilling action scenes, melodious songs and emotional drama. You can download 720p Maidan-E-Jung movie from various online platforms that offer high-quality video and audio. However, you should be careful of the legal and ethical issues involved in downloading pirated content and respect the rights of the original creators.

One of the highlights of the movie is its music, composed by Bappi Lahiri and sung by various artists like Vinod Rathod, Udit Narayan, Kumar Sanu, Sadhana Sargam, Ila Arun and Gurdas Maan. The movie has six songs that range from romantic to folk to qawwali. Some of the popular songs are Aayo Phaganiyo[^1^], Shaam Dhal Rahi Hai[^2^] and Teetar Bole…Kiti Kiti[^3^]. The songs are well-choreographed and picturized on the lead actors and supporting cast.

Download 720p Maidan-E-Jung movie to enjoy the action-packed story of Karan and Tulsi's love and their struggle against Daata Guru's oppression. The movie has some memorable dialogues and performances by the veteran actors like Dharmendra, Jaya Prada and Amrish Puri. The movie also has some comic relief provided by Shakti Kapoor and Kader Khan, who play Daata Guru's loyal but foolish servants. The movie is a typical masala entertainer that will keep you hooked till the end.

Maidan-E-Jung was released on 14th April 1995 and faced stiff competition from other movies like Karan Arjun, Dilwale Dulhania Le Jayenge and Raja. The movie was made on a budget of ₹32.5 million and collected ₹72.3 million at the box office[^1^]. The movie was an average grosser and did not live up to the expectations of the audience and critics. It was criticized for its outdated plot, poor direction, weak screenplay and excessive violence, and it failed to utilize the star power of its cast.

Download 720p Maidan-E-Jung movie only if you are a die-hard fan of the actors or the genre. The movie is not a masterpiece of cinema but a typical 90s masala flick that has some moments of entertainment and nostalgia. It is not for everyone and may not appeal to the modern sensibilities of viewers. It is best enjoyed with a pinch of salt and a lot of suspension of disbelief.

main.py
-import streamlit as st
-
-from nets.envs import SCI
-
-
-st.set_page_config(
- page_title="HET_sci",
- menu_items={
- 'About':'https://advpropsys.github.io'
- }
-)
-
-st.title('HETfit_scientific')
-st.markdown("#### Imagine a package which was engineered primarly for data driven plasma physics devices design, mainly hall effect thrusters, yup that's it"
- "\n### :orange[Don't be scared away though, it has much simpler interface than anything you ever used for such designs]")
-st.markdown('### Main concepts:')
-st.markdown( "- Each observational/design session is called an **environment**, for now it can be either RCI or SCI (Real or scaled interface)"
- "\n In this overview we will only touch SCI, since RCI is using PINNs which are different topic"
- "\n- You specify most of the run parameters on this object init, :orange[**including generation of new samples**] via GAN"
- "\n- You may want to generate new features, do it !"
- "\n- Want to select best features for more effctive work? Done!"
- "\n- Compile environment with your model of choice, can be ***any*** torch model or sklearn one"
- "\n- Train !"
- "\n- Plot, inference, save, export to jit/onnx, measure performance - **they all are one liners** "
- )
-st.markdown('### tl;dr \n- Create environment'
- '\n```run = SCI(*args,**kwargs)```'
- '\n - Generate features ```run.feature_gen()``` '
- '\n - Select features ```run.feature_importance()```'
- '\n - Compile env ```run.compile()```'
- '\n - Train model in env ```run.train()```'
- '\n - Inference, plot, performance, ex. ```run.plot3d()```'
- '\n #### And yes, it all will work even without any additional arguments from the user besides column indexes'
- )
-st.write('Comparison with *arXiv:2206.04440v3*')
-col1, col2 = st.columns(2)
-col1.metric('Geometry accuracy on domain',value='83%',delta='15%')
-col2.metric('$d \mapsto h$ prediction',value='98%',delta='14%')
-
-st.header('Example:')
-
-st.markdown('Remember the indexes and column names in this example: $P$ - 1, $d$ - 3, $h$ - 3, $m_a$ - 6, $T$ - 7')
-st.code('run = SCI(*args,**kwargs)')
-
-run = SCI()
-st.code('run.feature_gen()')
-run.feature_gen()
-st.write('New features (indexes 0:22 are original samples, the rest are GAN generated):',run.df.iloc[1:,9:].astype(float))
-st.write('Most of the real dataset is from *doi:10.2514/1.B37424*, hence the results mostly agree with it on specifics')
-st.code('run.feature_importance(run.df.iloc[1:,1:7].astype(float),run.df.iloc[1:,7]) # Clear and easy example')
-
-st.write(run.feature_importance(run.df.iloc[1:,1:6].astype(float),run.df.iloc[1:,6]))
-st.markdown(' As we can see, only $h$ and $d$ passed for the $m_a$ model; not only was the linear dependency proven experimentally, but now we got it from a data-driven source')
-st.code('run.compile(idx=(1,3,7))')
-run.compile(idx=(1,3,7))
-st.code('run.train(epochs=10)')
-run.train(epochs=10)
-st.code('run.plot3d()')
-st.write(run.plot3d())
-st.code('run.performance()')
-st.write(run.performance())
-
-st.write('Try it out yourself! Select a column from 1 to 10')
-number = st.number_input('Here',min_value=1, max_value=10, step=1)
-
-if number:
- st.code(f'run.compile(idx=(1,3,{number}))')
- run.compile(idx=(1,3,number))
- st.code('run.train(epochs=10)')
- run.train(epochs=10)
- st.code('run.plot3d()')
- st.write(run.plot3d())
-
-
-
-st.markdown('In this intro we covered the simplest user flow with the HETFit package; the resulting data can be used to leverage PINN and analytical models of Hall effect thrusters'
- '\n #### :orange[To cite, please contact the author at https://github.com/advpropsys]')
-
-
🔥ChatGPT API 🚀Streaming🚀
"""
-description = """Language models can be conditioned to act like dialogue agents through a conversational prompt that typically takes the form:
-```
-User: Duplicate the Space and run securely with your OpenAI API Key
Archmodels Vol 123
-
-Oct 23, 2018 · 3d modeling software, modeller and general home 3d modeling software for beginners to advanced users. Home 3D modelers software make 3d models, which can be used in various. Bedroom sets 3d modeler - the newest fully functional 3d modeler for your home 3d design and 3d modeling. 3D modeling and 3d rendering software for beginners and professionals. Find a variety of 3D modeling and rendering tools and apps to help you create your own 3D models or 3D designs.
-
-The following is a list of a few apps that you can use for 3D modeling, designing and rendering. Mac 3d modeling apps. Rendering software for macOS 3d modeling apps.
-
-However if your printer isn't used for 3d printing, it would have a whole different application in the bedroom. To get started with 3D printing, you'll want to learn 3D printing software, and to get your head. 3d modeling software, modeller and general home 3d modeling software for beginners to advanced users. Home 3D modelers software make 3d models, which can be used in various.
-
-3D modeling, rendering and imaging software. Free download. Bedroom set 3d modeler - the newest fully functional 3d modeler for your home 3d design and 3d modeling. 3D modeling and 3d rendering software for beginners and professionals. Find a variety of 3D modeling and rendering tools and apps to help you create your own 3D models or 3D designs.
-
-Since they're generally known for their pencils, pencil modelling softwares are probably among the most frequently used software for rendering images. Modeling, rendering, and animation are key parts of animation and character design. When building models, it's essential to choose a tool that can be used effectively and efficiently.
-
-See our list of best 3d modeling software for more popular choices. Bedroom set 3d modeler - the newest fully functional 3d modeler for your home 3d design and 3d modeling. 3D modeling and 3d rendering software for beginners and professionals. Find a variety of 3D modeling and rendering tools and apps to help you create your own 3D models or 3D designs.
-
-First round of 3D renders. For additional resources, check out our list of recommended apps and software. Free 3D modelling software is available to everyone with a PC. This is the kind of software that provides a feature set comparable 4fefd39f24
-
-
-
diff --git a/spaces/blossom618/text_generator/App.py b/spaces/blossom618/text_generator/App.py
deleted file mode 100644
index 51bf358eea0ad0221716c05225ea3f5307cd0f19..0000000000000000000000000000000000000000
--- a/spaces/blossom618/text_generator/App.py
+++ /dev/null
@@ -1,10 +0,0 @@
-import gradio as gr
-from transformers import pipeline
-
-generator = pipeline('text-generation', model='gpt2')
-
-def generate(text):
-    result = generator(text)
-    return result[0]['generated_text']
-
-gr.Interface(fn=generate, inputs=gr.inputs.Textbox(), outputs=gr.outputs.Textbox()).launch()
\ No newline at end of file
diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/diffusion/4_bands_base_32khz.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/diffusion/4_bands_base_32khz.py
deleted file mode 100644
index f7e67bcc89dd0c8e50d770e600b55f179fe19588..0000000000000000000000000000000000000000
--- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/grids/diffusion/4_bands_base_32khz.py
+++ /dev/null
@@ -1,27 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Training of the 4 diffusion models described in
-"From Discrete Tokens to High-Fidelity Audio Using Multi-Band Diffusion"
-(paper link).
-"""
-
-from ._explorers import DiffusionExplorer
-
-
-@DiffusionExplorer
-def explorer(launcher):
- launcher.slurm_(gpus=4, partition='learnfair')
-
- launcher.bind_({'solver': 'diffusion/default',
- 'dset': 'internal/music_10k_32khz'})
-
- with launcher.job_array():
- launcher({'filter.use': True, 'filter.idx_band': 0, "processor.use": False, 'processor.power_std': 0.4})
- launcher({'filter.use': True, 'filter.idx_band': 1, "processor.use": False, 'processor.power_std': 0.4})
- launcher({'filter.use': True, 'filter.idx_band': 2, "processor.use": True, 'processor.power_std': 0.4})
- launcher({'filter.use': True, 'filter.idx_band': 3, "processor.use": True, 'processor.power_std': 0.75})
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/export/caffe2_patch.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/export/caffe2_patch.py
deleted file mode 100644
index 2da70ae34e31dfe1a2ab4d5625a3e2b096aa5c7f..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/export/caffe2_patch.py
+++ /dev/null
@@ -1,189 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import contextlib
-from unittest import mock
-import torch
-
-from detectron2.modeling import poolers
-from detectron2.modeling.proposal_generator import rpn
-from detectron2.modeling.roi_heads import keypoint_head, mask_head
-from detectron2.modeling.roi_heads.fast_rcnn import FastRCNNOutputLayers
-
-from .c10 import (
- Caffe2Compatible,
- Caffe2FastRCNNOutputsInference,
- Caffe2KeypointRCNNInference,
- Caffe2MaskRCNNInference,
- Caffe2ROIPooler,
- Caffe2RPN,
- caffe2_fast_rcnn_outputs_inference,
- caffe2_keypoint_rcnn_inference,
- caffe2_mask_rcnn_inference,
-)
-
-
-class GenericMixin(object):
- pass
-
-
-class Caffe2CompatibleConverter(object):
- """
- A GenericUpdater which implements the `create_from` interface by modifying
- the module object and assigning it another class, replaceCls.
- """
-
- def __init__(self, replaceCls):
- self.replaceCls = replaceCls
-
- def create_from(self, module):
- # update module's class to the new class
- assert isinstance(module, torch.nn.Module)
- if issubclass(self.replaceCls, GenericMixin):
- # replaceCls should act as mixin, create a new class on-the-fly
- new_class = type(
- "{}MixedWith{}".format(self.replaceCls.__name__, module.__class__.__name__),
- (self.replaceCls, module.__class__),
- {}, # {"new_method": lambda self: ...},
- )
- module.__class__ = new_class
- else:
- # replaceCls is complete class, this allow arbitrary class swap
- module.__class__ = self.replaceCls
-
- # initialize Caffe2Compatible
- if isinstance(module, Caffe2Compatible):
- module.tensor_mode = False
-
- return module
-
-
-def patch(model, target, updater, *args, **kwargs):
- """
- recursively (post-order) update all modules of the target type and its
- subclasses, performing initialization/composition/inheritance/... via the
- updater.create_from.
- """
- for name, module in model.named_children():
- model._modules[name] = patch(module, target, updater, *args, **kwargs)
- if isinstance(model, target):
- return updater.create_from(model, *args, **kwargs)
- return model
-
-
-def patch_generalized_rcnn(model):
- ccc = Caffe2CompatibleConverter
- model = patch(model, rpn.RPN, ccc(Caffe2RPN))
- model = patch(model, poolers.ROIPooler, ccc(Caffe2ROIPooler))
-
- return model
-
-
-@contextlib.contextmanager
-def mock_fastrcnn_outputs_inference(
- tensor_mode, check=True, box_predictor_type=FastRCNNOutputLayers
-):
- with mock.patch.object(
- box_predictor_type,
- "inference",
- autospec=True,
- side_effect=Caffe2FastRCNNOutputsInference(tensor_mode),
- ) as mocked_func:
- yield
- if check:
- assert mocked_func.call_count > 0
-
-
-@contextlib.contextmanager
-def mock_mask_rcnn_inference(tensor_mode, patched_module, check=True):
- with mock.patch(
- "{}.mask_rcnn_inference".format(patched_module), side_effect=Caffe2MaskRCNNInference()
- ) as mocked_func:
- yield
- if check:
- assert mocked_func.call_count > 0
-
-
-@contextlib.contextmanager
-def mock_keypoint_rcnn_inference(tensor_mode, patched_module, use_heatmap_max_keypoint, check=True):
- with mock.patch(
- "{}.keypoint_rcnn_inference".format(patched_module),
- side_effect=Caffe2KeypointRCNNInference(use_heatmap_max_keypoint),
- ) as mocked_func:
- yield
- if check:
- assert mocked_func.call_count > 0
-
-
-class ROIHeadsPatcher:
- def __init__(self, heads, use_heatmap_max_keypoint):
- self.heads = heads
- self.use_heatmap_max_keypoint = use_heatmap_max_keypoint
- self.previous_patched = {}
-
- @contextlib.contextmanager
- def mock_roi_heads(self, tensor_mode=True):
- """
- Patching several inference functions inside ROIHeads and its subclasses
-
- Args:
- tensor_mode (bool): whether the inputs/outputs are caffe2's tensor
- format or not. Default to True.
- """
- # NOTE: this requires the `keypoint_rcnn_inference` and `mask_rcnn_inference`
- # are called inside the same file as BaseXxxHead due to using mock.patch.
- kpt_heads_mod = keypoint_head.BaseKeypointRCNNHead.__module__
- mask_head_mod = mask_head.BaseMaskRCNNHead.__module__
-
- mock_ctx_managers = [
- mock_fastrcnn_outputs_inference(
- tensor_mode=tensor_mode,
- check=True,
- box_predictor_type=type(self.heads.box_predictor),
- )
- ]
- if getattr(self.heads, "keypoint_on", False):
- mock_ctx_managers += [
- mock_keypoint_rcnn_inference(
- tensor_mode, kpt_heads_mod, self.use_heatmap_max_keypoint
- )
- ]
- if getattr(self.heads, "mask_on", False):
- mock_ctx_managers += [mock_mask_rcnn_inference(tensor_mode, mask_head_mod)]
-
- with contextlib.ExitStack() as stack: # python 3.3+
- for mgr in mock_ctx_managers:
- stack.enter_context(mgr)
- yield
-
- def patch_roi_heads(self, tensor_mode=True):
- self.previous_patched["box_predictor"] = self.heads.box_predictor.inference
- self.previous_patched["keypoint_rcnn"] = keypoint_head.keypoint_rcnn_inference
- self.previous_patched["mask_rcnn"] = mask_head.mask_rcnn_inference
-
- def patched_fastrcnn_outputs_inference(predictions, proposal):
- return caffe2_fast_rcnn_outputs_inference(
- True, self.heads.box_predictor, predictions, proposal
- )
-
- self.heads.box_predictor.inference = patched_fastrcnn_outputs_inference
-
- if getattr(self.heads, "keypoint_on", False):
-
- def patched_keypoint_rcnn_inference(pred_keypoint_logits, pred_instances):
- return caffe2_keypoint_rcnn_inference(
- self.use_heatmap_max_keypoint, pred_keypoint_logits, pred_instances
- )
-
- keypoint_head.keypoint_rcnn_inference = patched_keypoint_rcnn_inference
-
- if getattr(self.heads, "mask_on", False):
-
- def patched_mask_rcnn_inference(pred_mask_logits, pred_instances):
- return caffe2_mask_rcnn_inference(pred_mask_logits, pred_instances)
-
- mask_head.mask_rcnn_inference = patched_mask_rcnn_inference
-
- def unpatch_roi_heads(self):
- self.heads.box_predictor.inference = self.previous_patched["box_predictor"]
- keypoint_head.keypoint_rcnn_inference = self.previous_patched["keypoint_rcnn"]
- mask_head.mask_rcnn_inference = self.previous_patched["mask_rcnn"]
diff --git a/spaces/camenduru-com/riffusion-api/README.md b/spaces/camenduru-com/riffusion-api/README.md
deleted file mode 100644
index bd2838089b8594792d6f3f341230a88d2fdddb30..0000000000000000000000000000000000000000
--- a/spaces/camenduru-com/riffusion-api/README.md
+++ /dev/null
@@ -1,8 +0,0 @@
----
-title: Riffusion App API
-emoji: ⚙
-colorFrom: grey
-colorTo: grey
-sdk: docker
-pinned: false
----
diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/aiofiles/os.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/aiofiles/os.py
deleted file mode 100644
index 29bc748fa91a6d3de6ec42842416de6af7134f5c..0000000000000000000000000000000000000000
--- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/aiofiles/os.py
+++ /dev/null
@@ -1,51 +0,0 @@
-"""Async executor versions of file functions from the os module."""
-import os
-
-from . import ospath as path
-from .ospath import wrap
-
-__all__ = [
- "path",
- "stat",
- "statvfs",
- "rename",
- "renames",
- "replace",
- "remove",
- "unlink",
- "mkdir",
- "makedirs",
- "rmdir",
- "removedirs",
- "link",
- "symlink",
- "readlink",
- "listdir",
- "scandir",
- "access",
- "sendfile",
- "wrap",
-]
-
-
-stat = wrap(os.stat)
-rename = wrap(os.rename)
-renames = wrap(os.renames)
-replace = wrap(os.replace)
-remove = wrap(os.remove)
-unlink = wrap(os.unlink)
-mkdir = wrap(os.mkdir)
-makedirs = wrap(os.makedirs)
-rmdir = wrap(os.rmdir)
-removedirs = wrap(os.removedirs)
-link = wrap(os.link)
-symlink = wrap(os.symlink)
-readlink = wrap(os.readlink)
-listdir = wrap(os.listdir)
-scandir = wrap(os.scandir)
-access = wrap(os.access)
-
-if hasattr(os, "sendfile"):
- sendfile = wrap(os.sendfile)
-if hasattr(os, "statvfs"):
- statvfs = wrap(os.statvfs)
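-
-# A minimal usage sketch (ours, not part of the original module): each wrapped
-# function runs its os counterpart in the default thread-pool executor and is
-# awaited from a coroutine.
-#
-#   import asyncio
-#   async def main():
-#       info = await stat(".")
-#       print(info.st_mode)
-#   asyncio.run(main())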
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/doc/DENSEPOSE_IUV.md b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/doc/DENSEPOSE_IUV.md
deleted file mode 100644
index de158e0eea0c287507b701376abc9307ce92c0f1..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/doc/DENSEPOSE_IUV.md
+++ /dev/null
@@ -1,627 +0,0 @@
-# Chart-based Dense Pose Estimation for Humans and Animals
-
-## Overview
-
-The goal of chart-based DensePose methods is to establish dense correspondences
-between image pixels and a 3D object mesh by splitting the latter into charts and estimating
-for each pixel the corresponding chart index `I` and local chart coordinates `(U, V)`.
-
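-A minimal sketch of this representation (ours, for illustration only), assuming
-`K = 24` body charts and a toy crop size:
-
-```
-import numpy as np
-
-H, W, K = 256, 192, 24   # crop height/width and number of charts
-# Per-pixel chart index I: 0 is background, 1..K picks a chart of the mesh.
-I = np.random.randint(0, K + 1, size=(H, W))
-# Per-pixel local chart coordinates (U, V), each in [0, 1].
-U = np.random.rand(H, W)
-V = np.random.rand(H, W)
-# The dense correspondence for pixel (y, x) is the triple (I, U, V):
-y, x = 100, 96
-print(I[y, x], U[y, x], V[y, x])
-```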
-
-### Improved Baselines, Original Fully Convolutional Head
-
-These models use an improved training schedule and Panoptic FPN head from [Kirillov et al, 2019](https://arxiv.org/abs/1901.02446).
-
-| Name | lr sched | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | segm AP | dp. AP GPS | dp. AP GPSm | model id | download |
-| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
-| R_50_FPN_s1x_legacy | s1x | 0.307 | 0.051 | 3.2 | 58.1 | 58.2 | 52.1 | 54.9 | 164832157 | model \| metrics |
-| R_101_FPN_s1x_legacy | s1x | 0.390 | 0.063 | 4.3 | 59.5 | 59.3 | 53.2 | 56.0 | 164832182 | model \| metrics |
-
-### Improved Baselines, DeepLabV3 Head
-
-These models use an improved training schedule, Panoptic FPN head from [Kirillov et al, 2019](https://arxiv.org/abs/1901.02446) and DeepLabV3 head from [Chen et al, 2017](https://arxiv.org/abs/1706.05587).
-
-| Name | lr sched | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | segm AP | dp. AP GPS | dp. AP GPSm | model id | download |
-| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
-| R_50_FPN_s1x | s1x | 0.359 | 0.066 | 4.5 | 61.2 | 67.2 | 63.7 | 65.3 | 165712039 | model \| metrics |
-| R_101_FPN_s1x | s1x | 0.428 | 0.079 | 5.8 | 62.3 | 67.8 | 64.5 | 66.2 | 165712084 | model \| metrics |
-
-### Baselines with Confidence Estimation
-
-These models perform additional estimation of confidence in regressed UV coordinates, along the lines of [Neverova et al., 2019](https://papers.nips.cc/paper/8378-correlated-uncertainty-for-learning-dense-correspondences-from-noisy-labels).
-
-| Name | lr sched | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | segm AP | dp. AP GPS | dp. AP GPSm | model id | download |
-| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
-| R_50_FPN_DL_s1x | s1x | 0.392 | 0.070 | 6.7 | 61.1 | 68.3 | 65.6 | 66.7 | 165712097 | model \| metrics |
-| R_101_FPN_DL_s1x | s1x | 0.478 | 0.083 | 7.0 | 62.3 | 68.7 | 66.3 | 67.6 | 165712116 | model \| metrics |
-
-Acronyms:
-
-`WC1`: with confidence estimation model type 1 for `U` and `V`
-
-`WC2`: with confidence estimation model type 2 for `U` and `V`
-
-### Baselines with Mask Confidence Estimation
-
-Models that perform estimation of confidence in regressed UV coordinates
-as well as confidences associated with coarse and fine segmentation,
-see [Sanakoyeu et al., 2020](https://arxiv.org/pdf/2003.00080.pdf) for details.
-
-| Name | lr sched | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | segm AP | dp. AP GPS | dp. AP GPSm | model id | download |
-| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
-| R_50_FPN_WC1_s1x | s1x | 0.353 | 0.064 | 4.6 | 60.5 | 67.0 | 64.2 | 65.4 | 173862049 | model \| metrics |
-| R_50_FPN_WC2_s1x | s1x | 0.364 | 0.066 | 4.8 | 60.7 | 66.9 | 64.2 | 65.7 | 173861455 | model \| metrics |
-| R_50_FPN_DL_WC1_s1x | s1x | 0.397 | 0.068 | 6.7 | 61.1 | 68.1 | 65.8 | 67.0 | 173067973 | model \| metrics |
-| R_50_FPN_DL_WC2_s1x | s1x | 0.410 | 0.070 | 6.8 | 60.8 | 67.9 | 65.6 | 66.7 | 173859335 | model \| metrics |
-| R_101_FPN_WC1_s1x | s1x | 0.435 | 0.076 | 5.7 | 62.5 | 67.6 | 64.9 | 66.3 | 171402969 | model \| metrics |
-| R_101_FPN_WC2_s1x | s1x | 0.450 | 0.078 | 5.7 | 62.3 | 67.6 | 64.8 | 66.4 | 173860702 | model \| metrics |
-| R_101_FPN_DL_WC1_s1x | s1x | 0.479 | 0.081 | 7.9 | 62.0 | 68.4 | 66.2 | 67.2 | 173858525 | model \| metrics |
-| R_101_FPN_DL_WC2_s1x | s1x | 0.491 | 0.082 | 7.6 | 61.7 | 68.3 | 65.9 | 67.2 | 173294801 | model \| metrics |
-
-Acronyms:
-
-`WC1M`: with confidence estimation model type 1 for `U` and `V` and mask confidence estimation
-
-`WC2M`: with confidence estimation model type 2 for `U` and `V` and mask confidence estimation
-
-### Bootstrapping Baselines
-
-Master and student models trained using the bootstrapping pipeline with chimpanzee as the target category,
-see [Sanakoyeu et al., 2020](https://arxiv.org/pdf/2003.00080.pdf)
-and [Bootstrapping Pipeline](BOOTSTRAPPING_PIPELINE.md) for details.
-Evaluation is performed on [DensePose Chimps](DENSEPOSE_DATASETS.md#densepose-chimps) dataset.
-
-| Name | lr sched | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | segm AP | dp. AP GPS | dp. AP GPSm | model id | download |
-| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
-| R_50_FPN_WC1M_s1x | s1x | 0.381 | 0.066 | 4.8 | 60.6 | 66.7 | 64.0 | 65.4 | 217144516 | model \| metrics |
-| R_50_FPN_WC2M_s1x | s1x | 0.342 | 0.068 | 5.0 | 60.7 | 66.9 | 64.2 | 65.5 | 216245640 | model \| metrics |
-| R_50_FPN_DL_WC1M_s1x | s1x | 0.371 | 0.068 | 6.0 | 60.7 | 68.0 | 65.2 | 66.7 | 216245703 | model \| metrics |
-| R_50_FPN_DL_WC2M_s1x | s1x | 0.385 | 0.071 | 6.1 | 60.8 | 68.1 | 65.0 | 66.4 | 216245758 | model \| metrics |
-| R_101_FPN_WC1M_s1x | s1x | 0.423 | 0.079 | 5.9 | 62.0 | 67.3 | 64.8 | 66.0 | 216453687 | model \| metrics |
-| R_101_FPN_WC2M_s1x | s1x | 0.436 | 0.080 | 5.9 | 62.5 | 67.4 | 64.5 | 66.0 | 216245682 | model \| metrics |
-| R_101_FPN_DL_WC1M_s1x | s1x | 0.453 | 0.079 | 6.8 | 62.0 | 68.1 | 66.4 | 67.1 | 216245771 | model \| metrics |
-| R_101_FPN_DL_WC2M_s1x | s1x | 0.464 | 0.080 | 6.9 | 61.9 | 68.2 | 66.1 | 67.1 | 216245790 | model \| metrics |
-
-Acronyms:
-
-`WC1M`: with confidence estimation model type 1 for `U` and `V` and mask confidence estimation
-
-`Atop10P`: humans and animals from the 10 best suitable categories are used for training
-
-`CA`: class agnostic training, where all annotated instances are mapped into a single category
-
-`B_<...>`: schedule with bootstrapping with the specified results sampling strategy
-
-Note:
-
-The relaxed `dp. APex GPS` metric was used in
-[Sanakoyeu et al., 2020](https://arxiv.org/pdf/2003.00080.pdf) to evaluate DensePose
-results. This metric considers matches at thresholds 0.2, 0.3 and 0.4 in addition
-to the standard ones used in the evaluation protocol. The minimum threshold is
-controlled by the `DENSEPOSE_EVALUATION.MIN_IOU_THRESHOLD` config option.
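-
-For instance, a sketch (ours) of lowering this threshold, assuming the usual
-detectron2 + DensePose setup where `add_densepose_config` registers the
-DensePose config options:
-
-```
-from detectron2.config import get_cfg
-from densepose import add_densepose_config
-
-cfg = get_cfg()
-add_densepose_config(cfg)
-# Count relaxed matches from IoU 0.2 upward, as in Sanakoyeu et al., 2020.
-cfg.DENSEPOSE_EVALUATION.MIN_IOU_THRESHOLD = 0.2
-```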
-
-### License
-
-All models available for download are licensed under the
-[Creative Commons Attribution-ShareAlike 3.0 license](https://creativecommons.org/licenses/by-sa/3.0/)
-
-## References
-
-If you use chart-based DensePose methods, please take the references from the following
-BibTeX entries:
-
-DensePose bootstrapping pipeline:
-```
-@InProceedings{Sanakoyeu2020TransferringDensePose,
- title = {Transferring Dense Pose to Proximal Animal Classes},
- author = {Artsiom Sanakoyeu and Vasil Khalidov and Maureen S. McCarthy and Andrea Vedaldi and Natalia Neverova},
- journal = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
- year = {2020},
-}
-```
-
-DensePose with confidence estimation:
-```
-@InProceedings{Neverova2019DensePoseConfidences,
- title = {Correlated Uncertainty for Learning Dense Correspondences from Noisy Labels},
- author = {Neverova, Natalia and Novotny, David and Vedaldi, Andrea},
- journal = {Advances in Neural Information Processing Systems},
- year = {2019},
-}
-```
-
-Original DensePose:
-```
-@InProceedings{Guler2018DensePose,
- title={DensePose: Dense Human Pose Estimation In The Wild},
- author={R{\i}za Alp G\"uler and Natalia Neverova and Iasonas Kokkinos},
- journal={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
- year={2018}
-}
-```
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/PointSup/point_sup/point_utils.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/PointSup/point_sup/point_utils.py
deleted file mode 100644
index eed876ea9e0127c584c008bd5aab3e16e2c8c66a..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/PointSup/point_sup/point_utils.py
+++ /dev/null
@@ -1,77 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-import torch
-
-from detectron2.layers import cat
-
-
-def get_point_coords_from_point_annotation(instances):
- """
- Load point coords and their corresponding labels from point annotation.
-
- Args:
- instances (list[Instances]): A list of N Instances, where N is the number of images
- in the batch. These instances are in 1:1
- correspondence with the pred_mask_logits. The ground-truth labels (class, box, mask,
- ...) associated with each instance are stored in fields.
- Returns:
- point_coords (Tensor): A tensor of shape (N, P, 2) that contains the coordinates of P
- sampled points.
- point_labels (Tensor): A tensor of shape (N, P) that contains the labels of P
- sampled points. `point_labels` takes 3 possible values:
- - 0: the point belongs to background
- - 1: the point belongs to the object
- - -1: the point is ignored during training
- """
- point_coords_list = []
- point_labels_list = []
- for instances_per_image in instances:
- if len(instances_per_image) == 0:
- continue
- point_coords = instances_per_image.gt_point_coords.to(torch.float32)
- point_labels = instances_per_image.gt_point_labels.to(torch.float32).clone()
- proposal_boxes_per_image = instances_per_image.proposal_boxes.tensor
-
- # Convert point coordinate system, ground truth points are in image coord.
- point_coords_wrt_box = get_point_coords_wrt_box(proposal_boxes_per_image, point_coords)
-
- # Ignore points that are outside predicted boxes.
- point_ignores = (
- (point_coords_wrt_box[:, :, 0] < 0)
- | (point_coords_wrt_box[:, :, 0] > 1)
- | (point_coords_wrt_box[:, :, 1] < 0)
- | (point_coords_wrt_box[:, :, 1] > 1)
- )
- point_labels[point_ignores] = -1
-
- point_coords_list.append(point_coords_wrt_box)
- point_labels_list.append(point_labels)
-
- return (
- cat(point_coords_list, dim=0),
- cat(point_labels_list, dim=0),
- )
-
-
-def get_point_coords_wrt_box(boxes_coords, point_coords):
- """
- Convert image-level absolute coordinates to box-normalized [0, 1] x [0, 1] point coordinates.
- Args:
- boxes_coords (Tensor): A tensor of shape (R, 4) that contains bounding box coordinates.
- point_coords (Tensor): A tensor of shape (R, P, 2) that contains
- image-normalized coordinates of P sampled points.
- Returns:
- point_coords_wrt_box (Tensor): A tensor of shape (R, P, 2) that contains
- [0, 1] x [0, 1] box-normalized coordinates of the P sampled points.
- """
- with torch.no_grad():
- point_coords_wrt_box = point_coords.clone()
- point_coords_wrt_box[:, :, 0] -= boxes_coords[:, None, 0]
- point_coords_wrt_box[:, :, 1] -= boxes_coords[:, None, 1]
- point_coords_wrt_box[:, :, 0] = point_coords_wrt_box[:, :, 0] / (
- boxes_coords[:, None, 2] - boxes_coords[:, None, 0]
- )
- point_coords_wrt_box[:, :, 1] = point_coords_wrt_box[:, :, 1] / (
- boxes_coords[:, None, 3] - boxes_coords[:, None, 1]
- )
- return point_coords_wrt_box
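-
-
-# A small usage sketch (ours, not part of the original module): a point at the
-# center of a box maps to (0.5, 0.5) in box-normalized coordinates.
-#
-#   boxes = torch.tensor([[10.0, 20.0, 110.0, 220.0]])   # (R=1, 4), XYXY
-#   points = torch.tensor([[[60.0, 120.0]]])             # (R=1, P=1, 2)
-#   get_point_coords_wrt_box(boxes, points)              # tensor([[[0.5, 0.5]]])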
diff --git a/spaces/catundchat/tts_cn/modules.py b/spaces/catundchat/tts_cn/modules.py
deleted file mode 100644
index 289f4e3bdc7e1c783766b4c20bdf4475e65c932b..0000000000000000000000000000000000000000
--- a/spaces/catundchat/tts_cn/modules.py
+++ /dev/null
@@ -1,522 +0,0 @@
-import copy
-import math
-import numpy as np
-import scipy
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm
-
-import commons
-from commons import init_weights, get_padding
-from transforms import piecewise_rational_quadratic_transform
-
-
-LRELU_SLOPE = 0.1
-
-
-class LayerNorm(nn.Module):
- def __init__(self, channels, eps=1e-5):
- super().__init__()
- self.channels = channels
- self.eps = eps
-
- self.gamma = nn.Parameter(torch.ones(channels))
- self.beta = nn.Parameter(torch.zeros(channels))
-
- def forward(self, x):
- x = x.transpose(1, -1)
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
- return x.transpose(1, -1)
-
-
-class ConvReluNorm(nn.Module):
- def __init__(
- self,
- in_channels,
- hidden_channels,
- out_channels,
- kernel_size,
- n_layers,
- p_dropout,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.hidden_channels = hidden_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
- assert n_layers > 1, "Number of layers should be larger than 0."
-
- self.conv_layers = nn.ModuleList()
- self.norm_layers = nn.ModuleList()
- self.conv_layers.append(
- nn.Conv1d(
- in_channels, hidden_channels, kernel_size, padding=kernel_size // 2
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout))
- for _ in range(n_layers - 1):
- self.conv_layers.append(
- nn.Conv1d(
- hidden_channels,
- hidden_channels,
- kernel_size,
- padding=kernel_size // 2,
- )
- )
- self.norm_layers.append(LayerNorm(hidden_channels))
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask):
- x_org = x
- for i in range(self.n_layers):
- x = self.conv_layers[i](x * x_mask)
- x = self.norm_layers[i](x)
- x = self.relu_drop(x)
- x = x_org + self.proj(x)
- return x * x_mask
-
-
-class DDSConv(nn.Module):
- """
- Dilated and Depth-Separable Convolution
- """
-
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0):
- super().__init__()
- self.channels = channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.p_dropout = p_dropout
-
- self.drop = nn.Dropout(p_dropout)
- self.convs_sep = nn.ModuleList()
- self.convs_1x1 = nn.ModuleList()
- self.norms_1 = nn.ModuleList()
- self.norms_2 = nn.ModuleList()
- for i in range(n_layers):
- dilation = kernel_size**i
- padding = (kernel_size * dilation - dilation) // 2
- self.convs_sep.append(
- nn.Conv1d(
- channels,
- channels,
- kernel_size,
- groups=channels,
- dilation=dilation,
- padding=padding,
- )
- )
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
- self.norms_1.append(LayerNorm(channels))
- self.norms_2.append(LayerNorm(channels))
-
- def forward(self, x, x_mask, g=None):
- if g is not None:
- x = x + g
- for i in range(self.n_layers):
- y = self.convs_sep[i](x * x_mask)
- y = self.norms_1[i](y)
- y = F.gelu(y)
- y = self.convs_1x1[i](y)
- y = self.norms_2[i](y)
- y = F.gelu(y)
- y = self.drop(y)
- x = x + y
- return x * x_mask
-
-
-class WN(torch.nn.Module):
- def __init__(
- self,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- p_dropout=0,
- ):
- super(WN, self).__init__()
- assert kernel_size % 2 == 1
- self.hidden_channels = hidden_channels
- self.kernel_size = (kernel_size,)
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
- self.p_dropout = p_dropout
-
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
- self.drop = nn.Dropout(p_dropout)
-
- if gin_channels != 0:
- cond_layer = torch.nn.Conv1d(
- gin_channels, 2 * hidden_channels * n_layers, 1
- )
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight")
-
- for i in range(n_layers):
- dilation = dilation_rate**i
- padding = int((kernel_size * dilation - dilation) / 2)
- in_layer = torch.nn.Conv1d(
- hidden_channels,
- 2 * hidden_channels,
- kernel_size,
- dilation=dilation,
- padding=padding,
- )
- in_layer = torch.nn.utils.weight_norm(in_layer, name="weight")
- self.in_layers.append(in_layer)
-
- # last one is not necessary
- if i < n_layers - 1:
- res_skip_channels = 2 * hidden_channels
- else:
- res_skip_channels = hidden_channels
-
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight")
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, x, x_mask, g=None, **kwargs):
- output = torch.zeros_like(x)
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
-
- if g is not None:
- g = self.cond_layer(g)
-
- for i in range(self.n_layers):
- x_in = self.in_layers[i](x)
- if g is not None:
- cond_offset = i * 2 * self.hidden_channels
- g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :]
- else:
- g_l = torch.zeros_like(x_in)
-
- acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor)
- acts = self.drop(acts)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- res_acts = res_skip_acts[:, : self.hidden_channels, :]
- x = (x + res_acts) * x_mask
- output = output + res_skip_acts[:, self.hidden_channels :, :]
- else:
- output = output + res_skip_acts
- return output * x_mask
-
- def remove_weight_norm(self):
- if self.gin_channels != 0:
- torch.nn.utils.remove_weight_norm(self.cond_layer)
- for l in self.in_layers:
- torch.nn.utils.remove_weight_norm(l)
- for l in self.res_skip_layers:
- torch.nn.utils.remove_weight_norm(l)
-
-
-class ResBlock1(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
- super(ResBlock1, self).__init__()
- self.convs1 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[2],
- padding=get_padding(kernel_size, dilation[2]),
- )
- ),
- ]
- )
- self.convs1.apply(init_weights)
-
- self.convs2 = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=1,
- padding=get_padding(kernel_size, 1),
- )
- ),
- ]
- )
- self.convs2.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c1, c2 in zip(self.convs1, self.convs2):
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c1(xt)
- xt = F.leaky_relu(xt, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c2(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs1:
- remove_weight_norm(l)
- for l in self.convs2:
- remove_weight_norm(l)
-
-
-class ResBlock2(torch.nn.Module):
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
- super(ResBlock2, self).__init__()
- self.convs = nn.ModuleList(
- [
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[0],
- padding=get_padding(kernel_size, dilation[0]),
- )
- ),
- weight_norm(
- Conv1d(
- channels,
- channels,
- kernel_size,
- 1,
- dilation=dilation[1],
- padding=get_padding(kernel_size, dilation[1]),
- )
- ),
- ]
- )
- self.convs.apply(init_weights)
-
- def forward(self, x, x_mask=None):
- for c in self.convs:
- xt = F.leaky_relu(x, LRELU_SLOPE)
- if x_mask is not None:
- xt = xt * x_mask
- xt = c(xt)
- x = xt + x
- if x_mask is not None:
- x = x * x_mask
- return x
-
- def remove_weight_norm(self):
- for l in self.convs:
- remove_weight_norm(l)
-
-
-class Log(nn.Module):
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
- logdet = torch.sum(-y, [1, 2])
- return y, logdet
- else:
- x = torch.exp(x) * x_mask
- return x
-
-
-class Flip(nn.Module):
- def forward(self, x, *args, reverse=False, **kwargs):
- x = torch.flip(x, [1])
- if not reverse:
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
- return x, logdet
- else:
- return x
-
-
-class ElementwiseAffine(nn.Module):
- def __init__(self, channels):
- super().__init__()
- self.channels = channels
- self.m = nn.Parameter(torch.zeros(channels, 1))
- self.logs = nn.Parameter(torch.zeros(channels, 1))
-
- def forward(self, x, x_mask, reverse=False, **kwargs):
- if not reverse:
- y = self.m + torch.exp(self.logs) * x
- y = y * x_mask
- logdet = torch.sum(self.logs * x_mask, [1, 2])
- return y, logdet
- else:
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
- return x
-
-
-class ResidualCouplingLayer(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=0,
- gin_channels=0,
- mean_only=False,
- ):
- assert channels % 2 == 0, "channels should be divisible by 2"
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.half_channels = channels // 2
- self.mean_only = mean_only
-
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
- self.enc = WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- p_dropout=p_dropout,
- gin_channels=gin_channels,
- )
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
- self.post.weight.data.zero_()
- self.post.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0) * x_mask
- h = self.enc(h, x_mask, g=g)
- stats = self.post(h) * x_mask
- if not self.mean_only:
- m, logs = torch.split(stats, [self.half_channels] * 2, 1)
- else:
- m = stats
- logs = torch.zeros_like(m)
-
- if not reverse:
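- # Affine coupling: only x1 is transformed, so log|det J| = sum(logs) (zero when mean_only)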
- x1 = m + x1 * torch.exp(logs) * x_mask
- x = torch.cat([x0, x1], 1)
- logdet = torch.sum(logs, [1, 2])
- return x, logdet
- else:
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
- x = torch.cat([x0, x1], 1)
- return x
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class ConvFlow(nn.Module):
- def __init__(
- self,
- in_channels,
- filter_channels,
- kernel_size,
- n_layers,
- num_bins=10,
- tail_bound=5.0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.n_layers = n_layers
- self.num_bins = num_bins
- self.tail_bound = tail_bound
- self.half_channels = in_channels // 2
-
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0)
- self.proj = nn.Conv1d(
- filter_channels, self.half_channels * (num_bins * 3 - 1), 1
- )
- self.proj.weight.data.zero_()
- self.proj.bias.data.zero_()
-
- def forward(self, x, x_mask, g=None, reverse=False):
- x0, x1 = torch.split(x, [self.half_channels] * 2, 1)
- h = self.pre(x0)
- h = self.convs(h, x_mask, g=g)
- h = self.proj(h) * x_mask
-
- b, c, t = x0.shape
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2)  # [b, c*(num_bins*3-1), t] -> [b, c, t, num_bins*3-1]
-
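- # Each of the c channels carries num_bins*3 - 1 spline parameters per time step:
- # num_bins widths, num_bins heights, and num_bins - 1 interior knot derivatives.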
- unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels)
- unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt(
- self.filter_channels
- )
- unnormalized_derivatives = h[..., 2 * self.num_bins :]
-
- x1, logabsdet = piecewise_rational_quadratic_transform(
- x1,
- unnormalized_widths,
- unnormalized_heights,
- unnormalized_derivatives,
- inverse=reverse,
- tails="linear",
- tail_bound=self.tail_bound,
- )
-
- x = torch.cat([x0, x1], 1) * x_mask
- logdet = torch.sum(logabsdet * x_mask, [1, 2])
- if not reverse:
- return x, logdet
- else:
- return x
diff --git a/spaces/catundchat/tts_cn/utils.py b/spaces/catundchat/tts_cn/utils.py
deleted file mode 100644
index f193a3e225b368fe7324852994676ad7236c970e..0000000000000000000000000000000000000000
--- a/spaces/catundchat/tts_cn/utils.py
+++ /dev/null
@@ -1,319 +0,0 @@
-import os
-import glob
-import sys
-import argparse
-import logging
-import json
-import subprocess
-import numpy as np
-from scipy.io.wavfile import read
-import torch
-
-MATPLOTLIB_FLAG = False
-
-logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
-logger = logging
-
-
-def load_checkpoint(checkpoint_path, model, optimizer=None):
- assert os.path.isfile(checkpoint_path)
- checkpoint_dict = torch.load(checkpoint_path, map_location="cpu")
- iteration = checkpoint_dict["iteration"]
- learning_rate = checkpoint_dict["learning_rate"]
- if optimizer is not None:
- optimizer.load_state_dict(checkpoint_dict["optimizer"])
- saved_state_dict = checkpoint_dict["model"]
- if hasattr(model, "module"):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- new_state_dict = {}
- for k, v in state_dict.items():
- try:
- new_state_dict[k] = saved_state_dict[k]
- except KeyError:
- logger.info("%s is not in the checkpoint" % k)
- new_state_dict[k] = v
- if hasattr(model, "module"):
- model.module.load_state_dict(new_state_dict)
- else:
- model.load_state_dict(new_state_dict)
- logger.info(
- "Loaded checkpoint '{}' (iteration {})".format(checkpoint_path, iteration)
- )
- return model, optimizer, learning_rate, iteration
-
-
-def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path):
- logger.info(
- "Saving model and optimizer state at iteration {} to {}".format(
- iteration, checkpoint_path
- )
- )
- if hasattr(model, "module"):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- torch.save(
- {
- "model": state_dict,
- "iteration": iteration,
- "optimizer": optimizer.state_dict(),
- "learning_rate": learning_rate,
- },
- checkpoint_path,
- )
-
-
-def load_model(checkpoint_path, model):
- assert os.path.isfile(checkpoint_path)
- checkpoint_dict = torch.load(checkpoint_path, map_location="cpu")
- saved_state_dict = checkpoint_dict["model"]
- if hasattr(model, "module"):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- new_state_dict = {}
- for k, v in state_dict.items():
- try:
- new_state_dict[k] = saved_state_dict[k]
- except KeyError:
- logger.info("%s is not in the checkpoint" % k)
- new_state_dict[k] = v
- if hasattr(model, "module"):
- model.module.load_state_dict(new_state_dict)
- else:
- model.load_state_dict(new_state_dict)
- return model
-
-
-def save_model(model, checkpoint_path):
- if hasattr(model, 'module'):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- torch.save({'model': state_dict}, checkpoint_path)
-
-
-def summarize(
- writer,
- global_step,
- scalars={},
- histograms={},
- images={},
- audios={},
- audio_sampling_rate=22050,
-):
- for k, v in scalars.items():
- writer.add_scalar(k, v, global_step)
- for k, v in histograms.items():
- writer.add_histogram(k, v, global_step)
- for k, v in images.items():
- writer.add_image(k, v, global_step, dataformats="HWC")
- for k, v in audios.items():
- writer.add_audio(k, v, global_step, audio_sampling_rate)
-
-
-def latest_checkpoint_path(dir_path, regex="G_*.pth"):
- f_list = glob.glob(os.path.join(dir_path, regex))
- f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f))))
- x = f_list[-1]
- print(x)
- return x
-
-
-def plot_spectrogram_to_numpy(spectrogram):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
-
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger("matplotlib")
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(10, 2))
- im = ax.imshow(spectrogram, aspect="auto", origin="lower", interpolation="none")
- plt.colorbar(im, ax=ax)
- plt.xlabel("Frames")
- plt.ylabel("Channels")
- plt.tight_layout()
-
- fig.canvas.draw()
- data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def plot_alignment_to_numpy(alignment, info=None):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
-
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger("matplotlib")
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(6, 4))
- im = ax.imshow(
- alignment.transpose(), aspect="auto", origin="lower", interpolation="none"
- )
- fig.colorbar(im, ax=ax)
- xlabel = "Decoder timestep"
- if info is not None:
- xlabel += "\n\n" + info
- plt.xlabel(xlabel)
- plt.ylabel("Encoder timestep")
- plt.tight_layout()
-
- fig.canvas.draw()
- data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def load_wav_to_torch(full_path):
- sampling_rate, data = read(full_path)
- return torch.FloatTensor(data.astype(np.float32)), sampling_rate
-
-
-def load_filepaths_and_text(filename, split="|"):
- with open(filename, encoding="utf-8") as f:
- filepaths_and_text = []
- for line in f:
- path_text = line.strip().split(split)
- filepaths_and_text.append(path_text)
- return filepaths_and_text
-
-
-def get_hparams(init=True):
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "-c",
- "--config",
- type=str,
- default="./configs/bert_vits.json",
- help="JSON file for configuration",
- )
- parser.add_argument("-m", "--model", type=str, required=True, help="Model name")
-
- args = parser.parse_args()
- model_dir = os.path.join("./logs", args.model)
-
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
-
- config_path = args.config
- config_save_path = os.path.join(model_dir, "config.json")
- if init:
- with open(config_path, "r") as f:
- data = f.read()
- with open(config_save_path, "w") as f:
- f.write(data)
- else:
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- hparams.model_dir = model_dir
- return hparams
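-
-# Typical invocation (the script and model names here are hypothetical):
-#
-#     python train.py -c ./configs/bert_vits.json -m my_model
-#
-# When init=True this copies the config into ./logs/my_model/config.json and
-# returns it wrapped in HParams with hparams.model_dir set.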
-
-
-def get_hparams_from_dir(model_dir):
- config_save_path = os.path.join(model_dir, "config.json")
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def get_hparams_from_file(config_path):
- with open(config_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- return hparams
-
-
-def check_git_hash(model_dir):
- source_dir = os.path.dirname(os.path.realpath(__file__))
- if not os.path.exists(os.path.join(source_dir, ".git")):
- logger.warning(
- "{} is not a git repository, therefore hash value comparison will be ignored.".format(
- source_dir
- )
- )
- return
-
- cur_hash = subprocess.getoutput("git rev-parse HEAD")
-
- path = os.path.join(model_dir, "githash")
- if os.path.exists(path):
- saved_hash = open(path).read()
- if saved_hash != cur_hash:
- logger.warning(
- "git hash values are different. {}(saved) != {}(current)".format(
- saved_hash[:8], cur_hash[:8]
- )
- )
- else:
- open(path, "w").write(cur_hash)
-
-
-def get_logger(model_dir, filename="train.log"):
- global logger
- logger = logging.getLogger(os.path.basename(model_dir))
- logger.setLevel(logging.DEBUG)
-
- formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s")
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
- h = logging.FileHandler(os.path.join(model_dir, filename))
- h.setLevel(logging.DEBUG)
- h.setFormatter(formatter)
- logger.addHandler(h)
- return logger
-
-
-class HParams:
- def __init__(self, **kwargs):
- for k, v in kwargs.items():
- if isinstance(v, dict):
- v = HParams(**v)
- self[k] = v
-
- def keys(self):
- return self.__dict__.keys()
-
- def items(self):
- return self.__dict__.items()
-
- def values(self):
- return self.__dict__.values()
-
- def __len__(self):
- return len(self.__dict__)
-
- def __getitem__(self, key):
- return getattr(self, key)
-
- def __setitem__(self, key, value):
- return setattr(self, key, value)
-
- def __contains__(self, key):
- return key in self.__dict__
-
- def __repr__(self):
- return self.__dict__.__repr__()
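-
-# A minimal usage sketch (hypothetical values): nested dicts are converted to
-# HParams recursively, so both attribute and item access work.
-#
-#     hps = HParams(train={"batch_size": 16}, model={"hidden_channels": 192})
-#     hps.train.batch_size          # -> 16
-#     hps["model"].hidden_channels  # -> 192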
diff --git a/spaces/cccc-c/bingo/README.md b/spaces/cccc-c/bingo/README.md
deleted file mode 100644
index 218767d1d7debd26932ffddca2ec0f421c0171a9..0000000000000000000000000000000000000000
--- a/spaces/cccc-c/bingo/README.md
+++ /dev/null
@@ -1,195 +0,0 @@
----
-title: bingo
-emoji: 📉
-colorFrom: red
-colorTo: red
-sdk: docker
-pinned: true
-license: mit
-duplicated_from: hf4all/bingo
----
-
-| Name | lr sched | train time (s/iter) | inference time (s/im) | train mem (GB) | box AP | segm AP | dp. APex GPS | dp. AP GPS | dp. AP GPSm | model id | download |
-| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
-| R_50_FPN_DL_WC1M_3x_Atop10P_CA | 3x | 0.522 | 0.073 | 9.7 | 61.3 | 59.1 | 36.2 | 20.0 | 30.2 | 217578784 | model \| metrics |
-| R_50_FPN_DL_WC1M_3x_Atop10P_CA_B_uniform | 3x | 1.939 | 0.072 | 10.1 | 60.9 | 58.5 | 37.2 | 21.5 | 31.0 | 256453729 | model \| metrics |
-| R_50_FPN_DL_WC1M_3x_Atop10P_CA_B_uv | 3x | 1.985 | 0.072 | 9.6 | 61.4 | 58.9 | 38.3 | 22.2 | 32.1 | 256452095 | model \| metrics |
-| R_50_FPN_DL_WC1M_3x_Atop10P_CA_B_finesegm | 3x | 2.047 | 0.072 | 10.3 | 60.9 | 58.5 | 36.7 | 20.7 | 30.7 | 256452819 | model \| metrics |
-| R_50_FPN_DL_WC1M_3x_Atop10P_CA_B_coarsesegm | 3x | 1.830 | 0.070 | 9.6 | 61.3 | 59.2 | 37.9 | 21.5 | 31.6 | 256455697 | model \| metrics |
-
-Since other platforms are currently being blocked by New Bing, they run into many problems and are no longer recommended; if you need them, you can look into them yourself.
-
-
-#### Deploy to Netlify
-[Deploy to Netlify](https://app.netlify.com/start/deploy?repository=https://github.com/weaigc/bingo)
-
-#### Deploy to Vercel
-If you are a paid Vercel user, you can click the link below to deploy to Vercel with one click. The free tier has an [API timeout limit](https://vercel.com/docs/concepts/limits/overview) and is not recommended.
-
-[Deploy to Vercel](https://vercel.com/new/clone?demo-title=bingo&demo-description=bingo&demo-url=https%3A%2F%2Fbing.github1s.tk%2F&project-name=bingo&repository-name=bingo&repository-url=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo&from=templates&skippable-integrations=1&env=BING_HEADER&envDescription=%E5%A6%82%E6%9E%9C%E4%B8%8D%E7%9F%A5%E9%81%93%E6%80%8E%E4%B9%88%E9%85%8D%E7%BD%AE%E8%AF%B7%E7%82%B9%E5%8F%B3%E4%BE%A7Learn+More&envLink=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo%2Fblob%2Fmain%2F.env.example)
-
-#### Deploy to Render
-
-[Deploy to Render](https://render.com/deploy?repo=https://github.com/weaigc/bingo)
-
-Normal format / the format saved from the browser (for reference only):
-
-```
-curl 'https://www.bing.com/turing/captcha/challenge' \
- -H 'authority: www.bing.com' \
- -H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7' \
- -H 'accept-language: zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6' \
- -H 'cache-control: max-age=0' \
- -H 'cookie: MicrosoftApplicationsTelemetryDeviceId=3399c004-fd0e-48ec-bb92-d82a27b2bbd4; _EDGE_V=1; SRCHD=AF=NOFORM; SRCHUID=V=2&GUID=29EBDDA4E6674329ACCF1A0A423C3E98&dmnchg=1; _UR=QS=0&TQS=0; _HPVN=CS=eyJQbiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiUCJ9LCJTYyI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiSCJ9LCJReiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiVCJ9LCJBcCI6dHJ1ZSwiTXV0ZSI6dHJ1ZSwiTGFkIjoiMjAyMy0wNy0yNVQwMDowMDowMFoiLCJJb3RkIjowLCJHd2IiOjAsIkRmdCI6bnVsbCwiTXZzIjowLCJGbHQiOjAsIkltcCI6Mn0=; _RwBf=ilt=1&ihpd=1&ispd=0&rc=0&rb=0&gb=0&rg=200&pc=0&mtu=0&rbb=0&g=0&cid=&clo=0&v=1&l=2023-07-25T07:00:00.0000000Z&lft=0001-01-01T00:00:00.0000000&aof=0&o=2&p=&c=&t=0&s=0001-01-01T00:00:00.0000000+00:00&ts=2023-07-25T11:00:31.7111548+00:00&rwred=0&wls=&lka=0&lkt=0&TH=&dci=0; ANON=A=0043C6590EA808ED6E395059FFFFFFFF&E=1c8b&W=1; NAP=V=1.9&E=1c31&C=DnaMSbDN_4efZ_xXqBF3Daorjr53kYqYoaP8YHsupjmiXnysX7a37A&W=1; PPLState=1; KievRPSSecAuth=FABSBBRaTOJILtFsMkpLVWSG6AN6C/svRwNmAAAEgAAACMGUA7EGVSjGEAQBGHtNsc5sNL7unmJsfPJ2t6imfo4BeUJlAia3IpMTtMUy4PU/C5QAzRI5pODtsIee0+blgllXt/5IiWwGjwmdhivsFM597pRPkjARPfwsPhNLPNbJrCPNPHdje4Is78MnCADXw6/NBq2FL8V2/byw2fH6IuAMD2MvN/VvqpEa9ZxiDjZtENj4HEj0mO2SgzjfyEhVAkjvznJqU2rw/Q2tHmX94NAM2kzlzKF/hWPhCCUmu8IHLvCnHDS6mSptvJDDP/sp3ovtzOXkP1mlM/Xju5ftesUvccVEQGffXORa1dE5hEMbKIiKXz1tDdduSXE19g9/+mRMAjaQhpwhI8XmilCTx1adb1Ll5qK+VjC9GNfEZzcbsGBPVaOl+anG8rEMq+Xnhjo7J+NqTNolavHgcuV8kJsCeJZIged33UA8eOZeFo+wAECMguxMoSqgpGH+sthqynvD/FJD6r/tiU2N3uqVq8NE8V37asrN6T14Z0FGBJOe6ET1+PGApm3s11OY9/xhFEB9T5BEPUGEbvRcLcW2ncFQX0EU+xweiPqo1Q1hNUg/dCtSI+lZ7c2H8XheePZavZ0TJQ8oNCSAuKiTqJmI0fVGpwbXwfaADkEipuawz3fIuMJBNgMU0OtA7Hm59v2fGLIBuvi6YeKS6GgVk3BIPf+P/eKahwozrxQZaFnoHTSqMkvct7xCP4atBROfXKf5Ww0CcFKp+2WX9BIskTOo2jjk6bAyyYJ+ElUB1fgLKNk5m/YSMc9iYCLIBMIGN8F0Yvy3tZ7cvh7Ue5Klo98US/I+nW1G7ZJMHRgUO8h8lpneHqEMegKd8gynO4VF7RpCjJkunDmW0Ta+RkXAP619pg0dqHMFkoOgknN78oBbGTV6fJUKotv+vi61kLhAeXZGWoHGCRXh2wUC6YgfPgKA6ESRNHtFn7E5B3HHpLc5rVMDSNhKZYfdhupV4Ezf6+5DhMcZLZhi0kk+ivDiN1gdHlVtSN55xpvf+c+XZDzR0uhgcvgy0LAbmzgk6y4WbYH+LQsMpzNNj+aC72vMiWovWrKh9jY4MYCmdgxsS/skPtLdp18muiEIRXTbZQGUmhxFpJAIbBIsCscMpzL0BgeujxUwM5wr79Sd9r4xwbgSMwmBlBfUHRVBdNyg8feepeJbCS63nD6eHOuLqMRsPIio3w/ki/EAa92UUEiZeavLsMUD/y/qAvWUdzdP5Y+C/TM+CMGS/kGL4LEdY/28MQeTvU1qv1X21kQt2aiaj3pPVL36hAzxbcLgqcMo9oymDRy87kdCXW/+g4oKLtMh6fm/G6W6Y/B01JlxohyyvueHQIG557uzkEkTJ3FnOVODSKBKpb3WZ65rExfV71zSZa25F3GmpaIG6HiYrX2YYhQAkIE9pKEQBHbnwHuwNDGottZTXZw=; WLS=C=9df3f9d8518fae19&N=wen; WLID=pGY8HgWCu4p5XYCOk2oa0+DBdftkMUfmNIn8XtSjSTKsgv/Il7GUlYs0Jpjf/E12jZMgV7x44Dy3fXOgjjUoJx7Y/ClLrLhsk20THksJJoI=; _EDGE_S=F=1&SID=17CF6EE006426448213C7DB907436588&mkt=zh-CN; MUID=225621093D8A6C27301632413C0E6D08; MUIDB=225621093D8A6C27301632413C0E6D08; SUID=A; SNRHOP=I=&TS=; _U=nGyzKQruEsDwLiu65fZFIG6e12hf2lwTJmroW__k8joUJIKmG3OIjayXKGW9dCVR3sNhF76mEVxyW6yjUGPodOfjtSa3s3J_DxMOrEK1BqXCOBI9bC66spAIASV7prsYFlVAJz73jVNENp_tBubLHJy6EbT0BKRe4AjrYkH-9uMnmCKB8Zmyg; _SS=SID=17CF6EE006426448213C7DB907436588&R=0&RB=0&GB=0&RG=200&RP=0&PC=U531; SRCHS=PC=U531; USRLOC=HS=1&ELOC=LAT=22.501529693603516|LON=113.9263687133789|N=%E5%8D%97%E5%B1%B1%E5%8C%BA%EF%BC%8C%E5%B9%BF%E4%B8%9C%E7%9C%81|ELT=2|&CLOC=LAT=22.50153029046461|LON=113.92637070632928|A=733.4464586120832|TS=230726151034|SRC=W; SRCHUSR=DOB=20230725&T=1690384908000&POEX=W; ipv6=hit=1690388509974&t=6; SRCHHPGUSR=HV=1690384945&SRCHLANG=zh-Hans&PV=15.0.0&BRW=MW&BRH=MT&CW=410&CH=794&SCW=410&SCH=794&DPR=1.5&UTC=480&DM=0&WTS=63825879627&PRVCW=410&PRVCH=794&PR=1.5; cct=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpny6Y_CVyi_MSyM94VyMWnjdYkkccVtm3czoIAtXUGQA; 
GC=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpR3Y_D9Ytcks4Ht6XhadXk75dvhzP4YOUS0UmoEyqyxw' \
- -H 'dnt: 1' \
- -H 'sec-ch-ua: "Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"' \
- -H 'sec-ch-ua-arch: "x86"' \
- -H 'sec-ch-ua-bitness: "64"' \
- -H 'sec-ch-ua-full-version: "116.0.1938.29"' \
- -H 'sec-ch-ua-full-version-list: "Chromium";v="116.0.5845.42", "Not)A;Brand";v="24.0.0.0", "Microsoft Edge";v="116.0.1938.29"' \
- -H 'sec-ch-ua-mobile: ?0' \
- -H 'sec-ch-ua-model: ""' \
- -H 'sec-ch-ua-platform: "Windows"' \
- -H 'sec-ch-ua-platform-version: "15.0.0"' \
- -H 'sec-fetch-dest: document' \
- -H 'sec-fetch-mode: navigate' \
- -H 'sec-fetch-site: none' \
- -H 'sec-fetch-user: ?1' \
- -H 'sec-ms-gec: B3F47AD4A283CAB374C0451C46AAFD147C6A4DACAFF6A1C13F34B2C72B024494' \
- -H 'sec-ms-gec-version: 1-116.0.1938.29' \
- -H 'upgrade-insecure-requests: 1' \
- -H 'user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.0.0' \
- -H 'x-client-data: eyIxIjoiMiIsIjEwIjoiXCJTMGg3R05HOTF2aDQ1TUZSUnZ5NHN2akRmMWdlaVJKenNxNlA3aU1WbnF3PVwiIiwiMiI6IjEiLCIzIjoiMSIsIjQiOiIyMTU4ODQ5NTM4MjY4OTM5NTA3IiwiNSI6IlwiSm9GUWpPTDk3OS9MbkRRZnlCd2N1M2FsOUN3eTZTQmdaMGNYMXBtOWVMZz1cIiIsIjYiOiJiZXRhIiwiNyI6IjE4MDM4ODYyNjQzNSIsIjkiOiJkZXNrdG9wIn0=' \
- -H 'x-edge-shopping-flag: 1' \
- --compressed
-```
-The base64-encoded form (BING_HEADER only accepts the base64-encoded format):
-
-```
-Y3VybCAnaHR0cHM6Ly93d3cuYmluZy5jb20vdHVyaW5nL2NvbnZlcnNhdGlvbi9jcmVhdGUnIFwgICAtSCAnYXV0aG9yaXR5OiB3d3cuYmluZy5jb20nIFwgICAtSCAnYWNjZXB0OiB0ZXh0L2h0bWwsYXBwbGljYXRpb24veGh0bWwreG1sLGFwcGxpY2F0aW9uL3htbDtxPTAuOSxpbWFnZS93ZWJwLGltYWdlL2FwbmcsKi8qO3E9MC44LGFwcGxpY2F0aW9uL3NpZ25lZC1leGNoYW5nZTt2PWIzO3E9MC43JyBcICAgLUggJ2FjY2VwdC1sYW5ndWFnZTogemgtQ04semg7cT0wLjksZW47cT0wLjgsZW4tR0I7cT0wLjcsZW4tVVM7cT0wLjYnIFwgICAtSCAnY2FjaGUtY29udHJvbDogbWF4LWFnZT0wJyBcICAgLUggJ2Nvb2tpZTogTWljcm9zb2Z0QXBwbGljYXRpb25zVGVsZW1ldHJ5RGV2aWNlSWQ9MzM5OWMwMDQtZmQwZS00OGVjLWJiOTItZDgyYTI3YjJiYmQ0OyBfRURHRV9WPTE7IFNSQ0hEPUFGPU5PRk9STTsgU1JDSFVJRD1WPTImR1VJRD0yOUVCRERBNEU2Njc0MzI5QUNDRjFBMEE0MjNDM0U5OCZkbW5jaGc9MTsgX1VSPVFTPTAmVFFTPTA7IF9IUFZOPUNTPWV5SlFiaUk2ZXlKRGJpSTZNU3dpVTNRaU9qQXNJbEZ6SWpvd0xDSlFjbTlrSWpvaVVDSjlMQ0pUWXlJNmV5SkRiaUk2TVN3aVUzUWlPakFzSWxGeklqb3dMQ0pRY205a0lqb2lTQ0o5TENKUmVpSTZleUpEYmlJNk1Td2lVM1FpT2pBc0lsRnpJam93TENKUWNtOWtJam9pVkNKOUxDSkJjQ0k2ZEhKMVpTd2lUWFYwWlNJNmRISjFaU3dpVEdGa0lqb2lNakF5TXkwd055MHlOVlF3TURvd01Eb3dNRm9pTENKSmIzUmtJam93TENKSGQySWlPakFzSWtSbWRDSTZiblZzYkN3aVRYWnpJam93TENKR2JIUWlPakFzSWtsdGNDSTZNbjA9OyBfUndCZj1pbHQ9MSZpaHBkPTEmaXNwZD0wJnJjPTAmcmI9MCZnYj0wJnJnPTIwMCZwYz0wJm10dT0wJnJiYj0wJmc9MCZjaWQ9JmNsbz0wJnY9MSZsPTIwMjMtMDctMjVUMDc6MDA6MDAuMDAwMDAwMFombGZ0PTAwMDEtMDEtMDFUMDA6MDA6MDAuMDAwMDAwMCZhb2Y9MCZvPTImcD0mYz0mdD0wJnM9MDAwMS0wMS0wMVQwMDowMDowMC4wMDAwMDAwKzAwOjAwJnRzPTIwMjMtMDctMjVUMTE6MDA6MzEuNzExMTU0OCswMDowMCZyd3JlZD0wJndscz0mbGthPTAmbGt0PTAmVEg9JmRjaT0wOyBBTk9OPUE9MDA0M0M2NTkwRUE4MDhFRDZFMzk1MDU5RkZGRkZGRkYmRT0xYzhiJlc9MTsgTkFQPVY9MS45JkU9MWMzMSZDPURuYU1TYkROXzRlZlpfeFhxQkYzRGFvcmpyNTNrWXFZb2FQOFlIc3Vwam1pWG55c1g3YTM3QSZXPTE7IFBQTFN0YXRlPTE7IEtpZXZSUFNTZWNBdXRoPUZBQlNCQlJhVE9KSUx0RnNNa3BMVldTRzZBTjZDL3N2UndObUFBQUVnQUFBQ01HVUE3RUdWU2pHRUFRQkdIdE5zYzVzTkw3dW5tSnNmUEoydDZpbWZvNEJlVUpsQWlhM0lwTVR0TVV5NFBVL0M1UUF6Ukk1cE9EdHNJZWUwK2JsZ2xsWHQvNUlpV3dHandtZGhpdnNGTTU5N3BSUGtqQVJQZndzUGhOTFBOYkpyQ1BOUEhkamU0SXM3OE1uQ0FEWHc2L05CcTJGTDhWMi9ieXcyZkg2SXVBTUQyTXZOL1Z2cXBFYTlaeGlEalp0RU5qNEhFajBtTzJTZ3pqZnlFaFZBa2p2em5KcVUycncvUTJ0SG1YOTROQU0ya3psektGL2hXUGhDQ1VtdThJSEx2Q25IRFM2bVNwdHZKRERQL3NwM292dHpPWGtQMW1sTS9YanU1ZnRlc1V2Y2NWRVFHZmZYT1JhMWRFNWhFTWJLSWlLWHoxdERkZHVTWEUxOWc5LyttUk1BamFRaHB3aEk4WG1pbENUeDFhZGIxTGw1cUsrVmpDOUdOZkVaemNic0dCUFZhT2wrYW5HOHJFTXErWG5oam83SitOcVROb2xhdkhnY3VWOGtKc0NlSlpJZ2VkMzNVQThlT1plRm8rd0FFQ01ndXhNb1NxZ3BHSCtzdGhxeW52RC9GSkQ2ci90aVUyTjN1cVZxOE5FOFYzN2Fzck42VDE0WjBGR0JKT2U2RVQxK1BHQXBtM3MxMU9ZOS94aEZFQjlUNUJFUFVHRWJ2UmNMY1cybmNGUVgwRVUreHdlaVBxbzFRMWhOVWcvZEN0U0krbFo3YzJIOFhoZWVQWmF2WjBUSlE4b05DU0F1S2lUcUptSTBmVkdwd2JYd2ZhQURrRWlwdWF3ejNmSXVNSkJOZ01VME90QTdIbTU5djJmR0xJQnV2aTZZZUtTNkdnVmszQklQZitQL2VLYWh3b3pyeFFaYUZub0hUU3FNa3ZjdDd4Q1A0YXRCUk9mWEtmNVd3MENjRktwKzJXWDlCSXNrVE9vMmpqazZiQXl5WUorRWxVQjFmZ0xLTms1bS9ZU01jOWlZQ0xJQk1JR044RjBZdnkzdFo3Y3ZoN1VlNUtsbzk4VVMvSStuVzFHN1pKTUhSZ1VPOGg4bHBuZUhxRU1lZ0tkOGd5bk80VkY3UnBDakprdW5EbVcwVGErUmtYQVA2MTlwZzBkcUhNRmtvT2drbk43OG9CYkdUVjZmSlVLb3R2K3ZpNjFrTGhBZVhaR1dvSEdDUlhoMndVQzZZZ2ZQZ0tBNkVTUk5IdEZuN0U1QjNISHBMYzVyVk1EU05oS1pZZmRodXBWNEV6ZjYrNURoTWNaTFpoaTBraytpdkRpTjFnZEhsVnRTTjU1eHB2ZitjK1haRHpSMHVoZ2N2Z3kwTEFibXpnazZ5NFdiWUgrTFFzTXB6Tk5qK2FDNzJ2TWlXb3ZXcktoOWpZNE1ZQ21kZ3hzUy9za1B0TGRwMThtdWlFSVJYVGJaUUdVbWh4RnBKQUliQklzQ3NjTXB6TDBCZ2V1anhVd001d3I3OVNkOXI0eHdiZ1NNd21CbEJmVUhSVkJkTnlnOGZlZXBlSmJDUzYzbkQ2ZUhPdUxxTVJzUElpbzN3L2tpL0VBYTkyVVVFaVplYXZMc01VRC95L3FBdldVZHpkUDVZK0MvVE0rQ01HUy9rR0w0TEVkWS8yOE1RZVR2VTFxdjFYMjFrUXQyYWlhajNwUFZMMzZoQXp4YmNMZ3FjTW85b3ltRFJ5ODdrZENYVy8rZzRvS0x0TWg2Zm0vRzZXNlkvQjAxSmx4b2h5eXZ1ZUhRSUc1NT
d1emtFa1RKM0ZuT1ZPRFNLQktwYjNXWjY1ckV4ZlY3MXpTWmEyNUYzR21wYUlHNkhpWXJYMllZaFFBa0lFOXBLRVFCSGJud0h1d05ER290dFpUWFp3PTsgV0xTPUM9OWRmM2Y5ZDg1MThmYWUxOSZOPXdlbjsgV0xJRD1wR1k4SGdXQ3U0cDVYWUNPazJvYTArREJkZnRrTVVmbU5JbjhYdFNqU1RLc2d2L0lsN0dVbFlzMEpwamYvRTEyalpNZ1Y3eDQ0RHkzZlhPZ2pqVW9KeDdZL0NsTHJMaHNrMjBUSGtzSkpvST07IF9FREdFX1M9Rj0xJlNJRD0xN0NGNkVFMDA2NDI2NDQ4MjEzQzdEQjkwNzQzNjU4OCZta3Q9emgtQ047IE1VSUQ9MjI1NjIxMDkzRDhBNkMyNzMwMTYzMjQxM0MwRTZEMDg7IE1VSURCPTIyNTYyMTA5M0Q4QTZDMjczMDE2MzI0MTNDMEU2RDA4OyBTVUlEPUE7IFNOUkhPUD1JPSZUUz07IF9VPW5HeXpLUXJ1RXNEd0xpdTY1ZlpGSUc2ZTEyaGYybHdUSm1yb1dfX2s4am9VSklLbUczT0lqYXlYS0dXOWRDVlIzc05oRjc2bUVWeHlXNnlqVUdQb2RPZmp0U2EzczNKX0R4TU9yRUsxQnFYQ09CSTliQzY2c3BBSUFTVjdwcnNZRmxWQUp6NzNqVk5FTnBfdEJ1YkxISnk2RWJUMEJLUmU0QWpyWWtILTl1TW5tQ0tCOFpteWc7IF9TUz1TSUQ9MTdDRjZFRTAwNjQyNjQ0ODIxM0M3REI5MDc0MzY1ODgmUj0wJlJCPTAmR0I9MCZSRz0yMDAmUlA9MCZQQz1VNTMxOyBTUkNIUz1QQz1VNTMxOyBVU1JMT0M9SFM9MSZFTE9DPUxBVD0yMi41MDE1Mjk2OTM2MDM1MTZ8TE9OPTExMy45MjYzNjg3MTMzNzg5fE49JUU1JThEJTk3JUU1JUIxJUIxJUU1JThDJUJBJUVGJUJDJThDJUU1JUI5JUJGJUU0JUI4JTlDJUU3JTlDJTgxfEVMVD0yfCZDTE9DPUxBVD0yMi41MDE1MzAyOTA0NjQ2MXxMT049MTEzLjkyNjM3MDcwNjMyOTI4fEE9NzMzLjQ0NjQ1ODYxMjA4MzJ8VFM9MjMwNzI2MTUxMDM0fFNSQz1XOyBTUkNIVVNSPURPQj0yMDIzMDcyNSZUPTE2OTAzODQ5MDgwMDAmUE9FWD1XOyBpcHY2PWhpdD0xNjkwMzg4NTA5OTc0JnQ9NjsgU1JDSEhQR1VTUj1IVj0xNjkwMzg0OTQ1JlNSQ0hMQU5HPXpoLUhhbnMmUFY9MTUuMC4wJkJSVz1NVyZCUkg9TVQmQ1c9NDEwJkNIPTc5NCZTQ1c9NDEwJlNDSD03OTQmRFBSPTEuNSZVVEM9NDgwJkRNPTAmV1RTPTYzODI1ODc5NjI3JlBSVkNXPTQxMCZQUlZDSD03OTQmUFI9MS41OyBjY3Q9QWpXSUJZT29WUC1BZnE2Z1d3dHg4MElmNnlIbjZpQnVFVkhBMVhIZEFLcG55NllfQ1Z5aV9NU3lNOTRWeU1XbmpkWWtrY2NWdG0zY3pvSUF0WFVHUUE7IEdDPUFqV0lCWU9vVlAtQWZxNmdXd3R4ODBJZjZ5SG42aUJ1RVZIQTFYSGRBS3BSM1lfRDlZdGNrczRIdDZYaGFkWGs3NWR2aHpQNFlPVVMwVW1vRXlxeXh3JyBcICAgLUggJ2RudDogMScgXCAgIC1IICdzZWMtY2gtdWE6ICJDaHJvbWl1bSI7dj0iMTE2IiwgIk5vdClBO0JyYW5kIjt2PSIyNCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2IicgXCAgIC1IICdzZWMtY2gtdWEtYXJjaDogIng4NiInIFwgICAtSCAnc2VjLWNoLXVhLWJpdG5lc3M6ICI2NCInIFwgICAtSCAnc2VjLWNoLXVhLWZ1bGwtdmVyc2lvbjogIjExNi4wLjE5MzguMjkiJyBcICAgLUggJ3NlYy1jaC11YS1mdWxsLXZlcnNpb24tbGlzdDogIkNocm9taXVtIjt2PSIxMTYuMC41ODQ1LjQyIiwgIk5vdClBO0JyYW5kIjt2PSIyNC4wLjAuMCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2LjAuMTkzOC4yOSInIFwgICAtSCAnc2VjLWNoLXVhLW1vYmlsZTogPzAnIFwgICAtSCAnc2VjLWNoLXVhLW1vZGVsOiAiIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm06ICJXaW5kb3dzIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm0tdmVyc2lvbjogIjE1LjAuMCInIFwgICAtSCAnc2VjLWZldGNoLWRlc3Q6IGRvY3VtZW50JyBcICAgLUggJ3NlYy1mZXRjaC1tb2RlOiBuYXZpZ2F0ZScgXCAgIC1IICdzZWMtZmV0Y2gtc2l0ZTogbm9uZScgXCAgIC1IICdzZWMtZmV0Y2gtdXNlcjogPzEnIFwgICAtSCAnc2VjLW1zLWdlYzogQjNGNDdBRDRBMjgzQ0FCMzc0QzA0NTFDNDZBQUZEMTQ3QzZBNERBQ0FGRjZBMUMxM0YzNEIyQzcyQjAyNDQ5NCcgXCAgIC1IICdzZWMtbXMtZ2VjLXZlcnNpb246IDEtMTE2LjAuMTkzOC4yOScgXCAgIC1IICd1cGdyYWRlLWluc2VjdXJlLXJlcXVlc3RzOiAxJyBcICAgLUggJ3VzZXItYWdlbnQ6IE1vemlsbGEvNS4wIChXaW5kb3dzIE5UIDEwLjA7IFdpbjY0OyB4NjQpIEFwcGxlV2ViS2l0LzUzNy4zNiAoS0hUTUwsIGxpa2UgR2Vja28pIENocm9tZS8xMTYuMC4wLjAgU2FmYXJpLzUzNy4zNiBFZGcvMTE2LjAuMC4wJyBcICAgLUggJ3gtY2xpZW50LWRhdGE6IGV5SXhJam9pTWlJc0lqRXdJam9pWENKVE1HZzNSMDVIT1RGMmFEUTFUVVpTVW5aNU5ITjJha1JtTVdkbGFWSktlbk54TmxBM2FVMVdibkYzUFZ3aUlpd2lNaUk2SWpFaUxDSXpJam9pTVNJc0lqUWlPaUl5TVRVNE9EUTVOVE00TWpZNE9UTTVOVEEzSWl3aU5TSTZJbHdpU205R1VXcFBURGszT1M5TWJrUlJabmxDZDJOMU0yRnNPVU4zZVRaVFFtZGFNR05ZTVhCdE9XVk1aejFjSWlJc0lqWWlPaUppWlhSaElpd2lOeUk2SWpFNE1ETTRPRFl5TmpRek5TSXNJamtpT2lKa1pYTnJkRzl3SW4wPScgXCAgIC1IICd4LWVkZ2Utc2hvcHBpbmctZmxhZzogMScgXCAgIC0tY29tcHJlc3NlZA==
-```
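-
-If you want to generate this value yourself, base64-encode the saved curl command. Below is a minimal sketch in Python; the input file name `curl.txt` is only an illustrative assumption.
-
-```
-import base64
-
-# Read the curl command copied from the browser and print the BING_HEADER value.
-with open("curl.txt", "rb") as f:
-    print(base64.b64encode(f.read()).decode("ascii"))
-```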
-
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/Kvisoft FlipBook Maker Enterprise v4.0 The Ultimate Page Turning Software for PDF Word Excel and More.md b/spaces/cihyFjudo/fairness-paper-search/Kvisoft FlipBook Maker Enterprise v4.0 The Ultimate Page Turning Software for PDF Word Excel and More.md
deleted file mode 100644
index 71101c800b567ab75cda4774d6a752038b804fba..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Kvisoft FlipBook Maker Enterprise v4.0 The Ultimate Page Turning Software for PDF Word Excel and More.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Kvisoft FlipBook Maker Enterprise v4.0
-
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Tamilnadu Dr Mgr Medical University Digital Library The Future of Digital Scholarship and Innovation in Medicine.md b/spaces/cihyFjudo/fairness-paper-search/Tamilnadu Dr Mgr Medical University Digital Library The Future of Digital Scholarship and Innovation in Medicine.md
deleted file mode 100644
index 1ce860ef68ad6649f4ce42bafa6a1971250bdb4a..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Tamilnadu Dr Mgr Medical University Digital Library The Future of Digital Scholarship and Innovation in Medicine.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Tamilnadu Dr Mgr Medical University Digital Library
-
-
-
-
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/attr/converters.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/attr/converters.py
deleted file mode 100644
index 4cada106b01c564faf17969d24038f80abd5de6f..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/attr/converters.py
+++ /dev/null
@@ -1,144 +0,0 @@
-# SPDX-License-Identifier: MIT
-
-"""
-Commonly useful converters.
-"""
-
-
-import typing
-
-from ._compat import _AnnotationExtractor
-from ._make import NOTHING, Factory, pipe
-
-
-__all__ = [
- "default_if_none",
- "optional",
- "pipe",
- "to_bool",
-]
-
-
-def optional(converter):
- """
- A converter that allows an attribute to be optional. An optional attribute
- is one which can be set to ``None``.
-
- Type annotations will be inferred from the wrapped converter's, if it
- has any.
-
- :param callable converter: the converter that is used for non-``None``
- values.
-
- .. versionadded:: 17.1.0
- """
-
- def optional_converter(val):
- if val is None:
- return None
- return converter(val)
-
- xtr = _AnnotationExtractor(converter)
-
- t = xtr.get_first_param_type()
- if t:
- optional_converter.__annotations__["val"] = typing.Optional[t]
-
- rt = xtr.get_return_type()
- if rt:
- optional_converter.__annotations__["return"] = typing.Optional[rt]
-
- return optional_converter
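-
-# A minimal usage sketch (illustrative only, not part of the library):
-#
-#     >>> import attr
-#     >>> @attr.s
-#     ... class C:
-#     ...     x = attr.ib(converter=optional(int), default=None)
-#     >>> C("1").x
-#     1
-#     >>> C(None).x is None
-#     True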
-
-
-def default_if_none(default=NOTHING, factory=None):
- """
- A converter that replaces ``None`` values with *default* or the
- result of *factory*.
-
- :param default: Value to be used if ``None`` is passed. Passing an instance
- of `attrs.Factory` is supported, however the ``takes_self`` option
- is *not*.
- :param callable factory: A callable that takes no parameters whose result
- is used if ``None`` is passed.
-
- :raises TypeError: If **neither** *default* or *factory* is passed.
- :raises TypeError: If **both** *default* and *factory* are passed.
- :raises ValueError: If an instance of `attrs.Factory` is passed with
- ``takes_self=True``.
-
- .. versionadded:: 18.2.0
- """
- if default is NOTHING and factory is None:
- raise TypeError("Must pass either `default` or `factory`.")
-
- if default is not NOTHING and factory is not None:
- raise TypeError(
- "Must pass either `default` or `factory` but not both."
- )
-
- if factory is not None:
- default = Factory(factory)
-
- if isinstance(default, Factory):
- if default.takes_self:
- raise ValueError(
- "`takes_self` is not supported by default_if_none."
- )
-
- def default_if_none_converter(val):
- if val is not None:
- return val
-
- return default.factory()
-
- else:
-
- def default_if_none_converter(val):
- if val is not None:
- return val
-
- return default
-
- return default_if_none_converter
-
-
-def to_bool(val):
- """
- Convert "boolean" strings (e.g., from env. vars.) to real booleans.
-
- Values mapping to :code:`True`:
-
- - :code:`True`
- - :code:`"true"` / :code:`"t"`
- - :code:`"yes"` / :code:`"y"`
- - :code:`"on"`
- - :code:`"1"`
- - :code:`1`
-
- Values mapping to :code:`False`:
-
- - :code:`False`
- - :code:`"false"` / :code:`"f"`
- - :code:`"no"` / :code:`"n"`
- - :code:`"off"`
- - :code:`"0"`
- - :code:`0`
-
- :raises ValueError: for any other value.
-
- .. versionadded:: 21.3.0
- """
- if isinstance(val, str):
- val = val.lower()
- truthy = {True, "true", "t", "yes", "y", "on", "1", 1}
- falsy = {False, "false", "f", "no", "n", "off", "0", 0}
- try:
- if val in truthy:
- return True
- if val in falsy:
- return False
- except TypeError:
- # Raised when "val" is not hashable (e.g., lists)
- pass
- raise ValueError(f"Cannot convert value to bool: {val}")
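-
-# A minimal usage sketch (illustrative only, not part of the library):
-#
-#     >>> to_bool("yes")
-#     True
-#     >>> to_bool("off")
-#     False
-#     >>> to_bool("maybe")
-#     ValueError: Cannot convert value to bool: maybe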
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dxva2_av1.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dxva2_av1.c
deleted file mode 100644
index 228f72ba18e112fa2fe9b8cd7813366be96b02ea..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dxva2_av1.c
+++ /dev/null
@@ -1,508 +0,0 @@
-/*
- * DXVA2 AV1 HW acceleration.
- *
- * copyright (c) 2020 Hendrik Leppkes
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include "config_components.h"
-
-#include "libavutil/avassert.h"
-#include "libavutil/pixdesc.h"
-
-#include "dxva2_internal.h"
-#include "av1dec.h"
-
-#define MAX_TILES 256
-
-struct AV1DXVAContext {
- FFDXVASharedContext shared;
-
- unsigned int bitstream_allocated;
- uint8_t *bitstream_cache;
-};
-
-struct av1_dxva2_picture_context {
- DXVA_PicParams_AV1 pp;
- unsigned tile_count;
- DXVA_Tile_AV1 tiles[MAX_TILES];
- uint8_t *bitstream;
- unsigned bitstream_size;
-};
-
-static int get_bit_depth_from_seq(const AV1RawSequenceHeader *seq)
-{
- if (seq->seq_profile == 2 && seq->color_config.high_bitdepth)
- return seq->color_config.twelve_bit ? 12 : 10;
- else if (seq->seq_profile <= 2 && seq->color_config.high_bitdepth)
- return 10;
- else
- return 8;
-}
-
-static int fill_picture_parameters(const AVCodecContext *avctx, AVDXVAContext *ctx, const AV1DecContext *h,
- DXVA_PicParams_AV1 *pp)
-{
- int i, j, uses_lr;
- const AV1RawSequenceHeader *seq = h->raw_seq;
- const AV1RawFrameHeader *frame_header = h->raw_frame_header;
- const AV1RawFilmGrainParams *film_grain = &h->cur_frame.film_grain;
-
- unsigned char remap_lr_type[4] = { AV1_RESTORE_NONE, AV1_RESTORE_SWITCHABLE, AV1_RESTORE_WIENER, AV1_RESTORE_SGRPROJ };
- int apply_grain = !(avctx->export_side_data & AV_CODEC_EXPORT_DATA_FILM_GRAIN) && film_grain->apply_grain;
-
- memset(pp, 0, sizeof(*pp));
-
- pp->width = avctx->width;
- pp->height = avctx->height;
-
- pp->max_width = seq->max_frame_width_minus_1 + 1;
- pp->max_height = seq->max_frame_height_minus_1 + 1;
-
- pp->CurrPicTextureIndex = ff_dxva2_get_surface_index(avctx, ctx, h->cur_frame.f);
- pp->superres_denom = frame_header->use_superres ? frame_header->coded_denom + AV1_SUPERRES_DENOM_MIN : AV1_SUPERRES_NUM;
- pp->bitdepth = get_bit_depth_from_seq(seq);
- pp->seq_profile = seq->seq_profile;
-
- /* Tiling info */
- pp->tiles.cols = frame_header->tile_cols;
- pp->tiles.rows = frame_header->tile_rows;
- pp->tiles.context_update_id = frame_header->context_update_tile_id;
-
- for (i = 0; i < pp->tiles.cols; i++)
- pp->tiles.widths[i] = frame_header->width_in_sbs_minus_1[i] + 1;
-
- for (i = 0; i < pp->tiles.rows; i++)
- pp->tiles.heights[i] = frame_header->height_in_sbs_minus_1[i] + 1;
-
- /* Coding tools */
- pp->coding.use_128x128_superblock = seq->use_128x128_superblock;
- pp->coding.intra_edge_filter = seq->enable_intra_edge_filter;
- pp->coding.interintra_compound = seq->enable_interintra_compound;
- pp->coding.masked_compound = seq->enable_masked_compound;
- pp->coding.warped_motion = frame_header->allow_warped_motion;
- pp->coding.dual_filter = seq->enable_dual_filter;
- pp->coding.jnt_comp = seq->enable_jnt_comp;
- pp->coding.screen_content_tools = frame_header->allow_screen_content_tools;
- pp->coding.integer_mv = frame_header->force_integer_mv || !(frame_header->frame_type & 1);
- pp->coding.cdef = seq->enable_cdef;
- pp->coding.restoration = seq->enable_restoration;
- pp->coding.film_grain = seq->film_grain_params_present && !(avctx->export_side_data & AV_CODEC_EXPORT_DATA_FILM_GRAIN);
- pp->coding.intrabc = frame_header->allow_intrabc;
- pp->coding.high_precision_mv = frame_header->allow_high_precision_mv;
- pp->coding.switchable_motion_mode = frame_header->is_motion_mode_switchable;
- pp->coding.filter_intra = seq->enable_filter_intra;
- pp->coding.disable_frame_end_update_cdf = frame_header->disable_frame_end_update_cdf;
- pp->coding.disable_cdf_update = frame_header->disable_cdf_update;
- pp->coding.reference_mode = frame_header->reference_select;
- pp->coding.skip_mode = frame_header->skip_mode_present;
- pp->coding.reduced_tx_set = frame_header->reduced_tx_set;
- pp->coding.superres = frame_header->use_superres;
- pp->coding.tx_mode = frame_header->tx_mode;
- pp->coding.use_ref_frame_mvs = frame_header->use_ref_frame_mvs;
- pp->coding.enable_ref_frame_mvs = seq->enable_ref_frame_mvs;
- pp->coding.reference_frame_update = 1; // 0 for show_existing_frame with key frames, but those are not passed to the hwaccel
-
- /* Format & Picture Info flags */
- pp->format.frame_type = frame_header->frame_type;
- pp->format.show_frame = frame_header->show_frame;
- pp->format.showable_frame = frame_header->showable_frame;
- pp->format.subsampling_x = seq->color_config.subsampling_x;
- pp->format.subsampling_y = seq->color_config.subsampling_y;
- pp->format.mono_chrome = seq->color_config.mono_chrome;
-
- /* References */
- pp->primary_ref_frame = frame_header->primary_ref_frame;
- pp->order_hint = frame_header->order_hint;
- pp->order_hint_bits = seq->enable_order_hint ? seq->order_hint_bits_minus_1 + 1 : 0;
-
- memset(pp->RefFrameMapTextureIndex, 0xFF, sizeof(pp->RefFrameMapTextureIndex));
- for (i = 0; i < AV1_REFS_PER_FRAME; i++) {
- int8_t ref_idx = frame_header->ref_frame_idx[i];
- AVFrame *ref_frame = h->ref[ref_idx].f;
-
- pp->frame_refs[i].width = ref_frame->width;
- pp->frame_refs[i].height = ref_frame->height;
- pp->frame_refs[i].Index = ref_frame->buf[0] ? ref_idx : 0xFF;
-
- /* Global Motion */
- pp->frame_refs[i].wminvalid = h->cur_frame.gm_invalid[AV1_REF_FRAME_LAST + i];
- pp->frame_refs[i].wmtype = h->cur_frame.gm_type[AV1_REF_FRAME_LAST + i];
- for (j = 0; j < 6; ++j) {
- pp->frame_refs[i].wmmat[j] = h->cur_frame.gm_params[AV1_REF_FRAME_LAST + i][j];
- }
- }
- for (i = 0; i < AV1_NUM_REF_FRAMES; i++) {
- AVFrame *ref_frame = h->ref[i].f;
- if (ref_frame->buf[0])
- pp->RefFrameMapTextureIndex[i] = ff_dxva2_get_surface_index(avctx, ctx, ref_frame);
- }
-
- /* Loop filter parameters */
- pp->loop_filter.filter_level[0] = frame_header->loop_filter_level[0];
- pp->loop_filter.filter_level[1] = frame_header->loop_filter_level[1];
- pp->loop_filter.filter_level_u = frame_header->loop_filter_level[2];
- pp->loop_filter.filter_level_v = frame_header->loop_filter_level[3];
- pp->loop_filter.sharpness_level = frame_header->loop_filter_sharpness;
- pp->loop_filter.mode_ref_delta_enabled = frame_header->loop_filter_delta_enabled;
- pp->loop_filter.mode_ref_delta_update = frame_header->loop_filter_delta_update;
- pp->loop_filter.delta_lf_multi = frame_header->delta_lf_multi;
- pp->loop_filter.delta_lf_present = frame_header->delta_lf_present;
- pp->loop_filter.delta_lf_res = frame_header->delta_lf_res;
-
- for (i = 0; i < AV1_TOTAL_REFS_PER_FRAME; i++) {
- pp->loop_filter.ref_deltas[i] = frame_header->loop_filter_ref_deltas[i];
- }
-
- pp->loop_filter.mode_deltas[0] = frame_header->loop_filter_mode_deltas[0];
- pp->loop_filter.mode_deltas[1] = frame_header->loop_filter_mode_deltas[1];
- pp->loop_filter.frame_restoration_type[0] = remap_lr_type[frame_header->lr_type[0]];
- pp->loop_filter.frame_restoration_type[1] = remap_lr_type[frame_header->lr_type[1]];
- pp->loop_filter.frame_restoration_type[2] = remap_lr_type[frame_header->lr_type[2]];
- uses_lr = frame_header->lr_type[0] || frame_header->lr_type[1] || frame_header->lr_type[2];
- pp->loop_filter.log2_restoration_unit_size[0] = uses_lr ? (6 + frame_header->lr_unit_shift) : 8;
- pp->loop_filter.log2_restoration_unit_size[1] = uses_lr ? (6 + frame_header->lr_unit_shift - frame_header->lr_uv_shift) : 8;
- pp->loop_filter.log2_restoration_unit_size[2] = uses_lr ? (6 + frame_header->lr_unit_shift - frame_header->lr_uv_shift) : 8;
-
- /* Quantization */
- pp->quantization.delta_q_present = frame_header->delta_q_present;
- pp->quantization.delta_q_res = frame_header->delta_q_res;
- pp->quantization.base_qindex = frame_header->base_q_idx;
- pp->quantization.y_dc_delta_q = frame_header->delta_q_y_dc;
- pp->quantization.u_dc_delta_q = frame_header->delta_q_u_dc;
- pp->quantization.v_dc_delta_q = frame_header->delta_q_v_dc;
- pp->quantization.u_ac_delta_q = frame_header->delta_q_u_ac;
- pp->quantization.v_ac_delta_q = frame_header->delta_q_v_ac;
- pp->quantization.qm_y = frame_header->using_qmatrix ? frame_header->qm_y : 0xFF;
- pp->quantization.qm_u = frame_header->using_qmatrix ? frame_header->qm_u : 0xFF;
- pp->quantization.qm_v = frame_header->using_qmatrix ? frame_header->qm_v : 0xFF;
-
- /* Cdef parameters */
- pp->cdef.damping = frame_header->cdef_damping_minus_3;
- pp->cdef.bits = frame_header->cdef_bits;
- for (i = 0; i < 8; i++) {
- pp->cdef.y_strengths[i].primary = frame_header->cdef_y_pri_strength[i];
- pp->cdef.y_strengths[i].secondary = frame_header->cdef_y_sec_strength[i];
- pp->cdef.uv_strengths[i].primary = frame_header->cdef_uv_pri_strength[i];
- pp->cdef.uv_strengths[i].secondary = frame_header->cdef_uv_sec_strength[i];
- }
-
- /* Misc flags */
- pp->interp_filter = frame_header->interpolation_filter;
-
- /* Segmentation */
- pp->segmentation.enabled = frame_header->segmentation_enabled;
- pp->segmentation.update_map = frame_header->segmentation_update_map;
- pp->segmentation.update_data = frame_header->segmentation_update_data;
- pp->segmentation.temporal_update = frame_header->segmentation_temporal_update;
- for (i = 0; i < AV1_MAX_SEGMENTS; i++) {
- for (j = 0; j < AV1_SEG_LVL_MAX; j++) {
- pp->segmentation.feature_mask[i].mask |= frame_header->feature_enabled[i][j] << j;
- pp->segmentation.feature_data[i][j] = frame_header->feature_value[i][j];
- }
- }
-
- /* Film grain */
- if (apply_grain) {
- pp->film_grain.apply_grain = 1;
- pp->film_grain.scaling_shift_minus8 = film_grain->grain_scaling_minus_8;
- pp->film_grain.chroma_scaling_from_luma = film_grain->chroma_scaling_from_luma;
- pp->film_grain.ar_coeff_lag = film_grain->ar_coeff_lag;
- pp->film_grain.ar_coeff_shift_minus6 = film_grain->ar_coeff_shift_minus_6;
- pp->film_grain.grain_scale_shift = film_grain->grain_scale_shift;
- pp->film_grain.overlap_flag = film_grain->overlap_flag;
- pp->film_grain.clip_to_restricted_range = film_grain->clip_to_restricted_range;
- pp->film_grain.matrix_coeff_is_identity = (seq->color_config.matrix_coefficients == AVCOL_SPC_RGB);
-
- pp->film_grain.grain_seed = film_grain->grain_seed;
- pp->film_grain.num_y_points = film_grain->num_y_points;
- for (i = 0; i < film_grain->num_y_points; i++) {
- pp->film_grain.scaling_points_y[i][0] = film_grain->point_y_value[i];
- pp->film_grain.scaling_points_y[i][1] = film_grain->point_y_scaling[i];
- }
- pp->film_grain.num_cb_points = film_grain->num_cb_points;
- for (i = 0; i < film_grain->num_cb_points; i++) {
- pp->film_grain.scaling_points_cb[i][0] = film_grain->point_cb_value[i];
- pp->film_grain.scaling_points_cb[i][1] = film_grain->point_cb_scaling[i];
- }
- pp->film_grain.num_cr_points = film_grain->num_cr_points;
- for (i = 0; i < film_grain->num_cr_points; i++) {
- pp->film_grain.scaling_points_cr[i][0] = film_grain->point_cr_value[i];
- pp->film_grain.scaling_points_cr[i][1] = film_grain->point_cr_scaling[i];
- }
- for (i = 0; i < 24; i++) {
- pp->film_grain.ar_coeffs_y[i] = film_grain->ar_coeffs_y_plus_128[i];
- }
- for (i = 0; i < 25; i++) {
- pp->film_grain.ar_coeffs_cb[i] = film_grain->ar_coeffs_cb_plus_128[i];
- pp->film_grain.ar_coeffs_cr[i] = film_grain->ar_coeffs_cr_plus_128[i];
- }
- pp->film_grain.cb_mult = film_grain->cb_mult;
- pp->film_grain.cb_luma_mult = film_grain->cb_luma_mult;
- pp->film_grain.cr_mult = film_grain->cr_mult;
- pp->film_grain.cr_luma_mult = film_grain->cr_luma_mult;
- pp->film_grain.cb_offset = film_grain->cb_offset;
- pp->film_grain.cr_offset = film_grain->cr_offset;
- }
-
- // XXX: Setting the StatusReportFeedbackNumber breaks decoding on some drivers (tested on NVIDIA 457.09)
- // Status Reporting is not used by FFmpeg, hence not providing a number does not cause any issues
- //pp->StatusReportFeedbackNumber = 1 + DXVA_CONTEXT_REPORT_ID(avctx, ctx)++;
- return 0;
-}
-
-static int dxva2_av1_start_frame(AVCodecContext *avctx,
- av_unused const uint8_t *buffer,
- av_unused uint32_t size)
-{
- const AV1DecContext *h = avctx->priv_data;
- AVDXVAContext *ctx = DXVA_CONTEXT(avctx);
- struct av1_dxva2_picture_context *ctx_pic = h->cur_frame.hwaccel_picture_private;
-
- if (!DXVA_CONTEXT_VALID(avctx, ctx))
- return -1;
- av_assert0(ctx_pic);
-
- /* Fill up DXVA_PicParams_AV1 */
- if (fill_picture_parameters(avctx, ctx, h, &ctx_pic->pp) < 0)
- return -1;
-
- ctx_pic->bitstream_size = 0;
- ctx_pic->bitstream = NULL;
- return 0;
-}
-
-static int dxva2_av1_decode_slice(AVCodecContext *avctx,
- const uint8_t *buffer,
- uint32_t size)
-{
- const AV1DecContext *h = avctx->priv_data;
- const AV1RawFrameHeader *frame_header = h->raw_frame_header;
- struct av1_dxva2_picture_context *ctx_pic = h->cur_frame.hwaccel_picture_private;
- struct AV1DXVAContext *ctx = avctx->internal->hwaccel_priv_data;
- void *tmp;
-
- ctx_pic->tile_count = frame_header->tile_cols * frame_header->tile_rows;
-
- /* too many tiles, exceeding all defined levels in the AV1 spec */
- if (ctx_pic->tile_count > MAX_TILES)
- return AVERROR(ENOSYS);
-
- /* Shortcut if all tiles are in the same buffer */
- if (ctx_pic->tile_count == h->tg_end - h->tg_start + 1) {
- ctx_pic->bitstream = (uint8_t *)buffer;
- ctx_pic->bitstream_size = size;
-
- for (uint32_t tile_num = 0; tile_num < ctx_pic->tile_count; tile_num++) {
- ctx_pic->tiles[tile_num].DataOffset = h->tile_group_info[tile_num].tile_offset;
- ctx_pic->tiles[tile_num].DataSize = h->tile_group_info[tile_num].tile_size;
- ctx_pic->tiles[tile_num].row = h->tile_group_info[tile_num].tile_row;
- ctx_pic->tiles[tile_num].column = h->tile_group_info[tile_num].tile_column;
- ctx_pic->tiles[tile_num].anchor_frame = 0xFF;
- }
-
- return 0;
- }
-
- /* allocate an internal buffer */
- tmp = av_fast_realloc(ctx->bitstream_cache, &ctx->bitstream_allocated,
- ctx_pic->bitstream_size + size);
- if (!tmp) {
- return AVERROR(ENOMEM);
- }
- ctx_pic->bitstream = ctx->bitstream_cache = tmp;
-
- memcpy(ctx_pic->bitstream + ctx_pic->bitstream_size, buffer, size);
-
- for (uint32_t tile_num = h->tg_start; tile_num <= h->tg_end; tile_num++) {
- ctx_pic->tiles[tile_num].DataOffset = ctx_pic->bitstream_size + h->tile_group_info[tile_num].tile_offset;
- ctx_pic->tiles[tile_num].DataSize = h->tile_group_info[tile_num].tile_size;
- ctx_pic->tiles[tile_num].row = h->tile_group_info[tile_num].tile_row;
- ctx_pic->tiles[tile_num].column = h->tile_group_info[tile_num].tile_column;
- ctx_pic->tiles[tile_num].anchor_frame = 0xFF;
- }
-
- ctx_pic->bitstream_size += size;
-
- return 0;
-}
-
-static int commit_bitstream_and_slice_buffer(AVCodecContext *avctx,
- DECODER_BUFFER_DESC *bs,
- DECODER_BUFFER_DESC *sc)
-{
- const AV1DecContext *h = avctx->priv_data;
- AVDXVAContext *ctx = DXVA_CONTEXT(avctx);
- struct av1_dxva2_picture_context *ctx_pic = h->cur_frame.hwaccel_picture_private;
- void *dxva_data_ptr;
- uint8_t *dxva_data;
- unsigned dxva_size;
- unsigned padding;
- unsigned type;
-
-#if CONFIG_D3D11VA
- if (ff_dxva2_is_d3d11(avctx)) {
- type = D3D11_VIDEO_DECODER_BUFFER_BITSTREAM;
- if (FAILED(ID3D11VideoContext_GetDecoderBuffer(D3D11VA_CONTEXT(ctx)->video_context,
- D3D11VA_CONTEXT(ctx)->decoder,
- type,
- &dxva_size, &dxva_data_ptr)))
- return -1;
- }
-#endif
-#if CONFIG_DXVA2
- if (avctx->pix_fmt == AV_PIX_FMT_DXVA2_VLD) {
- type = DXVA2_BitStreamDateBufferType;
- if (FAILED(IDirectXVideoDecoder_GetBuffer(DXVA2_CONTEXT(ctx)->decoder,
- type,
- &dxva_data_ptr, &dxva_size)))
- return -1;
- }
-#endif
-
- dxva_data = dxva_data_ptr;
-
- if (ctx_pic->bitstream_size > dxva_size) {
- av_log(avctx, AV_LOG_ERROR, "Bitstream size exceeds hardware buffer\n");
- return -1;
- }
-
- memcpy(dxva_data, ctx_pic->bitstream, ctx_pic->bitstream_size);
-
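- /* Pad the bitstream up to the next 128-byte boundary, capped to the space left in the hardware buffer. */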
- padding = FFMIN(128 - ((ctx_pic->bitstream_size) & 127), dxva_size - ctx_pic->bitstream_size);
- if (padding > 0) {
- memset(dxva_data + ctx_pic->bitstream_size, 0, padding);
- ctx_pic->bitstream_size += padding;
- }
-
-#if CONFIG_D3D11VA
- if (ff_dxva2_is_d3d11(avctx))
- if (FAILED(ID3D11VideoContext_ReleaseDecoderBuffer(D3D11VA_CONTEXT(ctx)->video_context, D3D11VA_CONTEXT(ctx)->decoder, type)))
- return -1;
-#endif
-#if CONFIG_DXVA2
- if (avctx->pix_fmt == AV_PIX_FMT_DXVA2_VLD)
- if (FAILED(IDirectXVideoDecoder_ReleaseBuffer(DXVA2_CONTEXT(ctx)->decoder, type)))
- return -1;
-#endif
-
-#if CONFIG_D3D11VA
- if (ff_dxva2_is_d3d11(avctx)) {
- D3D11_VIDEO_DECODER_BUFFER_DESC *dsc11 = bs;
- memset(dsc11, 0, sizeof(*dsc11));
- dsc11->BufferType = type;
- dsc11->DataSize = ctx_pic->bitstream_size;
- dsc11->NumMBsInBuffer = 0;
-
- type = D3D11_VIDEO_DECODER_BUFFER_SLICE_CONTROL;
- }
-#endif
-#if CONFIG_DXVA2
- if (avctx->pix_fmt == AV_PIX_FMT_DXVA2_VLD) {
- DXVA2_DecodeBufferDesc *dsc2 = bs;
- memset(dsc2, 0, sizeof(*dsc2));
- dsc2->CompressedBufferType = type;
- dsc2->DataSize = ctx_pic->bitstream_size;
- dsc2->NumMBsInBuffer = 0;
-
- type = DXVA2_SliceControlBufferType;
- }
-#endif
-
- return ff_dxva2_commit_buffer(avctx, ctx, sc, type,
- ctx_pic->tiles, sizeof(*ctx_pic->tiles) * ctx_pic->tile_count, 0);
-}
-
-static int dxva2_av1_end_frame(AVCodecContext *avctx)
-{
- const AV1DecContext *h = avctx->priv_data;
- struct av1_dxva2_picture_context *ctx_pic = h->cur_frame.hwaccel_picture_private;
- int ret;
-
- if (ctx_pic->bitstream_size <= 0)
- return -1;
-
- ret = ff_dxva2_common_end_frame(avctx, h->cur_frame.f,
- &ctx_pic->pp, sizeof(ctx_pic->pp),
- NULL, 0,
- commit_bitstream_and_slice_buffer);
-
- return ret;
-}
-
-static int dxva2_av1_uninit(AVCodecContext *avctx)
-{
- struct AV1DXVAContext *ctx = avctx->internal->hwaccel_priv_data;
-
- av_freep(&ctx->bitstream_cache);
- ctx->bitstream_allocated = 0;
-
- return ff_dxva2_decode_uninit(avctx);
-}
-
-#if CONFIG_AV1_DXVA2_HWACCEL
-const AVHWAccel ff_av1_dxva2_hwaccel = {
- .name = "av1_dxva2",
- .type = AVMEDIA_TYPE_VIDEO,
- .id = AV_CODEC_ID_AV1,
- .pix_fmt = AV_PIX_FMT_DXVA2_VLD,
- .init = ff_dxva2_decode_init,
- .uninit = dxva2_av1_uninit,
- .start_frame = dxva2_av1_start_frame,
- .decode_slice = dxva2_av1_decode_slice,
- .end_frame = dxva2_av1_end_frame,
- .frame_params = ff_dxva2_common_frame_params,
- .frame_priv_data_size = sizeof(struct av1_dxva2_picture_context),
- .priv_data_size = sizeof(struct AV1DXVAContext),
-};
-#endif
-
-#if CONFIG_AV1_D3D11VA_HWACCEL
-const AVHWAccel ff_av1_d3d11va_hwaccel = {
- .name = "av1_d3d11va",
- .type = AVMEDIA_TYPE_VIDEO,
- .id = AV_CODEC_ID_AV1,
- .pix_fmt = AV_PIX_FMT_D3D11VA_VLD,
- .init = ff_dxva2_decode_init,
- .uninit = dxva2_av1_uninit,
- .start_frame = dxva2_av1_start_frame,
- .decode_slice = dxva2_av1_decode_slice,
- .end_frame = dxva2_av1_end_frame,
- .frame_params = ff_dxva2_common_frame_params,
- .frame_priv_data_size = sizeof(struct av1_dxva2_picture_context),
- .priv_data_size = sizeof(struct AV1DXVAContext),
-};
-#endif
-
-#if CONFIG_AV1_D3D11VA2_HWACCEL
-const AVHWAccel ff_av1_d3d11va2_hwaccel = {
- .name = "av1_d3d11va2",
- .type = AVMEDIA_TYPE_VIDEO,
- .id = AV_CODEC_ID_AV1,
- .pix_fmt = AV_PIX_FMT_D3D11,
- .init = ff_dxva2_decode_init,
- .uninit = dxva2_av1_uninit,
- .start_frame = dxva2_av1_start_frame,
- .decode_slice = dxva2_av1_decode_slice,
- .end_frame = dxva2_av1_end_frame,
- .frame_params = ff_dxva2_common_frame_params,
- .frame_priv_data_size = sizeof(struct av1_dxva2_picture_context),
- .priv_data_size = sizeof(struct AV1DXVAContext),
-};
-#endif
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ftr.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ftr.c
deleted file mode 100644
index 74a2c10b5c89ae6a9f4b15902ad7c747e6badbf2..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/ftr.c
+++ /dev/null
@@ -1,208 +0,0 @@
-/*
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include "adts_header.h"
-#include "avcodec.h"
-#include "codec_internal.h"
-#include "get_bits.h"
-#include "decode.h"
-
-typedef struct FTRContext {
- AVCodecContext *aac_avctx[64]; // wrapper context for AAC
- int nb_context;
- AVPacket *packet;
- AVFrame *frame;
-} FTRContext;
-
-static av_cold int ftr_init(AVCodecContext *avctx)
-{
- FTRContext *s = avctx->priv_data;
- const AVCodec *codec;
- int ret;
-
- if (avctx->ch_layout.nb_channels > 64 ||
- avctx->ch_layout.nb_channels <= 0)
- return AVERROR(EINVAL);
-
- s->packet = av_packet_alloc();
- if (!s->packet)
- return AVERROR(ENOMEM);
-
- s->frame = av_frame_alloc();
- if (!s->frame)
- return AVERROR(ENOMEM);
-
- s->nb_context = avctx->ch_layout.nb_channels;
-
- codec = avcodec_find_decoder(AV_CODEC_ID_AAC);
- if (!codec)
- return AVERROR_BUG;
-
- for (int i = 0; i < s->nb_context; i++) {
- s->aac_avctx[i] = avcodec_alloc_context3(codec);
- if (!s->aac_avctx[i])
- return AVERROR(ENOMEM);
- ret = avcodec_open2(s->aac_avctx[i], codec, NULL);
- if (ret < 0)
- return ret;
- }
-
- avctx->sample_fmt = s->aac_avctx[0]->sample_fmt;
- if (!av_sample_fmt_is_planar(avctx->sample_fmt))
- return AVERROR(EINVAL);
-
- return 0;
-}
-
-static int ftr_decode_frame(AVCodecContext *avctx, AVFrame *frame,
- int *got_frame, AVPacket *avpkt)
-{
- FTRContext *s = avctx->priv_data;
- GetBitContext gb;
- int ret, ch_offset = 0;
-
- ret = init_get_bits8(&gb, avpkt->data, avpkt->size);
- if (ret < 0)
- return ret;
-
- frame->nb_samples = 0;
-
- for (int i = 0; i < s->nb_context; i++) {
- AVCodecContext *codec_avctx = s->aac_avctx[i];
- GetBitContext gb2 = gb;
- AACADTSHeaderInfo hdr_info;
- int size;
-
- if (get_bits_left(&gb) < 64)
- return AVERROR_INVALIDDATA;
-
- memset(&hdr_info, 0, sizeof(hdr_info));
-
- size = ff_adts_header_parse(&gb2, &hdr_info);
- if (size <= 0 || size * 8 > get_bits_left(&gb))
- return AVERROR_INVALIDDATA;
-
- if (size > s->packet->size) {
- ret = av_grow_packet(s->packet, size - s->packet->size);
- if (ret < 0)
- return ret;
- }
-
- ret = av_packet_make_writable(s->packet);
- if (ret < 0)
- return ret;
-
- memcpy(s->packet->data, avpkt->data + (get_bits_count(&gb) >> 3), size);
- s->packet->size = size;
-
- if (size > 12) {
- uint8_t *buf = s->packet->data;
-
- if (buf[3] & 0x20) {
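- /* Descramble the header: bytes 9, 11 and 12 are inverted in place, bytes 8 and 10 are swapped and inverted. */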
- int tmp = buf[8];
- buf[ 9] = ~buf[9];
- buf[11] = ~buf[11];
- buf[12] = ~buf[12];
- buf[ 8] = ~buf[10];
- buf[10] = ~tmp;
- }
- }
-
- ret = avcodec_send_packet(codec_avctx, s->packet);
- if (ret < 0) {
- av_log(avctx, AV_LOG_ERROR, "Error submitting a packet for decoding\n");
- return ret;
- }
-
- ret = avcodec_receive_frame(codec_avctx, s->frame);
- if (ret < 0)
- return ret;
-
- if (!avctx->sample_rate) {
- avctx->sample_rate = codec_avctx->sample_rate;
- } else {
- if (avctx->sample_rate != codec_avctx->sample_rate)
- return AVERROR_INVALIDDATA;
- }
-
- if (!frame->nb_samples) {
- frame->nb_samples = s->frame->nb_samples;
- if ((ret = ff_get_buffer(avctx, frame, 0)) < 0)
- return ret;
- } else {
- if (frame->nb_samples != s->frame->nb_samples)
- return AVERROR_INVALIDDATA;
- }
-
- skip_bits_long(&gb, size * 8);
-
- if (ch_offset + s->frame->ch_layout.nb_channels > avctx->ch_layout.nb_channels)
- return AVERROR_INVALIDDATA;
-
- if (avctx->sample_fmt != codec_avctx->sample_fmt)
- return AVERROR_INVALIDDATA;
-
- for (int ch = 0; ch < s->frame->ch_layout.nb_channels; ch++)
- memcpy(frame->extended_data[ch_offset + ch],
- s->frame->extended_data[ch],
- av_get_bytes_per_sample(codec_avctx->sample_fmt) * s->frame->nb_samples);
-
- ch_offset += s->frame->ch_layout.nb_channels;
-
- if (ch_offset >= avctx->ch_layout.nb_channels)
- break;
- }
-
- *got_frame = 1;
-
- return get_bits_count(&gb) >> 3;
-}
-
-static void ftr_flush(AVCodecContext *avctx)
-{
- FTRContext *s = avctx->priv_data;
-
- for (int i = 0; i < s->nb_context; i++)
- avcodec_flush_buffers(s->aac_avctx[i]);
-}
-
-static av_cold int ftr_close(AVCodecContext *avctx)
-{
- FTRContext *s = avctx->priv_data;
-
- for (int i = 0; i < s->nb_context; i++)
- avcodec_free_context(&s->aac_avctx[i]);
- av_packet_free(&s->packet);
- av_frame_free(&s->frame);
-
- return 0;
-}
-
-const FFCodec ff_ftr_decoder = {
- .p.name = "ftr",
- .p.long_name = NULL_IF_CONFIG_SMALL("FTR Voice"),
- .p.type = AVMEDIA_TYPE_AUDIO,
- .p.id = AV_CODEC_ID_FTR,
- .init = ftr_init,
- FF_CODEC_DECODE_CB(ftr_decode_frame),
- .close = ftr_close,
- .flush = ftr_flush,
- .priv_data_size = sizeof(FTRContext),
- .p.capabilities = AV_CODEC_CAP_SUBFRAMES | AV_CODEC_CAP_DR1,
- .caps_internal = FF_CODEC_CAP_INIT_CLEANUP,
-};
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hpel_template.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hpel_template.c
deleted file mode 100644
index fccfe7610fe581c1b7b5f5d9d6e90705988fecad..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/hpel_template.c
+++ /dev/null
@@ -1,106 +0,0 @@
-/*
- * Copyright (c) 2000, 2001 Fabrice Bellard
- * Copyright (c) 2002-2004 Michael Niedermayer
-How to Download and Install Car Games APK on Android
-car game apk download
- What are Car Games APK Files?
-The benefits of using APK files
-
-
-The risks of using APK files
-
-
-How to Find and Download Car Games APK Files
-The best sources for car games APK files
-
-extreme car driving simulator apk download
-ultimate car driving simulator apk download
-race master 3d car racing apk download
-real racing 3 car game apk download
-csr racing 2 car game apk download
-need for speed no limits car game apk download
-traffic racer car game apk download
-hill climb racing 2 car game apk download
-asphalt 9 legends car racing game apk download
-drift max pro car drifting game apk download
-car parking multiplayer car game apk download
-city racing 3d car game apk download
-turbo driving racing 3d car game apk download
-mad skills motocross 2 car game apk download
-gt racing 2 the real car experience apk download
-drag racing classic car game apk download
-fastlane road to revenge car game apk download
-beach buggy racing 2 car game apk download
-real drift car racing lite apk download
-rally fury extreme racing car game apk download
-pixel car racer retro style car game apk download
-smashy road wanted 2 car game apk download
-rebel racing realistic car game apk download
-traffic rider motorcycle racing game apk download
-bike race free top motorcycle racing games apk download
-moto x3m bike race game and stunts racing apk download
-bike stunt 3d bike games bike race free apk download
-bike mayhem free best bike game ever apk download
-trial xtreme 4 extreme bike racing champions apk download
-moto traffic race 2 multiplayer bike racing game apk download
-bike blast rush bmx bicycle run and jump games apk download
-bike unchained 2 mountain bike downhill and slopestyle apk download
-dirt bike unchained red bull's new bike game apk download
-downhill masters downhill mountain biking game apk download
-stickman downhill motocross bike and bmx racing game apk download
-stickman downhill monstertruck monster truck racing game apk download
-monster truck destruction real monster truck simulator game apk download
-monster truck go racing games for kids and toddlers apk download
-monster truck demolition derby crash stunts simulator 2021 apk download
-monster truck stunt games mega ramp impossible tracks 3d apk download
-monster truck robot games robot transforming games 2021 apk download
-monster truck police chase cop vs robbers escape games 2021 apk download
-monster truck zombie crusher drive your great vehicle through 20 levels of zombies apocalypse madness and crush them all in this fun and addictive driving and shooting zombie survival...apk download
-The best car games for Android in 2023
-
-
-
-| Name | Description | Download Link |
-| --- | --- | --- |
-| Asphalt 9: Legends | The latest installment of the Asphalt series, featuring stunning graphics, realistic physics, and over 50 licensed cars from top manufacturers. You can race against other players online or offline, customize your cars, and join a club to compete for rewards. | Asphalt 9: Legends APK |
-| Real Racing 3 | A realistic racing simulation game that offers over 250 cars from 33 brands, 19 real tracks, and a variety of game modes. You can compete with friends and rivals in cross-platform multiplayer, join a team, and participate in special events. | Real Racing 3 APK |
-| CarX Drift Racing 2 | A drifting game that lets you experience the thrill of sliding sideways on different tracks. You can customize your cars, tune your engine, and challenge other players in online or offline modes. You can also create your own club and join tournaments. | CarX Drift Racing 2 APK |
-| CSR Racing 2 | A drag racing game that features over 200 licensed cars from top brands, stunning graphics, and realistic physics. You can upgrade your cars, compete with other players in live races, join a crew, and explore a 3D city. | CSR Racing 2 APK |
-| Need for Speed No Limits | A racing game that lets you build your dream car from scratch, using over 1000 customization options. You can race on various tracks, evade the cops, and take down rivals. You can also join events and win exclusive rewards. | Need for Speed No Limits APK |
-| Drive Ahead! | A fun and chaotic game that pits you against your friends or AI opponents in gladiator-style car battles. You can choose from over 100 vehicles, ranging from monster trucks to UFOs, and smash your enemies' heads with various weapons and obstacles. | Drive Ahead! APK |
-| Hill Climb Racing 2 | A physics-based driving game that challenges you to climb hills and overcome obstacles with your vehicle. You can unlock and upgrade over 20 vehicles, customize your driver, and compete with other players in online or offline modes. | Hill Climb Racing 2 APK |
-| Car Parking Multiplayer | A realistic parking simulator that offers over 100 cars, 75 levels, and a huge open world. You can park your car in different scenarios, interact with other players, chat with them, and even exchange cars. | Car Parking Multiplayer APK |
-How to Install Car Games APK Files on Android
-How to enable unknown sources on Android
-
-
- How to use a file manager or a browser to install APK files
-
-
-
- How can I backup car games APK files?
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Supreme Duelist Stickman Unlocked Mod Apk for Android.md b/spaces/congsaPfin/Manga-OCR/logs/Download Supreme Duelist Stickman Unlocked Mod Apk for Android.md
deleted file mode 100644
index e3a8b8318e41f2394622a83c2aa23c1a150d1c54..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download Supreme Duelist Stickman Unlocked Mod Apk for Android.md
+++ /dev/null
@@ -1,137 +0,0 @@
-
-Supreme Duelist Stickman Mod APK Uptodown: A Fun and Exciting Stickman Game
-supreme duelist stickman mod apk uptodown
- What is Supreme Duelist Stickman?
-A multiplayer stickman game with different modes and weapons
-A game of skill and strategy where you have to defeat your opponents
-A game with simple graphics but smooth animations and sound effects
-What is a mod APK and why do you need it?
-A modified version of the original APK file that offers extra features and benefits
-A way to unlock all the characters, weapons, and skins in the game for free
-
-supreme duelist stickman mod apk unlimited money uptodown
-supreme duelist stickman mod apk latest version uptodown
-supreme duelist stickman mod apk android 1 uptodown
-supreme duelist stickman mod apk no ads uptodown
-supreme duelist stickman mod apk all characters unlocked uptodown
-supreme duelist stickman mod apk free shopping uptodown
-supreme duelist stickman mod apk god mode uptodown
-supreme duelist stickman mod apk offline uptodown
-supreme duelist stickman mod apk hack uptodown
-supreme duelist stickman mod apk revdl uptodown
-supreme duelist stickman mod apk rexdl uptodown
-supreme duelist stickman mod apk 2023 uptodown
-supreme duelist stickman mod apk 2.1.8 uptodown
-supreme duelist stickman mod apk 2.1.9 uptodown
-supreme duelist stickman mod apk 2.2.0 uptodown
-supreme duelist stickman mod apk 2.2.1 uptodown
-supreme duelist stickman mod apk 2.2.2 uptodown
-supreme duelist stickman mod apk 2.2.3 uptodown
-supreme duelist stickman mod apk 2.2.4 uptodown
-supreme duelist stickman mod apk 2.2.5 uptodown
-supreme duelist stickman mod apk 2.2.6 uptodown
-supreme duelist stickman mod apk 2.2.7 uptodown
-supreme duelist stickman mod apk 2.2.8 uptodown
-supreme duelist stickman mod apk 3.0.0 uptodown
-supreme duelist stickman mod apk 3.0.1 uptodown
-supreme duelist stickman mod apk 3.0.2 uptodown
-supreme duelist stickman mod apk 3.0.3 uptodown
-supreme duelist stickman mod apk 3.0.4 uptodown
-supreme duelist stickman mod apk 3.0.5 uptodown
-supreme duelist stickman mod apk 3.0.6 uptodown
-supreme duelist stickman mod apk 3.0.7 uptodown
-supreme duelist stickman mod apk 3.0.8 uptodown
-supreme duelist stickman mod apk 3.0.9 uptodown
-supreme duelist stickman mod apk 3.1.0 uptodown
-supreme duelist stickman mod apk 3.1.1 uptodown
-supreme duelist stickman mod apk 3.1.2 uptodown
-supreme duelist stickman mod apk 3.1.3 uptodown
-supreme duelist stickman mod apk 3.1.4 uptodown
-supreme duelist stickman mod apk 3.1.5 uptodown
-supreme duelist stickman mod apk 3.1.6 uptodown
-supreme duelist stickman mod apk 3.1.7 uptodown
-supreme duelist stickman mod apk 3.1.8 uptodown
-supreme duelist stickman mod apk 3.1.9 uptodown
-supreme duelist stickman mod apk 3.2.0 uptodown
-supreme duelist stickman mod apk 3.2.1 uptodown
-supreme duelist stickman mod apk 3.2.2 uptodown
-supreme duelist stickman mod apk 3.2.3 uptodown
-supreme duelist stickman mod apk 3.2.4 uptodown
-A way to enjoy the game without ads or in-app purchases
-How to download and install Supreme Duelist Stickman Mod APK Uptodown?
-The steps to download the mod APK file from Uptodown website
-
-
-The steps to install the mod APK file on your Android device
-
-
-The steps to enable unknown sources and permissions on your device
-
-
- What are the features and advantages of Supreme Duelist Stickman Mod APK Uptodown?
-The features of the mod APK file such as unlimited coins, gems, and energy
-
-
- The advantages of the mod APK file such as no root required, no virus or malware, and easy to use
-
-
- The comparison of the mod APK file with the original APK file in terms of performance and quality
- Conclusion
-A summary of the main points of the article
-A recommendation to try Supreme Duelist Stickman Mod APK Uptodown for a fun and exciting stickman game experience
-A call to action to download the mod APK file from Uptodown website
-FAQs
-What is Supreme Duelist Stickman?
-What is a mod APK?
-How to download Supreme Duelist Stickman Mod APK Uptodown?
-What are the features of Supreme Duelist Stickman Mod APK Uptodown?
-Is Supreme Duelist Stickman Mod APK Uptodown safe and secure?
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/GTA 5 94 GB Download How to Optimize Your PC for the Best Performance.md b/spaces/congsaPfin/Manga-OCR/logs/GTA 5 94 GB Download How to Optimize Your PC for the Best Performance.md
deleted file mode 100644
index 9180439e32320b3e4a703445546d31f4baa9133d..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/GTA 5 94 GB Download How to Optimize Your PC for the Best Performance.md
+++ /dev/null
@@ -1,135 +0,0 @@
-
-GTA 5 94 GB Download: Everything You Need to Know
-What is GTA 5 and why is it so popular?
-GTA 5 is an open-world action-adventure game by Rockstar Games
-gta 5 94 gb download
-GTA 5 has a rich story mode, a vast online multiplayer mode, and stunning graphics
-Why is GTA 5 94 GB download size and how does it vary across platforms?
-GTA 5 file size depends on the version, platform, and installation method of the game
-GTA 5 file size ranges from 72 GB to more than 94 GB depending on the platform
-
-
-
-| Platform | File Size |
-| --- | --- |
-| PC | 94 GB (download) or 72 GB (disk) |
-| PlayStation 4 | 76 GB (download) or 50 GB (disk) |
-| PlayStation 5 | 80 GB (download) or 50 GB (disk) |
-| Xbox One | 76 GB (download) or 50 GB (disk) |
-| Xbox Series X/S | 80 GB (download) or 50 GB (disk) |
-How to download GTA 5 and what are the requirements?
-GTA 5 can be downloaded from various sources depending on the platform
-GTA 5 download sources for PC
-
-
-GTA 5 download sources for PlayStation
-
-
-GTA 5 download sources for Xbox
-
-
-GTA 5 requires a lot of disk space, RAM, and processing power to run smoothly
-GTA 5 minimum and recommended requirements for PC
-
-
-
-| | Minimum Requirements | Recommended Requirements |
-| --- | --- | --- |
-| CPU | Intel Core 2 Quad CPU Q6600 @ 2.40GHz / AMD Phenom 9850 Quad-Core Processor @ 2 GHz | Intel Core i5 3470 @ 3.2GHz / AMD X8 FX-8350 @ 4GHz |
-| RAM | 4 GB | 8 GB |
-| GPU | NVIDIA 9800 GT 1GB / AMD HD 4870 1GB | NVIDIA GTX 660 2GB / AMD HD7870 2GB |
-| OS | Windows 10, 8.1, 8, 7 (64-bit) | Windows 10, 8.1, 8, 7 (64-bit) |
-| Disk Space | 72 GB | 94 GB |
-GTA 5 minimum and recommended requirements for PlayStation
-
-
-
-| | Minimum Requirements | Recommended Requirements |
-| --- | --- | --- |
-| Platform | PlayStation 3 | PlayStation 5 |
-| CPU | Cell Broadband Engine @ 3.2GHz | AMD Zen 2-based CPU @ 3.5GHz |
-| RAM | 256 MB + 256 MB VRAM | 16 GB GDDR6 |
-| GPU | NVIDIA RSX @ 550MHz | AMD RDNA 2-based GPU @ 2.23GHz |
-| Disk Space | 50 GB | 80 GB |
-GTA 5 minimum and recommended requirements for Xbox
-
-
-
-| | Minimum Requirements | Recommended Requirements |
-| --- | --- | --- |
-| Platform | Xbox 360 | Xbox Series X/S |
-| CPU | Xenon @ 3.2GHz | AMD Zen 2-based CPU @ 3.6GHz / 3.4GHz |
-| RAM | 512 MB + VRAM | 16 GB GDDR6 + VRAM |
-| GPU | Xenos @ 500MHz | AMD RDNA 2-based GPU @ 1.825GHz / 1.565GHz |
-| Disk Space | 50 GB | 80 GB |
-Conclusion and FAQs
-
-gta 5 94 gb download pc
-gta 5 94 gb download ps4
-gta 5 94 gb download xbox one
-gta 5 94 gb download free
-gta 5 94 gb download size
-gta 5 94 gb download time
-gta 5 94 gb download link
-gta 5 94 gb download torrent
-gta 5 94 gb download highly compressed
-gta 5 94 gb download from rockstar games
-gta 5 94 gb download steam
-gta 5 94 gb download microsoft store
-gta 5 94 gb download ps5
-gta 5 94 gb download xbox series x
-gta 5 94 gb download full version
-gta 5 94 gb download crack
-gta 5 94 gb download without internet
-gta 5 94 gb download offline
-gta 5 94 gb download slow
-gta 5 94 gb download speed
-gta 5 94 gb download error
-gta 5 94 gb download fix
-gta 5 94 gb download update
-gta 5 94 gb download latest version
-gta 5 94 gb download requirements
-gta 5 94 gb download mods
-gta 5 94 gb download cheats
-gta 5 94 gb download gameplay
-gta 5 94 gb download review
-gta 5 94 gb download tips and tricks
-gta 5 94 gb download guide
-gta 5 94 gb download walkthrough
-gta v file size for all platforms [newest update]
-how to install the epic games launcher for the free GTA V offer?
-how to reduce the GTA V file size on PC?
-how to increase the GTA V download speed on PC?
-how to resume the GTA V download on PC?
-how to transfer the GTA V files from one PC to another?
-how to verify the GTA V files on PC?
-how to install the GTA V updates on PC?
-how to uninstall the GTA V files on PC?
-how to play GTA V online on PC?
-how to fix the GTA V launcher error on PC?
-how to optimize the GTA V settings on PC?
-how to run GTA V in windowed mode on PC?
-how to use a controller for GTA V on PC?
-how to change the language of GTA V on PC?
-how to take screenshots in GTA V on PC?
-
-
-Q: How long does it take to download GTA 5?
-A: The download time of GTA 5 depends on your internet speed, the file size of the game, and the source from which you are downloading it. Generally, it can take anywhere from a few hours to a few days to download GTA 5.
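-As a rough example, on a steady 100 Mbps connection (about 12.5 MB/s), the 94 GB PC download works out to roughly 94,000 MB / 12.5 MB/s ≈ 7,500 seconds, or a little over two hours; on a 20 Mbps connection the same download takes closer to ten hours.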
-Q: Can I play GTA Online without downloading GTA 5?
-A: No, you cannot play GTA Online without downloading GTA 5. GTA Online is a part of GTA 5 and requires the base game to run.
-Q: Can I reduce the file size of GTA 5?
-A: There is no official way to reduce the file size of GTA 5. Some unofficial methods involve deleting files or folders from the game directory or using compression tools, but they are not recommended, as they may cause errors or glitches in the game. Always back up your game files before trying any unofficial method.
-Q: Can I play GTA 5 on my mobile device?
-A: No, you cannot play GTA 5 on your mobile device. GTA 5 is only available for PC, PlayStation, and Xbox. Some unofficial apps and websites claim to offer GTA 5 for mobile devices, but they are either fake or malicious; avoid them at all costs.
-Q: Can I transfer my GTA 5 progress from one platform to another?
-A: Yes, but only for GTA Online. You need a Rockstar Games Social Club account linked to your platform of choice; you can then transfer your GTA Online character and progress to another platform. Note that you can only do this once per account.
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/TikTok APK The Best Way to Download and Use the New Version Without VPN.md b/spaces/congsaPfin/Manga-OCR/logs/TikTok APK The Best Way to Download and Use the New Version Without VPN.md
deleted file mode 100644
index 83f523c08a718f2f7ee262af8cf0a00b76be4921..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/TikTok APK The Best Way to Download and Use the New Version Without VPN.md
+++ /dev/null
@@ -1,111 +0,0 @@
-
-TikTok APK Download New Version 2022 Without VPN
-tiktok apk download new version 2022 without vpn
-What is TikTok and why is it so popular?
-TikTok is a social network for creating and sharing short videos
-TikTok has millions of users and content creators worldwide
-TikTok offers various features and options to make videos fun and engaging
-
-how to download tiktok new update 2022 without vpn android
-tiktok apk 2022 latest version free download no vpn required
-download tiktok for android 2022 new version without using vpn
-tiktok 2022 update apk download free without vpn for android
-tiktok latest version 2022 apk free download no vpn needed
-how to get tiktok new version 2022 without vpn on android
-download tiktok apk 2022 latest update without vpn free
-tiktok for android 2022 new version free download no vpn
-tiktok apk download without vpn 2022 latest version free
-how to install tiktok new version 2022 without vpn on android
-tiktok 2022 latest update apk free download without vpn
-tiktok new version 2022 apk download free no vpn for android
-how to download tiktok latest version 2022 without vpn on android
-tiktok apk free download 2022 new version without vpn android
-download tiktok new version 2022 apk without vpn for free
-tiktok latest update 2022 apk download free no vpn required
-how to update tiktok to new version 2022 without vpn on android
-tiktok apk 2022 new version free download without vpn android
-download tiktok latest version 2022 apk without vpn for free
-tiktok new update 2022 apk free download no vpn needed
-how to download tiktok for android 2022 new version without vpn
-tiktok apk without vpn 2022 latest version free download
-download tiktok for android new version 2022 without using vpn
-tiktok latest version apk download 2022 without vpn free
-how to get tiktok for android new version 2022 without vpn
-tiktok apk download new update 2022 without vpn for free
-download tiktok latest update 2022 apk without vpn free
-tiktok new version apk free download 2022 without vpn android
-how to install tiktok for android new version 2022 without vpn
-tiktok apk free download without vpn 2022 latest version
-download tiktok new version apk 2022 without using vpn for free
-tiktok latest update apk free download 2022 no vpn required
-how to update tiktok for android to new version 2022 without vpn
-tiktok apk download free no vpn 2022 latest version android
-download tiktok latest version for android 2022 without using vpn
-tiktok new version apk download no vpn required 2022 free
-how to get the latest version of tiktok on android without vpn in 2022
-tiktok apk download for free without using vpn in 2022 latest version
-download the newest version of tiktok for android in 2022 without a vpn
-tiktok new update in 2022 apk free download for android no need for a vpn
-how to install the latest update of tiktok on android in 2022 without a vpn
-tiktok apk for android in 2022 newest version free download no need of a vpn
-download the latest update of tiktok for android in 2022 no use of a vpn
-tiktok newest version in 2022 apk free download for android no need of a vpn
-
-Why do you need to download TikTok APK without VPN?
-TikTok is banned or restricted in some countries due to security or political reasons
-VPNs can slow down your internet connection and affect your video quality
-VPNs can also expose your personal data and online activity to third parties
-How to download TikTok APK new version 2022 without VPN?
-Find a reliable and safe source for downloading the APK file
-Enable unknown sources on your Android device settings
-
-
-Install the APK file and launch the app
-
-
-What are the benefits of downloading TikTok APK new version 2022 without VPN?
-You can access all the features and content of TikTok without any restrictions or limitations
-You can enjoy faster and smoother video streaming and uploading
-You can protect your privacy and security online
-Conclusion
-FAQs
-What is an APK file?
-What is a VPN service?
-Why is TikTok banned or restricted in some countries?
-How can I update my TikTok app after downloading the APK file?
-Is it legal to download TikTok APK without VPN?
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Traffic Racer 3D The Ultimate Car Racing Simulation Game.md b/spaces/congsaPfin/Manga-OCR/logs/Traffic Racer 3D The Ultimate Car Racing Simulation Game.md
deleted file mode 100644
index b847a37d27c5b502192e55498af1326bc2780cb0..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Traffic Racer 3D The Ultimate Car Racing Simulation Game.md
+++ /dev/null
@@ -1,175 +0,0 @@
-
-Traffic Racer 3D Game Download: A Guide for Racing Fans
-traffic racer 3d game download
-How to Download Traffic Racer 3D on Different Devices
-Android
-
-How to play Traffic Racer 3D on PC with emulator
-Traffic Racer 3D APK free download latest version
-Best cars and upgrades in Traffic Racer 3D
-Traffic Racer 3D tips and tricks to score high
-Traffic Racer 3D vs Traffic Racer: which one is better?
-Traffic Racer 3D online leaderboards and achievements
-Traffic Racer 3D review: a fun and addictive racing game
-Traffic Racer 3D mod APK unlimited money and gems
-Traffic Racer 3D cheats and hacks for Android and PC
-Traffic Racer 3D gameplay video and screenshots
-Traffic Racer 3D alternatives: other racing games to try
-Traffic Racer 3D support and feedback: how to contact the developer
-Traffic Racer 3D data safety and privacy policy
-Traffic Racer 3D update: what's new in the latest version?
-Traffic Racer 3D for iOS: is it available on iPhone and iPad?
-Traffic Racer 3D features: stunning 3D graphics and realistic car handling
-Traffic Racer 3D modes: endless, two-way, time trial, police chase and free ride
-Traffic Racer 3D environments: suburb, desert, snowy, rainy and city night
-Traffic Racer 3D download size and system requirements
-How to install Traffic Racer 3D on Android devices
-How to uninstall Traffic Racer 3D from PC or Android
-How to backup and restore Traffic Racer 3D data on Android or PC
-How to fix Traffic Racer 3D not working or crashing issues
-How to change language and settings in Traffic Racer 3D
-How to connect Traffic Racer 3D to Facebook or Google Play Games
-How to earn cash and coins in Traffic Racer 3D fast and easy
-How to unlock all cars and wheels in Traffic Racer 3D
-How to customize your car color and paint in Traffic Racer 3D
-How to overtake cars closely and get bonus scores in Traffic Racer 3D
-How to drive in opposite direction and get extra cash in Traffic Racer 3D
-How to avoid traffic accidents and collisions in Traffic Racer 3D
-How to use tilt or touch controls in Traffic Racer 3D
-How to use gas button and brake button in Traffic Racer 3D
-How to mute or adjust sound effects and music in Traffic Racer 3D
-How to pause or resume the game in Traffic Racer 3D
-How to restart or quit the game in Traffic Racer 3D
-How to view your stats and records in Traffic Racer 3D
-How to access the shop and buy new cars or upgrades in Traffic Racer 3D
-How to watch ads or make in-app purchases in Traffic Racer 3D
-
-iOS
-
-
-Windows
-
-
-Chrome
-
-
- How to Play Traffic Racer 3D and Enjoy the Thrill of Racing
-Game Modes
-
-
- Controls
-
-
-Tips and Tricks
-
-
- How to Compare Your Performance with Other Players in Traffic Racer 3D
-Leaderboards
-Achievements
-How to Customize Your Car and Make It Stand Out in Traffic Racer 3D
-Car Selection
-Car Customization
-How to Enjoy the Stunning Graphics and Sound Effects of Traffic Racer 3D
-Environments
-Sound Effects
-Conclusion: Why Traffic Racer 3D is One of the Best Racing Games on the Market
-
-
-FAQs: Frequently Asked Questions about Traffic Racer 3D
-Q1: Is Traffic Racer 3D free to play?
-Q2: How can I remove ads from Traffic Racer 3D?
-Q3: How can I contact the developer of Traffic Racer 3D?
-Q4: What are the minimum system requirements for Traffic Racer 3D?
-
-
- Q5: Can I play Traffic Racer 3D offline?
-
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/YT Music MOD APK Download The Best Way to Listen to Music Online.md b/spaces/congsaPfin/Manga-OCR/logs/YT Music MOD APK Download The Best Way to Listen to Music Online.md
deleted file mode 100644
index d27057818723d1094a65a92332b06bced0f91e35..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/YT Music MOD APK Download The Best Way to Listen to Music Online.md
+++ /dev/null
@@ -1,135 +0,0 @@
-
-YT Music Mod APK Download: What You Need to Know
-y t music mod apk download
- What is YT Music and why use it?
-YT Music features and benefits
-
-
- YT Music Premium vs YT Music Free
-
-
- What is YT Music Mod APK and how to get it?
-YT Music Mod APK features and advantages
-
-
- YT Music Mod APK installation guide
-
-y t music mod apk download no ads
-y t music mod apk download offline
-y t music mod apk download premium
-y t music mod apk download free
-y t music mod apk download for android
-y t music mod apk download 2021
-y t music mod apk download unlimited downloads
-y t music mod apk download background play
-y t music mod apk download without root
-y t music mod apk download for pc
-y t music mod apk download jalan tikus[^1^]
-y t music mod apk download rexdl
-y t music mod apk download revdl
-y t music mod apk download happymod
-y t music mod apk download apkpure
-y t music mod apk download uptodown
-y t music mod apk download android 1
-y t music mod apk download android 11
-y t music mod apk download android 10
-y t music mod apk download android 9
-y t music mod apk download android 8
-y t music mod apk download android 7
-y t music mod apk download android 6
-y t music mod apk download android 5
-y t music mod apk download ios
-y t music mod apk download iphone
-y t music mod apk download ipad
-y t music mod apk download mac
-y t music mod apk download windows 10
-y t music mod apk download windows 7
-y t music mod apk download windows 8.1
-y t music mod apk download linux
-y t music mod apk download chromebook
-y t music mod apk download bluestacks
-y t music mod apk download nox player
-y t music mod apk download ld player
-y t music mod apk download memu play
-y t music mod apk download smart tv
-y t music mod apk download firestick
-y t music mod apk download roku
-y t music mod apk download chromecast
-y t music mod apk download carplay
-y t music mod apk download wear os
-y t music mod apk download watch os
-
-What are the risks and alternatives of YT Music Mod APK?
-Risks of using YT Music Mod APK
-
-
- Alternatives to YT Music Mod APK
-
-
-
-| Name | Description | Price |
-| --- | --- | --- |
-| Spotify | A popular music streaming service that offers over 70 million songs, podcasts, playlists, and more. You can also create and share your own music and podcasts with Spotify Studio. | Free with ads, or $9.99 per month for Spotify Premium. |
-| SoundCloud | A platform that lets you discover and stream millions of songs, podcasts, and audio content from independent artists and creators. You can also upload and share your own sounds with the community. | Free with ads, or $9.99 per month for SoundCloud Go+. |
-| Pandora | A personalized music streaming service that creates custom radio stations based on your favorite artists, songs, genres, and moods. You can also access podcasts, comedy, news, and more. | Free with ads, or $4.99 per month for Pandora Plus or $9.99 per month for Pandora Premium. |
-| Deezer | A music streaming service that offers over 73 million songs, podcasts, playlists, and more. You can also enjoy live radio stations, lyrics, and recommendations from editors and experts. | Free with ads, or $9.99 per month for Deezer Premium or $14.99 per month for Deezer Family. |
-| Apple Music | A music streaming service that offers over 75 million songs, podcasts, playlists, and more. You can also access exclusive content, live radio stations, and music videos. | $9.99 per month for Apple Music Individual, or $14.99 per month for Apple Music Family. |
-Conclusion
-FAQs
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/datasets/pipelines/test_time_aug.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/datasets/pipelines/test_time_aug.py
deleted file mode 100644
index fb781d928ed71aceb1abcaef44d3889c00d2261e..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/datasets/pipelines/test_time_aug.py
+++ /dev/null
@@ -1,133 +0,0 @@
-import warnings
-
-import annotator.mmpkg.mmcv as mmcv
-
-from ..builder import PIPELINES
-from .compose import Compose
-
-
-@PIPELINES.register_module()
-class MultiScaleFlipAug(object):
- """Test-time augmentation with multiple scales and flipping.
-
- An example configuration is as followed:
-
- .. code-block::
-
- img_scale=(2048, 1024),
- img_ratios=[0.5, 1.0],
- flip=True,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ]
-
- After MultiScaleFLipAug with above configuration, the results are wrapped
- into lists of the same length as followed:
-
- .. code-block::
-
- dict(
- img=[...],
- img_shape=[...],
- scale=[(1024, 512), (1024, 512), (2048, 1024), (2048, 1024)]
- flip=[False, True, False, True]
- ...
- )
-
- Args:
- transforms (list[dict]): Transforms to apply in each augmentation.
- img_scale (None | tuple | list[tuple]): Images scales for resizing.
- img_ratios (float | list[float]): Image ratios for resizing
- flip (bool): Whether apply flip augmentation. Default: False.
- flip_direction (str | list[str]): Flip augmentation directions,
- options are "horizontal" and "vertical". If flip_direction is list,
- multiple flip augmentations will be applied.
- It has no effect when flip == False. Default: "horizontal".
- """
-
- def __init__(self,
- transforms,
- img_scale,
- img_ratios=None,
- flip=False,
- flip_direction='horizontal'):
- self.transforms = Compose(transforms)
- if img_ratios is not None:
- img_ratios = img_ratios if isinstance(img_ratios,
- list) else [img_ratios]
- assert mmcv.is_list_of(img_ratios, float)
- if img_scale is None:
- # mode 1: given img_scale=None and a range of image ratio
- self.img_scale = None
- assert mmcv.is_list_of(img_ratios, float)
- elif isinstance(img_scale, tuple) and mmcv.is_list_of(
- img_ratios, float):
- assert len(img_scale) == 2
- # mode 2: given a scale and a range of image ratio
- self.img_scale = [(int(img_scale[0] * ratio),
- int(img_scale[1] * ratio))
- for ratio in img_ratios]
- else:
- # mode 3: given multiple scales
- self.img_scale = img_scale if isinstance(img_scale,
- list) else [img_scale]
- assert mmcv.is_list_of(self.img_scale, tuple) or self.img_scale is None
- self.flip = flip
- self.img_ratios = img_ratios
- self.flip_direction = flip_direction if isinstance(
- flip_direction, list) else [flip_direction]
- assert mmcv.is_list_of(self.flip_direction, str)
- if not self.flip and self.flip_direction != ['horizontal']:
- warnings.warn(
- 'flip_direction has no effect when flip is set to False')
- if (self.flip
- and not any([t['type'] == 'RandomFlip' for t in transforms])):
- warnings.warn(
- 'flip has no effect when RandomFlip is not in transforms')
-
- def __call__(self, results):
- """Call function to apply test time augment transforms on results.
-
- Args:
- results (dict): Result dict contains the data to transform.
-
- Returns:
- dict[str: list]: The augmented data, where each value is wrapped
- into a list.
- """
-
- aug_data = []
- if self.img_scale is None and mmcv.is_list_of(self.img_ratios, float):
- h, w = results['img'].shape[:2]
- img_scale = [(int(w * ratio), int(h * ratio))
- for ratio in self.img_ratios]
- else:
- img_scale = self.img_scale
- flip_aug = [False, True] if self.flip else [False]
- for scale in img_scale:
- for flip in flip_aug:
- for direction in self.flip_direction:
- _results = results.copy()
- _results['scale'] = scale
- _results['flip'] = flip
- _results['flip_direction'] = direction
- data = self.transforms(_results)
- aug_data.append(data)
- # list of dict to dict of list
- aug_data_dict = {key: [] for key in aug_data[0]}
- for data in aug_data:
- for key, val in data.items():
- aug_data_dict[key].append(val)
- return aug_data_dict
-
- def __repr__(self):
- repr_str = self.__class__.__name__
- repr_str += f'(transforms={self.transforms}, '
- repr_str += f'img_scale={self.img_scale}, flip={self.flip}, '
- repr_str += f'flip_direction={self.flip_direction})'
- return repr_str
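-
-
-# Minimal usage sketch (assumptions for illustration: numpy is available and
-# this package's 'Resize' and 'RandomFlip' transforms are registered in
-# PIPELINES, as they are in the sibling transforms module). Two image ratios
-# times two flip states yield four augmented copies of the input image.
-if __name__ == '__main__':
-    import numpy as np
-
-    tta = MultiScaleFlipAug(
-        transforms=[dict(type='Resize', keep_ratio=True),
-                    dict(type='RandomFlip')],
-        img_scale=(2048, 1024),
-        img_ratios=[0.5, 1.0],
-        flip=True)
-    out = tta({'img': np.zeros((512, 1024, 3), dtype=np.uint8)})
-    print(len(out['img']))  # -> 4 augmented images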
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/necks/fpn.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/necks/fpn.py
deleted file mode 100644
index a53b2a69500f8c2edb835abc3ff0ccc2173d1fb1..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmseg/models/necks/fpn.py
+++ /dev/null
@@ -1,212 +0,0 @@
-import torch.nn as nn
-import torch.nn.functional as F
-from annotator.uniformer.mmcv.cnn import ConvModule, xavier_init
-
-from ..builder import NECKS
-
-
-@NECKS.register_module()
-class FPN(nn.Module):
- """Feature Pyramid Network.
-
- This is an implementation of - Feature Pyramid Networks for Object
- Detection (https://arxiv.org/abs/1612.03144)
-
- Args:
- in_channels (List[int]): Number of input channels per scale.
- out_channels (int): Number of output channels (used at each scale)
- num_outs (int): Number of output scales.
- start_level (int): Index of the start input backbone level used to
- build the feature pyramid. Default: 0.
- end_level (int): Index of the end input backbone level (exclusive) to
- build the feature pyramid. Default: -1, which means the last level.
- add_extra_convs (bool | str): If bool, it decides whether to add conv
- layers on top of the original feature maps. Default to False.
- If True, its actual mode is specified by `extra_convs_on_inputs`.
- If str, it specifies the source feature map of the extra convs.
- Only the following options are allowed
-
- - 'on_input': Last feat map of neck inputs (i.e. backbone feature).
- - 'on_lateral': Last feature map after lateral convs.
- - 'on_output': The last output feature map after fpn convs.
- extra_convs_on_inputs (bool, deprecated): Whether to apply extra convs
- on the original feature from the backbone. If True,
- it is equivalent to `add_extra_convs='on_input'`. If False, it is
- equivalent to set `add_extra_convs='on_output'`. Default to True.
- relu_before_extra_convs (bool): Whether to apply relu before the extra
- conv. Default: False.
- no_norm_on_lateral (bool): Whether to apply norm on lateral.
- Default: False.
- conv_cfg (dict): Config dict for convolution layer. Default: None.
- norm_cfg (dict): Config dict for normalization layer. Default: None.
- act_cfg (str): Config dict for activation layer in ConvModule.
- Default: None.
- upsample_cfg (dict): Config dict for interpolate layer.
- Default: `dict(mode='nearest')`
-
- Example:
- >>> import torch
- >>> in_channels = [2, 3, 5, 7]
- >>> scales = [340, 170, 84, 43]
- >>> inputs = [torch.rand(1, c, s, s)
- ... for c, s in zip(in_channels, scales)]
- >>> self = FPN(in_channels, 11, len(in_channels)).eval()
- >>> outputs = self.forward(inputs)
- >>> for i in range(len(outputs)):
- ... print(f'outputs[{i}].shape = {outputs[i].shape}')
- outputs[0].shape = torch.Size([1, 11, 340, 340])
- outputs[1].shape = torch.Size([1, 11, 170, 170])
- outputs[2].shape = torch.Size([1, 11, 84, 84])
- outputs[3].shape = torch.Size([1, 11, 43, 43])
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- num_outs,
- start_level=0,
- end_level=-1,
- add_extra_convs=False,
- extra_convs_on_inputs=False,
- relu_before_extra_convs=False,
- no_norm_on_lateral=False,
- conv_cfg=None,
- norm_cfg=None,
- act_cfg=None,
- upsample_cfg=dict(mode='nearest')):
- super(FPN, self).__init__()
- assert isinstance(in_channels, list)
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.num_ins = len(in_channels)
- self.num_outs = num_outs
- self.relu_before_extra_convs = relu_before_extra_convs
- self.no_norm_on_lateral = no_norm_on_lateral
- self.fp16_enabled = False
- self.upsample_cfg = upsample_cfg.copy()
-
- if end_level == -1:
- self.backbone_end_level = self.num_ins
- assert num_outs >= self.num_ins - start_level
- else:
- # if end_level < inputs, no extra level is allowed
- self.backbone_end_level = end_level
- assert end_level <= len(in_channels)
- assert num_outs == end_level - start_level
- self.start_level = start_level
- self.end_level = end_level
- self.add_extra_convs = add_extra_convs
- assert isinstance(add_extra_convs, (str, bool))
- if isinstance(add_extra_convs, str):
- # Extra_convs_source choices: 'on_input', 'on_lateral', 'on_output'
- assert add_extra_convs in ('on_input', 'on_lateral', 'on_output')
- elif add_extra_convs: # True
- if extra_convs_on_inputs:
- # For compatibility with previous release
- # TODO: deprecate `extra_convs_on_inputs`
- self.add_extra_convs = 'on_input'
- else:
- self.add_extra_convs = 'on_output'
-
- self.lateral_convs = nn.ModuleList()
- self.fpn_convs = nn.ModuleList()
-
- for i in range(self.start_level, self.backbone_end_level):
- l_conv = ConvModule(
- in_channels[i],
- out_channels,
- 1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg if not self.no_norm_on_lateral else None,
- act_cfg=act_cfg,
- inplace=False)
- fpn_conv = ConvModule(
- out_channels,
- out_channels,
- 3,
- padding=1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg,
- inplace=False)
-
- self.lateral_convs.append(l_conv)
- self.fpn_convs.append(fpn_conv)
-
- # add extra conv layers (e.g., RetinaNet)
- extra_levels = num_outs - self.backbone_end_level + self.start_level
- if self.add_extra_convs and extra_levels >= 1:
- for i in range(extra_levels):
- if i == 0 and self.add_extra_convs == 'on_input':
- in_channels = self.in_channels[self.backbone_end_level - 1]
- else:
- in_channels = out_channels
- extra_fpn_conv = ConvModule(
- in_channels,
- out_channels,
- 3,
- stride=2,
- padding=1,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg,
- inplace=False)
- self.fpn_convs.append(extra_fpn_conv)
-
- # default init_weights for conv(msra) and norm in ConvModule
- def init_weights(self):
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- xavier_init(m, distribution='uniform')
-
- def forward(self, inputs):
- assert len(inputs) == len(self.in_channels)
-
- # build laterals
- laterals = [
- lateral_conv(inputs[i + self.start_level])
- for i, lateral_conv in enumerate(self.lateral_convs)
- ]
-
- # build top-down path
- used_backbone_levels = len(laterals)
- for i in range(used_backbone_levels - 1, 0, -1):
- # In some cases, fixing `scale factor` (e.g. 2) is preferred, but
- # it cannot co-exist with `size` in `F.interpolate`.
- if 'scale_factor' in self.upsample_cfg:
- laterals[i - 1] += F.interpolate(laterals[i],
- **self.upsample_cfg)
- else:
- prev_shape = laterals[i - 1].shape[2:]
- laterals[i - 1] += F.interpolate(
- laterals[i], size=prev_shape, **self.upsample_cfg)
-
- # build outputs
- # part 1: from original levels
- outs = [
- self.fpn_convs[i](laterals[i]) for i in range(used_backbone_levels)
- ]
- # part 2: add extra levels
- if self.num_outs > len(outs):
- # use max pool to get more levels on top of outputs
- # (e.g., Faster R-CNN, Mask R-CNN)
- if not self.add_extra_convs:
- for i in range(self.num_outs - used_backbone_levels):
- outs.append(F.max_pool2d(outs[-1], 1, stride=2))
- # add conv layers on top of original feature maps (RetinaNet)
- else:
- if self.add_extra_convs == 'on_input':
- extra_source = inputs[self.backbone_end_level - 1]
- elif self.add_extra_convs == 'on_lateral':
- extra_source = laterals[-1]
- elif self.add_extra_convs == 'on_output':
- extra_source = outs[-1]
- else:
- raise NotImplementedError
- outs.append(self.fpn_convs[used_backbone_levels](extra_source))
- for i in range(used_backbone_levels + 1, self.num_outs):
- if self.relu_before_extra_convs:
- outs.append(self.fpn_convs[i](F.relu(outs[-1])))
- else:
- outs.append(self.fpn_convs[i](outs[-1]))
- return tuple(outs)
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/backbones/beit.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/backbones/beit.py
deleted file mode 100644
index 7a24e02cd2b979844bf638b46ac60949ee9ce691..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/backbones/beit.py
+++ /dev/null
@@ -1,196 +0,0 @@
-import timm
-import torch
-import types
-
-import numpy as np
-import torch.nn.functional as F
-
-from .utils import forward_adapted_unflatten, make_backbone_default
-from timm.models.beit import gen_relative_position_index
-from torch.utils.checkpoint import checkpoint
-from typing import Optional
-
-
-def forward_beit(pretrained, x):
- return forward_adapted_unflatten(pretrained, x, "forward_features")
-
-
-def patch_embed_forward(self, x):
- """
- Modification of timm.models.layers.patch_embed.py: PatchEmbed.forward to support arbitrary window sizes.
- """
- x = self.proj(x)
- if self.flatten:
- x = x.flatten(2).transpose(1, 2)
- x = self.norm(x)
- return x
-
-
-def _get_rel_pos_bias(self, window_size):
- """
- Modification of timm.models.beit.py: Attention._get_rel_pos_bias to support arbitrary window sizes.
- """
- old_height = 2 * self.window_size[0] - 1
- old_width = 2 * self.window_size[1] - 1
-
- new_height = 2 * window_size[0] - 1
- new_width = 2 * window_size[1] - 1
-
- old_relative_position_bias_table = self.relative_position_bias_table
-
- old_num_relative_distance = self.num_relative_distance
- new_num_relative_distance = new_height * new_width + 3
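-
-    # The final three table entries encode BEiT's cls<->patch and cls<->cls
-    # relations; they are carried over unchanged below, and only the spatial
-    # part of the bias table is bilinearly resized to the new window size.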
-
- old_sub_table = old_relative_position_bias_table[:old_num_relative_distance - 3]
-
- old_sub_table = old_sub_table.reshape(1, old_width, old_height, -1).permute(0, 3, 1, 2)
- new_sub_table = F.interpolate(old_sub_table, size=(new_height, new_width), mode="bilinear")
- new_sub_table = new_sub_table.permute(0, 2, 3, 1).reshape(new_num_relative_distance - 3, -1)
-
- new_relative_position_bias_table = torch.cat(
- [new_sub_table, old_relative_position_bias_table[old_num_relative_distance - 3:]])
-
- key = str(window_size[1]) + "," + str(window_size[0])
- if key not in self.relative_position_indices.keys():
- self.relative_position_indices[key] = gen_relative_position_index(window_size)
-
- relative_position_bias = new_relative_position_bias_table[
- self.relative_position_indices[key].view(-1)].view(
- window_size[0] * window_size[1] + 1,
- window_size[0] * window_size[1] + 1, -1) # Wh*Ww,Wh*Ww,nH
- relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
- return relative_position_bias.unsqueeze(0)
-
-
-def attention_forward(self, x, resolution, shared_rel_pos_bias: Optional[torch.Tensor] = None):
- """
- Modification of timm.models.beit.py: Attention.forward to support arbitrary window sizes.
- """
- B, N, C = x.shape
-
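-    # In timm's BEiT, only q and v carry learned biases; k_bias is expected
-    # to be a zero buffer, so all three can be applied as one fused bias.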
- qkv_bias = torch.cat((self.q_bias, self.k_bias, self.v_bias)) if self.q_bias is not None else None
- qkv = F.linear(input=x, weight=self.qkv.weight, bias=qkv_bias)
- qkv = qkv.reshape(B, N, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4)
- q, k, v = qkv.unbind(0) # make torchscript happy (cannot use tensor as tuple)
-
- q = q * self.scale
- attn = (q @ k.transpose(-2, -1))
-
- if self.relative_position_bias_table is not None:
- window_size = tuple(np.array(resolution) // 16)
- attn = attn + self._get_rel_pos_bias(window_size)
- if shared_rel_pos_bias is not None:
- attn = attn + shared_rel_pos_bias
-
- attn = attn.softmax(dim=-1)
- attn = self.attn_drop(attn)
-
- x = (attn @ v).transpose(1, 2).reshape(B, N, -1)
- x = self.proj(x)
- x = self.proj_drop(x)
- return x
-
-
-def block_forward(self, x, resolution, shared_rel_pos_bias: Optional[torch.Tensor] = None):
- """
- Modification of timm.models.beit.py: Block.forward to support arbitrary window sizes.
- """
- if self.gamma_1 is None:
- x = x + self.drop_path(self.attn(self.norm1(x), resolution, shared_rel_pos_bias=shared_rel_pos_bias))
- x = x + self.drop_path(self.mlp(self.norm2(x)))
- else:
- x = x + self.drop_path(self.gamma_1 * self.attn(self.norm1(x), resolution,
- shared_rel_pos_bias=shared_rel_pos_bias))
- x = x + self.drop_path(self.gamma_2 * self.mlp(self.norm2(x)))
- return x
-
-
-def beit_forward_features(self, x):
- """
- Modification of timm.models.beit.py: Beit.forward_features to support arbitrary window sizes.
- """
- resolution = x.shape[2:]
-
- x = self.patch_embed(x)
- x = torch.cat((self.cls_token.expand(x.shape[0], -1, -1), x), dim=1)
- if self.pos_embed is not None:
- x = x + self.pos_embed
- x = self.pos_drop(x)
-
- rel_pos_bias = self.rel_pos_bias() if self.rel_pos_bias is not None else None
- for blk in self.blocks:
- if self.grad_checkpointing and not torch.jit.is_scripting():
- x = checkpoint(blk, x, shared_rel_pos_bias=rel_pos_bias)
- else:
- x = blk(x, resolution, shared_rel_pos_bias=rel_pos_bias)
- x = self.norm(x)
- return x
-
-
-def _make_beit_backbone(
- model,
- features=[96, 192, 384, 768],
- size=[384, 384],
- hooks=[0, 4, 8, 11],
- vit_features=768,
- use_readout="ignore",
- start_index=1,
- start_index_readout=1,
-):
- backbone = make_backbone_default(model, features, size, hooks, vit_features, use_readout, start_index,
- start_index_readout)
-
- backbone.model.patch_embed.forward = types.MethodType(patch_embed_forward, backbone.model.patch_embed)
- backbone.model.forward_features = types.MethodType(beit_forward_features, backbone.model)
-
- for block in backbone.model.blocks:
- attn = block.attn
- attn._get_rel_pos_bias = types.MethodType(_get_rel_pos_bias, attn)
- attn.forward = types.MethodType(attention_forward, attn)
- attn.relative_position_indices = {}
-
- block.forward = types.MethodType(block_forward, block)
-
- return backbone
-
-
-def _make_pretrained_beitl16_512(pretrained, use_readout="ignore", hooks=None):
- model = timm.create_model("beit_large_patch16_512", pretrained=pretrained)
-
- hooks = [5, 11, 17, 23] if hooks is None else hooks
-
- features = [256, 512, 1024, 1024]
-
- return _make_beit_backbone(
- model,
- features=features,
- size=[512, 512],
- hooks=hooks,
- vit_features=1024,
- use_readout=use_readout,
- )
-
-
-def _make_pretrained_beitl16_384(pretrained, use_readout="ignore", hooks=None):
- model = timm.create_model("beit_large_patch16_384", pretrained=pretrained)
-
- hooks = [5, 11, 17, 23] if hooks is None else hooks
- return _make_beit_backbone(
- model,
- features=[256, 512, 1024, 1024],
- hooks=hooks,
- vit_features=1024,
- use_readout=use_readout,
- )
-
-
-def _make_pretrained_beitb16_384(pretrained, use_readout="ignore", hooks=None):
- model = timm.create_model("beit_base_patch16_384", pretrained=pretrained)
-
- hooks = [2, 5, 8, 11] if hooks is None else hooks
- return _make_beit_backbone(
- model,
- features=[96, 192, 384, 768],
- hooks=hooks,
- use_readout=use_readout,
- )
diff --git a/spaces/dahaoGPT/Llama2-70b-chatmodle-demo/app.py b/spaces/dahaoGPT/Llama2-70b-chatmodle-demo/app.py
deleted file mode 100644
index a461703287a9bda9c93cfdfbb94d4c3cf90aaba9..0000000000000000000000000000000000000000
--- a/spaces/dahaoGPT/Llama2-70b-chatmodle-demo/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/meta-llama/Llama-2-70b-chat-hf").launch()
\ No newline at end of file
diff --git a/spaces/danterivers/music-generation-samples/tests/data/test_audio_utils.py b/spaces/danterivers/music-generation-samples/tests/data/test_audio_utils.py
deleted file mode 100644
index 0480671bb17281d61ce02bce6373a5ccec89fece..0000000000000000000000000000000000000000
--- a/spaces/danterivers/music-generation-samples/tests/data/test_audio_utils.py
+++ /dev/null
@@ -1,110 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import julius
-import torch
-import pytest
-
-from audiocraft.data.audio_utils import (
- _clip_wav,
- convert_audio_channels,
- convert_audio,
- normalize_audio
-)
-from ..common_utils import get_batch_white_noise
-
-
-class TestConvertAudioChannels:
-
- def test_convert_audio_channels_downmix(self):
- b, c, t = 2, 3, 100
- audio = get_batch_white_noise(b, c, t)
- mixed = convert_audio_channels(audio, channels=2)
- assert list(mixed.shape) == [b, 2, t]
-
- def test_convert_audio_channels_nochange(self):
- b, c, t = 2, 3, 100
- audio = get_batch_white_noise(b, c, t)
- mixed = convert_audio_channels(audio, channels=c)
- assert list(mixed.shape) == list(audio.shape)
-
- def test_convert_audio_channels_upmix(self):
- b, c, t = 2, 1, 100
- audio = get_batch_white_noise(b, c, t)
- mixed = convert_audio_channels(audio, channels=3)
- assert list(mixed.shape) == [b, 3, t]
-
- def test_convert_audio_channels_upmix_error(self):
- b, c, t = 2, 2, 100
- audio = get_batch_white_noise(b, c, t)
- with pytest.raises(ValueError):
- convert_audio_channels(audio, channels=3)
-
-
-class TestConvertAudio:
-
- def test_convert_audio_channels_downmix(self):
- b, c, dur = 2, 3, 4.
- sr = 128
- audio = get_batch_white_noise(b, c, int(sr * dur))
- out = convert_audio(audio, from_rate=sr, to_rate=sr, to_channels=2)
- assert list(out.shape) == [audio.shape[0], 2, audio.shape[-1]]
-
- def test_convert_audio_channels_upmix(self):
- b, c, dur = 2, 1, 4.
- sr = 128
- audio = get_batch_white_noise(b, c, int(sr * dur))
- out = convert_audio(audio, from_rate=sr, to_rate=sr, to_channels=3)
- assert list(out.shape) == [audio.shape[0], 3, audio.shape[-1]]
-
- def test_convert_audio_upsample(self):
- b, c, dur = 2, 1, 4.
- sr = 2
- new_sr = 3
- audio = get_batch_white_noise(b, c, int(sr * dur))
- out = convert_audio(audio, from_rate=sr, to_rate=new_sr, to_channels=c)
- out_j = julius.resample.resample_frac(audio, old_sr=sr, new_sr=new_sr)
- assert torch.allclose(out, out_j)
-
- def test_convert_audio_resample(self):
- b, c, dur = 2, 1, 4.
- sr = 3
- new_sr = 2
- audio = get_batch_white_noise(b, c, int(sr * dur))
- out = convert_audio(audio, from_rate=sr, to_rate=new_sr, to_channels=c)
- out_j = julius.resample.resample_frac(audio, old_sr=sr, new_sr=new_sr)
- assert torch.allclose(out, out_j)
-
-
-class TestNormalizeAudio:
-
- def test_clip_wav(self):
- b, c, dur = 2, 1, 4.
- sr = 3
- audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur))
- _clip_wav(audio)
- assert audio.abs().max() <= 1
-
- def test_normalize_audio_clip(self):
- b, c, dur = 2, 1, 4.
- sr = 3
- audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur))
- norm_audio = normalize_audio(audio, strategy='clip')
- assert norm_audio.abs().max() <= 1
-
- def test_normalize_audio_rms(self):
- b, c, dur = 2, 1, 4.
- sr = 3
- audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur))
- norm_audio = normalize_audio(audio, strategy='rms')
- assert norm_audio.abs().max() <= 1
-
- def test_normalize_audio_peak(self):
- b, c, dur = 2, 1, 4.
- sr = 3
- audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur))
- norm_audio = normalize_audio(audio, strategy='peak')
- assert norm_audio.abs().max() <= 1
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/GifImagePlugin.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/GifImagePlugin.py
deleted file mode 100644
index cf2993e38920bdebf79c6342875c2898e174ef6b..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/GifImagePlugin.py
+++ /dev/null
@@ -1,1064 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# GIF file handling
-#
-# History:
-# 1995-09-01 fl Created
-# 1996-12-14 fl Added interlace support
-# 1996-12-30 fl Added animation support
-# 1997-01-05 fl Added write support, fixed local colour map bug
-# 1997-02-23 fl Make sure to load raster data in getdata()
-# 1997-07-05 fl Support external decoder (0.4)
-# 1998-07-09 fl Handle all modes when saving (0.5)
-# 1998-07-15 fl Renamed offset attribute to avoid name clash
-# 2001-04-16 fl Added rewind support (seek to frame 0) (0.6)
-# 2001-04-17 fl Added palette optimization (0.7)
-# 2002-06-06 fl Added transparency support for save (0.8)
-# 2004-02-24 fl Disable interlacing for small images
-#
-# Copyright (c) 1997-2004 by Secret Labs AB
-# Copyright (c) 1995-2004 by Fredrik Lundh
-#
-# See the README file for information on usage and redistribution.
-#
-
-import itertools
-import math
-import os
-import subprocess
-from enum import IntEnum
-
-from . import Image, ImageChops, ImageFile, ImagePalette, ImageSequence
-from ._binary import i16le as i16
-from ._binary import o8
-from ._binary import o16le as o16
-
-
-class LoadingStrategy(IntEnum):
- """.. versionadded:: 9.1.0"""
-
- RGB_AFTER_FIRST = 0
- RGB_AFTER_DIFFERENT_PALETTE_ONLY = 1
- RGB_ALWAYS = 2
-
-
-#: .. versionadded:: 9.1.0
-LOADING_STRATEGY = LoadingStrategy.RGB_AFTER_FIRST
-
-# --------------------------------------------------------------------
-# Identify/read GIF files
-
-
-def _accept(prefix):
- return prefix[:6] in [b"GIF87a", b"GIF89a"]
-
-
-##
-# Image plugin for GIF images. This plugin supports both GIF87 and
-# GIF89 images.
-
-
-class GifImageFile(ImageFile.ImageFile):
- format = "GIF"
- format_description = "Compuserve GIF"
- _close_exclusive_fp_after_loading = False
-
- global_palette = None
-
- def data(self):
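-        # Read one GIF data sub-block: a length byte followed by that many
-        # payload bytes; a zero length byte ends the chain of sub-blocks.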
- s = self.fp.read(1)
- if s and s[0]:
- return self.fp.read(s[0])
- return None
-
- def _is_palette_needed(self, p):
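-        # The palette only matters if it is not the identity greyscale ramp
-        # (entry i == (i, i, i)); otherwise the image can be treated as "L".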
- for i in range(0, len(p), 3):
- if not (i // 3 == p[i] == p[i + 1] == p[i + 2]):
- return True
- return False
-
- def _open(self):
- # Screen
- s = self.fp.read(13)
- if not _accept(s):
- msg = "not a GIF file"
- raise SyntaxError(msg)
-
- self.info["version"] = s[:6]
- self._size = i16(s, 6), i16(s, 8)
- self.tile = []
- flags = s[10]
- bits = (flags & 7) + 1
-
- if flags & 128:
- # get global palette
- self.info["background"] = s[11]
- # check if palette contains colour indices
- p = self.fp.read(3 << bits)
- if self._is_palette_needed(p):
- p = ImagePalette.raw("RGB", p)
- self.global_palette = self.palette = p
-
- self._fp = self.fp # FIXME: hack
- self.__rewind = self.fp.tell()
- self._n_frames = None
- self._is_animated = None
- self._seek(0) # get ready to read first frame
-
- @property
- def n_frames(self):
- if self._n_frames is None:
- current = self.tell()
- try:
- while True:
- self._seek(self.tell() + 1, False)
- except EOFError:
- self._n_frames = self.tell() + 1
- self.seek(current)
- return self._n_frames
-
- @property
- def is_animated(self):
- if self._is_animated is None:
- if self._n_frames is not None:
- self._is_animated = self._n_frames != 1
- else:
- current = self.tell()
- if current:
- self._is_animated = True
- else:
- try:
- self._seek(1, False)
- self._is_animated = True
- except EOFError:
- self._is_animated = False
-
- self.seek(current)
- return self._is_animated
-
- def seek(self, frame):
- if not self._seek_check(frame):
- return
- if frame < self.__frame:
- self.im = None
- self._seek(0)
-
- last_frame = self.__frame
- for f in range(self.__frame + 1, frame + 1):
- try:
- self._seek(f)
- except EOFError as e:
- self.seek(last_frame)
- msg = "no more images in GIF file"
- raise EOFError(msg) from e
-
- def _seek(self, frame, update_image=True):
- if frame == 0:
- # rewind
- self.__offset = 0
- self.dispose = None
- self.__frame = -1
- self._fp.seek(self.__rewind)
- self.disposal_method = 0
- if "comment" in self.info:
- del self.info["comment"]
- else:
- # ensure that the previous frame was loaded
- if self.tile and update_image:
- self.load()
-
- if frame != self.__frame + 1:
- msg = f"cannot seek to frame {frame}"
- raise ValueError(msg)
-
- self.fp = self._fp
- if self.__offset:
- # backup to last frame
- self.fp.seek(self.__offset)
- while self.data():
- pass
- self.__offset = 0
-
- s = self.fp.read(1)
- if not s or s == b";":
- raise EOFError
-
- palette = None
-
- info = {}
- frame_transparency = None
- interlace = None
- frame_dispose_extent = None
- while True:
- if not s:
- s = self.fp.read(1)
- if not s or s == b";":
- break
-
- elif s == b"!":
- #
- # extensions
- #
- s = self.fp.read(1)
- block = self.data()
- if s[0] == 249:
- #
- # graphic control extension
- #
- flags = block[0]
- if flags & 1:
- frame_transparency = block[3]
- info["duration"] = i16(block, 1) * 10
-
- # disposal method - find the value of bits 4 - 6
- dispose_bits = 0b00011100 & flags
- dispose_bits = dispose_bits >> 2
- if dispose_bits:
- # only set the dispose if it is not
- # unspecified. I'm not sure if this is
- # correct, but it seems to prevent the last
- # frame from looking odd for some animations
- self.disposal_method = dispose_bits
- elif s[0] == 254:
- #
- # comment extension
- #
- comment = b""
-
- # Read this comment block
- while block:
- comment += block
- block = self.data()
-
- if "comment" in info:
- # If multiple comment blocks in frame, separate with \n
- info["comment"] += b"\n" + comment
- else:
- info["comment"] = comment
- s = None
- continue
- elif s[0] == 255 and frame == 0:
- #
- # application extension
- #
- info["extension"] = block, self.fp.tell()
- if block[:11] == b"NETSCAPE2.0":
- block = self.data()
- if len(block) >= 3 and block[0] == 1:
- self.info["loop"] = i16(block, 1)
- while self.data():
- pass
-
- elif s == b",":
- #
- # local image
- #
- s = self.fp.read(9)
-
- # extent
- x0, y0 = i16(s, 0), i16(s, 2)
- x1, y1 = x0 + i16(s, 4), y0 + i16(s, 6)
- if (x1 > self.size[0] or y1 > self.size[1]) and update_image:
- self._size = max(x1, self.size[0]), max(y1, self.size[1])
- Image._decompression_bomb_check(self._size)
- frame_dispose_extent = x0, y0, x1, y1
- flags = s[8]
-
- interlace = (flags & 64) != 0
-
- if flags & 128:
- bits = (flags & 7) + 1
- p = self.fp.read(3 << bits)
- if self._is_palette_needed(p):
- palette = ImagePalette.raw("RGB", p)
- else:
- palette = False
-
- # image data
- bits = self.fp.read(1)[0]
- self.__offset = self.fp.tell()
- break
-
- else:
- pass
-                # raise OSError(f"illegal GIF tag {s[0]:x}")
- s = None
-
- if interlace is None:
- # self._fp = None
- raise EOFError
-
- self.__frame = frame
- if not update_image:
- return
-
- self.tile = []
-
- if self.dispose:
- self.im.paste(self.dispose, self.dispose_extent)
-
- self._frame_palette = palette if palette is not None else self.global_palette
- self._frame_transparency = frame_transparency
- if frame == 0:
- if self._frame_palette:
- if LOADING_STRATEGY == LoadingStrategy.RGB_ALWAYS:
- self.mode = "RGBA" if frame_transparency is not None else "RGB"
- else:
- self.mode = "P"
- else:
- self.mode = "L"
-
- if not palette and self.global_palette:
- from copy import copy
-
- palette = copy(self.global_palette)
- self.palette = palette
- else:
- if self.mode == "P":
- if (
- LOADING_STRATEGY != LoadingStrategy.RGB_AFTER_DIFFERENT_PALETTE_ONLY
- or palette
- ):
- self.pyaccess = None
- if "transparency" in self.info:
- self.im.putpalettealpha(self.info["transparency"], 0)
- self.im = self.im.convert("RGBA", Image.Dither.FLOYDSTEINBERG)
- self.mode = "RGBA"
- del self.info["transparency"]
- else:
- self.mode = "RGB"
- self.im = self.im.convert("RGB", Image.Dither.FLOYDSTEINBERG)
-
- def _rgb(color):
- if self._frame_palette:
- color = tuple(self._frame_palette.palette[color * 3 : color * 3 + 3])
- else:
- color = (color, color, color)
- return color
-
- self.dispose_extent = frame_dispose_extent
- try:
- if self.disposal_method < 2:
- # do not dispose or none specified
- self.dispose = None
- elif self.disposal_method == 2:
- # replace with background colour
-
- # only dispose the extent in this frame
- x0, y0, x1, y1 = self.dispose_extent
- dispose_size = (x1 - x0, y1 - y0)
-
- Image._decompression_bomb_check(dispose_size)
-
- # by convention, attempt to use transparency first
- dispose_mode = "P"
- color = self.info.get("transparency", frame_transparency)
- if color is not None:
- if self.mode in ("RGB", "RGBA"):
- dispose_mode = "RGBA"
- color = _rgb(color) + (0,)
- else:
- color = self.info.get("background", 0)
- if self.mode in ("RGB", "RGBA"):
- dispose_mode = "RGB"
- color = _rgb(color)
- self.dispose = Image.core.fill(dispose_mode, dispose_size, color)
- else:
- # replace with previous contents
- if self.im is not None:
- # only dispose the extent in this frame
- self.dispose = self._crop(self.im, self.dispose_extent)
- elif frame_transparency is not None:
- x0, y0, x1, y1 = self.dispose_extent
- dispose_size = (x1 - x0, y1 - y0)
-
- Image._decompression_bomb_check(dispose_size)
- dispose_mode = "P"
- color = frame_transparency
- if self.mode in ("RGB", "RGBA"):
- dispose_mode = "RGBA"
- color = _rgb(frame_transparency) + (0,)
- self.dispose = Image.core.fill(dispose_mode, dispose_size, color)
- except AttributeError:
- pass
-
- if interlace is not None:
- transparency = -1
- if frame_transparency is not None:
- if frame == 0:
- if LOADING_STRATEGY != LoadingStrategy.RGB_ALWAYS:
- self.info["transparency"] = frame_transparency
- elif self.mode not in ("RGB", "RGBA"):
- transparency = frame_transparency
- self.tile = [
- (
- "gif",
- (x0, y0, x1, y1),
- self.__offset,
- (bits, interlace, transparency),
- )
- ]
-
- if info.get("comment"):
- self.info["comment"] = info["comment"]
- for k in ["duration", "extension"]:
- if k in info:
- self.info[k] = info[k]
- elif k in self.info:
- del self.info[k]
-
- def load_prepare(self):
- temp_mode = "P" if self._frame_palette else "L"
- self._prev_im = None
- if self.__frame == 0:
- if self._frame_transparency is not None:
- self.im = Image.core.fill(
- temp_mode, self.size, self._frame_transparency
- )
- elif self.mode in ("RGB", "RGBA"):
- self._prev_im = self.im
- if self._frame_palette:
- self.im = Image.core.fill("P", self.size, self._frame_transparency or 0)
- self.im.putpalette(*self._frame_palette.getdata())
- else:
- self.im = None
- self.mode = temp_mode
- self._frame_palette = None
-
- super().load_prepare()
-
- def load_end(self):
- if self.__frame == 0:
- if self.mode == "P" and LOADING_STRATEGY == LoadingStrategy.RGB_ALWAYS:
- if self._frame_transparency is not None:
- self.im.putpalettealpha(self._frame_transparency, 0)
- self.mode = "RGBA"
- else:
- self.mode = "RGB"
- self.im = self.im.convert(self.mode, Image.Dither.FLOYDSTEINBERG)
- return
- if not self._prev_im:
- return
- if self._frame_transparency is not None:
- self.im.putpalettealpha(self._frame_transparency, 0)
- frame_im = self.im.convert("RGBA")
- else:
- frame_im = self.im.convert("RGB")
- frame_im = self._crop(frame_im, self.dispose_extent)
-
- self.im = self._prev_im
- self.mode = self.im.mode
- if frame_im.mode == "RGBA":
- self.im.paste(frame_im, self.dispose_extent, frame_im)
- else:
- self.im.paste(frame_im, self.dispose_extent)
-
- def tell(self):
- return self.__frame
-
-
-# --------------------------------------------------------------------
-# Write GIF files
-
-
-RAWMODE = {"1": "L", "L": "L", "P": "P"}
-
-
-def _normalize_mode(im):
- """
- Takes an image (or frame), returns an image in a mode that is appropriate
- for saving in a Gif.
-
- It may return the original image, or it may return an image converted to
- palette or 'L' mode.
-
- :param im: Image object
- :returns: Image object
- """
- if im.mode in RAWMODE:
- im.load()
- return im
- if Image.getmodebase(im.mode) == "RGB":
- im = im.convert("P", palette=Image.Palette.ADAPTIVE)
- if im.palette.mode == "RGBA":
- for rgba in im.palette.colors:
- if rgba[3] == 0:
- im.info["transparency"] = im.palette.colors[rgba]
- break
- return im
- return im.convert("L")
-
-
-def _normalize_palette(im, palette, info):
- """
- Normalizes the palette for image.
- - Sets the palette to the incoming palette, if provided.
- - Ensures that there's a palette for L mode images
- - Optimizes the palette if necessary/desired.
-
- :param im: Image object
-    :param palette: bytes, bytearray, list, or ImagePalette object containing the source palette
- :param info: encoderinfo
- :returns: Image object
- """
- source_palette = None
- if palette:
- # a bytes palette
- if isinstance(palette, (bytes, bytearray, list)):
- source_palette = bytearray(palette[:768])
- if isinstance(palette, ImagePalette.ImagePalette):
- source_palette = bytearray(palette.palette)
-
- if im.mode == "P":
- if not source_palette:
- source_palette = im.im.getpalette("RGB")[:768]
- else: # L-mode
- if not source_palette:
- source_palette = bytearray(i // 3 for i in range(768))
- im.palette = ImagePalette.ImagePalette("RGB", palette=source_palette)
-
- if palette:
- used_palette_colors = []
- for i in range(0, len(source_palette), 3):
- source_color = tuple(source_palette[i : i + 3])
- index = im.palette.colors.get(source_color)
- if index in used_palette_colors:
- index = None
- used_palette_colors.append(index)
- for i, index in enumerate(used_palette_colors):
- if index is None:
- for j in range(len(used_palette_colors)):
- if j not in used_palette_colors:
- used_palette_colors[i] = j
- break
- im = im.remap_palette(used_palette_colors)
- else:
- used_palette_colors = _get_optimize(im, info)
- if used_palette_colors is not None:
- return im.remap_palette(used_palette_colors, source_palette)
-
- im.palette.palette = source_palette
- return im
-
-
-def _write_single_frame(im, fp, palette):
- im_out = _normalize_mode(im)
- for k, v in im_out.info.items():
- im.encoderinfo.setdefault(k, v)
- im_out = _normalize_palette(im_out, palette, im.encoderinfo)
-
- for s in _get_global_header(im_out, im.encoderinfo):
- fp.write(s)
-
- # local image header
- flags = 0
- if get_interlace(im):
- flags = flags | 64
- _write_local_header(fp, im, (0, 0), flags)
-
- im_out.encoderconfig = (8, get_interlace(im))
- ImageFile._save(im_out, fp, [("gif", (0, 0) + im.size, 0, RAWMODE[im_out.mode])])
-
- fp.write(b"\0") # end of image data
-
-
-def _getbbox(base_im, im_frame):
- if _get_palette_bytes(im_frame) == _get_palette_bytes(base_im):
- delta = ImageChops.subtract_modulo(im_frame, base_im)
- else:
- delta = ImageChops.subtract_modulo(
- im_frame.convert("RGBA"), base_im.convert("RGBA")
- )
- return delta.getbbox(alpha_only=False)
-
-
-def _write_multiple_frames(im, fp, palette):
- duration = im.encoderinfo.get("duration")
- disposal = im.encoderinfo.get("disposal", im.info.get("disposal"))
-
- im_frames = []
- frame_count = 0
- background_im = None
- for imSequence in itertools.chain([im], im.encoderinfo.get("append_images", [])):
- for im_frame in ImageSequence.Iterator(imSequence):
- # a copy is required here since seek can still mutate the image
- im_frame = _normalize_mode(im_frame.copy())
- if frame_count == 0:
- for k, v in im_frame.info.items():
- if k == "transparency":
- continue
- im.encoderinfo.setdefault(k, v)
-
- encoderinfo = im.encoderinfo.copy()
- im_frame = _normalize_palette(im_frame, palette, encoderinfo)
- if "transparency" in im_frame.info:
- encoderinfo.setdefault("transparency", im_frame.info["transparency"])
- if isinstance(duration, (list, tuple)):
- encoderinfo["duration"] = duration[frame_count]
- elif duration is None and "duration" in im_frame.info:
- encoderinfo["duration"] = im_frame.info["duration"]
- if isinstance(disposal, (list, tuple)):
- encoderinfo["disposal"] = disposal[frame_count]
- frame_count += 1
-
- if im_frames:
- # delta frame
- previous = im_frames[-1]
- bbox = _getbbox(previous["im"], im_frame)
- if not bbox:
- # This frame is identical to the previous frame
- if encoderinfo.get("duration"):
- previous["encoderinfo"]["duration"] += encoderinfo["duration"]
- continue
- if encoderinfo.get("disposal") == 2:
- if background_im is None:
- color = im.encoderinfo.get(
- "transparency", im.info.get("transparency", (0, 0, 0))
- )
- background = _get_background(im_frame, color)
- background_im = Image.new("P", im_frame.size, background)
- background_im.putpalette(im_frames[0]["im"].palette)
- bbox = _getbbox(background_im, im_frame)
- else:
- bbox = None
- im_frames.append({"im": im_frame, "bbox": bbox, "encoderinfo": encoderinfo})
-
- if len(im_frames) > 1:
- for frame_data in im_frames:
- im_frame = frame_data["im"]
- if not frame_data["bbox"]:
- # global header
- for s in _get_global_header(im_frame, frame_data["encoderinfo"]):
- fp.write(s)
- offset = (0, 0)
- else:
- # compress difference
- if not palette:
- frame_data["encoderinfo"]["include_color_table"] = True
-
- im_frame = im_frame.crop(frame_data["bbox"])
- offset = frame_data["bbox"][:2]
- _write_frame_data(fp, im_frame, offset, frame_data["encoderinfo"])
- return True
- elif "duration" in im.encoderinfo and isinstance(
- im.encoderinfo["duration"], (list, tuple)
- ):
- # Since multiple frames will not be written, add together the frame durations
- im.encoderinfo["duration"] = sum(im.encoderinfo["duration"])
-
-
-def _save_all(im, fp, filename):
- _save(im, fp, filename, save_all=True)
-
-
-def _save(im, fp, filename, save_all=False):
- # header
- if "palette" in im.encoderinfo or "palette" in im.info:
- palette = im.encoderinfo.get("palette", im.info.get("palette"))
- else:
- palette = None
- im.encoderinfo["optimize"] = im.encoderinfo.get("optimize", True)
-
- if not save_all or not _write_multiple_frames(im, fp, palette):
- _write_single_frame(im, fp, palette)
-
- fp.write(b";") # end of file
-
- if hasattr(fp, "flush"):
- fp.flush()
-
-
-def get_interlace(im):
- interlace = im.encoderinfo.get("interlace", 1)
-
- # workaround for @PIL153
- if min(im.size) < 16:
- interlace = 0
-
- return interlace
-
-
-def _write_local_header(fp, im, offset, flags):
- transparent_color_exists = False
- try:
- if "transparency" in im.encoderinfo:
- transparency = im.encoderinfo["transparency"]
- else:
- transparency = im.info["transparency"]
- transparency = int(transparency)
- except (KeyError, ValueError):
- pass
- else:
- # optimize the block away if transparent color is not used
- transparent_color_exists = True
-
- used_palette_colors = _get_optimize(im, im.encoderinfo)
- if used_palette_colors is not None:
- # adjust the transparency index after optimize
- try:
- transparency = used_palette_colors.index(transparency)
- except ValueError:
- transparent_color_exists = False
-
- if "duration" in im.encoderinfo:
- duration = int(im.encoderinfo["duration"] / 10)
- else:
- duration = 0
-
- disposal = int(im.encoderinfo.get("disposal", 0))
-
- if transparent_color_exists or duration != 0 or disposal:
- packed_flag = 1 if transparent_color_exists else 0
- packed_flag |= disposal << 2
- if not transparent_color_exists:
- transparency = 0
-
- fp.write(
- b"!"
- + o8(249) # extension intro
- + o8(4) # length
- + o8(packed_flag) # packed fields
- + o16(duration) # duration
- + o8(transparency) # transparency index
- + o8(0)
- )
-
- include_color_table = im.encoderinfo.get("include_color_table")
- if include_color_table:
- palette_bytes = _get_palette_bytes(im)
- color_table_size = _get_color_table_size(palette_bytes)
- if color_table_size:
- flags = flags | 128 # local color table flag
- flags = flags | color_table_size
-
- fp.write(
- b","
- + o16(offset[0]) # offset
- + o16(offset[1])
- + o16(im.size[0]) # size
- + o16(im.size[1])
- + o8(flags) # flags
- )
- if include_color_table and color_table_size:
- fp.write(_get_header_palette(palette_bytes))
- fp.write(o8(8)) # bits
-
-
-def _save_netpbm(im, fp, filename):
- # Unused by default.
- # To use, uncomment the register_save call at the end of the file.
- #
- # If you need real GIF compression and/or RGB quantization, you
- # can use the external NETPBM/PBMPLUS utilities. See comments
- # below for information on how to enable this.
- tempfile = im._dump()
-
- try:
- with open(filename, "wb") as f:
- if im.mode != "RGB":
- subprocess.check_call(
- ["ppmtogif", tempfile], stdout=f, stderr=subprocess.DEVNULL
- )
- else:
- # Pipe ppmquant output into ppmtogif
- # "ppmquant 256 %s | ppmtogif > %s" % (tempfile, filename)
- quant_cmd = ["ppmquant", "256", tempfile]
- togif_cmd = ["ppmtogif"]
- quant_proc = subprocess.Popen(
- quant_cmd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL
- )
- togif_proc = subprocess.Popen(
- togif_cmd,
- stdin=quant_proc.stdout,
- stdout=f,
- stderr=subprocess.DEVNULL,
- )
-
- # Allow ppmquant to receive SIGPIPE if ppmtogif exits
- quant_proc.stdout.close()
-
- retcode = quant_proc.wait()
- if retcode:
- raise subprocess.CalledProcessError(retcode, quant_cmd)
-
- retcode = togif_proc.wait()
- if retcode:
- raise subprocess.CalledProcessError(retcode, togif_cmd)
- finally:
- try:
- os.unlink(tempfile)
- except OSError:
- pass
-
-
-# Force optimization so that we can test performance against
-# cases where it took lots of memory and time previously.
-_FORCE_OPTIMIZE = False
-
-
-def _get_optimize(im, info):
- """
- Palette optimization is a potentially expensive operation.
-
- This function determines if the palette should be optimized using
- some heuristics, then returns the list of palette entries in use.
-
- :param im: Image object
- :param info: encoderinfo
- :returns: list of indexes of palette entries in use, or None
- """
- if im.mode in ("P", "L") and info and info.get("optimize", 0):
- # Potentially expensive operation.
-
- # The palette saves 3 bytes per color not used, but palette
- # lengths are restricted to 3*(2**N) bytes. Max saving would
- # be 768 -> 6 bytes if we went all the way down to 2 colors.
- # * If we're over 128 colors, we can't save any space.
- # * If there aren't any holes, it's not worth collapsing.
- # * If we have a 'large' image, the palette is in the noise.
-
- # create the new palette if not every color is used
- optimise = _FORCE_OPTIMIZE or im.mode == "L"
- if optimise or im.width * im.height < 512 * 512:
- # check which colors are used
- used_palette_colors = []
- for i, count in enumerate(im.histogram()):
- if count:
- used_palette_colors.append(i)
-
- if optimise or max(used_palette_colors) >= len(used_palette_colors):
- return used_palette_colors
-
- num_palette_colors = len(im.palette.palette) // Image.getmodebands(
- im.palette.mode
- )
- current_palette_size = 1 << (num_palette_colors - 1).bit_length()
- if (
- # check that the palette would become smaller when saved
- len(used_palette_colors) <= current_palette_size // 2
- # check that the palette is not already the smallest possible size
- and current_palette_size > 2
- ):
- return used_palette_colors
-
-
-def _get_color_table_size(palette_bytes):
- # calculate the palette size for the header
- if not palette_bytes:
- return 0
- elif len(palette_bytes) < 9:
- return 1
- else:
- return math.ceil(math.log(len(palette_bytes) // 3, 2)) - 1
-
-
-def _get_header_palette(palette_bytes):
- """
- Returns the palette, null padded to the next power of 2 (*3) bytes
- suitable for direct inclusion in the GIF header
-
- :param palette_bytes: Unpadded palette bytes, in RGBRGB form
- :returns: Null padded palette
- """
- color_table_size = _get_color_table_size(palette_bytes)
-
- # add the missing amount of bytes
-    # the palette has to be 2<<n in size
-    actual_target_size_diff = (2 << color_table_size) - len(palette_bytes) // 3
-    if actual_target_size_diff > 0:
-        palette_bytes += o8(0) * 3 * actual_target_size_diff
-    return palette_bytes
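
The remainder of this file is truncated in the diff. As a hedged sketch of how the save path above is normally driven through Pillow's public API (`duration`, `loop`, and `disposal` correspond to the encoderinfo keys consumed by `_write_multiple_frames` and `_write_local_header`; the filename is illustrative):

```
from PIL import Image

# Two solid-color palette frames saved as a looping animated GIF.
frames = [Image.new("P", (64, 64), color) for color in (0, 255)]
frames[0].save(
    "out.gif",
    save_all=True,              # triggers _save_all/_write_multiple_frames
    append_images=frames[1:],   # extra frames appended after the first
    duration=200,               # per-frame delay in milliseconds
    loop=0,                     # 0 = loop forever
    disposal=2,                 # restore to background between frames
)
```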
-
-
-Autocad 2011 keygen
-
-An autocad 2011 keygen copier is usually organized by using a cassette of components, which is accompanied by a system that allows you to purchase the components of the job or finish a job which has been prepared by an expert. A cassette is far more cost-effective than a copier and allows for both the purchase of the components of a job and assembling a job at a much faster rate. While a cassette of components may be less expensive, a cassette is a lot more complex than a standard copy machine. In order to utilize a cassette, you need to handle components and replace the cassette that is specific to a job, which can be extremely hard and time-consuming. Copiers are much faster than a cassette and typically much easier to use. Copiers are typically much faster than a cassette for many reasons. The most obvious is because a cassette only holds a finite number of components, while a copier may hold hundreds or thousands of components. Copiers can usually be switched on and produce a copy from a slide or from a job that is stored on a CD or DVD. Typically, cassette copiers must be connected to a CD or DVD in order to make copies. You cannot easily change which components are being copied on a cassette copier. A cassette contains only a very specific set of components. You will not be able to change the other components that are on the cassette. On the other hand, a copier can copy many different types of components, allowing you to change the components being copied without the need to remove the cassette.
-
-A cassette is also much larger than a copier. Copiers are typically smaller than a cassette and can be placed on a desk or on top of a desk. A cassette is typically placed on the floor and you have to lift the cassette off of the floor in order to open the cassette. The more parts you need to copy, the larger your cassette will be. The cassette can also be taller and longer than your copier. This allows you to place many different types of parts on the cassette. A cassette can also be a bit more expensive than a copier. You need a cassette in order to copy components and a cassette is much more expensive than the standard copy machine.
-
-Copier Price Evaluations
-
-A basic copier will likely be cheaper than a cassette copier. However, when you purchase a basic copier you will not have the benefits that a cassette copier provides.
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/BaDBoy V4.2 [Cheats 4 Counter-Strike 1.6] Pc Game _BEST_.md b/spaces/diacanFperku/AutoGPT/BaDBoy V4.2 [Cheats 4 Counter-Strike 1.6] Pc Game _BEST_.md
deleted file mode 100644
index e545a0b55ddfa1c6a22eba9493bedcff9cb3ec45..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/BaDBoy V4.2 [Cheats 4 Counter-Strike 1.6] Pc Game _BEST_.md
+++ /dev/null
@@ -1,6 +0,0 @@
-BaDBoy v4.2 [Cheats 4 Counter-Strike 1.6] pc game
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/Download Mdsolids 4.0 Full Crack Idm.md b/spaces/diacanFperku/AutoGPT/Download Mdsolids 4.0 Full Crack Idm.md
deleted file mode 100644
index 5e45b00bf1eecc561df9f2f4080457d361a9aac6..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Download Mdsolids 4.0 Full Crack Idm.md
+++ /dev/null
@@ -1,25 +0,0 @@
-
-How to Download MDSolids 4.0 Full Crack IDM
-download mdsolids 4.0 full crack idm
-
-
-Benefits of Using MDSolids
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Generic Text Only Driver Download Windows 7.md b/spaces/diacanFperku/AutoGPT/Generic Text Only Driver Download Windows 7.md
deleted file mode 100644
index d98a2ae8ed2354d03211e2c70b769fb45121edaa..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Generic Text Only Driver Download Windows 7.md
+++ /dev/null
@@ -1,52 +0,0 @@
-
-How to Download and Install Generic Text Only Driver for Windows 7
-generic text only driver download windows 7
-Step 1: Download the Generic Text Only Driver
-
-
-Step 2: Install the Generic Text Only Driver
-
-
-Step 3: Use the Generic Text Only Driver
-
-
-
-Tips for Using the Generic Text Only Driver
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Lenovo Windows 7 Pci Serial Port Driver.md b/spaces/diacanFperku/AutoGPT/Lenovo Windows 7 Pci Serial Port Driver.md
deleted file mode 100644
index 09db8596572fb31c79ef852bea11d9e05e9ce8e3..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Lenovo Windows 7 Pci Serial Port Driver.md
+++ /dev/null
@@ -1,18 +0,0 @@
-lenovo windows 7 pci serial port driver
-
-With the Intel 830G chipset, the device drivers run under the IA32 and IA64 architectures. To test a specific BIOS configuration, see the Integrated Performance Tools for the product that you want to test. The BIOS is self-configuring when it is loaded at power on.
-
-Author's Note: This tutorial is based on the Windows 7 and Vista drivers provided by Intel for the 830G chipset. I should also mention that Windows XP didn't have built-in support for the ISAPNP or PCI services. It should work on Windows 7 and Vista as well as Windows 2000 and Windows XP. If you are using the Intel 830G chipset device, you have two options for upgrading to the latest driver. This tutorial shows you how to upgrade the BIOS for your system. The Integrated Performance Tools for the system whose BIOS you want to upgrade are available free from Intel's website at
-
-The BIOS is usually installed on a flash memory chip, such as a flash memory card, a USB flash drive, or a diskette. You might need to ask your computer manufacturer for the BIOS flash drive. After the BIOS is installed, the BIOS is loaded when the computer boots, before Windows starts. If you are updating a BIOS version that is later than the version installed on your computer, you must boot from the flash drive or USB flash drive and insert the BIOS update before you reboot.
-
-If you do not have the BIOS upgrade for your computer or the appropriate drive, you can download the BIOS from Intel's website and load it using a Windows installation CD. After installing the BIOS update, run the Performance Update utility to update the BIOS for your computer. You might need to install the BIOS update on a flash drive or a CD-ROM for safekeeping, or install the BIOS update to a different computer that has the same type and model of chipset. You can also try updating the BIOS using a different computer. The general steps to update the BIOS are as follows:
-
-- Make sure that the computer is powered off.
-
-- Remove the battery to avoid any problems.
-
-- Place the BIOS flash drive in the computer's CD drive.
-
-- Turn the computer on.
-
-- Press the F12 key at the same time that you press the power button.
-
-- After a few seconds, the computer will ask you if you want to upgrade the BIOS.
-
-In Windows XP, the BIOS is stored in the system BIOS on the floppy diskette. You can upgrade the BIOS using a Windows
-
-
-
diff --git a/spaces/diffusers/controlnet-canny/README.md b/spaces/diffusers/controlnet-canny/README.md
deleted file mode 100644
index b74ce27a0817ab2a6c04ff4d03e70ab759f255e0..0000000000000000000000000000000000000000
--- a/spaces/diffusers/controlnet-canny/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: ControlNet Canny
-emoji: 🐨
-colorFrom: purple
-colorTo: green
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/digitalxingtong/Lixiang-Bert-Vits2/monotonic_align/setup.py b/spaces/digitalxingtong/Lixiang-Bert-Vits2/monotonic_align/setup.py
deleted file mode 100644
index 30c224807a70faa9df9c9eb75f8e80c8c867b16b..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Lixiang-Bert-Vits2/monotonic_align/setup.py
+++ /dev/null
@@ -1,9 +0,0 @@
-from distutils.core import setup
-from Cython.Build import cythonize
-import numpy
-
-setup(
- name = 'monotonic_align',
- ext_modules = cythonize("core.pyx"),
- include_dirs=[numpy.get_include()]
-)
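
A note on usage: a distutils/Cython setup like this is typically compiled in place before `monotonic_align` is imported. A minimal sketch of that build step, assuming it is run from the directory containing `setup.py` and `core.pyx`:

```
# Build helper: equivalent to running
#   python setup.py build_ext --inplace
# in the monotonic_align directory, so core.pyx is compiled before import.
import subprocess
import sys

subprocess.check_call([sys.executable, "setup.py", "build_ext", "--inplace"])
```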
diff --git a/spaces/digitalxingtong/Xingtong-Longread-Bert-VITS2/text/chinese.py b/spaces/digitalxingtong/Xingtong-Longread-Bert-VITS2/text/chinese.py
deleted file mode 100644
index 276753880b73de2e8889dcb2101cd98c09e0710b..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Xingtong-Longread-Bert-VITS2/text/chinese.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import os
-import re
-
-import cn2an
-from pypinyin import lazy_pinyin, Style
-
-from text import symbols
-from text.symbols import punctuation
-from text.tone_sandhi import ToneSandhi
-
-current_file_path = os.path.dirname(__file__)
-pinyin_to_symbol_map = {line.split("\t")[0]: line.strip().split("\t")[1] for line in
- open(os.path.join(current_file_path, 'opencpop-strict.txt')).readlines()}
-
-import jieba.posseg as psg
-
-
-rep_map = {
- ':': ',',
- ';': ',',
- ',': ',',
- '。': '.',
- '!': '!',
- '?': '?',
- '\n': '.',
- "·": ",",
- '、': ",",
- '...': '…',
- '$': '.',
- '“': "'",
- '”': "'",
- '‘': "'",
- '’': "'",
- '(': "'",
- ')': "'",
- '(': "'",
- ')': "'",
- '《': "'",
- '》': "'",
- '【': "'",
- '】': "'",
- '[': "'",
- ']': "'",
- '—': "-",
- '~': "-",
- '~': "-",
- '「': "'",
- '」': "'",
-
-}
-
-tone_modifier = ToneSandhi()
-
-def replace_punctuation(text):
- text = text.replace("嗯", "恩").replace("呣","母")
- pattern = re.compile('|'.join(re.escape(p) for p in rep_map.keys()))
-
- replaced_text = pattern.sub(lambda x: rep_map[x.group()], text)
-
- replaced_text = re.sub(r'[^\u4e00-\u9fa5'+"".join(punctuation)+r']+', '', replaced_text)
-
- return replaced_text
-
-def g2p(text):
- pattern = r'(?<=[{0}])\s*'.format(''.join(punctuation))
- sentences = [i for i in re.split(pattern, text) if i.strip()!='']
- phones, tones, word2ph = _g2p(sentences)
- assert sum(word2ph) == len(phones)
-    assert len(word2ph) == len(text)  # This can fail on some inputs; wrap the call in try/except if needed.
- phones = ['_'] + phones + ["_"]
- tones = [0] + tones + [0]
- word2ph = [1] + word2ph + [1]
- return phones, tones, word2ph
-
-
-def _get_initials_finals(word):
- initials = []
- finals = []
- orig_initials = lazy_pinyin(
- word, neutral_tone_with_five=True, style=Style.INITIALS)
- orig_finals = lazy_pinyin(
- word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
- for c, v in zip(orig_initials, orig_finals):
- initials.append(c)
- finals.append(v)
- return initials, finals
-
-
-def _g2p(segments):
- phones_list = []
- tones_list = []
- word2ph = []
- for seg in segments:
- pinyins = []
-        # Remove all English words from the sentence
- seg = re.sub('[a-zA-Z]+', '', seg)
- seg_cut = psg.lcut(seg)
- initials = []
- finals = []
- seg_cut = tone_modifier.pre_merge_for_modify(seg_cut)
- for word, pos in seg_cut:
- if pos == 'eng':
- continue
- sub_initials, sub_finals = _get_initials_finals(word)
- sub_finals = tone_modifier.modified_tone(word, pos,
- sub_finals)
- initials.append(sub_initials)
- finals.append(sub_finals)
-
- # assert len(sub_initials) == len(sub_finals) == len(word)
- initials = sum(initials, [])
- finals = sum(finals, [])
- #
- for c, v in zip(initials, finals):
- raw_pinyin = c+v
- # NOTE: post process for pypinyin outputs
- # we discriminate i, ii and iii
- if c == v:
- assert c in punctuation
- phone = [c]
- tone = '0'
- word2ph.append(1)
- else:
- v_without_tone = v[:-1]
- tone = v[-1]
-
- pinyin = c+v_without_tone
- assert tone in '12345'
-
- if c:
-                    # polysyllable
- v_rep_map = {
- "uei": 'ui',
- 'iou': 'iu',
- 'uen': 'un',
- }
- if v_without_tone in v_rep_map.keys():
- pinyin = c+v_rep_map[v_without_tone]
- else:
-                    # monosyllable
- pinyin_rep_map = {
- 'ing': 'ying',
- 'i': 'yi',
- 'in': 'yin',
- 'u': 'wu',
- }
- if pinyin in pinyin_rep_map.keys():
- pinyin = pinyin_rep_map[pinyin]
- else:
- single_rep_map = {
- 'v': 'yu',
- 'e': 'e',
- 'i': 'y',
- 'u': 'w',
- }
- if pinyin[0] in single_rep_map.keys():
- pinyin = single_rep_map[pinyin[0]]+pinyin[1:]
-
- assert pinyin in pinyin_to_symbol_map.keys(), (pinyin, seg, raw_pinyin)
- phone = pinyin_to_symbol_map[pinyin].split(' ')
- word2ph.append(len(phone))
-
- phones_list += phone
- tones_list += [int(tone)] * len(phone)
- return phones_list, tones_list, word2ph
-
-
-
-def text_normalize(text):
- numbers = re.findall(r'\d+(?:\.?\d+)?', text)
- for number in numbers:
- text = text.replace(number, cn2an.an2cn(number), 1)
- text = replace_punctuation(text)
- return text
-
-def get_bert_feature(text, word2ph):
- from text import chinese_bert
- return chinese_bert.get_bert_feature(text, word2ph)
-
-if __name__ == '__main__':
- from text.chinese_bert import get_bert_feature
- text = "啊!但是《原神》是由,米哈\游自主, [研发]的一款全.新开放世界.冒险游戏"
- text = text_normalize(text)
- print(text)
- phones, tones, word2ph = g2p(text)
- bert = get_bert_feature(text, word2ph)
-
- print(phones, tones, word2ph, bert.shape)
-
-
-# # Example usage
-# text = "这是一个示例文本:,你好!这是一个测试...."
-# print(g2p_paddle(text)) # output: 这是一个示例文本你好这是一个测试
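
For orientation, the digit handling in `text_normalize` above leans on `cn2an`; a tiny hedged sketch of that conversion in isolation (sample number and output are illustrative):

```
import cn2an

# text_normalize replaces each run of digits with its Chinese reading
# before the punctuation mapping step, e.g.:
print(cn2an.an2cn("2023"))  # -> 二千零二十三 (approximate expected output)
```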
diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_pipelines/maskrcnn_pipeline.py b/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_pipelines/maskrcnn_pipeline.py
deleted file mode 100644
index fff3e071ea115843752f34de8141fa982b8ad14b..0000000000000000000000000000000000000000
--- a/spaces/dinhminh20521597/OCR_DEMO/configs/_base_/det_pipelines/maskrcnn_pipeline.py
+++ /dev/null
@@ -1,57 +0,0 @@
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-
-train_pipeline = [
- dict(type='LoadImageFromFile', color_type='color_ignore_orientation'),
- dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
- dict(
- type='ScaleAspectJitter',
- img_scale=None,
- keep_ratio=False,
- resize_type='indep_sample_in_range',
- scale_range=(640, 2560)),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(
- type='RandomCropInstances',
- target_size=(640, 640),
- mask_type='union_all',
- instance_key='gt_masks'),
- dict(type='Pad', size_divisor=32),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
-]
-
-# for ctw1500
-img_scale_ctw1500 = (1600, 1600)
-test_pipeline_ctw1500 = [
- dict(type='LoadImageFromFile', color_type='color_ignore_orientation'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=img_scale_ctw1500, # used by Resize
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-
-# for icdar2015
-img_scale_icdar2015 = (1920, 1920)
-test_pipeline_icdar2015 = [
- dict(type='LoadImageFromFile', color_type='color_ignore_orientation'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=img_scale_icdar2015, # used by Resize
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
diff --git a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/docs/Spell-book.md b/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/docs/Spell-book.md
deleted file mode 100644
index 9b7c76c953f76f8a486bbe5156de4e9ebb3f0ec0..0000000000000000000000000000000000000000
--- a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/docs/Spell-book.md
+++ /dev/null
@@ -1,107 +0,0 @@
-You have now entered a hidden corner of the internet.
-
-A confusing yet intriguing realm of paradoxes and contradictions.
-
-A place where you will find out that what you thought you knew, you in fact didn't know, and what you didn't know was in front of you all along.
-
-
-
-*In other words, here I will document little-known facts about this web UI that I could not find another place for in the wiki.*
-
-#### You can train LoRAs in CPU mode
-
-Load the web UI with
-
-```
-python server.py --cpu
-```
-
-and start training the LoRA from the training tab as usual.
-
-#### 8-bit mode works with CPU offloading
-
-```
-python server.py --load-in-8bit --gpu-memory 4000MiB
-```
-
-#### `--pre_layer`, and not `--gpu-memory`, is the right way to do CPU offloading with 4-bit models
-
-```
-python server.py --wbits 4 --groupsize 128 --pre_layer 20
-```
-
-#### Models can be loaded in 32-bit, 16-bit, 8-bit, and 4-bit modes
-
-```
-python server.py --cpu
-python server.py
-python server.py --load-in-8bit
-python server.py --wbits 4
-```
-
-#### The web UI works with any version of GPTQ-for-LLaMa
-
-Including the up to date triton and cuda branches. But you have to delete the `repositories/GPTQ-for-LLaMa` folder and reinstall the new one every time:
-
-```
-cd text-generation-webui/repositories
-rm -r GPTQ-for-LLaMa
-pip uninstall quant-cuda
-git clone https://github.com/oobabooga/GPTQ-for-LLaMa -b cuda # or any other repository and branch
-cd GPTQ-for-LLaMa
-python setup_cuda.py install
-```
-
-#### Instruction-following templates are represented as chat characters
-
-https://github.com/oobabooga/text-generation-webui/tree/main/characters/instruction-following
-
-#### The right way to run Alpaca, Open Assistant, Vicuna, etc. is Instruct mode, not normal chat mode
-
-Otherwise the prompt will not be formatted correctly.
-
-1. Start the web UI with
-
-```
-python server.py --chat
-```
-
-2. Click on the "instruct" option under "Chat modes"
-
-3. Select the correct template in the hidden dropdown menu that will become visible.
-
-#### Notebook mode is best mode
-
-Ascended individuals have realized that notebook mode is the superset of chat mode and can do chats with ultimate flexibility, including group chats, editing replies, starting a new bot reply in a given way, and impersonating.
-
-#### RWKV is a RNN
-
-Most models are transformers, but not RWKV, which is a RNN. It's a great model.
-
-#### `--gpu-memory` is not a hard limit on the GPU memory
-
-It is simply a parameter that is passed to the `accelerate` library while loading the model. More memory will be allocated during generation. That's why this parameter has to be set to less than your total GPU memory.
-
-#### Contrastive search is perhaps the best preset
-
-But it uses a ton of VRAM.
-
-#### You can check the sha256sum of downloaded models with the download script
-
-```
-python download-model.py facebook/galactica-125m --check
-```
-
-#### The download script continues interrupted downloads by default
-
-It doesn't start over.
-
-#### You can download models with multiple threads
-
-```
-python download-model.py facebook/galactica-125m --threads 8
-```
-
-#### LoRAs work in 4-bit mode
-
-You need to follow [these instructions](GPTQ-models-(4-bit-mode).md#using-loras-in-4-bit-mode) and then start the web UI with the `--monkey-patch` flag.
diff --git a/spaces/duycse1603/math2tex/HybridViT/module/component/common/mae_posembed.py b/spaces/duycse1603/math2tex/HybridViT/module/component/common/mae_posembed.py
deleted file mode 100644
index 187ecd981c59df11794df9d5be8c02d8a37e04d9..0000000000000000000000000000000000000000
--- a/spaces/duycse1603/math2tex/HybridViT/module/component/common/mae_posembed.py
+++ /dev/null
@@ -1,72 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-# --------------------------------------------------------
-# Position embedding utils
-# --------------------------------------------------------
-
-import numpy as np
-# --------------------------------------------------------
-# 2D sine-cosine position embedding
-# References:
-# Transformer: https://github.com/tensorflow/models/blob/master/official/nlp/transformer/model_utils.py
-# MoCo v3: https://github.com/facebookresearch/moco-v3
-# --------------------------------------------------------
-
-def get_2d_sincos_pos_embed(embed_dim, grid_size_H, grid_size_W, cls_token=False):
- """
-    grid_size_H, grid_size_W: int height and width of the grid
-    return:
-    pos_embed: [grid_size_H*grid_size_W, embed_dim] or [1+grid_size_H*grid_size_W, embed_dim] (w/ or w/o cls_token)
- """
- grid_h = np.arange(grid_size_H, dtype=np.float32)
- grid_w = np.arange(grid_size_W, dtype=np.float32)
- grid = np.meshgrid(grid_w, grid_h) # here w goes first
- grid = np.stack(grid, axis=0)
-
- grid = grid.reshape([2, 1, grid_size_H, grid_size_W])
-
- print('new grid shape', grid.shape)
-
- pos_embed = get_2d_sincos_pos_embed_from_grid(embed_dim, grid)
- if cls_token:
- pos_embed = np.concatenate([np.zeros([1, embed_dim]), pos_embed], axis=0)
- return pos_embed
-
-
-def get_2d_sincos_pos_embed_from_grid(embed_dim, grid):
- assert embed_dim % 2 == 0
-
- # use half of dimensions to encode grid_h
- emb_h = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[0]) # (H*W, D/2)
- emb_w = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[1]) # (H*W, D/2)
-
- emb = np.concatenate([emb_h, emb_w], axis=1) # (H*W, D)
- return emb
-
-
-def get_1d_sincos_pos_embed_from_grid(embed_dim, pos):
- """
- embed_dim: output dimension for each position
- pos: a list of positions to be encoded: size (M,)
- out: (M, D)
- """
- assert embed_dim % 2 == 0
- omega = np.arange(embed_dim // 2, dtype=np.float32)
- omega /= embed_dim / 2.
- omega = 1. / 10000**omega # (D/2,)
-
- pos = pos.reshape(-1) # (M,)
- out = np.einsum('m,d->md', pos, omega) # (M, D/2), outer product
-
- emb_sin = np.sin(out) # (M, D/2)
- emb_cos = np.cos(out) # (M, D/2)
-
- emb = np.concatenate([emb_sin, emb_cos], axis=1) # (M, D)
- return emb
-
-if __name__ == '__main__':
- pos_embed = get_2d_sincos_pos_embed(256, 800, 800, True)
- print(pos_embed.shape)
\ No newline at end of file
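
In math form, the 1-D helper above implements the standard transformer sinusoidal embedding: for a position `p`, embedding width `D`, and `k = 0, ..., D/2 - 1`,

```
\omega_k = 10000^{-2k/D}, \qquad
\mathrm{emb}(p) = \big[\sin(p\,\omega_0), \ldots, \sin(p\,\omega_{D/2-1}),\;
                       \cos(p\,\omega_0), \ldots, \cos(p\,\omega_{D/2-1})\big]
```

which matches the `omega` construction and the sin/cos concatenation in `get_1d_sincos_pos_embed_from_grid`.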
diff --git a/spaces/elkraken/Video-Object-Detection/train_aux.py b/spaces/elkraken/Video-Object-Detection/train_aux.py
deleted file mode 100644
index 0e8053f8503ba762843f6dd56219f1e6c4e74ccc..0000000000000000000000000000000000000000
--- a/spaces/elkraken/Video-Object-Detection/train_aux.py
+++ /dev/null
@@ -1,699 +0,0 @@
-import argparse
-import logging
-import math
-import os
-import random
-import time
-from copy import deepcopy
-from pathlib import Path
-from threading import Thread
-
-import numpy as np
-import torch.distributed as dist
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.optim as optim
-import torch.optim.lr_scheduler as lr_scheduler
-import torch.utils.data
-import yaml
-from torch.cuda import amp
-from torch.nn.parallel import DistributedDataParallel as DDP
-from torch.utils.tensorboard import SummaryWriter
-from tqdm import tqdm
-
-import test # import test.py to get mAP after each epoch
-from models.experimental import attempt_load
-from models.yolo import Model
-from utils.autoanchor import check_anchors
-from utils.datasets import create_dataloader
-from utils.general import labels_to_class_weights, increment_path, labels_to_image_weights, init_seeds, \
- fitness, strip_optimizer, get_latest_run, check_dataset, check_file, check_git_status, check_img_size, \
- check_requirements, print_mutation, set_logging, one_cycle, colorstr
-from utils.google_utils import attempt_download
-from utils.loss import ComputeLoss, ComputeLossAuxOTA
-from utils.plots import plot_images, plot_labels, plot_results, plot_evolution
-from utils.torch_utils import ModelEMA, select_device, intersect_dicts, torch_distributed_zero_first, is_parallel
-from utils.wandb_logging.wandb_utils import WandbLogger, check_wandb_resume
-
-logger = logging.getLogger(__name__)
-
-
-def train(hyp, opt, device, tb_writer=None):
- logger.info(colorstr('hyperparameters: ') + ', '.join(f'{k}={v}' for k, v in hyp.items()))
- save_dir, epochs, batch_size, total_batch_size, weights, rank = \
- Path(opt.save_dir), opt.epochs, opt.batch_size, opt.total_batch_size, opt.weights, opt.global_rank
-
- # Directories
- wdir = save_dir / 'weights'
- wdir.mkdir(parents=True, exist_ok=True) # make dir
- last = wdir / 'last.pt'
- best = wdir / 'best.pt'
- results_file = save_dir / 'results.txt'
-
- # Save run settings
- with open(save_dir / 'hyp.yaml', 'w') as f:
- yaml.dump(hyp, f, sort_keys=False)
- with open(save_dir / 'opt.yaml', 'w') as f:
- yaml.dump(vars(opt), f, sort_keys=False)
-
- # Configure
- plots = not opt.evolve # create plots
- cuda = device.type != 'cpu'
- init_seeds(2 + rank)
- with open(opt.data) as f:
- data_dict = yaml.load(f, Loader=yaml.SafeLoader) # data dict
- is_coco = opt.data.endswith('coco.yaml')
-
- # Logging- Doing this before checking the dataset. Might update data_dict
- loggers = {'wandb': None} # loggers dict
- if rank in [-1, 0]:
- opt.hyp = hyp # add hyperparameters
- run_id = torch.load(weights).get('wandb_id') if weights.endswith('.pt') and os.path.isfile(weights) else None
- wandb_logger = WandbLogger(opt, Path(opt.save_dir).stem, run_id, data_dict)
- loggers['wandb'] = wandb_logger.wandb
- data_dict = wandb_logger.data_dict
- if wandb_logger.wandb:
- weights, epochs, hyp = opt.weights, opt.epochs, opt.hyp # WandbLogger might update weights, epochs if resuming
-
- nc = 1 if opt.single_cls else int(data_dict['nc']) # number of classes
- names = ['item'] if opt.single_cls and len(data_dict['names']) != 1 else data_dict['names'] # class names
- assert len(names) == nc, '%g names found for nc=%g dataset in %s' % (len(names), nc, opt.data) # check
-
- # Model
- pretrained = weights.endswith('.pt')
- if pretrained:
- with torch_distributed_zero_first(rank):
- attempt_download(weights) # download if not found locally
- ckpt = torch.load(weights, map_location=device) # load checkpoint
- model = Model(opt.cfg or ckpt['model'].yaml, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device) # create
- exclude = ['anchor'] if (opt.cfg or hyp.get('anchors')) and not opt.resume else [] # exclude keys
- state_dict = ckpt['model'].float().state_dict() # to FP32
- state_dict = intersect_dicts(state_dict, model.state_dict(), exclude=exclude) # intersect
- model.load_state_dict(state_dict, strict=False) # load
- logger.info('Transferred %g/%g items from %s' % (len(state_dict), len(model.state_dict()), weights)) # report
- else:
- model = Model(opt.cfg, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device) # create
- with torch_distributed_zero_first(rank):
- check_dataset(data_dict) # check
- train_path = data_dict['train']
- test_path = data_dict['val']
-
- # Freeze
- freeze = [] # parameter names to freeze (full or partial)
- for k, v in model.named_parameters():
- v.requires_grad = True # train all layers
- if any(x in k for x in freeze):
- print('freezing %s' % k)
- v.requires_grad = False
-
- # Optimizer
- nbs = 64 # nominal batch size
- accumulate = max(round(nbs / total_batch_size), 1) # accumulate loss before optimizing
- hyp['weight_decay'] *= total_batch_size * accumulate / nbs # scale weight_decay
- logger.info(f"Scaled weight_decay = {hyp['weight_decay']}")
-
- pg0, pg1, pg2 = [], [], [] # optimizer parameter groups
- for k, v in model.named_modules():
- if hasattr(v, 'bias') and isinstance(v.bias, nn.Parameter):
- pg2.append(v.bias) # biases
- if isinstance(v, nn.BatchNorm2d):
- pg0.append(v.weight) # no decay
- elif hasattr(v, 'weight') and isinstance(v.weight, nn.Parameter):
- pg1.append(v.weight) # apply decay
- if hasattr(v, 'im'):
- if hasattr(v.im, 'implicit'):
- pg0.append(v.im.implicit)
- else:
- for iv in v.im:
- pg0.append(iv.implicit)
- if hasattr(v, 'imc'):
- if hasattr(v.imc, 'implicit'):
- pg0.append(v.imc.implicit)
- else:
- for iv in v.imc:
- pg0.append(iv.implicit)
- if hasattr(v, 'imb'):
- if hasattr(v.imb, 'implicit'):
- pg0.append(v.imb.implicit)
- else:
- for iv in v.imb:
- pg0.append(iv.implicit)
- if hasattr(v, 'imo'):
- if hasattr(v.imo, 'implicit'):
- pg0.append(v.imo.implicit)
- else:
- for iv in v.imo:
- pg0.append(iv.implicit)
- if hasattr(v, 'ia'):
- if hasattr(v.ia, 'implicit'):
- pg0.append(v.ia.implicit)
- else:
- for iv in v.ia:
- pg0.append(iv.implicit)
- if hasattr(v, 'attn'):
- if hasattr(v.attn, 'logit_scale'):
- pg0.append(v.attn.logit_scale)
- if hasattr(v.attn, 'q_bias'):
- pg0.append(v.attn.q_bias)
- if hasattr(v.attn, 'v_bias'):
- pg0.append(v.attn.v_bias)
- if hasattr(v.attn, 'relative_position_bias_table'):
- pg0.append(v.attn.relative_position_bias_table)
- if hasattr(v, 'rbr_dense'):
- if hasattr(v.rbr_dense, 'weight_rbr_origin'):
- pg0.append(v.rbr_dense.weight_rbr_origin)
- if hasattr(v.rbr_dense, 'weight_rbr_avg_conv'):
- pg0.append(v.rbr_dense.weight_rbr_avg_conv)
- if hasattr(v.rbr_dense, 'weight_rbr_pfir_conv'):
- pg0.append(v.rbr_dense.weight_rbr_pfir_conv)
- if hasattr(v.rbr_dense, 'weight_rbr_1x1_kxk_idconv1'):
- pg0.append(v.rbr_dense.weight_rbr_1x1_kxk_idconv1)
- if hasattr(v.rbr_dense, 'weight_rbr_1x1_kxk_conv2'):
- pg0.append(v.rbr_dense.weight_rbr_1x1_kxk_conv2)
- if hasattr(v.rbr_dense, 'weight_rbr_gconv_dw'):
- pg0.append(v.rbr_dense.weight_rbr_gconv_dw)
- if hasattr(v.rbr_dense, 'weight_rbr_gconv_pw'):
- pg0.append(v.rbr_dense.weight_rbr_gconv_pw)
- if hasattr(v.rbr_dense, 'vector'):
- pg0.append(v.rbr_dense.vector)
-
- if opt.adam:
- optimizer = optim.Adam(pg0, lr=hyp['lr0'], betas=(hyp['momentum'], 0.999)) # adjust beta1 to momentum
- else:
- optimizer = optim.SGD(pg0, lr=hyp['lr0'], momentum=hyp['momentum'], nesterov=True)
-
- optimizer.add_param_group({'params': pg1, 'weight_decay': hyp['weight_decay']}) # add pg1 with weight_decay
- optimizer.add_param_group({'params': pg2}) # add pg2 (biases)
- logger.info('Optimizer groups: %g .bias, %g conv.weight, %g other' % (len(pg2), len(pg1), len(pg0)))
- del pg0, pg1, pg2
-
- # Scheduler https://arxiv.org/pdf/1812.01187.pdf
- # https://pytorch.org/docs/stable/_modules/torch/optim/lr_scheduler.html#OneCycleLR
- if opt.linear_lr:
- lf = lambda x: (1 - x / (epochs - 1)) * (1.0 - hyp['lrf']) + hyp['lrf'] # linear
- else:
- lf = one_cycle(1, hyp['lrf'], epochs) # cosine 1->hyp['lrf']
- scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)
- # plot_lr_scheduler(optimizer, scheduler, epochs)
-
- # EMA
- ema = ModelEMA(model) if rank in [-1, 0] else None
-
- # Resume
- start_epoch, best_fitness = 0, 0.0
- if pretrained:
- # Optimizer
- if ckpt['optimizer'] is not None:
- optimizer.load_state_dict(ckpt['optimizer'])
- best_fitness = ckpt['best_fitness']
-
- # EMA
- if ema and ckpt.get('ema'):
- ema.ema.load_state_dict(ckpt['ema'].float().state_dict())
- ema.updates = ckpt['updates']
-
- # Results
- if ckpt.get('training_results') is not None:
- results_file.write_text(ckpt['training_results']) # write results.txt
-
- # Epochs
- start_epoch = ckpt['epoch'] + 1
- if opt.resume:
- assert start_epoch > 0, '%s training to %g epochs is finished, nothing to resume.' % (weights, epochs)
- if epochs < start_epoch:
- logger.info('%s has been trained for %g epochs. Fine-tuning for %g additional epochs.' %
- (weights, ckpt['epoch'], epochs))
- epochs += ckpt['epoch'] # finetune additional epochs
-
- del ckpt, state_dict
-
- # Image sizes
- gs = max(int(model.stride.max()), 32) # grid size (max stride)
- nl = model.model[-1].nl # number of detection layers (used for scaling hyp['obj'])
- imgsz, imgsz_test = [check_img_size(x, gs) for x in opt.img_size] # verify imgsz are gs-multiples
-
- # DP mode
- if cuda and rank == -1 and torch.cuda.device_count() > 1:
- model = torch.nn.DataParallel(model)
-
- # SyncBatchNorm
- if opt.sync_bn and cuda and rank != -1:
- model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model).to(device)
- logger.info('Using SyncBatchNorm()')
-
- # Trainloader
- dataloader, dataset = create_dataloader(train_path, imgsz, batch_size, gs, opt,
- hyp=hyp, augment=True, cache=opt.cache_images, rect=opt.rect, rank=rank,
- world_size=opt.world_size, workers=opt.workers,
- image_weights=opt.image_weights, quad=opt.quad, prefix=colorstr('train: '))
- mlc = np.concatenate(dataset.labels, 0)[:, 0].max() # max label class
- nb = len(dataloader) # number of batches
- assert mlc < nc, 'Label class %g exceeds nc=%g in %s. Possible class labels are 0-%g' % (mlc, nc, opt.data, nc - 1)
-
- # Process 0
- if rank in [-1, 0]:
- testloader = create_dataloader(test_path, imgsz_test, batch_size * 2, gs, opt, # testloader
- hyp=hyp, cache=opt.cache_images and not opt.notest, rect=True, rank=-1,
- world_size=opt.world_size, workers=opt.workers,
- pad=0.5, prefix=colorstr('val: '))[0]
-
- if not opt.resume:
- labels = np.concatenate(dataset.labels, 0)
- c = torch.tensor(labels[:, 0]) # classes
- # cf = torch.bincount(c.long(), minlength=nc) + 1. # frequency
- # model._initialize_biases(cf.to(device))
- if plots:
- #plot_labels(labels, names, save_dir, loggers)
- if tb_writer:
- tb_writer.add_histogram('classes', c, 0)
-
- # Anchors
- if not opt.noautoanchor:
- check_anchors(dataset, model=model, thr=hyp['anchor_t'], imgsz=imgsz)
- model.half().float() # pre-reduce anchor precision
-
- # DDP mode
- if cuda and rank != -1:
- model = DDP(model, device_ids=[opt.local_rank], output_device=opt.local_rank,
- # nn.MultiheadAttention incompatibility with DDP https://github.com/pytorch/pytorch/issues/26698
- find_unused_parameters=any(isinstance(layer, nn.MultiheadAttention) for layer in model.modules()))
-
- # Model parameters
- hyp['box'] *= 3. / nl # scale to layers
- hyp['cls'] *= nc / 80. * 3. / nl # scale to classes and layers
- hyp['obj'] *= (imgsz / 640) ** 2 * 3. / nl # scale to image size and layers
- hyp['label_smoothing'] = opt.label_smoothing
- model.nc = nc # attach number of classes to model
- model.hyp = hyp # attach hyperparameters to model
- model.gr = 1.0 # iou loss ratio (obj_loss = 1.0 or iou)
- model.class_weights = labels_to_class_weights(dataset.labels, nc).to(device) * nc # attach class weights
- model.names = names
-
- # Start training
- t0 = time.time()
- nw = max(round(hyp['warmup_epochs'] * nb), 1000) # number of warmup iterations, max(3 epochs, 1k iterations)
- # nw = min(nw, (epochs - start_epoch) / 2 * nb) # limit warmup to < 1/2 of training
- maps = np.zeros(nc) # mAP per class
- results = (0, 0, 0, 0, 0, 0, 0) # P, R, mAP@.5, mAP@.5-.95, val_loss(box, obj, cls)
- scheduler.last_epoch = start_epoch - 1 # do not move
- scaler = amp.GradScaler(enabled=cuda)
- compute_loss_ota = ComputeLossAuxOTA(model) # init loss class
- compute_loss = ComputeLoss(model) # init loss class
- logger.info(f'Image sizes {imgsz} train, {imgsz_test} test\n'
- f'Using {dataloader.num_workers} dataloader workers\n'
- f'Logging results to {save_dir}\n'
- f'Starting training for {epochs} epochs...')
- torch.save(model, wdir / 'init.pt')
- for epoch in range(start_epoch, epochs): # epoch ------------------------------------------------------------------
- model.train()
-
- # Update image weights (optional)
- if opt.image_weights:
- # Generate indices
- if rank in [-1, 0]:
- cw = model.class_weights.cpu().numpy() * (1 - maps) ** 2 / nc # class weights
- iw = labels_to_image_weights(dataset.labels, nc=nc, class_weights=cw) # image weights
- dataset.indices = random.choices(range(dataset.n), weights=iw, k=dataset.n) # rand weighted idx
- # Broadcast if DDP
- if rank != -1:
- indices = (torch.tensor(dataset.indices) if rank == 0 else torch.zeros(dataset.n)).int()
- dist.broadcast(indices, 0)
- if rank != 0:
- dataset.indices = indices.cpu().numpy()
-
- # Update mosaic border
- # b = int(random.uniform(0.25 * imgsz, 0.75 * imgsz + gs) // gs * gs)
- # dataset.mosaic_border = [b - imgsz, -b] # height, width borders
-
- mloss = torch.zeros(4, device=device) # mean losses
- if rank != -1:
- dataloader.sampler.set_epoch(epoch)
- pbar = enumerate(dataloader)
- logger.info(('\n' + '%10s' * 8) % ('Epoch', 'gpu_mem', 'box', 'obj', 'cls', 'total', 'labels', 'img_size'))
- if rank in [-1, 0]:
- pbar = tqdm(pbar, total=nb) # progress bar
- optimizer.zero_grad()
- for i, (imgs, targets, paths, _) in pbar: # batch -------------------------------------------------------------
- ni = i + nb * epoch # number integrated batches (since train start)
- imgs = imgs.to(device, non_blocking=True).float() / 255.0 # uint8 to float32, 0-255 to 0.0-1.0
-
- # Warmup
- if ni <= nw:
- xi = [0, nw] # x interp
- # model.gr = np.interp(ni, xi, [0.0, 1.0]) # iou loss ratio (obj_loss = 1.0 or iou)
- accumulate = max(1, np.interp(ni, xi, [1, nbs / total_batch_size]).round())
- for j, x in enumerate(optimizer.param_groups):
- # bias lr falls from 0.1 to lr0, all other lrs rise from 0.0 to lr0
- x['lr'] = np.interp(ni, xi, [hyp['warmup_bias_lr'] if j == 2 else 0.0, x['initial_lr'] * lf(epoch)])
- if 'momentum' in x:
- x['momentum'] = np.interp(ni, xi, [hyp['warmup_momentum'], hyp['momentum']])
-
- # Multi-scale
- if opt.multi_scale:
-                sz = random.randrange(int(imgsz * 0.5), int(imgsz * 1.5 + gs)) // gs * gs  # size (randrange needs ints)
- sf = sz / max(imgs.shape[2:]) # scale factor
- if sf != 1:
- ns = [math.ceil(x * sf / gs) * gs for x in imgs.shape[2:]] # new shape (stretched to gs-multiple)
- imgs = F.interpolate(imgs, size=ns, mode='bilinear', align_corners=False)
-
- # Forward
- with amp.autocast(enabled=cuda):
- pred = model(imgs) # forward
- loss, loss_items = compute_loss_ota(pred, targets.to(device), imgs) # loss scaled by batch_size
- if rank != -1:
- loss *= opt.world_size # gradient averaged between devices in DDP mode
- if opt.quad:
- loss *= 4.
-
- # Backward
- scaler.scale(loss).backward()
-
- # Optimize
- if ni % accumulate == 0:
- scaler.step(optimizer) # optimizer.step
- scaler.update()
- optimizer.zero_grad()
- if ema:
- ema.update(model)
-
- # Print
- if rank in [-1, 0]:
- mloss = (mloss * i + loss_items) / (i + 1) # update mean losses
- mem = '%.3gG' % (torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0) # (GB)
- s = ('%10s' * 2 + '%10.4g' * 6) % (
- '%g/%g' % (epoch, epochs - 1), mem, *mloss, targets.shape[0], imgs.shape[-1])
- pbar.set_description(s)
-
- # Plot
- if plots and ni < 10:
- f = save_dir / f'train_batch{ni}.jpg' # filename
- Thread(target=plot_images, args=(imgs, targets, paths, f), daemon=True).start()
- # if tb_writer:
- # tb_writer.add_image(f, result, dataformats='HWC', global_step=epoch)
- # tb_writer.add_graph(torch.jit.trace(model, imgs, strict=False), []) # add model graph
- elif plots and ni == 10 and wandb_logger.wandb:
- wandb_logger.log({"Mosaics": [wandb_logger.wandb.Image(str(x), caption=x.name) for x in
- save_dir.glob('train*.jpg') if x.exists()]})
-
- # end batch ------------------------------------------------------------------------------------------------
- # end epoch ----------------------------------------------------------------------------------------------------
-
- # Scheduler
- lr = [x['lr'] for x in optimizer.param_groups] # for tensorboard
- scheduler.step()
-
- # DDP process 0 or single-GPU
- if rank in [-1, 0]:
- # mAP
- ema.update_attr(model, include=['yaml', 'nc', 'hyp', 'gr', 'names', 'stride', 'class_weights'])
- final_epoch = epoch + 1 == epochs
- if not opt.notest or final_epoch: # Calculate mAP
- wandb_logger.current_epoch = epoch + 1
- results, maps, times = test.test(data_dict,
- batch_size=batch_size * 2,
- imgsz=imgsz_test,
- model=ema.ema,
- single_cls=opt.single_cls,
- dataloader=testloader,
- save_dir=save_dir,
- verbose=nc < 50 and final_epoch,
- plots=plots and final_epoch,
- wandb_logger=wandb_logger,
- compute_loss=compute_loss,
- is_coco=is_coco,
- v5_metric=opt.v5_metric)
-
- # Write
- with open(results_file, 'a') as f:
- f.write(s + '%10.4g' * 7 % results + '\n') # append metrics, val_loss
- if len(opt.name) and opt.bucket:
- os.system('gsutil cp %s gs://%s/results/results%s.txt' % (results_file, opt.bucket, opt.name))
-
- # Log
- tags = ['train/box_loss', 'train/obj_loss', 'train/cls_loss', # train loss
- 'metrics/precision', 'metrics/recall', 'metrics/mAP_0.5', 'metrics/mAP_0.5:0.95',
- 'val/box_loss', 'val/obj_loss', 'val/cls_loss', # val loss
- 'x/lr0', 'x/lr1', 'x/lr2'] # params
- for x, tag in zip(list(mloss[:-1]) + list(results) + lr, tags):
- if tb_writer:
- tb_writer.add_scalar(tag, x, epoch) # tensorboard
- if wandb_logger.wandb:
- wandb_logger.log({tag: x}) # W&B
-
- # Update best mAP
- fi = fitness(np.array(results).reshape(1, -1)) # weighted combination of [P, R, mAP@.5, mAP@.5-.95]
- if fi > best_fitness:
- best_fitness = fi
- wandb_logger.end_epoch(best_result=best_fitness == fi)
-
- # Save model
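- # Save cadence: 'last.pt' every epoch; 'best.pt' on new bests; 'best_XXX.pt'
- # for new bests after epoch 200; 'epoch_XXX.pt' at epoch 0, every 25th epoch,
- # and over the final 5 epochs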
- if (not opt.nosave) or (final_epoch and not opt.evolve): # if save
- ckpt = {'epoch': epoch,
- 'best_fitness': best_fitness,
- 'training_results': results_file.read_text(),
- 'model': deepcopy(model.module if is_parallel(model) else model).half(),
- 'ema': deepcopy(ema.ema).half(),
- 'updates': ema.updates,
- 'optimizer': optimizer.state_dict(),
- 'wandb_id': wandb_logger.wandb_run.id if wandb_logger.wandb else None}
-
- # Save last, best and delete
- torch.save(ckpt, last)
- if best_fitness == fi:
- torch.save(ckpt, best)
- if (best_fitness == fi) and (epoch >= 200):
- torch.save(ckpt, wdir / 'best_{:03d}.pt'.format(epoch))
- if epoch == 0:
- torch.save(ckpt, wdir / 'epoch_{:03d}.pt'.format(epoch))
- elif ((epoch+1) % 25) == 0:
- torch.save(ckpt, wdir / 'epoch_{:03d}.pt'.format(epoch))
- elif epoch >= (epochs-5):
- torch.save(ckpt, wdir / 'epoch_{:03d}.pt'.format(epoch))
- if wandb_logger.wandb:
- if ((epoch + 1) % opt.save_period == 0 and not final_epoch) and opt.save_period != -1:
- wandb_logger.log_model(
- last.parent, opt, epoch, fi, best_model=best_fitness == fi)
- del ckpt
-
- # end epoch ----------------------------------------------------------------------------------------------------
- # end training
- if rank in [-1, 0]:
- # Plots
- if plots:
- plot_results(save_dir=save_dir) # save as results.png
- if wandb_logger.wandb:
- files = ['results.png', 'confusion_matrix.png', *[f'{x}_curve.png' for x in ('F1', 'PR', 'P', 'R')]]
- wandb_logger.log({"Results": [wandb_logger.wandb.Image(str(save_dir / f), caption=f) for f in files
- if (save_dir / f).exists()]})
- # Test best.pt
- logger.info('%g epochs completed in %.3f hours.\n' % (epoch - start_epoch + 1, (time.time() - t0) / 3600))
- if opt.data.endswith('coco.yaml') and nc == 80: # if COCO
- for m in ((last, best) if best.exists() else (last,)): # speed, mAP tests ((last,) keeps this iterable)
- results, _, _ = test.test(opt.data,
- batch_size=batch_size * 2,
- imgsz=imgsz_test,
- conf_thres=0.001,
- iou_thres=0.7,
- model=attempt_load(m, device).half(),
- single_cls=opt.single_cls,
- dataloader=testloader,
- save_dir=save_dir,
- save_json=True,
- plots=False,
- is_coco=is_coco,
- v5_metric=opt.v5_metric)
-
- # Strip optimizers
- final = best if best.exists() else last # final model
- for f in last, best:
- if f.exists():
- strip_optimizer(f) # strip optimizers
- if opt.bucket:
- os.system(f'gsutil cp {final} gs://{opt.bucket}/weights') # upload
- if wandb_logger.wandb and not opt.evolve: # Log the stripped model
- wandb_logger.wandb.log_artifact(str(final), type='model',
- name='run_' + wandb_logger.wandb_run.id + '_model',
- aliases=['last', 'best', 'stripped'])
- wandb_logger.finish_run()
- else:
- dist.destroy_process_group()
- torch.cuda.empty_cache()
- return results
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--weights', type=str, default='yolo7.pt', help='initial weights path')
- parser.add_argument('--cfg', type=str, default='', help='model.yaml path')
- parser.add_argument('--data', type=str, default='data/coco.yaml', help='data.yaml path')
- parser.add_argument('--hyp', type=str, default='data/hyp.scratch.p5.yaml', help='hyperparameters path')
- parser.add_argument('--epochs', type=int, default=300)
- parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs')
- parser.add_argument('--img-size', nargs='+', type=int, default=[640, 640], help='[train, test] image sizes')
- parser.add_argument('--rect', action='store_true', help='rectangular training')
- parser.add_argument('--resume', nargs='?', const=True, default=False, help='resume most recent training')
- parser.add_argument('--nosave', action='store_true', help='only save final checkpoint')
- parser.add_argument('--notest', action='store_true', help='only test final epoch')
- parser.add_argument('--noautoanchor', action='store_true', help='disable autoanchor check')
- parser.add_argument('--evolve', action='store_true', help='evolve hyperparameters')
- parser.add_argument('--bucket', type=str, default='', help='gsutil bucket')
- parser.add_argument('--cache-images', action='store_true', help='cache images for faster training')
- parser.add_argument('--image-weights', action='store_true', help='use weighted image selection for training')
- parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
- parser.add_argument('--multi-scale', action='store_true', help='vary img-size +/- 50%%')
- parser.add_argument('--single-cls', action='store_true', help='train multi-class data as single-class')
- parser.add_argument('--adam', action='store_true', help='use torch.optim.Adam() optimizer')
- parser.add_argument('--sync-bn', action='store_true', help='use SyncBatchNorm, only available in DDP mode')
- parser.add_argument('--local_rank', type=int, default=-1, help='DDP parameter, do not modify')
- parser.add_argument('--workers', type=int, default=8, help='maximum number of dataloader workers')
- parser.add_argument('--project', default='runs/train', help='save to project/name')
- parser.add_argument('--entity', default=None, help='W&B entity')
- parser.add_argument('--name', default='exp', help='save to project/name')
- parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
- parser.add_argument('--quad', action='store_true', help='quad dataloader')
- parser.add_argument('--linear-lr', action='store_true', help='linear LR')
- parser.add_argument('--label-smoothing', type=float, default=0.0, help='Label smoothing epsilon')
- parser.add_argument('--upload_dataset', action='store_true', help='Upload dataset as W&B artifact table')
- parser.add_argument('--bbox_interval', type=int, default=-1, help='Set bounding-box image logging interval for W&B')
- parser.add_argument('--save_period', type=int, default=-1, help='Log model after every "save_period" epoch')
- parser.add_argument('--artifact_alias', type=str, default="latest", help='version of dataset artifact to be used')
- parser.add_argument('--v5-metric', action='store_true', help='assume maximum recall as 1.0 in AP calculation')
- opt = parser.parse_args()
-
- # Set DDP variables
- opt.world_size = int(os.environ['WORLD_SIZE']) if 'WORLD_SIZE' in os.environ else 1
- opt.global_rank = int(os.environ['RANK']) if 'RANK' in os.environ else -1
- set_logging(opt.global_rank)
- #if opt.global_rank in [-1, 0]:
- # check_git_status()
- # check_requirements()
-
- # Resume
- wandb_run = check_wandb_resume(opt)
- if opt.resume and not wandb_run: # resume an interrupted run
- ckpt = opt.resume if isinstance(opt.resume, str) else get_latest_run() # specified or most recent path
- assert os.path.isfile(ckpt), 'ERROR: --resume checkpoint does not exist'
- apriori = opt.global_rank, opt.local_rank
- with open(Path(ckpt).parent.parent / 'opt.yaml') as f:
- opt = argparse.Namespace(**yaml.load(f, Loader=yaml.SafeLoader)) # replace
- opt.cfg, opt.weights, opt.resume, opt.batch_size, opt.global_rank, opt.local_rank = '', ckpt, True, opt.total_batch_size, *apriori # reinstate
- logger.info('Resuming training from %s' % ckpt)
- else:
- # opt.hyp = opt.hyp or ('hyp.finetune.yaml' if opt.weights else 'hyp.scratch.yaml')
- opt.data, opt.cfg, opt.hyp = check_file(opt.data), check_file(opt.cfg), check_file(opt.hyp) # check files
- assert len(opt.cfg) or len(opt.weights), 'either --cfg or --weights must be specified'
- opt.img_size.extend([opt.img_size[-1]] * (2 - len(opt.img_size))) # extend to 2 sizes (train, test)
- opt.name = 'evolve' if opt.evolve else opt.name
- opt.save_dir = increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok | opt.evolve) # increment run
-
- # DDP mode
- opt.total_batch_size = opt.batch_size
- device = select_device(opt.device, batch_size=opt.batch_size)
- if opt.local_rank != -1:
- assert torch.cuda.device_count() > opt.local_rank
- torch.cuda.set_device(opt.local_rank)
- device = torch.device('cuda', opt.local_rank)
- dist.init_process_group(backend='nccl', init_method='env://') # distributed backend
- assert opt.batch_size % opt.world_size == 0, '--batch-size must be multiple of CUDA device count'
- opt.batch_size = opt.total_batch_size // opt.world_size
-
- # Hyperparameters
- with open(opt.hyp) as f:
- hyp = yaml.load(f, Loader=yaml.SafeLoader) # load hyps
-
- # Train
- logger.info(opt)
- if not opt.evolve:
- tb_writer = None # init loggers
- if opt.global_rank in [-1, 0]:
- prefix = colorstr('tensorboard: ')
- logger.info(f"{prefix}Start with 'tensorboard --logdir {opt.project}', view at http://localhost:6006/")
- tb_writer = SummaryWriter(opt.save_dir) # Tensorboard
- train(hyp, opt, device, tb_writer)
-
- # Evolve hyperparameters (optional)
- else:
- # Hyperparameter evolution metadata (mutation scale 0-1, lower_limit, upper_limit)
- meta = {'lr0': (1, 1e-5, 1e-1), # initial learning rate (SGD=1E-2, Adam=1E-3)
- 'lrf': (1, 0.01, 1.0), # final OneCycleLR learning rate (lr0 * lrf)
- 'momentum': (0.3, 0.6, 0.98), # SGD momentum/Adam beta1
- 'weight_decay': (1, 0.0, 0.001), # optimizer weight decay
- 'warmup_epochs': (1, 0.0, 5.0), # warmup epochs (fractions ok)
- 'warmup_momentum': (1, 0.0, 0.95), # warmup initial momentum
- 'warmup_bias_lr': (1, 0.0, 0.2), # warmup initial bias lr
- 'box': (1, 0.02, 0.2), # box loss gain
- 'cls': (1, 0.2, 4.0), # cls loss gain
- 'cls_pw': (1, 0.5, 2.0), # cls BCELoss positive_weight
- 'obj': (1, 0.2, 4.0), # obj loss gain (scale with pixels)
- 'obj_pw': (1, 0.5, 2.0), # obj BCELoss positive_weight
- 'iou_t': (0, 0.1, 0.7), # IoU training threshold
- 'anchor_t': (1, 2.0, 8.0), # anchor-multiple threshold
- 'anchors': (2, 2.0, 10.0), # anchors per output grid (0 to ignore)
- 'fl_gamma': (0, 0.0, 2.0), # focal loss gamma (efficientDet default gamma=1.5)
- 'hsv_h': (1, 0.0, 0.1), # image HSV-Hue augmentation (fraction)
- 'hsv_s': (1, 0.0, 0.9), # image HSV-Saturation augmentation (fraction)
- 'hsv_v': (1, 0.0, 0.9), # image HSV-Value augmentation (fraction)
- 'degrees': (1, 0.0, 45.0), # image rotation (+/- deg)
- 'translate': (1, 0.0, 0.9), # image translation (+/- fraction)
- 'scale': (1, 0.0, 0.9), # image scale (+/- gain)
- 'shear': (1, 0.0, 10.0), # image shear (+/- deg)
- 'perspective': (0, 0.0, 0.001), # image perspective (+/- fraction), range 0-0.001
- 'flipud': (1, 0.0, 1.0), # image flip up-down (probability)
- 'fliplr': (0, 0.0, 1.0), # image flip left-right (probability)
- 'mosaic': (1, 0.0, 1.0), # image mosaic (probability)
- 'mixup': (1, 0.0, 1.0)} # image mixup (probability)
-
- with open(opt.hyp, errors='ignore') as f:
- hyp = yaml.safe_load(f) # load hyps dict
- if 'anchors' not in hyp: # anchors commented in hyp.yaml
- hyp['anchors'] = 3
-
- assert opt.local_rank == -1, 'DDP mode not implemented for --evolve'
- opt.notest, opt.nosave = True, True # only test/save final epoch
- # ei = [isinstance(x, (int, float)) for x in hyp.values()] # evolvable indices
- yaml_file = Path(opt.save_dir) / 'hyp_evolved.yaml' # save best result here
- if opt.bucket:
- os.system('gsutil cp gs://%s/evolve.txt .' % opt.bucket) # download evolve.txt if exists
-
- for _ in range(300): # generations to evolve
- if Path('evolve.txt').exists(): # if evolve.txt exists: select best hyps and mutate
- # Select parent(s)
- parent = 'single' # parent selection method: 'single' or 'weighted'
- x = np.loadtxt('evolve.txt', ndmin=2)
- n = min(5, len(x)) # number of previous results to consider
- x = x[np.argsort(-fitness(x))][:n] # top n mutations
- w = fitness(x) - fitness(x).min() # weights
- if parent == 'single' or len(x) == 1:
- # x = x[random.randint(0, n - 1)] # random selection
- x = x[random.choices(range(n), weights=w)[0]] # weighted selection
- elif parent == 'weighted':
- x = (x * w.reshape(n, 1)).sum(0) / w.sum() # weighted combination
-
- # Mutate
- mp, s = 0.8, 0.2 # mutation probability, sigma
- npr = np.random
- npr.seed(int(time.time()))
- g = np.array([x[0] for x in meta.values()]) # gains 0-1
- ng = len(meta)
- v = np.ones(ng)
- while all(v == 1): # mutate until a change occurs (prevent duplicates)
- v = (g * (npr.random(ng) < mp) * npr.randn(ng) * npr.random() * s + 1).clip(0.3, 3.0)
- for i, k in enumerate(hyp.keys()): # plt.hist(v.ravel(), 300)
- hyp[k] = float(x[i + 7] * v[i]) # mutate
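- # columns 0-6 of each evolve.txt row hold the 7 result metrics
- # (P, R, mAP@.5, mAP@.5:.95 and the three val losses), so the
- # hyperparameter values start at column 7 -- hence x[i + 7]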
-
- # Constrain to limits
- for k, v in meta.items():
- hyp[k] = max(hyp[k], v[1]) # lower limit
- hyp[k] = min(hyp[k], v[2]) # upper limit
- hyp[k] = round(hyp[k], 5) # significant digits
-
- # Train mutation
- results = train(hyp.copy(), opt, device)
-
- # Write mutation results
- print_mutation(hyp.copy(), results, yaml_file, opt.bucket)
-
- # Plot results
- plot_evolution(yaml_file)
- print(f'Hyperparameter evolution complete. Best results saved as: {yaml_file}\n'
- f'Command to train a new model with these hyperparameters: $ python train.py --hyp {yaml_file}')
diff --git a/spaces/emc348/faces-through-time/color_transfer_loss.py b/spaces/emc348/faces-through-time/color_transfer_loss.py
deleted file mode 100644
index febfb5db954078c0839c93a3dd11a86451839c8c..0000000000000000000000000000000000000000
--- a/spaces/emc348/faces-through-time/color_transfer_loss.py
+++ /dev/null
@@ -1,60 +0,0 @@
-from typing import List, Optional
-
-import torch
-from torch import nn
-from torch.nn.functional import (
- smooth_l1_loss,
-)
-
-
-def flatten_CHW(im: torch.Tensor) -> torch.Tensor:
- """
- (B, C, H, W) -> (B, -1)
- """
- B = im.shape[0]
- return im.reshape(B, -1)
-
-
-def stddev(x: torch.Tensor) -> torch.Tensor:
- """
- x: (B, -1), assumed to be mean-normalized
- Returns:
- stddev: (B)
- """
- return torch.sqrt(torch.mean(x * x, dim=-1))
-
-
-def gram_matrix(input_):
- B, C = input_.shape[:2]
- features = input_.view(B, C, -1)
- N = features.shape[-1]
- G = torch.bmm(features, features.transpose(1, 2)) # (B, C, C)
- return G.div(C * N)
-
-
-class ColorTransferLoss(nn.Module):
- """Penalize the gram matrix difference between StyleGAN2's ToRGB outputs"""
- def __init__(
- self,
- init_rgbs,
- scale_rgb: bool = False
- ):
- super().__init__()
-
- with torch.no_grad():
- init_feats = [x.detach() for x in init_rgbs]
- self.stds = [stddev(flatten_CHW(rgb)) if scale_rgb else 1 for rgb in init_feats] # (B,) or scalar
- self.grams = [gram_matrix(rgb / std) for rgb, std in zip(init_feats, self.stds)]
-
- def forward(self, rgbs: List[torch.Tensor], level: int = None):
- if level is None:
- level = len(self.grams)
-
- feats = rgbs
- loss = 0
- for i, (rgb, std) in enumerate(zip(feats[:level], self.stds[:level])):
- G = gram_matrix(rgb / std)
- loss = loss + smooth_l1_loss(G, self.grams[i])
-
- return loss
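-# A minimal usage sketch (hypothetical shapes; assumes one ToRGB feature map per
-# resolution level of the synthesis network):
-#   init_rgbs = [torch.randn(1, 3, 4 * 2 ** i, 4 * 2 ** i) for i in range(5)]
-#   criterion = ColorTransferLoss(init_rgbs, scale_rgb=True)
-#   loss = criterion(current_rgbs, level=3)  # match colors on the 3 coarsest levels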
-
diff --git a/spaces/exbert-project/exbert/server/model_api.py b/spaces/exbert-project/exbert/server/model_api.py
deleted file mode 100644
index f3fea31468796a4ce612e15eaa77ea5eb570212b..0000000000000000000000000000000000000000
--- a/spaces/exbert-project/exbert/server/model_api.py
+++ /dev/null
@@ -1,164 +0,0 @@
-from typing import List, Union, Tuple
-
-import torch
-from transformers import AutoConfig, AutoTokenizer, AutoModelWithLMHead, AutoModel
-
-from transformer_formatter import TransformerOutputFormatter
-from utils.f import delegates, pick, memoize
-
-@memoize
-def get_details(mname):
- return ModelDetails(mname)
-
-def get_model_tok(mname):
- conf = AutoConfig.from_pretrained(mname, output_attentions=True, output_past=False)
- tok = AutoTokenizer.from_pretrained(mname, config=conf)
- model = AutoModelWithLMHead.from_pretrained(mname, config=conf)
- return model, tok
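-
-# Usage sketch (hypothetical model name; any checkpoint compatible with
-# AutoModelWithLMHead should work):
-#   model, tok = get_model_tok("gpt2")
-#   details = get_details("gpt2")  # memoized ModelDetails wrapper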
-
-class ModelDetails:
- """Wraps a transformer model and tokenizer to prepare inputs to the frontend visualization"""
- def __init__(self, mname):
- self.mname = mname
- self.model, self.tok = get_model_tok(self.mname)
- self.model.eval()
- self.config = self.model.config
-
- def from_sentence(self, sentence: str) -> TransformerOutputFormatter:
- """Get attentions and word probabilities from a sentence. Special tokens are automatically added if a sentence is passed.
-
- Args:
- sentence: The input sentence to tokenize and analyze.
- """
- tokens = self.tok.tokenize(sentence)
-
- return self.from_tokens(tokens, sentence, add_special_tokens=True)
-
- def from_tokens(
- self, tokens: List[str], orig_sentence:str, add_special_tokens:bool=False, mask_attentions:bool=False, topk:int=5
- ) -> TransformerOutputFormatter:
- """Get formatted attention and predictions from a list of tokens.
-
- Args:
- tokens: Tokens to analyze
- orig_sentence: The sentence the tokens came from (needed to help organize the output)
- add_special_tokens: Whether to add special tokens like CLS / <|endoftext|> to the tokens.
- If False, assume the tokens already have the special tokens
- mask_attentions: If True, mask out attention to the special tokens throughout the model.
- topk: How many top predictions to report
- """
- ids = self.tok.convert_tokens_to_ids(tokens)
-
- # For GPT2, add the beginning of sentence token to the input. Note that this will work on all models but XLM
- bost = self.tok.bos_token_id
- clst = self.tok.cls_token_id
- sept = self.tok.sep_token_id
- if (bost is not None) and (bost != clst) and add_special_tokens:
- ids.insert(0, bost)
-
- inputs = self.tok.prepare_for_model(ids, add_special_tokens=add_special_tokens, return_tensors="pt")
- parsed_input = self.parse_inputs(inputs, mask_attentions=mask_attentions)
- output = self.model(parsed_input['input_ids'], attention_mask=parsed_input['attention_mask'])
-
- logits, atts = self.choose_logits_att(output)
- words, probs = self.logits2words(logits, topk)
- tokens = self.view_ids(inputs["input_ids"])
-
- formatted_output = TransformerOutputFormatter(
- orig_sentence,
- tokens,
- inputs["special_tokens_mask"],
- atts,
- words,
- probs.tolist(),
- self.config
- )
-
- return formatted_output
-
- def choose_logits_att(self, out:Tuple) -> Tuple:
- """Select from the model's output the logits and the attentions, switching on model name
-
- Args:
- out: Output from the model's forward pass
-
- Returns:
- (logits: tensor((bs, N)), attentions: Tuple[tensor(())])
- """
- if 't5' in self.mname:
- logits, _, atts = out
- else:
- logits, atts = out
-
- return logits, atts
-
- def logits2words(self, logits, topk):
- """Convert logit probabilities into words from the tokenizer's vocabulary.
-
- """
- probs, idxs = torch.topk(torch.softmax(logits.squeeze(0), 1), topk)
- words = [self.tok.convert_ids_to_tokens(i) for i in idxs]
- return words, probs
-
- def view_ids(self, ids: Union[List[int], torch.Tensor]) -> List[str]:
- """View what the tokenizer thinks certain ids are for a single input"""
- if type(ids) == torch.Tensor:
- # Remove batch dimension
- ids = ids.squeeze(0).tolist()
-
- out = self.tok.convert_ids_to_tokens(ids)
- return out
-
- def parse_inputs(self, inputs, mask_attentions=False):
- """Parse the output from `tokenizer.prepare_for_model` to the desired attention mask from special tokens
-
- Args:
- - inputs: The output of `tokenizer.prepare_for_model`.
- A dict with keys: {'special_token_mask', 'token_type_ids', 'input_ids'}
- - mask_attentions: Flag indicating whether to mask the attentions or not
-
- Returns:
- Dict with keys: {'input_ids', 'token_type_ids', 'attention_mask', 'special_tokens_mask'}
-
- Usage:
-
- ```
- s = "test sentence"
-
- # from raw sentence to tokens
- tokens = tokenizer.tokenize(s)
-
- # From tokens to ids
- ids = tokenizer.convert_tokens_to_ids(tokens)
-
- # From ids to input
- inputs = tokenizer.prepare_for_model(ids, return_tensors='pt')
-
- # Parse the input. Optionally mask the special tokens from the analysis.
- parsed_input = parse_inputs(inputs)
-
- # Run the model, pick from this output whatever inputs you want
- from utils.f import pick
- out = model(**pick(['input_ids'], parse_inputs(inputs)))
- ```
- """
-
- out = inputs.copy()
-
- # DEFINE SPECIAL TOKENS MASK
- if "special_tokens_mask" not in inputs.keys():
- special_tokens = set([self.tok.unk_token_id, self.tok.cls_token_id, self.tok.sep_token_id, self.tok.bos_token_id, self.tok.eos_token_id, self.tok.pad_token_id])
- in_ids = inputs['input_ids'][0]
- special_tok_mask = [1 if int(i) in special_tokens else 0 for i in in_ids]
- inputs['special_tokens_mask'] = special_tok_mask
-
- if mask_attentions:
- out["attention_mask"] = torch.tensor(
- [int(not i) for i in inputs.get("special_tokens_mask")]
- ).unsqueeze(0)
- else:
- out["attention_mask"] = torch.tensor(
- [1 for i in inputs.get("special_tokens_mask")]
- ).unsqueeze(0)
-
- return out
\ No newline at end of file
diff --git a/spaces/facebook/MusicGen/audiocraft/modules/conv.py b/spaces/facebook/MusicGen/audiocraft/modules/conv.py
deleted file mode 100644
index d115cbf8729b642ed78608bd00a4d0fd5afae6fd..0000000000000000000000000000000000000000
--- a/spaces/facebook/MusicGen/audiocraft/modules/conv.py
+++ /dev/null
@@ -1,243 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-import typing as tp
-import warnings
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-from torch.nn.utils import spectral_norm, weight_norm
-
-
-CONV_NORMALIZATIONS = frozenset(['none', 'weight_norm', 'spectral_norm',
- 'time_group_norm'])
-
-
-def apply_parametrization_norm(module: nn.Module, norm: str = 'none'):
- assert norm in CONV_NORMALIZATIONS
- if norm == 'weight_norm':
- return weight_norm(module)
- elif norm == 'spectral_norm':
- return spectral_norm(module)
- else:
- # We already checked that norm is in CONV_NORMALIZATIONS, so any other
- # choice doesn't need reparametrization.
- return module
-
-
-def get_norm_module(module: nn.Module, causal: bool = False, norm: str = 'none', **norm_kwargs):
- """Return the proper normalization module. If causal is True, this will ensure the returned
- module is causal, or raise an error if the normalization doesn't support causal evaluation.
- """
- assert norm in CONV_NORMALIZATIONS
- if norm == 'time_group_norm':
- if causal:
- raise ValueError("GroupNorm doesn't support causal evaluation.")
- assert isinstance(module, nn.modules.conv._ConvNd)
- return nn.GroupNorm(1, module.out_channels, **norm_kwargs)
- else:
- return nn.Identity()
-
-
-def get_extra_padding_for_conv1d(x: torch.Tensor, kernel_size: int, stride: int,
- padding_total: int = 0) -> int:
- """See `pad_for_conv1d`."""
- length = x.shape[-1]
- n_frames = (length - kernel_size + padding_total) / stride + 1
- ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total)
- return ideal_length - length
-
-
-def pad_for_conv1d(x: torch.Tensor, kernel_size: int, stride: int, padding_total: int = 0):
- """Pad for a convolution to make sure that the last window is full.
- Extra padding is added at the end. This is required to ensure that we can rebuild
- an output of the same length, as otherwise, even with padding, some time steps
- might get removed.
- For instance, with total padding = 4, kernel size = 4, stride = 2:
- 0 0 1 2 3 4 5 0 0 # (0s are padding)
- 1 2 3 # (output frames of a convolution, last 0 is never used)
- 0 0 1 2 3 4 5 0 # (output of tr. conv., but pos. 5 is going to get removed as padding)
- 1 2 3 4 # once padding is removed, we are missing one time step!
- """
- extra_padding = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total)
- return F.pad(x, (0, extra_padding))
-
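-# Sanity check of the docstring example above (assumed shapes): with
-# kernel_size=4, stride=2 and padding_total=4, a length-5 input needs one
-# extra sample of right padding so the last window is full:
-#   x = torch.zeros(1, 1, 5)
-#   assert get_extra_padding_for_conv1d(x, 4, 2, 4) == 1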
-
-def pad1d(x: torch.Tensor, paddings: tp.Tuple[int, int], mode: str = 'constant', value: float = 0.):
- """Tiny wrapper around F.pad, just to allow for reflect padding on small input.
- If this is the case, we insert extra 0 padding to the right before the reflection happens.
- """
- length = x.shape[-1]
- padding_left, padding_right = paddings
- assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right)
- if mode == 'reflect':
- max_pad = max(padding_left, padding_right)
- extra_pad = 0
- if length <= max_pad:
- extra_pad = max_pad - length + 1
- x = F.pad(x, (0, extra_pad))
- padded = F.pad(x, paddings, mode, value)
- end = padded.shape[-1] - extra_pad
- return padded[..., :end]
- else:
- return F.pad(x, paddings, mode, value)
-
-
-def unpad1d(x: torch.Tensor, paddings: tp.Tuple[int, int]):
- """Remove padding from x, handling properly zero padding. Only for 1d!"""
- padding_left, padding_right = paddings
- assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right)
- assert (padding_left + padding_right) <= x.shape[-1]
- end = x.shape[-1] - padding_right
- return x[..., padding_left: end]
-
-
-class NormConv1d(nn.Module):
- """Wrapper around Conv1d and normalization applied to this conv
- to provide a uniform interface across normalization approaches.
- """
- def __init__(self, *args, causal: bool = False, norm: str = 'none',
- norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs):
- super().__init__()
- self.conv = apply_parametrization_norm(nn.Conv1d(*args, **kwargs), norm)
- self.norm = get_norm_module(self.conv, causal, norm, **norm_kwargs)
- self.norm_type = norm
-
- def forward(self, x):
- x = self.conv(x)
- x = self.norm(x)
- return x
-
-
-class NormConv2d(nn.Module):
- """Wrapper around Conv2d and normalization applied to this conv
- to provide a uniform interface across normalization approaches.
- """
- def __init__(self, *args, norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs):
- super().__init__()
- self.conv = apply_parametrization_norm(nn.Conv2d(*args, **kwargs), norm)
- self.norm = get_norm_module(self.conv, causal=False, norm=norm, **norm_kwargs)
- self.norm_type = norm
-
- def forward(self, x):
- x = self.conv(x)
- x = self.norm(x)
- return x
-
-
-class NormConvTranspose1d(nn.Module):
- """Wrapper around ConvTranspose1d and normalization applied to this conv
- to provide a uniform interface across normalization approaches.
- """
- def __init__(self, *args, causal: bool = False, norm: str = 'none',
- norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs):
- super().__init__()
- self.convtr = apply_parametrization_norm(nn.ConvTranspose1d(*args, **kwargs), norm)
- self.norm = get_norm_module(self.convtr, causal, norm, **norm_kwargs)
- self.norm_type = norm
-
- def forward(self, x):
- x = self.convtr(x)
- x = self.norm(x)
- return x
-
-
-class NormConvTranspose2d(nn.Module):
- """Wrapper around ConvTranspose2d and normalization applied to this conv
- to provide a uniform interface across normalization approaches.
- """
- def __init__(self, *args, norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs):
- super().__init__()
- self.convtr = apply_parametrization_norm(nn.ConvTranspose2d(*args, **kwargs), norm)
- self.norm = get_norm_module(self.convtr, causal=False, norm=norm, **norm_kwargs)
-
- def forward(self, x):
- x = self.convtr(x)
- x = self.norm(x)
- return x
-
-
-class StreamableConv1d(nn.Module):
- """Conv1d with some builtin handling of asymmetric or causal padding
- and normalization.
- """
- def __init__(self, in_channels: int, out_channels: int,
- kernel_size: int, stride: int = 1, dilation: int = 1,
- groups: int = 1, bias: bool = True, causal: bool = False,
- norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {},
- pad_mode: str = 'reflect'):
- super().__init__()
- # warn user on unusual setup between dilation and stride
- if stride > 1 and dilation > 1:
- warnings.warn("StreamableConv1d has been initialized with stride > 1 and dilation > 1"
- f" (kernel_size={kernel_size} stride={stride}, dilation={dilation}).")
- self.conv = NormConv1d(in_channels, out_channels, kernel_size, stride,
- dilation=dilation, groups=groups, bias=bias, causal=causal,
- norm=norm, norm_kwargs=norm_kwargs)
- self.causal = causal
- self.pad_mode = pad_mode
-
- def forward(self, x):
- B, C, T = x.shape
- kernel_size = self.conv.conv.kernel_size[0]
- stride = self.conv.conv.stride[0]
- dilation = self.conv.conv.dilation[0]
- kernel_size = (kernel_size - 1) * dilation + 1 # effective kernel size with dilations
- padding_total = kernel_size - stride
- extra_padding = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total)
- if self.causal:
- # Left padding for causal
- x = pad1d(x, (padding_total, extra_padding), mode=self.pad_mode)
- else:
- # Asymmetric padding required for odd strides
- padding_right = padding_total // 2
- padding_left = padding_total - padding_right
- x = pad1d(x, (padding_left, padding_right + extra_padding), mode=self.pad_mode)
- return self.conv(x)
-
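-# Shape sketch (assumed values): a causal StreamableConv1d with stride 2 keeps
-# ceil(T / stride) frames for any T, thanks to the extra right padding:
-#   conv = StreamableConv1d(1, 8, kernel_size=4, stride=2, causal=True)
-#   y = conv(torch.randn(2, 1, 50))  # -> (2, 8, 25)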
-
-class StreamableConvTranspose1d(nn.Module):
- """ConvTranspose1d with some builtin handling of asymmetric or causal padding
- and normalization.
- """
- def __init__(self, in_channels: int, out_channels: int,
- kernel_size: int, stride: int = 1, causal: bool = False,
- norm: str = 'none', trim_right_ratio: float = 1.,
- norm_kwargs: tp.Dict[str, tp.Any] = {}):
- super().__init__()
- self.convtr = NormConvTranspose1d(in_channels, out_channels, kernel_size, stride,
- causal=causal, norm=norm, norm_kwargs=norm_kwargs)
- self.causal = causal
- self.trim_right_ratio = trim_right_ratio
- assert self.causal or self.trim_right_ratio == 1., \
- "`trim_right_ratio` != 1.0 only makes sense for causal convolutions"
- assert self.trim_right_ratio >= 0. and self.trim_right_ratio <= 1.
-
- def forward(self, x):
- kernel_size = self.convtr.convtr.kernel_size[0]
- stride = self.convtr.convtr.stride[0]
- padding_total = kernel_size - stride
-
- y = self.convtr(x)
-
- # We will only trim fixed padding. Extra padding from `pad_for_conv1d` would be
- # removed at the very end, when keeping only the right length for the output,
- # as removing it here would require also passing the length at the matching layer
- # in the encoder.
- if self.causal:
- # Trim the padding on the right according to the specified ratio
- # if trim_right_ratio = 1.0, trim everything from right
- padding_right = math.ceil(padding_total * self.trim_right_ratio)
- padding_left = padding_total - padding_right
- y = unpad1d(y, (padding_left, padding_right))
- else:
- # Asymmetric padding required for odd strides
- padding_right = padding_total // 2
- padding_left = padding_total - padding_right
- y = unpad1d(y, (padding_left, padding_right))
- return y
diff --git a/spaces/facebook/StyleNeRF/training/facial_recognition/model_irse.py b/spaces/facebook/StyleNeRF/training/facial_recognition/model_irse.py
deleted file mode 100644
index 48f3dc128f0ba7bfe49ae43a65f8922786a236b2..0000000000000000000000000000000000000000
--- a/spaces/facebook/StyleNeRF/training/facial_recognition/model_irse.py
+++ /dev/null
@@ -1,86 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-
-from torch.nn import Linear, Conv2d, BatchNorm1d, BatchNorm2d, PReLU, Dropout, Sequential, Module
-from training.facial_recognition.helpers import get_blocks, Flatten, bottleneck_IR, bottleneck_IR_SE, l2_norm
-
-"""
-Modified Backbone implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch)
-"""
-
-
-class Backbone(Module):
- def __init__(self, input_size, num_layers, mode='ir', drop_ratio=0.4, affine=True):
- super(Backbone, self).__init__()
- assert input_size in [112, 224], "input_size should be 112 or 224"
- assert num_layers in [50, 100, 152], "num_layers should be 50, 100 or 152"
- assert mode in ['ir', 'ir_se'], "mode should be ir or ir_se"
- blocks = get_blocks(num_layers)
- if mode == 'ir':
- unit_module = bottleneck_IR
- elif mode == 'ir_se':
- unit_module = bottleneck_IR_SE
- self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False),
- BatchNorm2d(64),
- PReLU(64))
- if input_size == 112:
- self.output_layer = Sequential(BatchNorm2d(512),
- Dropout(drop_ratio),
- Flatten(),
- Linear(512 * 7 * 7, 512),
- BatchNorm1d(512, affine=affine))
- else:
- self.output_layer = Sequential(BatchNorm2d(512),
- Dropout(drop_ratio),
- Flatten(),
- Linear(512 * 14 * 14, 512),
- BatchNorm1d(512, affine=affine))
-
- modules = []
- for block in blocks:
- for bottleneck in block:
- modules.append(unit_module(bottleneck.in_channel,
- bottleneck.depth,
- bottleneck.stride))
- self.body = Sequential(*modules)
-
- def forward(self, x):
- x = self.input_layer(x)
- x = self.body(x)
- x = self.output_layer(x)
- return l2_norm(x)
-
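-# Usage sketch (assumed input: aligned 112x112 RGB face crops, NCHW float):
-#   import torch
-#   net = IR_SE_50(112)
-#   emb = net(torch.randn(4, 3, 112, 112))  # -> (4, 512) L2-normalized embeddings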
-
-def IR_50(input_size):
- """Constructs a ir-50 model."""
- model = Backbone(input_size, num_layers=50, mode='ir', drop_ratio=0.4, affine=False)
- return model
-
-
-def IR_101(input_size):
- """Constructs a ir-101 model."""
- model = Backbone(input_size, num_layers=100, mode='ir', drop_ratio=0.4, affine=False)
- return model
-
-
-def IR_152(input_size):
- """Constructs a ir-152 model."""
- model = Backbone(input_size, num_layers=152, mode='ir', drop_ratio=0.4, affine=False)
- return model
-
-
-def IR_SE_50(input_size):
- """Constructs a ir_se-50 model."""
- model = Backbone(input_size, num_layers=50, mode='ir_se', drop_ratio=0.4, affine=False)
- return model
-
-
-def IR_SE_101(input_size):
- """Constructs a ir_se-101 model."""
- model = Backbone(input_size, num_layers=100, mode='ir_se', drop_ratio=0.4, affine=False)
- return model
-
-
-def IR_SE_152(input_size):
- """Constructs a ir_se-152 model."""
- model = Backbone(input_size, num_layers=152, mode='ir_se', drop_ratio=0.4, affine=False)
- return model
diff --git a/spaces/failfast/2D-GameCreator/src/pages/api/generate.ts b/spaces/failfast/2D-GameCreator/src/pages/api/generate.ts
deleted file mode 100644
index ea49f617169528c37002c0d035081349efd3ff6a..0000000000000000000000000000000000000000
--- a/spaces/failfast/2D-GameCreator/src/pages/api/generate.ts
+++ /dev/null
@@ -1,24 +0,0 @@
-import { toOpenAI } from "@/services/api";
-import { NextApiRequest, NextApiResponse } from "next";
-import { AxiosError } from "axios";
-import process from "node:process";
-import { createClient } from "@/services/api/openai";
-import { CustomAxiosError } from "@/services/api/axios";
-
-export default async function handler(request: NextApiRequest, response: NextApiResponse) {
- switch (request.method) {
- case "POST":
- try {
- const client = createClient(process.env.OPENAI_API_KEY as string);
-
- const answer = await toOpenAI({ ...request.body, client });
- return response.status(200).json(answer);
- } catch (error) {
- return response.status((error as AxiosError).status ?? 500).json({
- message: (error as CustomAxiosError).data?.error?.message ?? "UNKNOWN",
- });
- }
- default:
- return response.status(405).json({});
- }
-}
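-
-// Example client call (assumed request body; the exact shape is whatever
-// `toOpenAI` expects):
-//   await fetch("/api/generate", {
-//     method: "POST",
-//     headers: { "Content-Type": "application/json" },
-//     body: JSON.stringify({ prompt: "Create a pong clone" }),
-//   });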
diff --git a/spaces/faizhalas/coconut/pages/2 Topic Modeling.py b/spaces/faizhalas/coconut/pages/2 Topic Modeling.py
deleted file mode 100644
index d4fdd593731f1b6e9c4d209c7cddd298b470a838..0000000000000000000000000000000000000000
--- a/spaces/faizhalas/coconut/pages/2 Topic Modeling.py
+++ /dev/null
@@ -1,436 +0,0 @@
-#import module
-import streamlit as st
-import pandas as pd
-import numpy as np
-import re
-import nltk
-nltk.download('wordnet')
-from nltk.stem import WordNetLemmatizer
-nltk.download('stopwords')
-from nltk.corpus import stopwords
-import gensim
-import gensim.corpora as corpora
-from gensim.corpora import Dictionary
-from gensim.models.coherencemodel import CoherenceModel
-from gensim.models.ldamodel import LdaModel
-from pprint import pprint
-import pickle
-import pyLDAvis
-import pyLDAvis.gensim_models as gensimvis
-import streamlit.components.v1 as components
-from io import StringIO
-from ipywidgets.embed import embed_minimal_html
-from nltk.stem.snowball import SnowballStemmer
-from bertopic import BERTopic
-import plotly.express as px
-from sklearn.cluster import KMeans
-import bitermplus as btm
-import tmplot as tmp
-import tomotopy
-import sys
-import spacy
-import en_core_web_sm
-import pipeline
-from html2image import Html2Image
-from umap import UMAP
-import os
-import time
-
-
-#===config===
-st.set_page_config(
- page_title="Coconut",
- page_icon="🥥",
- layout="wide"
-)
-st.header("Topic Modeling")
-# standard snippet to hide Streamlit's default menu and footer
-hide_streamlit_style = """
-    <style>
-    #MainMenu {visibility: hidden;}
-    footer {visibility: hidden;}
-    </style>
-    """
-st.markdown(hide_streamlit_style, unsafe_allow_html=True)
-
-st.subheader('Put your file here...')
-
-#========unique id========
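-# st.cache_resource returns the same list object across reruns, so it acts as a
-# shared counter: each rerun bumps l[0] in place, giving a fresh uID that keeps
-# the per-file caches below from colliding between uploads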
-@st.cache_resource(ttl=3600)
-def create_list():
- l = [1, 2, 3]
- return l
-
-l = create_list()
-first_list_value = l[0]
-l[0] = first_list_value + 1
-uID = str(l[0])
-
-@st.cache_data(ttl=3600)
-def get_ext(uploaded_file):
- extype = uID+uploaded_file.name
- return extype
-
-#===clear cache===
-
-def reset_biterm():
- try:
- biterm_map.clear()
- biterm_bar.clear()
- except NameError:
- biterm_topic.clear()
-
-def reset_all():
- st.cache_data.clear()
-
-#===avoiding deadlock===
-os.environ["TOKENIZERS_PARALLELISM"] = "false"
-
-#===clean csv===
-@st.cache_data(ttl=3600, show_spinner=False)
-def clean_csv(extype):
- try:
- paper = papers.dropna(subset=['Abstract'])
- except KeyError:
- st.error('Error: Please check your Abstract column.')
- sys.exit(1)
- paper = paper[~paper.Abstract.str.contains("No abstract available")]
- paper = paper[~paper.Abstract.str.contains("STRAIT")]
-
- #===mapping===
- paper['Abstract_pre'] = paper['Abstract'].map(lambda x: re.sub('[,:;\.!-?•=]', ' ', x))
- paper['Abstract_pre'] = paper['Abstract_pre'].map(lambda x: x.lower())
- paper['Abstract_pre'] = paper['Abstract_pre'].map(lambda x: re.sub('©.*', '', x))
- paper['Abstract_pre'] = paper['Abstract_pre'].str.replace('\u201c|\u201d', '', regex=True)
-
- #===stopword removal===
- stop = stopwords.words('english')
- paper['Abstract_stop'] = paper['Abstract_pre'].apply(lambda x: ' '.join([word for word in x.split() if word not in (stop)]))
-
- #===lemmatize===
- lemmatizer = WordNetLemmatizer()
- def lemmatize_words(text):
- words = text.split()
- words = [lemmatizer.lemmatize(word) for word in words]
- return ' '.join(words)
- paper['Abstract_lem'] = paper['Abstract_stop'].apply(lemmatize_words)
-
- words_rmv = [word.strip() for word in words_to_remove.split(";")]
- remove_dict = {word: None for word in words_rmv}
- def remove_words(text):
- words = text.split()
- cleaned_words = [word for word in words if word not in remove_dict]
- return ' '.join(cleaned_words)
- paper['Abstract_lem'] = paper['Abstract_lem'].map(remove_words)
-
- topic_abs = paper.Abstract_lem.values.tolist()
- return topic_abs, paper
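-
-# Illustrative effect of the pipeline above (assumed input):
-#   "The Effects of X, a Study." -> "effect x study"
-# after punctuation removal, lowercasing, stopword removal and lemmatization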
-
-#===upload file===
-@st.cache_data(ttl=3600)
-def upload(file):
- papers = pd.read_csv(uploaded_file)
- return papers
-
-@st.cache_data(ttl=3600)
-def conv_txt(extype):
- col_dict = {'TI': 'Title',
- 'SO': 'Source title',
- 'DT': 'Document Type',
- 'AB': 'Abstract',
- 'PY': 'Year'}
- papers = pd.read_csv(uploaded_file, sep='\t', lineterminator='\r')
- papers.rename(columns=col_dict, inplace=True)
- return papers
-
-
-#===Read data===
-uploaded_file = st.file_uploader("Choose a file", type=['csv', 'txt'], on_change=reset_all)
-
-if uploaded_file is not None:
- extype = get_ext(uploaded_file)
-
- if extype.endswith('.csv'):
- papers = upload(extype)
- elif extype.endswith('.txt'):
- papers = conv_txt(extype)
-
- c1, c2, c3 = st.columns([3,2,5])
- method = c1.selectbox(
- 'Choose method',
- ('Choose...', 'pyLDA', 'Biterm', 'BERTopic'), on_change=reset_all)
- num_cho = c2.number_input('Choose number of topics', min_value=2, max_value=30, value=5)
- words_to_remove = c3.text_input("Remove specific words. Separate words by semicolons (;)")
-
- d1, d2 = st.columns([8,2])
- d2.info("Don't do anything during the computing", icon="⚠️")
- topic_abs, paper=clean_csv(extype)
-
- #===advanced settings===
- with d1.expander("🧮 Show advanced settings"):
- t1, t2 = st.columns([5,5])
- if method == 'pyLDA':
- py_random_state = t1.number_input('Random state', min_value=0, max_value=None, step=1)
- py_chunksize = t2.number_input('Chunk size', value=100 , min_value=10, max_value=None, step=1)
- elif method == 'Biterm':
- btm_seed = t1.number_input('Random state seed', value=100 , min_value=1, max_value=None, step=1)
- btm_iterations = t2.number_input('Iterations number', value=20 , min_value=2, max_value=None, step=1)
- elif method == 'BERTopic':
- bert_top_n_words = t1.number_input('top_n_words', value=5 , min_value=5, max_value=25, step=1)
- bert_random_state = t1.number_input('random_state', value=42 , min_value=1, max_value=None, step=1)
- bert_n_components = t2.number_input('n_components', value=5 , min_value=1, max_value=None, step=1)
- bert_n_neighbors = t2.number_input('n_neighbors', value=15 , min_value=1, max_value=None, step=1)
- bert_embedding_model = st.radio(
- "embedding_model",
- ["all-MiniLM-L6-v2", "en_core_web_sm", "paraphrase-multilingual-MiniLM-L12-v2"], index=0, horizontal=True)
- else:
- st.write('Please choose your preferred method')
- if st.button("Submit", on_click=reset_all):
- num_topic = num_cho
-
- #===topic===
- if method == 'Choose...':
- st.write('')
-
- elif method == 'pyLDA':
- tab1, tab2, tab3 = st.tabs(["📈 Generate visualization", "📃 Reference", "📓 Recommended Reading"])
-
- with tab1:
- #===visualization===
- @st.cache_data(ttl=3600, show_spinner=False)
- def pylda(extype):
- topic_abs_LDA = [t.split(' ') for t in topic_abs]
- id2word = Dictionary(topic_abs_LDA)
- corpus = [id2word.doc2bow(text) for text in topic_abs_LDA]
- #===LDA===
- lda_model = LdaModel(corpus=corpus,
- id2word=id2word,
- num_topics=num_topic,
- random_state=py_random_state,
- chunksize=py_chunksize,
- alpha='auto',
- per_word_topics=True)
-
- pprint(lda_model.print_topics())
- doc_lda = lda_model[corpus]
-
- #===visualization===
- coherence_model_lda = CoherenceModel(model=lda_model, texts=topic_abs_LDA, dictionary=id2word, coherence='c_v')
- coherence_lda = coherence_model_lda.get_coherence()
- vis = pyLDAvis.gensim_models.prepare(lda_model, corpus, id2word)
- py_lda_vis_html = pyLDAvis.prepared_data_to_html(vis)
- return py_lda_vis_html, coherence_lda, vis
-
- with st.spinner('Performing computations. Please wait ...'):
- try:
- py_lda_vis_html, coherence_lda, vis = pylda(extype)
- st.write('Coherence score: ', coherence_lda)
- st.components.v1.html(py_lda_vis_html, width=1500, height=800)
- st.markdown('Copyright (c) 2015, Ben Mabey. https://github.com/bmabey/pyLDAvis')
-
- @st.cache_data(ttl=3600, show_spinner=False)
- def img_lda(vis):
- pyLDAvis.save_html(vis, 'output.html')
- hti = Html2Image()
- hti.browser.flags = ['--default-background-color=ffffff', '--hide-scrollbars']
- css = "body {background: white;}"
- hti.screenshot(
- other_file='output.html', css_str=css, size=(1500, 800),
- save_as='ldavis_img.png'
- )
-
- img_lda(vis)
- with open("ldavis_img.png", "rb") as file:
- btn = st.download_button(
- label="Download image",
- data=file,
- file_name="ldavis_img.png",
- mime="image/png"
- )
-
- except NameError:
- st.warning('🖱️ Please click Submit')
-
- with tab2:
- st.markdown('**Sievert, C., & Shirley, K. (2014). LDAvis: A method for visualizing and interpreting topics. Proceedings of the Workshop on Interactive Language Learning, Visualization, and Interfaces.** https://doi.org/10.3115/v1/w14-3110')
-
- with tab3:
- st.markdown('**Chen, X., & Wang, H. (2019, January). Automated chat transcript analysis using topic modeling for library reference services. Proceedings of the Association for Information Science and Technology, 56(1), 368–371.** https://doi.org/10.1002/pra2.31')
- st.markdown('**Joo, S., Ingram, E., & Cahill, M. (2021, December 15). Exploring Topics and Genres in Storytime Books: A Text Mining Approach. Evidence Based Library and Information Practice, 16(4), 41–62.** https://doi.org/10.18438/eblip29963')
- st.markdown('**Lamba, M., & Madhusudhan, M. (2021, July 31). Topic Modeling. Text Mining for Information Professionals, 105–137.** https://doi.org/10.1007/978-3-030-85085-2_4')
- st.markdown('**Lamba, M., & Madhusudhan, M. (2019, June 7). Mapping of topics in DESIDOC Journal of Library and Information Technology, India: a study. Scientometrics, 120(2), 477–505.** https://doi.org/10.1007/s11192-019-03137-5')
-
- #===Biterm===
- elif method == 'Biterm':
-
- #===optimize Biterm===
- @st.cache_data(ttl=3600, show_spinner=False)
- def biterm_topic(extype):
- X, vocabulary, vocab_dict = btm.get_words_freqs(topic_abs)
- tf = np.array(X.sum(axis=0)).ravel()
- docs_vec = btm.get_vectorized_docs(topic_abs, vocabulary)
- docs_lens = list(map(len, docs_vec))
- biterms = btm.get_biterms(docs_vec)
- model = btm.BTM(
- X, vocabulary, seed=btm_seed, T=num_topic, M=20, alpha=50/8, beta=0.01)
- model.fit(biterms, iterations=btm_iterations)
- p_zd = model.transform(docs_vec)
- coherence = model.coherence_
- phi = tmp.get_phi(model)
- topics_coords = tmp.prepare_coords(model)
- totaltop = topics_coords.label.values.tolist()
- perplexity = model.perplexity_
- return topics_coords, phi, totaltop, perplexity
-
- tab1, tab2, tab3 = st.tabs(["📈 Generate visualization", "📃 Reference", "📓 Recommended Reading"])
- with tab1:
- try:
- with st.spinner('Performing computations. Please wait ...'):
- topics_coords, phi, totaltop, perplexity = biterm_topic(extype)
- col1, col2 = st.columns([4,6])
-
- @st.cache_data(ttl=3600)
- def biterm_map(extype):
- btmvis_coords = tmp.plot_scatter_topics(topics_coords, size_col='size', label_col='label', topic=numvis)
- return btmvis_coords
-
- @st.cache_data(ttl=3600)
- def biterm_bar(extype):
- terms_probs = tmp.calc_terms_probs_ratio(phi, topic=numvis, lambda_=1)
- btmvis_probs = tmp.plot_terms(terms_probs, font_size=12)
- return btmvis_probs
-
- with col1:
- st.write('Perplexity score: ', perplexity)
- st.write('')
- numvis = st.selectbox(
- 'Choose topic',
- (totaltop), on_change=reset_biterm)
- btmvis_coords = biterm_map(extype)
- st.altair_chart(btmvis_coords)
- with col2:
- btmvis_probs = biterm_bar(extype)
- st.altair_chart(btmvis_probs, use_container_width=True)
-
- except ValueError:
- st.error('🙇♂️ Please raise the number of topics and click submit')
- except NameError:
- st.warning('🖱️ Please click Submit')
-
- with tab2:
- st.markdown('**Yan, X., Guo, J., Lan, Y., & Cheng, X. (2013, May 13). A biterm topic model for short texts. Proceedings of the 22nd International Conference on World Wide Web.** https://doi.org/10.1145/2488388.2488514')
- with tab3:
- st.markdown('**Cai, M., Shah, N., Li, J., Chen, W. H., Cuomo, R. E., Obradovich, N., & Mackey, T. K. (2020, August 26). Identification and characterization of tweets related to the 2015 Indiana HIV outbreak: A retrospective infoveillance study. PLOS ONE, 15(8), e0235150.** https://doi.org/10.1371/journal.pone.0235150')
- st.markdown('**Chen, Y., Dong, T., Ban, Q., & Li, Y. (2021). What Concerns Consumers about Hypertension? A Comparison between the Online Health Community and the Q&A Forum. International Journal of Computational Intelligence Systems, 14(1), 734.** https://doi.org/10.2991/ijcis.d.210203.002')
- st.markdown('**George, Crissandra J., "AMBIGUOUS APPALACHIANNESS: A LINGUISTIC AND PERCEPTUAL INVESTIGATION INTO ARC-LABELED PENNSYLVANIA COUNTIES" (2022). Theses and Dissertations-- Linguistics. 48.** https://doi.org/10.13023/etd.2022.217')
- st.markdown('**Li, J., Chen, W. H., Xu, Q., Shah, N., Kohler, J. C., & Mackey, T. K. (2020). Detection of self-reported experiences with corruption on twitter using unsupervised machine learning. Social Sciences & Humanities Open, 2(1), 100060.** https://doi.org/10.1016/j.ssaho.2020.100060')
-
- #===BERTopic===
- elif method == 'BERTopic':
- @st.cache_data(ttl=3600, show_spinner=False)
- def bertopic_vis(extype):
- if 'Publication Year' in paper.columns:
- paper.rename(columns={'Publication Year': 'Year'}, inplace=True)
- topic_time = paper.Year.values.tolist()
- umap_model = UMAP(n_neighbors=bert_n_neighbors, n_components=bert_n_components,
- min_dist=0.0, metric='cosine', random_state=bert_random_state)
- cluster_model = KMeans(n_clusters=num_topic)
- if bert_embedding_model == 'all-MiniLM-L6-v2':
- emb_mod = 'all-MiniLM-L6-v2'
- lang = 'en'
- elif bert_embedding_model == 'en_core_web_sm':
- emb_mod = en_core_web_sm.load(exclude=['tagger', 'parser', 'ner', 'attribute_ruler', 'lemmatizer'])
- lang = 'en'
- elif bert_embedding_model == 'paraphrase-multilingual-MiniLM-L12-v2':
- emb_mod = 'paraphrase-multilingual-MiniLM-L12-v2'
- lang = 'multilingual'
- topic_model = BERTopic(embedding_model=emb_mod, hdbscan_model=cluster_model, language=lang, umap_model=umap_model, top_n_words=bert_top_n_words)
- topics, probs = topic_model.fit_transform(topic_abs)
- return topic_model, topic_time, topics, probs
-
- @st.cache_data(ttl=3600, show_spinner=False)
- def Vis_Topics(extype):
- fig1 = topic_model.visualize_topics()
- return fig1
-
- @st.cache_data(ttl=3600, show_spinner=False)
- def Vis_Documents(extype):
- fig2 = topic_model.visualize_documents(topic_abs)
- return fig2
-
- @st.cache_data(ttl=3600, show_spinner=False)
- def Vis_Hierarchy(extype):
- fig3 = topic_model.visualize_hierarchy(top_n_topics=num_topic)
- return fig3
-
- @st.cache_data(ttl=3600, show_spinner=False)
- def Vis_Heatmap(extype):
- global topic_model
- fig4 = topic_model.visualize_heatmap(n_clusters=num_topic-1, width=1000, height=1000)
- return fig4
-
- @st.cache_data(ttl=3600, show_spinner=False)
- def Vis_Barchart(extype):
- fig5 = topic_model.visualize_barchart(top_n_topics=num_topic) #, n_words=10)
- return fig5
-
- @st.cache_data(ttl=3600, show_spinner=False)
- def Vis_ToT(extype):
- topics_over_time = topic_model.topics_over_time(topic_abs, topic_time)
- fig6 = topic_model.visualize_topics_over_time(topics_over_time)
- return fig6
-
- tab1, tab2, tab3 = st.tabs(["📈 Generate visualization", "📃 Reference", "📓 Recommended Reading"])
- with tab1:
- try:
- with st.spinner('Performing computations. Please wait ...'):
-
- topic_model, topic_time, topics, probs = bertopic_vis(extype)
- time.sleep(.5)
- st.toast('Visualize Topics', icon='🏃')
- fig1 = Vis_Topics(extype)
-
- time.sleep(.5)
- st.toast('Visualize Document', icon='🏃')
- fig2 = Vis_Documents(extype)
-
- time.sleep(.5)
- st.toast('Visualize Document Hierarchy', icon='🏃')
- fig3 = Vis_Hierarchy(extype)
-
- time.sleep(.5)
- st.toast('Visualize Topic Similarity', icon='🏃')
- fig4 = Vis_Heatmap(extype)
-
- time.sleep(.5)
- st.toast('Visualize Terms', icon='🏃')
- fig5 = Vis_Barchart(extype)
-
- time.sleep(.5)
- st.toast('Visualize Topics over Time', icon='🏃')
- fig6 = Vis_ToT(extype)
-
- with st.expander("Visualize Topics"):
- st.write(fig1)
- with st.expander("Visualize Terms"):
- st.write(fig5)
- with st.expander("Visualize Documents"):
- st.write(fig2)
- with st.expander("Visualize Document Hierarchy"):
- st.write(fig3)
- with st.expander("Visualize Topic Similarity"):
- st.write(fig4)
- with st.expander("Visualize Topics over Time"):
- st.write(fig6)
-
- except ValueError:
- st.error('🙇♂️ Please raise the number of topics and click submit')
-
- except NameError:
- st.warning('🖱️ Please click Submit')
-
- with tab2:
- st.markdown('**Grootendorst, M. (2022). BERTopic: Neural topic modeling with a class-based TF-IDF procedure. arXiv preprint arXiv:2203.05794.** https://doi.org/10.48550/arXiv.2203.05794')
-
- with tab3:
- st.markdown('**Jeet Rawat, A., Ghildiyal, S., & Dixit, A. K. (2022, December 1). Topic modelling of legal documents using NLP and bidirectional encoder representations from transformers. Indonesian Journal of Electrical Engineering and Computer Science, 28(3), 1749.** https://doi.org/10.11591/ijeecs.v28.i3.pp1749-1755')
- st.markdown('**Yao, L. F., Ferawati, K., Liew, K., Wakamiya, S., & Aramaki, E. (2023, April 20). Disruptions in the Cystic Fibrosis Community’s Experiences and Concerns During the COVID-19 Pandemic: Topic Modeling and Time Series Analysis of Reddit Comments. Journal of Medical Internet Research, 25, e45249.** https://doi.org/10.2196/45249')
\ No newline at end of file
diff --git a/spaces/falterWliame/Face_Mask_Detection/AerosoftCrackerV2.exel ((TOP)).md b/spaces/falterWliame/Face_Mask_Detection/AerosoftCrackerV2.exel ((TOP)).md
deleted file mode 100644
index 9ecc7bd102c6fd7db524e8ce78171271fb207639..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/AerosoftCrackerV2.exel ((TOP)).md
+++ /dev/null
@@ -1,14 +0,0 @@
-AerosoftCrackerV2.exel
-
-August 28, 2013 — I use Excel (there are other good spreadsheets!) to fly 1500m in 2 minutes, 15,000m in 20 minutes? I'm not quite sure what I should be doing.
-I have found several solutions but I am stuck.
-Some people told me to try to climb 2000m to get the maximum height.
-But I don't know how to do it!
-I understand that I am using a table and that there are a few things to calculate in order to get the maximum height.
-But I can't figure out how.
-I found this solution:
-I'm wondering if this is a useful solution for me?
-I don't want to just rely on this solution.
-
-
-
diff --git a/spaces/falterWliame/Face_Mask_Detection/ECA VRT DVD 2012.rar.md b/spaces/falterWliame/Face_Mask_Detection/ECA VRT DVD 2012.rar.md
deleted file mode 100644
index a8bdcaeb8a8ff2fdfa60a9c8e171b3900cd2154d..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/ECA VRT DVD 2012.rar.md
+++ /dev/null
@@ -1,6 +0,0 @@
-ECA VRT DVD 2012.rar
-
-Jul 15, 2014 Eca Vrt Dvd 2012 Rar > us/0fj6d teamviewer 8 full free download torrent autodesk autocad 2013 32bit crack download torrent.
-
-
-
diff --git a/spaces/falterWliame/Face_Mask_Detection/Extreme Karaoke V3 Crack 34 __HOT__.md b/spaces/falterWliame/Face_Mask_Detection/Extreme Karaoke V3 Crack 34 __HOT__.md
deleted file mode 100644
index 0425be99160e612aaa3c87a4e98022045dbe6b6a..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Extreme Karaoke V3 Crack 34 __HOT__.md
+++ /dev/null
@@ -1,6 +0,0 @@
-extreme karaoke v3 crack 34
-
-
-
-
diff --git a/spaces/falterWliame/Face_Mask_Detection/HD Online Player (Fraud Saiyaan Movie Download Dual Au).md b/spaces/falterWliame/Face_Mask_Detection/HD Online Player (Fraud Saiyaan Movie Download Dual Au).md
deleted file mode 100644
index a9563232a972ec41df76ebefce44da138690d1d4..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/HD Online Player (Fraud Saiyaan Movie Download Dual Au).md
+++ /dev/null
@@ -1,11 +0,0 @@
-
-HD Online Player (Fraud Saiyaan Movie Download Dual Au)
-
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Blast Your Way Through Enemies with Tank Hero Mod APK (Unlimited Money and Gold).md b/spaces/fatiXbelha/sd/Blast Your Way Through Enemies with Tank Hero Mod APK (Unlimited Money and Gold).md
deleted file mode 100644
index 9750e7df225b957df4e90feed8722c064077b356..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Blast Your Way Through Enemies with Tank Hero Mod APK (Unlimited Money and Gold).md
+++ /dev/null
@@ -1,99 +0,0 @@
-
-
-401be4b1e0
-Tank Hero Mod APK (Unlimited Money and Gold)
-Introduction
-tank hero mod apk (unlimited money and gold)
-Features of Tank Hero Mod APK
-Unlimited Money and Gold
-Various Tanks and Weapons
-Multiple Game Modes
-tank hero hack mod apk unlimited money and gold latest version
-tank hero mod apk unlimited money and gold android 1
-tank hero mod apk unlimited money and gold rexdl
-tank hero mod apk unlimited money and gold no root
-tank hero mod apk unlimited money and gold offline
-tank hero mod apk unlimited money and gold 2023
-tank hero mod apk unlimited money and gold revdl
-tank hero mod apk unlimited money and gold happymod
-tank hero mod apk unlimited money and gold apkpure
-tank hero mod apk unlimited money and gold for pc
-tank hero mod apk unlimited money and gold ios
-tank hero mod apk unlimited money and gold online
-tank hero mod apk unlimited money and gold obb
-tank hero mod apk unlimited money and gold mediafıre
-tank hero mod apk unlimited money and gold mega
-tank hero mod apk unlimited money and gold uptodown
-tank hero mod apk unlimited money and gold 1.8.0
-tank hero mod apk unlimited money and gold 1.7.9
-tank hero mod apk unlimited money and gold 1.7.8
-tank hero mod apk unlimited money and gold 1.7.7
-tank hero mod apk unlimited money and gold 1.7.6
-tank hero mod apk unlimited money and gold 1.7.5
-tank hero mod apk unlimited money and gold 1.7.4
-tank hero mod apk unlimited money and gold 1.7.3
-tank hero mod apk unlimited money and gold 1.7.2
-tank hero mod apk unlimited money and gold 1.7.1
-tank hero mod apk unlimited money and gold 1.7.0
-tank hero mod apk unlimited money and gold 1.6.9
-tank hero mod apk unlimited money and gold 1.6.8
-tank hero mod apk unlimited money and gold 1.6.7
-tank hero mod apk unlimited money and gold 1.6.6
-tank hero mod apk unlimited money and gold 1.6.5
-tank hero mod apk unlimited money and gold 1.6.4
-tank hero mod apk unlimited money and gold 1.6.3
-tank hero mod apk unlimited money and gold 1.6.2
-tank hero mod apk unlimited money and gold 1.6.1
-tank hero mod apk unlimited money and gold 1.6.0
-how to install tank hero mod apk unlimited money and gold
-how to play tank hero mod apk unlimited money and gold
-how to get tank hero mod apk unlimited money and gold
-how to update tank hero mod apk unlimited money and gold
-how to uninstall tank hero mod apk unlimited money and gold
-how to hack tank hero with mod apk unlimited money and gold
-how to download tank hero with mod apk unlimited money and gold
-Stunning Graphics and Sound Effects
-Conclusion
-FAQs
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Enjoy Vintage Photos and Videos with Old Roll Mod APK.md b/spaces/fatiXbelha/sd/Enjoy Vintage Photos and Videos with Old Roll Mod APK.md
deleted file mode 100644
index 7484b01d7f6a3062eacbd25a2f7586d02a9b0804..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Enjoy Vintage Photos and Videos with Old Roll Mod APK.md
+++ /dev/null
@@ -1,122 +0,0 @@
-
-Download APK Mod Old Roll: How to Get Vintage Photos and Videos on Your Android Device
- download apk mod old roll
- What is APK Mod Old Roll?
- Features of APK Mod Old Roll
-
-
- Benefits of using APK Mod Old Roll
-
-
- What is APK Mod and How Does It Work?
-
-old roll mod apk free download
-how to download old roll mod apk
-download old roll mod apk latest version
-old roll mod apk download for android
-old roll camera mod apk download
-download old roll mod apk 2023
-old roll photo editor mod apk download
-download old roll pro mod apk
-old roll video editor mod apk download
-download old roll full unlocked mod apk
-old roll vintage camera mod apk download
-download old roll hack mod apk
-old roll retro camera mod apk download
-download old roll cracked mod apk
-old roll film camera mod apk download
-download old roll vip mod apk
-old roll 1998 cam mod apk download
-download old roll plus mod apk
-old roll disposable camera mod apk download
-download old roll gold mod apk
-old roll analog camera mod apk download
-download old roll ultimate mod apk
-old roll vhs camcorder mod apk download
-download old roll diamond mod apk
-old roll polaroid camera mod apk download
-download old roll elite mod apk
-old roll 35mm film camera mod apk download
-download old roll ad free mod apk
-old roll instax camera mod apk download
-Advantages and Disadvantages of APK Mod
-
-| Advantages | Disadvantages |
-| --- | --- |
-| You can get premium features for free | You may violate the intellectual property rights of the original developers |
-| You can access new or improved features that are not available in the original version | You may expose your device to malware or virus infections |
-| You can customize the app according to your preferences | You may encounter compatibility or stability issues with your device or other apps |
-| You can bypass the restrictions or limitations imposed by the original developers or the app store | You may lose the support or updates from the original developers or the app store |
-Risks and Precautions of APK Mod
-
-
- How to Install APK Mod Old Roll on Your Android Device
- Step 1: Download the APK file from a trusted source
- Step 2: Enable unknown sources on your device settings
- Step 3: Locate and install the APK file
- Step 4: Launch the app and enjoy
- Best APK Mod Sites to Download APK Mod Old Roll and Other Apps
- APKPure
- HappyMod
- ReXdl
- Apkmody
- Conclusion
- FAQs
- Q: Is APK Mod Old Roll safe to use?
-Q: Is APK Mod Old Roll legal to use?
-Q: How can I update APK Mod Old Roll?
-Q: How can I uninstall APK Mod Old Roll?
-Q: Can I use APK Mod Old Roll on other devices?
-
-
-
\ No newline at end of file
diff --git a/spaces/fclong/summary/fengshen/examples/summary/randeng_t5_784M_summary.sh b/spaces/fclong/summary/fengshen/examples/summary/randeng_t5_784M_summary.sh
deleted file mode 100644
index 5b3e60c8784ac563eff09763591e00b6d250444f..0000000000000000000000000000000000000000
--- a/spaces/fclong/summary/fengshen/examples/summary/randeng_t5_784M_summary.sh
+++ /dev/null
@@ -1,130 +0,0 @@
-#!/bin/bash
-#SBATCH --job-name=randeng_t5_784M_summary
-#SBATCH --nodes=1
-#SBATCH --ntasks-per-node=2
-#SBATCH --gres=gpu:2 # number of gpus
-#SBATCH --cpus-per-task=30
-#SBATCH -o %x-%j.log
-
-set -x -e
-
-echo "START TIME: $(date)"
-MODEL_NAME=randeng_t5_784M_summary
-MICRO_BATCH_SIZE=8
-ROOT_DIR=/cognitive_comp/dongxiaoqun/finetune/${MODEL_NAME}
-if [ ! -d ${ROOT_DIR} ];then
- mkdir ${ROOT_DIR}
- echo ${ROOT_DIR} created!!!!!!!!!!!!!!
-else
- echo ${ROOT_DIR} exist!!!!!!!!!!!!!!!
-fi
-
-ZERO_STAGE=1
-
-config_json="${ROOT_DIR}/ds_config.${MODEL_NAME}.json"
-
-# Deepspeed figures out GAS dynamically from dynamic GBS via set_train_batch_size()
-cat <
-Gacha Club: How to Download and Enjoy the Anime Game
-gacha club descargar apk
-How to Download and Install Gacha Club on Android
-
-
-Features of Gacha Club
-
-gacha club descargar apk ultima version
-gacha club descargar apk para android
-gacha club descargar apk sin internet
-gacha club descargar apk mod
-gacha club descargar apk full
-gacha club descargar apk mega
-gacha club descargar apk mediafire
-gacha club descargar apk uptodown
-gacha club descargar apk hackeado
-gacha club descargar apk 2023
-gacha club descargar apk pc
-gacha club descargar apk windows 10
-gacha club descargar apk laptop
-gacha club descargar apk mac
-gacha club descargar apk chromebook
-gacha club descargar apk bluestacks
-gacha club descargar apk nox player
-gacha club descargar apk online
-gacha club descargar apk sin emulador
-gacha club descargar apk español
-gacha club descargar apk ingles
-gacha club descargar apk portugues
-gacha club descargar apk frances
-gacha club descargar apk aleman
-gacha club descargar apk japones
-gacha club descargar apk chino
-gacha club descargar apk coreano
-gacha club descargar apk arabe
-gacha club descargar apk ruso
-gacha club descargar apk original
-gacha club descargar apk oficial
-gacha club descargar apk lunime
-gacha club descargar apk play store
-gacha club descargar apk google play
-gacha club descargar apk amazon appstore
-gacha club descargar apk samsung galaxy store
-gacha club descargar apk huawei appgallery
-gacha club descargar apk xiaomi getapps
-gacha club descargar apk oppo app market
-gacha club descargar apk vivo app store
-gacha club descargar apk lg smartworld
-gacha club descargar apk nokia store
-gacha club descargar apk motorola apps store
-Character Customization and Studio Mode
-Gacha and Battle Mode
-Mini-Games and Rewards
-
-
-Conclusion
-Frequently Asked Questions
-
-
401be4b1e0
-
-
\ No newline at end of file
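Note that the diff of `randeng_t5_784M_summary.sh` above is cut off mid-heredoc (the dangling `cat <` was presumably writing the DeepSpeed config to `$config_json`), and the diff header of the following file is missing entirely. Purely as orientation, here is a hedged Python sketch of the kind of ZeRO stage-1 config the script's variables point to; every key below is an assumption, not recovered content:

```python
# Hypothetical reconstruction of the truncated heredoc's output: a minimal
# DeepSpeed config consistent with ZERO_STAGE=1 and MICRO_BATCH_SIZE=8 from
# the script. The real ds_config may have used different or additional keys.
import json

config = {
    "zero_optimization": {"stage": 1},    # ZERO_STAGE=1 in the script
    "train_micro_batch_size_per_gpu": 8,  # MICRO_BATCH_SIZE=8 in the script
    "gradient_accumulation_steps": 1,     # assumption
    "fp16": {"enabled": True},            # assumption
}

with open("ds_config.randeng_t5_784M_summary.json", "w") as f:
    json.dump(config, f, indent=2)
```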
diff --git a/spaces/fffiloni/Image-to-MusicGen/audiocraft/data/audio.py b/spaces/fffiloni/Image-to-MusicGen/audiocraft/data/audio.py
deleted file mode 100644
index 1829d7db4ef832ad65598b471caa7d256a06d012..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/Image-to-MusicGen/audiocraft/data/audio.py
+++ /dev/null
@@ -1,213 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Audio IO methods are defined in this module (info, read, write),
-We rely on av library for faster read when possible, otherwise on torchaudio.
-"""
-
-from dataclasses import dataclass
-from pathlib import Path
-import logging
-import typing as tp
-
-import numpy as np
-import soundfile
-import torch
-from torch.nn import functional as F
-import torchaudio as ta
-
-import av
-
-from .audio_utils import f32_pcm, i16_pcm, normalize_audio
-
-
-_av_initialized = False
-
-
-def _init_av():
- global _av_initialized
- if _av_initialized:
- return
- logger = logging.getLogger('libav.mp3')
- logger.setLevel(logging.ERROR)
- _av_initialized = True
-
-
-@dataclass(frozen=True)
-class AudioFileInfo:
- sample_rate: int
- duration: float
- channels: int
-
-
-def _av_info(filepath: tp.Union[str, Path]) -> AudioFileInfo:
- _init_av()
- with av.open(str(filepath)) as af:
- stream = af.streams.audio[0]
- sample_rate = stream.codec_context.sample_rate
- duration = float(stream.duration * stream.time_base)
- channels = stream.channels
- return AudioFileInfo(sample_rate, duration, channels)
-
-
-def _soundfile_info(filepath: tp.Union[str, Path]) -> AudioFileInfo:
- info = soundfile.info(filepath)
- return AudioFileInfo(info.samplerate, info.duration, info.channels)
-
-
-def audio_info(filepath: tp.Union[str, Path]) -> AudioFileInfo:
-    # torchaudio no longer returns useful duration information for some formats like mp3.
- filepath = Path(filepath)
- if filepath.suffix in ['.flac', '.ogg']: # TODO: Validate .ogg can be safely read with av_info
- # ffmpeg has some weird issue with flac.
- return _soundfile_info(filepath)
- else:
- return _av_info(filepath)
-
-
-def _av_read(filepath: tp.Union[str, Path], seek_time: float = 0, duration: float = -1.) -> tp.Tuple[torch.Tensor, int]:
- """FFMPEG-based audio file reading using PyAV bindings.
- Soundfile cannot read mp3 and av_read is more efficient than torchaudio.
-
- Args:
- filepath (str or Path): Path to audio file to read.
- seek_time (float): Time at which to start reading in the file.
- duration (float): Duration to read from the file. If set to -1, the whole file is read.
- Returns:
- Tuple[torch.Tensor, int]: Tuple containing audio data and sample rate
- """
- _init_av()
- with av.open(str(filepath)) as af:
- stream = af.streams.audio[0]
- sr = stream.codec_context.sample_rate
- num_frames = int(sr * duration) if duration >= 0 else -1
- frame_offset = int(sr * seek_time)
- # we need a small negative offset otherwise we get some edge artifact
- # from the mp3 decoder.
- af.seek(int(max(0, (seek_time - 0.1)) / stream.time_base), stream=stream)
- frames = []
- length = 0
- for frame in af.decode(streams=stream.index):
- current_offset = int(frame.rate * frame.pts * frame.time_base)
- strip = max(0, frame_offset - current_offset)
- buf = torch.from_numpy(frame.to_ndarray())
- if buf.shape[0] != stream.channels:
- buf = buf.view(-1, stream.channels).t()
- buf = buf[:, strip:]
- frames.append(buf)
- length += buf.shape[1]
- if num_frames > 0 and length >= num_frames:
- break
- assert frames
- # If the above assert fails, it is likely because we seeked past the end of file point,
- # in which case ffmpeg returns a single frame with only zeros, and a weird timestamp.
- # This will need proper debugging, in due time.
- wav = torch.cat(frames, dim=1)
- assert wav.shape[0] == stream.channels
- if num_frames > 0:
- wav = wav[:, :num_frames]
- return f32_pcm(wav), sr
-
-
-def audio_read(filepath: tp.Union[str, Path], seek_time: float = 0.,
- duration: float = -1., pad: bool = False) -> tp.Tuple[torch.Tensor, int]:
- """Read audio by picking the most appropriate backend tool based on the audio format.
-
- Args:
- filepath (str or Path): Path to audio file to read.
- seek_time (float): Time at which to start reading in the file.
- duration (float): Duration to read from the file. If set to -1, the whole file is read.
- pad (bool): Pad output audio if not reaching expected duration.
- Returns:
- Tuple[torch.Tensor, int]: Tuple containing audio data and sample rate.
- """
- fp = Path(filepath)
- if fp.suffix in ['.flac', '.ogg']: # TODO: check if we can safely use av_read for .ogg
- # There is some bug with ffmpeg and reading flac
- info = _soundfile_info(filepath)
- frames = -1 if duration <= 0 else int(duration * info.sample_rate)
- frame_offset = int(seek_time * info.sample_rate)
- wav, sr = soundfile.read(filepath, start=frame_offset, frames=frames, dtype=np.float32)
- assert info.sample_rate == sr, f"Mismatch of sample rates {info.sample_rate} {sr}"
- wav = torch.from_numpy(wav).t().contiguous()
- if len(wav.shape) == 1:
- wav = torch.unsqueeze(wav, 0)
- elif (
- fp.suffix in ['.wav', '.mp3'] and fp.suffix[1:] in ta.utils.sox_utils.list_read_formats()
- and duration <= 0 and seek_time == 0
- ):
- # Torchaudio is faster if we load an entire file at once.
- wav, sr = ta.load(fp)
- else:
- wav, sr = _av_read(filepath, seek_time, duration)
- if pad and duration > 0:
- expected_frames = int(duration * sr)
- wav = F.pad(wav, (0, expected_frames - wav.shape[-1]))
- return wav, sr
-
-
-def audio_write(stem_name: tp.Union[str, Path],
- wav: torch.Tensor, sample_rate: int,
- format: str = 'wav', mp3_rate: int = 320, normalize: bool = True,
- strategy: str = 'peak', peak_clip_headroom_db: float = 1,
- rms_headroom_db: float = 18, loudness_headroom_db: float = 14,
- log_clipping: bool = True, make_parent_dir: bool = True,
- add_suffix: bool = True) -> Path:
- """Convenience function for saving audio to disk. Returns the filename the audio was written to.
-
- Args:
- stem_name (str or Path): Filename without extension which will be added automatically.
- format (str): Either "wav" or "mp3".
- mp3_rate (int): kbps when using mp3s.
- normalize (bool): if `True` (default), normalizes according to the prescribed
- strategy (see after). If `False`, the strategy is only used in case clipping
- would happen.
- strategy (str): Can be either 'clip', 'peak', or 'rms'. Default is 'peak',
- i.e. audio is normalized by its largest value. RMS normalizes by root-mean-square
- with extra headroom to avoid clipping. 'clip' just clips.
- peak_clip_headroom_db (float): Headroom in dB when doing 'peak' or 'clip' strategy.
- rms_headroom_db (float): Headroom in dB when doing 'rms' strategy. This must be much larger
- than the `peak_clip` one to avoid further clipping.
- loudness_headroom_db (float): Target loudness for loudness normalization.
- log_clipping (bool): If True, basic logging on stderr when clipping still
- occurs despite strategy (only for 'rms').
- make_parent_dir (bool): Make parent directory if it doesn't exist.
- Returns:
- Path: Path of the saved audio.
- """
- assert wav.dtype.is_floating_point, "wav is not floating point"
- if wav.dim() == 1:
- wav = wav[None]
- elif wav.dim() > 2:
- raise ValueError("Input wav should be at most 2 dimension.")
- assert wav.isfinite().all()
- wav = normalize_audio(wav, normalize, strategy, peak_clip_headroom_db,
- rms_headroom_db, loudness_headroom_db, log_clipping=log_clipping,
- sample_rate=sample_rate, stem_name=str(stem_name))
- kwargs: dict = {}
- if format == 'mp3':
- suffix = '.mp3'
- kwargs.update({"compression": mp3_rate})
- elif format == 'wav':
- wav = i16_pcm(wav)
- suffix = '.wav'
- kwargs.update({"encoding": "PCM_S", "bits_per_sample": 16})
- else:
- raise RuntimeError(f"Invalid format {format}. Only wav or mp3 are supported.")
- if not add_suffix:
- suffix = ''
- path = Path(str(stem_name) + suffix)
- if make_parent_dir:
- path.parent.mkdir(exist_ok=True, parents=True)
- try:
- ta.save(path, wav, sample_rate, **kwargs)
- except Exception:
- if path.exists():
- # we do not want to leave half written files around.
- path.unlink()
- raise
- return path
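For context on the deleted `audio.py` above, a minimal usage sketch of its three entry points; it assumes audiocraft is installed and that a local `sample.wav` exists (the path is illustrative):

```python
from audiocraft.data.audio import audio_info, audio_read, audio_write

info = audio_info("sample.wav")
print(info.sample_rate, info.duration, info.channels)

# Read 2 seconds starting at 0.5 s; pad=True zero-pads if the file is shorter.
wav, sr = audio_read("sample.wav", seek_time=0.5, duration=2.0, pad=True)
assert wav.shape == (info.channels, int(2.0 * sr))

# Peak-normalize and write; audio_write appends the ".wav" suffix itself.
out_path = audio_write("sample_out", wav, sr, format="wav", strategy="peak")
print(out_path)  # sample_out.wav
```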
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/https.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/https.d.ts
deleted file mode 100644
index bda367d74c634f58d3e3898029bbc64bdbc61c0a..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/https.d.ts
+++ /dev/null
@@ -1,542 +0,0 @@
-/**
- * HTTPS is the HTTP protocol over TLS/SSL. In Node.js this is implemented as a
- * separate module.
- * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/https.js)
- */
-declare module 'https' {
- import { Duplex } from 'node:stream';
- import * as tls from 'node:tls';
- import * as http from 'node:http';
- import { URL } from 'node:url';
- type ServerOptions<
- Request extends typeof http.IncomingMessage = typeof http.IncomingMessage,
- Response extends typeof http.ServerResponse = typeof http.ServerResponse,
-    > = tls.SecureContextOptions & tls.TlsOptions & http.ServerOptions<Request, Response>;
-Chicken Little (2005) [HDrip (AC3)] [Espanol] [Animacion | Aventura]
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/gradio/HuBERT/fairseq/criterions/composite_loss.py b/spaces/gradio/HuBERT/fairseq/criterions/composite_loss.py
deleted file mode 100644
index 98e835fa6e4c0bcad062df9c519701bf795c98be..0000000000000000000000000000000000000000
--- a/spaces/gradio/HuBERT/fairseq/criterions/composite_loss.py
+++ /dev/null
@@ -1,100 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from fairseq import utils
-from fairseq.criterions import LegacyFairseqCriterion, register_criterion
-from torch import nn
-
-
-@register_criterion("composite_loss")
-class CompositeLoss(LegacyFairseqCriterion):
- """This is a composite loss that, given a list of model outputs and a list of targets,
- computes an average of losses for each output-target pair"""
-
- def __init__(self, args, task):
- super().__init__(args, task)
- self.underlying_criterion = args.underlying_criterion
-
- @staticmethod
- def add_args(parser):
- """Add criterion-specific arguments to the parser."""
- # fmt: off
- parser.add_argument('--underlying-criterion', type=str, metavar='VAL', required=True,
- help='underlying criterion to use for the composite loss')
- # fmt: on
-
- @staticmethod
- def build_underlying_criterion(args, task):
- saved_criterion = args.criterion
- args.criterion = args.underlying_criterion
- assert saved_criterion != args.underlying_criterion
- underlying_criterion = task.build_criterion(args)
- args.criterion = saved_criterion
- return underlying_criterion
-
- @classmethod
- def build_criterion(cls, args, task):
- underlying_criterion = CompositeLoss.build_underlying_criterion(args, task)
-
- class FakeModel(nn.Module):
- def __init__(self, model, net_out, target):
- super().__init__()
- self.model = model
- self.net_out = net_out
- self.target = target
-
- def forward(self, **unused):
- return self.net_out
-
- def get_normalized_probs(self, net_output, log_probs, sample=None):
- return self.model.get_normalized_probs(
- net_output, log_probs, sample=sample
- )
-
- def get_targets(self, *unused):
- return self.target
-
- @property
- def decoder(self):
- return self.model.decoder
-
- class _CompositeLoss(LegacyFairseqCriterion):
- def __init__(self, args, task, underlying_criterion):
- super().__init__(args, task)
- self.underlying_criterion = underlying_criterion
-
- def forward(self, model, sample, reduce=True):
- net_outputs = model(**sample["net_input"])
- targets = sample["target"]
-
- bsz = targets[0].size(0)
- loss = net_outputs[0][0].new(1 if reduce else bsz).float().zero_()
-
- sample_size = 0
- logging_output = {}
- for o, t in zip(net_outputs[0], targets):
- m = FakeModel(model, (o, net_outputs[1]), t)
- sample["target"] = t
- l, ss, logging_output = self.underlying_criterion(m, sample, reduce)
- loss += l
- sample_size += ss
-
- loss.div_(len(targets))
- sample_size /= len(targets)
-
- logging_output["loss"] = utils.item(loss.data) if reduce else loss.data
- return loss, sample_size, logging_output
-
- @staticmethod
- def aggregate_logging_outputs(logging_outputs):
- return underlying_criterion.__class__.aggregate_logging_outputs(
- logging_outputs
- )
-
- @staticmethod
- def reduce_metrics(logging_outputs) -> None:
- underlying_criterion.__class__.reduce_metrics(logging_outputs)
-
- return _CompositeLoss(args, task, underlying_criterion)
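The core of `CompositeLoss` above is just an average over per-(output, target) losses from the underlying criterion, which is selected on the command line via `--criterion composite_loss --underlying-criterion <name>` per `add_args`. A standalone sketch of that reduction, with stand-in loss tensors in place of a real criterion:

```python
import torch

def average_losses(per_pair_losses: list) -> torch.Tensor:
    # Mirrors the accumulation loop and loss.div_(len(targets)) in
    # _CompositeLoss.forward above.
    total = torch.zeros(1)
    for loss in per_pair_losses:
        total = total + loss
    return total / len(per_pair_losses)

per_pair = [torch.tensor([2.0]), torch.tensor([4.0]), torch.tensor([6.0])]
print(average_losses(per_pair))  # tensor([4.])
```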
diff --git a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/style_mixing.py b/spaces/gyugnsu/DragGan-Inversion/stylegan_human/style_mixing.py
deleted file mode 100644
index 022912df133bd977364786f90d6ae635292dc135..0000000000000000000000000000000000000000
--- a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/style_mixing.py
+++ /dev/null
@@ -1,114 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-#
-
-
-import os
-import re
-from typing import List
-import legacy
-
-import click
-import dnnlib
-import numpy as np
-import PIL.Image
-import torch
-
-"""
-Style mixing using pretrained network pickle.
-
-Examples:
-
-\b
-python style_mixing.py --network=pretrained_models/stylegan_human_v2_1024.pkl --rows=85,100,75,458,1500 \\
- --cols=55,821,1789,293 --styles=0-3 --outdir=outputs/stylemixing
-"""
-
-
-@click.command()
-@click.option('--network', 'network_pkl', help='Network pickle filename', required=True)
-@click.option('--rows', 'row_seeds', type=legacy.num_range, help='Random seeds to use for image rows', required=True)
-@click.option('--cols', 'col_seeds', type=legacy.num_range, help='Random seeds to use for image columns', required=True)
-@click.option('--styles', 'col_styles', type=legacy.num_range, help='Style layer range', default='0-6', show_default=True)
-@click.option('--trunc', 'truncation_psi', type=float, help='Truncation psi', default=0.8, show_default=True)
-@click.option('--noise-mode', help='Noise mode', type=click.Choice(['const', 'random', 'none']), default='const', show_default=True)
-@click.option('--outdir', type=str, required=True, default='outputs/stylemixing')
-def generate_style_mix(
- network_pkl: str,
- row_seeds: List[int],
- col_seeds: List[int],
- col_styles: List[int],
- truncation_psi: float,
- noise_mode: str,
- outdir: str
-):
-
- print('Loading networks from "%s"...' % network_pkl)
- device = torch.device('cuda')
- with dnnlib.util.open_url(network_pkl) as f:
- G = legacy.load_network_pkl(f)['G_ema'].to(device)
-
- os.makedirs(outdir, exist_ok=True)
-
- print('Generating W vectors...')
- all_seeds = list(set(row_seeds + col_seeds))
- all_z = np.stack([np.random.RandomState(seed).randn(G.z_dim)
- for seed in all_seeds])
- all_w = G.mapping(torch.from_numpy(all_z).to(device), None)
- w_avg = G.mapping.w_avg
- all_w = w_avg + (all_w - w_avg) * truncation_psi
- w_dict = {seed: w for seed, w in zip(all_seeds, list(all_w))}
-
- print('Generating images...')
- all_images = G.synthesis(all_w, noise_mode=noise_mode)
- all_images = (all_images.permute(0, 2, 3, 1) * 127.5 +
- 128).clamp(0, 255).to(torch.uint8).cpu().numpy()
- image_dict = {(seed, seed): image for seed,
- image in zip(all_seeds, list(all_images))}
-
- print('Generating style-mixed images...')
- for row_seed in row_seeds:
- for col_seed in col_seeds:
- w = w_dict[row_seed].clone()
- w[col_styles] = w_dict[col_seed][col_styles]
- image = G.synthesis(w[np.newaxis], noise_mode=noise_mode)
- image = (image.permute(0, 2, 3, 1) * 127.5 +
- 128).clamp(0, 255).to(torch.uint8)
- image_dict[(row_seed, col_seed)] = image[0].cpu().numpy()
-
- os.makedirs(outdir, exist_ok=True)
- # print('Saving images...')
- # for (row_seed, col_seed), image in image_dict.items():
- # PIL.Image.fromarray(image, 'RGB').save(f'{outdir}/{row_seed}-{col_seed}.png')
-
- print('Saving image grid...')
- W = G.img_resolution // 2
- H = G.img_resolution
- canvas = PIL.Image.new(
- 'RGB', (W * (len(col_seeds) + 1), H * (len(row_seeds) + 1)), 'black')
- for row_idx, row_seed in enumerate([0] + row_seeds):
- for col_idx, col_seed in enumerate([0] + col_seeds):
- if row_idx == 0 and col_idx == 0:
- continue
- key = (row_seed, col_seed)
- if row_idx == 0:
- key = (col_seed, col_seed)
- if col_idx == 0:
- key = (row_seed, row_seed)
- canvas.paste(PIL.Image.fromarray(
- image_dict[key], 'RGB'), (W * col_idx, H * row_idx))
- canvas.save(f'{outdir}/grid.png')
-
-
-# ----------------------------------------------------------------------------
-
-if __name__ == "__main__":
- generate_style_mix() # pylint: disable=no-value-for-parameter
-
-# ----------------------------------------------------------------------------
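The essential operation in `generate_style_mix` above is a per-layer swap in W+ space: the row image keeps its latent except for the layers named by `--styles`, which are taken from the column image. A numpy-only sketch of that swap (18 layers by 512 dims is illustrative of a 1024px StyleGAN; no generator is loaded here):

```python
import numpy as np

num_layers, w_dim = 18, 512
rng = np.random.default_rng(0)
w_row = rng.standard_normal((num_layers, w_dim))  # stand-in for w_dict[row_seed]
w_col = rng.standard_normal((num_layers, w_dim))  # stand-in for w_dict[col_seed]

col_styles = [0, 1, 2, 3]  # like --styles=0-3: coarse layers from the column
w_mixed = w_row.copy()
w_mixed[col_styles] = w_col[col_styles]

assert np.allclose(w_mixed[:4], w_col[:4])  # coarse layers come from the column
assert np.allclose(w_mixed[4:], w_row[4:])  # remaining layers stay with the row
```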
diff --git a/spaces/h2oai/wave-tour/examples/canvas.py b/spaces/h2oai/wave-tour/examples/canvas.py
deleted file mode 100644
index e13b25f0a24223c07939e385e068f5750da33eb3..0000000000000000000000000000000000000000
--- a/spaces/h2oai/wave-tour/examples/canvas.py
+++ /dev/null
@@ -1,19 +0,0 @@
-# Canvas
-# A card that displays a freeform drawing canvas.
-# A canvas card can synchronize its state with other canvas cards at the same URL.
-# Open `/demo` in multiple browsers and watch them synchronize in realtime.
-# #collaboration
-# ---
-from h2o_wave import site, data, ui
-
-page = site['/demo']
-page.drop()
-
-page.add('example', ui.canvas_card(
- box='1 1 4 7',
- title='Sample Canvas',
- width=500,
- height=500,
- data=dict(),
-))
-page.save()
diff --git a/spaces/h2oai/wave-tour/examples/table_pagination.py b/spaces/h2oai/wave-tour/examples/table_pagination.py
deleted file mode 100644
index e036809c1f4373bc4503a830e6ee7367c601284d..0000000000000000000000000000000000000000
--- a/spaces/h2oai/wave-tour/examples/table_pagination.py
+++ /dev/null
@@ -1,110 +0,0 @@
-# Table / Pagination
-# Use a paginated #table to display large (100k+ rows) tabular data.
-# #form #table #pagination
-# ---
-
-import os
-from typing import Dict, List
-from h2o_wave import main, app, Q, ui
-from copy import deepcopy
-import csv
-
-
-# Create a dummy data blueprint.
-class Issue:
- def __init__(self, text: str, status: str):
- self.text = text
- self.status = status
-
-
-all_rows = [Issue(text=str(i + 1), status=('Closed' if i % 2 == 0 else 'Open')) for i in range(100)]
-rows_per_page = 10
-total_rows = len(all_rows)
-
-
-def get_rows(base: List, sort: Dict[str, bool] = None, search: Dict = None, filters: Dict[str, List[str]] = None) -> List:
- # Make a deep copy in order to not mutate the original `all_issues` which serves as our baseline.
- rows = deepcopy(base)
-
- # Sort by multiple columns.
- if sort:
- for col, reverse in sort.items():
- rows.sort(key=lambda i: getattr(i, col), reverse=reverse)
- # Filter out all rows that do not contain searched string.
- if search:
- search_val = search['value'].lower()
- cols = search['cols']
- rows = [row for row in rows if any(search_val in str(getattr(row, col)).lower() for col in cols)]
- # Filter out rows that do not contain filtered column value.
- if filters:
-        for col, filter_values in filters.items():
-            rows = [row for row in rows if not filter_values or any(f in getattr(row, col) for f in filter_values)]
-
- return rows
-
-
-@app('/demo')
-async def serve(q: Q):
- if not q.client.initialized:
- q.page['meta'] = ui.meta_card(box='')
- q.page['form'] = ui.form_card(box='1 1 -1 -1', items=[
- ui.table(
- name='table',
- columns=[
- ui.table_column(name='text', label='Text', sortable=True, searchable=True, link=False),
- ui.table_column(name='status', label='Status', filterable=True, filters=['Open', 'Closed']),
- ],
- rows=[ui.table_row(str(r.text), [str(r.text), r.status]) for r in get_rows(all_rows)[0:rows_per_page]],
- resettable=True,
- downloadable=True,
- pagination=ui.table_pagination(total_rows=len(all_rows), rows_per_page=rows_per_page),
- # Make sure to register the necessary events for the feature you want to support, e.g. sorting.
- # All the registered events have to be handled by the developer.
- # `page_change` event is required to be handled for pagination to work.
- events=['sort', 'filter', 'search', 'page_change', 'download', 'reset']
- )
- ])
- q.client.initialized = True
-
- # Check if user triggered any table action and save it to local state for allowing multiple
- # actions to be performed on the data at the same time, e.g. sort the filtered data etc.
- if q.events.table:
- table = q.page['form'].table
- if q.events.table.sort:
- q.client.sort = q.events.table.sort
- q.client.page_offset = 0
- if q.events.table.filter:
- q.client.filters = q.events.table.filter
- q.client.page_offset = 0
- if q.events.table.search is not None:
- q.client.search = q.events.table.search
- q.client.page_offset = 0
- if q.events.table.page_change:
- q.client.page_offset = q.events.table.page_change.get('offset', 0)
- if q.events.table.reset:
- q.client.search = None
- q.client.sort = None
- q.client.filters = None
- q.client.page_offset = 0
- table.pagination = ui.table_pagination(total_rows, rows_per_page)
-
- rows = get_rows(all_rows, q.client.sort, q.client.search, q.client.filters)
- offset = q.client.page_offset or 0
- table.rows = [ui.table_row(str(r.text), [str(r.text), r.status]) for r in rows[offset: offset + rows_per_page]]
-
- # Update table pagination according to the new row count.
- if q.client.search is not None or q.client.filters:
- table.pagination = ui.table_pagination(len(rows), rows_per_page)
-
-    if q.events.table and q.events.table.download:
- # For multi-user apps, the tmp file name should be unique for each user, not hardcoded.
- with open('data_download.csv', 'w') as csvfile:
- csv_writer = csv.writer(csvfile, delimiter=',')
- for r in rows:
- csv_writer.writerow([r.text, r.status])
- download_url, = await q.site.upload(['data_download.csv'])
- # Clean up the file after upload.
- os.remove('data_download.csv')
- q.page['meta'].script = ui.inline_script(f'window.open("{download_url}")')
-
- await q.page.save()
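To see how `get_rows` above composes sorting, searching, and filtering, here is a small driver; it assumes it runs in the same module as the `Issue` class and `get_rows` defined in the deleted example:

```python
# Six issues, even indices Closed and odd indices Open.
issues = [Issue(text=str(i), status='Closed' if i % 2 == 0 else 'Open') for i in range(6)]

page = get_rows(
    issues,
    sort={'text': True},                      # sort by text, descending
    search={'value': '1', 'cols': ['text']},  # keep rows whose text contains "1"
    filters={'status': ['Open']},             # keep only Open issues
)
print([(r.text, r.status) for r in page])     # [('1', 'Open')]
```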
diff --git a/spaces/hahahafofo/image2text_prompt_generator/utils/html.py b/spaces/hahahafofo/image2text_prompt_generator/utils/html.py
deleted file mode 100644
index b0edb1ae05b25f21b6e71756361acfc5a7c7efcb..0000000000000000000000000000000000000000
--- a/spaces/hahahafofo/image2text_prompt_generator/utils/html.py
+++ /dev/null
@@ -1,8 +0,0 @@
-import html
-
-
-def plaintext_to_html(text):
-    text = (
-        "<p>" + "<br>\n".join([f"{html.escape(x)}" for x in text.split("\n")]) + "</p>"
-    )
-    return text
-''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to {resize_width}x{resize_width}.", freeze_for, gr.update(visible=False)]
- elif(option == "person"):
- instance_prompt_example = "julcto"
- freeze_for = 70
- #show_prior_preservation = True if base != "v2-1-768" else False
- show_prior_preservation=False
- if(show_prior_preservation):
- prior_preservation_box_update = gr.update(visible=show_prior_preservation)
- else:
- prior_preservation_box_update = gr.update(visible=show_prior_preservation, value=False)
- return [f"You are going to train a `person`(s), upload 10-20 images of each person you are planning on training on from different angles/perspectives. You can use services like birme for smart cropping. {mandatory_liability}:", '''
''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to {resize_width}x{resize_width}.", freeze_for, prior_preservation_box_update]
- elif(option == "style"):
- instance_prompt_example = "trsldamrl"
- freeze_for = 10
- return [f"You are going to train a `style`, upload 10-20 images of the style you are planning on training on. You can use services like birme for smart cropping. {mandatory_liability}:", '''
''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to {resize_width}x{resize_width}", freeze_for, gr.update(visible=False)]
-
-def count_files(*inputs):
- file_counter = 0
- concept_counter = 0
- for i, input in enumerate(inputs):
- if(i < maximum_concepts):
- files = inputs[i]
- if(files):
- concept_counter+=1
- file_counter+=len(files)
- uses_custom = inputs[-1]
- type_of_thing = inputs[-4]
- selected_model = inputs[-5]
- experimental_faces = inputs[-6]
- if(uses_custom):
- Training_Steps = int(inputs[-3])
- else:
- Training_Steps = file_counter*150
- if(type_of_thing == "person" and Training_Steps > 2400):
- Training_Steps = 2400 #Avoid overfitting on person faces
- if(is_spaces):
- if(selected_model == "v1-5"):
- its = 1.1 if which_gpu == "T4" else 1.8
- if(experimental_faces):
- its = 1
- elif(selected_model == "v2-1-512"):
- its = 0.8 if which_gpu == "T4" else 1.5
- if(experimental_faces):
- its = 0.7
- elif(selected_model == "v2-1-768"):
- its = 0.48 if which_gpu == "T4" else 0.85
-
- gpu_price = 0.60 if which_gpu == "T4" else 1.10
- summary_sentence = f'''You are going to train {concept_counter} {type_of_thing}(s), with {file_counter} images for {Training_Steps} steps. The training should take around {round(Training_Steps/its, 2)} seconds, or {round((Training_Steps/its)/60, 2)} minutes.
-    The setup, compression and uploading of the model can take up to 20 minutes.
-    As the {which_gpu}-Small GPU costs US${gpu_price} for 1h, the estimated cost for this training is below US${round((((Training_Steps/its)/3600)+0.3+0.1)*gpu_price, 2)}.
-    If you check the box below, the GPU attribution will automatically be removed after training is done and the model is uploaded. If not, don't forget to come back here and swap the hardware back to CPU.
'''
- else:
- summary_sentence = f'''You are going to train {concept_counter} {type_of_thing}(s), with {file_counter} images for {Training_Steps} steps.
'''
-
- return([gr.update(visible=True), gr.update(visible=True, value=summary_sentence)])
-
-def update_steps(*files_list):
- file_counter = 0
- for i, files in enumerate(files_list):
- if(files):
- file_counter+=len(files)
- return(gr.update(value=file_counter*200))
-
-def visualise_progress_bar():
- return gr.update(visible=True)
-
-def pad_image(image):
- w, h = image.size
- if w == h:
- return image
- elif w > h:
- new_image = Image.new(image.mode, (w, w), (0, 0, 0))
- new_image.paste(image, (0, (w - h) // 2))
- return new_image
- else:
- new_image = Image.new(image.mode, (h, h), (0, 0, 0))
- new_image.paste(image, ((h - w) // 2, 0))
- return new_image
-
-def validate_model_upload(hf_token, model_name):
- if(hf_token != ''):
- api = HfApi()
- try:
- _ = api.whoami(hf_token)
- except:
- raise gr.Error("You have inserted an invalid Hugging Face token")
- try:
- if(is_spaces):
- update_repo_visibility(repo_id=os.environ['SPACE_ID'], private=True, token=hf_token, repo_type="space")
- except:
- raise gr.Error("Oops, you created a Hugging Face token with read permissions only. You need one with write permissions")
- else:
- raise gr.Error("Please insert a Hugging Face Token (make sure to create it with write permissions)")
- if(model_name == ""):
- raise gr.Error("Please fill in your model's name")
-
-def swap_hardware(hf_token, hardware="cpu-basic"):
- hardware_url = f"https://huggingface.co/spaces/{os.environ['SPACE_ID']}/hardware"
- headers = { "authorization" : f"Bearer {hf_token}"}
- body = {'flavor': hardware}
- requests.post(hardware_url, json = body, headers=headers)
-
-def swap_sleep_time(hf_token,sleep_time):
- sleep_time_url = f"https://huggingface.co/api/spaces/{os.environ['SPACE_ID']}/sleeptime"
- headers = { "authorization" : f"Bearer {hf_token}"}
- body = {'seconds':sleep_time}
- requests.post(sleep_time_url,json=body,headers=headers)
-
-def get_sleep_time(hf_token):
- sleep_time_url = f"https://huggingface.co/api/spaces/{os.environ['SPACE_ID']}"
- headers = { "authorization" : f"Bearer {hf_token}"}
- response = requests.get(sleep_time_url,headers=headers)
- try:
- gcTimeout = response.json()['runtime']['gcTimeout']
- except:
- gcTimeout = None
- return gcTimeout
-
-def write_to_community(title, description,hf_token):
- from huggingface_hub import HfApi
- api = HfApi()
- api.create_discussion(repo_id=os.environ['SPACE_ID'], title=title, description=description,repo_type="space", token=hf_token)
-
-def train(progress=gr.Progress(track_tqdm=True), *inputs):
- which_model = inputs[-10]
- if(which_model == ""):
- raise gr.Error("You forgot to select a base model to use")
-
- if is_shared_ui:
- raise gr.Error("This Space only works in duplicated instances")
- if not is_gpu_associated:
- raise gr.Error("Please associate a T4 or A10G GPU for this Space")
- hf_token = inputs[-5]
- model_name = inputs[-7]
- if(is_spaces):
- sleep_time = get_sleep_time(hf_token)
- if sleep_time:
- swap_sleep_time(hf_token, -1)
- remove_attribution_after = inputs[-6]
- else:
- remove_attribution_after = False
-
- if(remove_attribution_after):
- validate_model_upload(hf_token, model_name)
-
- torch.cuda.empty_cache()
- if 'pipe' in globals():
- global pipe, pipe_is_set
- del pipe
- pipe_is_set = False
- gc.collect()
-
- if os.path.exists("output_model"): shutil.rmtree('output_model')
- if os.path.exists("instance_images"): shutil.rmtree('instance_images')
- if os.path.exists("diffusers_model.tar"): os.remove("diffusers_model.tar")
- if os.path.exists("model.ckpt"): os.remove("model.ckpt")
- if os.path.exists("hastrained.success"): os.remove("hastrained.success")
- file_counter = 0
- resolution = 512 if which_model != "v2-1-768" else 768
- for i, input in enumerate(inputs):
- if(i < maximum_concepts-1):
- if(input):
- os.makedirs('instance_images',exist_ok=True)
- files = inputs[i+(maximum_concepts*2)]
- prompt = inputs[i+maximum_concepts]
- if(prompt == "" or prompt == None):
- raise gr.Error("You forgot to define your concept prompt")
- for j, file_temp in enumerate(files):
- file = Image.open(file_temp.name)
- image = pad_image(file)
- image = image.resize((resolution, resolution))
- extension = file_temp.name.split(".")[1]
- image = image.convert('RGB')
- image.save(f'instance_images/{prompt}_({j+1}).jpg', format="JPEG", quality = 100)
- file_counter += 1
-
- os.makedirs('output_model',exist_ok=True)
- uses_custom = inputs[-1]
- type_of_thing = inputs[-4]
- experimental_face_improvement = inputs[-9]
-
- if(uses_custom):
- Training_Steps = int(inputs[-3])
- Train_text_encoder_for = int(inputs[-2])
- else:
- if(type_of_thing == "object"):
- Train_text_encoder_for=30
-
- elif(type_of_thing == "style"):
- Train_text_encoder_for=15
-
- elif(type_of_thing == "person"):
- Train_text_encoder_for=70
-
- Training_Steps = file_counter*150
- if(type_of_thing == "person" and Training_Steps > 2600):
- Training_Steps = 2600 #Avoid overfitting on people's faces
- stptxt = int((Training_Steps*Train_text_encoder_for)/100)
- gradient_checkpointing = True if (experimental_face_improvement or which_model != "v1-5") else False
- cache_latents = True if which_model != "v1-5" else False
- if (type_of_thing == "object" or type_of_thing == "style" or (type_of_thing == "person" and not experimental_face_improvement)):
- args_general = argparse.Namespace(
- image_captions_filename = True,
- train_text_encoder = True if stptxt > 0 else False,
- stop_text_encoder_training = stptxt,
- save_n_steps = 0,
- pretrained_model_name_or_path = model_to_load,
- instance_data_dir="instance_images",
- class_data_dir=None,
- output_dir="output_model",
- instance_prompt="",
- seed=42,
- resolution=resolution,
- mixed_precision="fp16",
- train_batch_size=1,
- gradient_accumulation_steps=1,
- use_8bit_adam=True,
- learning_rate=2e-6,
- lr_scheduler="polynomial",
- lr_warmup_steps = 0,
- max_train_steps=Training_Steps,
- gradient_checkpointing=gradient_checkpointing,
- cache_latents=cache_latents,
- )
- print("Starting single training...")
- lock_file = open("intraining.lock", "w")
- lock_file.close()
- try:
- run_training(args_general)
- except Exception as e:
- if(is_spaces):
- title="There was an error on during your training"
- description=f'''
- Unfortunately there was an error during training your {model_name} model.
- Please check it out below. Feel free to report this issue to [Dreambooth Training](https://huggingface.co/spaces/multimodalart/dreambooth-training):
- ```
- {str(e)}
- ```
- '''
- swap_hardware(hf_token, "cpu-basic")
- write_to_community(title,description,hf_token)
-
-
- gc.collect()
- torch.cuda.empty_cache()
- if(which_model == "v1-5"):
- print("Adding Safety Checker to the model...")
- shutil.copytree(f"{safety_checker}/feature_extractor", "output_model/feature_extractor", dirs_exist_ok=True)
- shutil.copytree(f"{safety_checker}/safety_checker", "output_model/safety_checker", dirs_exist_ok=True)
- shutil.copy(f"model_index.json", "output_model/model_index.json")
-
- if(not remove_attribution_after):
- swap_sleep_time(hf_token, sleep_time)
- print("Archiving model file...")
- with tarfile.open("diffusers_model.tar", "w") as tar:
- tar.add("output_model", arcname=os.path.basename("output_model"))
- if os.path.exists("intraining.lock"): os.remove("intraining.lock")
- trained_file = open("hastrained.success", "w")
- trained_file.close()
- print("Training completed!")
- return [
- gr.update(visible=False), #progress_bar
- gr.update(visible=True, value=["diffusers_model.tar"]), #result
- gr.update(visible=True), #try_your_model
- gr.update(visible=True), #push_to_hub
- gr.update(visible=True), #convert_button
- gr.update(visible=False), #training_ongoing
- gr.update(visible=True) #completed_training
- ]
- else:
- where_to_upload = inputs[-8]
- push(model_name, where_to_upload, hf_token, which_model, True)
- swap_hardware(hf_token, "cpu-basic")
-
-pipe_is_set = False
-def generate(prompt, steps):
- torch.cuda.empty_cache()
- from diffusers import StableDiffusionPipeline
- global pipe_is_set
- if(not pipe_is_set):
- global pipe
- pipe = StableDiffusionPipeline.from_pretrained("./output_model", torch_dtype=torch.float16)
- pipe = pipe.to("cuda")
- pipe_is_set = True
-
- image = pipe(prompt, num_inference_steps=steps).images[0]
- return(image)
-
-def push(model_name, where_to_upload, hf_token, which_model, comes_from_automated=False):
- validate_model_upload(hf_token, model_name)
- if(not os.path.exists("model.ckpt")):
- convert("output_model", "model.ckpt")
- from huggingface_hub import HfApi, HfFolder, CommitOperationAdd
- from huggingface_hub import create_repo
- model_name_slug = slugify(model_name)
- api = HfApi()
- your_username = api.whoami(token=hf_token)["name"]
- if(where_to_upload == "My personal profile"):
- model_id = f"{your_username}/{model_name_slug}"
- else:
- model_id = f"sd-dreambooth-library/{model_name_slug}"
- headers = {"Authorization" : f"Bearer: {hf_token}", "Content-Type": "application/json"}
- response = requests.post("https://huggingface.co/organizations/sd-dreambooth-library/share/SSeOwppVCscfTEzFGQaqpfcjukVeNrKNHX", headers=headers)
-
- print(f"Starting to upload the model {model_id}...")
- images_upload = os.listdir("instance_images")
- image_string = ""
- instance_prompt_list = []
- previous_instance_prompt = ''
- for i, image in enumerate(images_upload):
- instance_prompt = image.split("_")[0]
- if(instance_prompt != previous_instance_prompt):
- title_instance_prompt_string = instance_prompt
- instance_prompt_list.append(instance_prompt)
- else:
- title_instance_prompt_string = ''
- previous_instance_prompt = instance_prompt
- image_string = f'''{title_instance_prompt_string} {"(use that on your prompt)" if title_instance_prompt_string != "" else ""}
-{image_string}})'''
- readme_text = f'''---
-license: creativeml-openrail-m
-tags:
-- text-to-image
-widget:
-- text: {instance_prompt_list[0]}
----
-### {model_name} Dreambooth model trained by {api.whoami(token=hf_token)["name"]} with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the {which_model} base model
-
-You can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
-
-Sample pictures of:
-{image_string}
-'''
- #Save the readme to a file
- readme_file = open("model.README.md", "w")
- readme_file.write(readme_text)
- readme_file.close()
- #Save the token identifier to a file
- text_file = open("token_identifier.txt", "w")
- text_file.write(', '.join(instance_prompt_list))
- text_file.close()
- try:
- create_repo(model_id,private=True, token=hf_token)
- except:
- import time
- epoch_time = str(int(time.time()))
- create_repo(f"{model_id}-{epoch_time}", private=True,token=hf_token)
- operations = [
- CommitOperationAdd(path_in_repo="token_identifier.txt", path_or_fileobj="token_identifier.txt"),
- CommitOperationAdd(path_in_repo="README.md", path_or_fileobj="model.README.md"),
- CommitOperationAdd(path_in_repo=f"model.ckpt",path_or_fileobj="model.ckpt")
- ]
- api.create_commit(
- repo_id=model_id,
- operations=operations,
- commit_message=f"Upload the model {model_name}",
- token=hf_token
- )
- api.upload_folder(
- folder_path="output_model",
- repo_id=model_id,
- token=hf_token
- )
- api.upload_folder(
- folder_path="instance_images",
- path_in_repo="concept_images",
- repo_id=model_id,
- token=hf_token
- )
- if is_spaces:
- if(not comes_from_automated):
- extra_message = "Don't forget to remove the GPU attribution after you play with it."
- else:
- extra_message = "The GPU has been removed automatically as requested, and you can try the model via the model page"
- title=f"Your model {model_name} has finished trained from the Dreambooth Train Spaces!"
- description=f"Your model has been successfully uploaded to: https://huggingface.co/{model_id}. {extra_message}"
- write_to_community(title, description, hf_token)
- #api.create_discussion(repo_id=os.environ['SPACE_ID'], title=f"Your model {model_name} has finished trained from the Dreambooth Train Spaces!", description=f"Your model has been successfully uploaded to: https://huggingface.co/{model_id}. {extra_message}",repo_type="space", token=hf_token)
- print("Model uploaded successfully!")
- return [gr.update(visible=True, value=f"Successfully uploaded your model. Access it [here](https://huggingface.co/{model_id})"), gr.update(visible=True, value=["diffusers_model.tar", "model.ckpt"])]
-
-def convert_to_ckpt():
- if 'pipe' in globals():
- global pipe, pipe_is_set
- del pipe
- pipe_is_set = False
- gc.collect()
- convert("output_model", "model.ckpt")
- return gr.update(visible=True, value=["diffusers_model.tar", "model.ckpt"])
-
-def check_status(top_description):
- if os.path.exists("hastrained.success"):
- if is_spaces:
- update_top_tag = gr.update(value=f'''
-    Your model has finished training ✅
- Don't worry, your model is still training! ⌛
- Attention - This Space doesn't work in this shared UI
-
-
-
You have successfully associated a {which_gpu} GPU to the Dreambooth Training Space 🎉
- You have successfully duplicated the Dreambooth Training Space 🎉
- You have successfully cloned the Dreambooth Training Space locally 🎉
-    pip install -r requirements-local.txt
''')
- things_naming = gr.Markdown("You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `cttoy` here). Images will be automatically cropped to 512x512.")
-
- with gr.Column():
- file_collection = []
- concept_collection = []
- buttons_collection = []
- delete_collection = []
- is_visible = []
-
- row = [None] * maximum_concepts
- for x in range(maximum_concepts):
- ordinal = lambda n: "%d%s" % (n, "tsnrhtdd"[(n // 10 % 10 != 1) * (n % 10 < 4) * n % 10::4])
- if(x == 0):
- visible = True
- is_visible.append(gr.State(value=True))
- else:
- visible = False
- is_visible.append(gr.State(value=False))
-
- file_collection.append(gr.File(file_types=["image"], label=f'''Upload the images for your {ordinal(x+1) if (x>0) else ""} concept''', file_count="multiple", interactive=True, visible=visible))
- with gr.Column(visible=visible) as row[x]:
- concept_collection.append(gr.Textbox(label=f'''{ordinal(x+1) if (x>0) else ""} concept prompt - use a unique, made up word to avoid collisions'''))
- with gr.Row():
- if(x < maximum_concepts-1):
- buttons_collection.append(gr.Button(value="Add +1 concept", visible=visible))
- if(x > 0):
- delete_collection.append(gr.Button(value=f"Delete {ordinal(x+1)} concept"))
-
- counter_add = 1
- for button in buttons_collection:
- if(counter_add < len(buttons_collection)):
- button.click(lambda:
- [gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), gr.update(visible=True), True, None],
- None,
- [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], buttons_collection[counter_add], is_visible[counter_add], file_collection[counter_add]], queue=False)
- else:
- button.click(lambda:[gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), True], None, [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], is_visible[counter_add]], queue=False)
- counter_add += 1
-
- counter_delete = 1
- for delete_button in delete_collection:
- if(counter_delete < len(delete_collection)+1):
- delete_button.click(lambda:[gr.update(visible=False),gr.update(visible=False), gr.update(visible=True), False], None, [file_collection[counter_delete], row[counter_delete], buttons_collection[counter_delete-1], is_visible[counter_delete]], queue=False)
- counter_delete += 1
-
- with gr.Accordion("Custom Settings", open=False):
- swap_auto_calculated = gr.Checkbox(label="Use custom settings")
- gr.Markdown("If not checked, the % of frozen encoder will be tuned automatically to whether you are training an `object`, `person` or `style`. The text-encoder is frozen after 10% of the steps for a style, 30% of the steps for an object and 75% trained for persons. The number of steps varies between 1400 and 2400 depending on how many images uploaded. If you see too many artifacts in your output, it means it may have overfit and you need less steps. If your results aren't really what you wanted, it may be underfitting and you need more steps.")
- steps = gr.Number(label="How many steps", value=2400)
- perc_txt_encoder = gr.Number(label="Percentage of the training steps the text-encoder should be trained as well", value=30)
-
- with gr.Box(visible=False) as training_summary:
- training_summary_text = gr.HTML("", visible=True, label="Training Summary")
- is_advanced_visible = True if is_spaces else False
- training_summary_checkbox = gr.Checkbox(label="Automatically remove paid GPU attribution and upload model to the Hugging Face Hub after training", value=True, visible=is_advanced_visible)
- training_summary_model_name = gr.Textbox(label="Name of your model", visible=True)
- training_summary_where_to_upload = gr.Dropdown(["My personal profile", "Public Library"], value="My personal profile", label="Upload to", visible=True)
- training_summary_token_message = gr.Markdown("[A Hugging Face write access token](https://huggingface.co/settings/tokens), go to \"New token\" -> Role : Write. A regular read token won't work here.", visible=True)
- training_summary_token = gr.Textbox(label="Hugging Face Write Token", type="password", visible=True)
-
- train_btn = gr.Button("Start Training")
- progress_bar = gr.Textbox(visible=False)
- if(is_shared_ui):
- training_ongoing = gr.Markdown("## This Space only works in duplicated instances. Please duplicate it and try again!", visible=False)
- elif(not is_gpu_associated):
- training_ongoing = gr.Markdown("## Oops, you haven't associated your T4 or A10G GPU to this Space. Visit the Settings tab, associate and try again.", visible=False)
- else:
- training_ongoing = gr.Markdown("## Training is ongoing ⌛... You can close this tab if you like or just wait. If you did not check the `Remove GPU After training`, you can come back here to try your model and upload it after training. Don't forget to remove the GPU attribution after you are done. ", visible=False)
-
-
- #Post-training UI
- completed_training = gr.Markdown('''# ✅ Training completed.
- ### Don't forget to remove the GPU attribution after you are done trying and uploading your model''', visible=False)
-
- with gr.Row():
- with gr.Box(visible=False) as try_your_model:
- gr.Markdown("## Try your model")
- prompt = gr.Textbox(label="Type your prompt")
- result_image = gr.Image()
- inference_steps = gr.Slider(minimum=1, maximum=150, value=50, step=1)
- generate_button = gr.Button("Generate Image")
-
- with gr.Box(visible=False) as push_to_hub:
- gr.Markdown("## Push to Hugging Face Hub")
- model_name = gr.Textbox(label="Name of your model", placeholder="Tarsila do Amaral Style")
- where_to_upload = gr.Dropdown(["My personal profile", "Public Library"], label="Upload to")
- gr.Markdown("[A Hugging Face write access token](https://huggingface.co/settings/tokens), go to \"New token\" -> Role : Write. A regular read token won't work here.")
- hf_token = gr.Textbox(label="Hugging Face Write Token", type="password")
-
- push_button = gr.Button("Push to the Hub")
-
- result = gr.File(label="Download the uploaded models in the diffusers format", visible=True)
- success_message_upload = gr.Markdown(visible=False)
- convert_button = gr.Button("Convert to CKPT", visible=False)
-
- #Swap the examples and the % of text encoder trained depending if it is an object, person or style
- type_of_thing.change(fn=swap_text, inputs=[type_of_thing, base_model_to_use], outputs=[thing_description, thing_image_example, things_naming, perc_txt_encoder, thing_experimental], queue=False, show_progress=False)
-
- #Swap the base model
-
- base_model_to_use.change(fn=swap_text, inputs=[type_of_thing, base_model_to_use], outputs=[thing_description, thing_image_example, things_naming, perc_txt_encoder, thing_experimental], queue=False, show_progress=False)
- #base_model_to_use.change(fn=visualise_progress_bar, inputs=[], outputs=progress_bar)
- base_model_to_use.change(fn=swap_base_model, inputs=base_model_to_use, outputs=[])
- #Update the summary box below the UI according to how many images are uploaded and whether users are using custom settings or not
- for file in file_collection:
- #file.change(fn=update_steps,inputs=file_collection, outputs=steps)
- file.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
-
- thing_experimental.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
- base_model_to_use.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
- steps.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
- perc_txt_encoder.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False)
-
- #Give more options if the user wants to finish everything after training
- if(is_spaces):
- training_summary_checkbox.change(fn=checkbox_swap, inputs=training_summary_checkbox, outputs=[training_summary_token_message, training_summary_token, training_summary_model_name, training_summary_where_to_upload],queue=False, show_progress=False)
- #Add a message for while it is in training
-
- #train_btn.click(lambda:gr.update(visible=True), inputs=None, outputs=training_ongoing)
-
-    #The main train function: the first click handler below reveals the progress bar, the second launches training
- train_btn.click(lambda:gr.update(visible=True), inputs=[], outputs=progress_bar)
- train_btn.click(fn=train, inputs=is_visible+concept_collection+file_collection+[base_model_to_use]+[thing_experimental]+[training_summary_where_to_upload]+[training_summary_model_name]+[training_summary_checkbox]+[training_summary_token]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[progress_bar, result, try_your_model, push_to_hub, convert_button, training_ongoing, completed_training], queue=False)
-
- #Button to generate an image from your trained model after training
- generate_button.click(fn=generate, inputs=[prompt, inference_steps], outputs=result_image, queue=False)
- #Button to push the model to the Hugging Face Hub
- push_button.click(fn=push, inputs=[model_name, where_to_upload, hf_token, base_model_to_use], outputs=[success_message_upload, result], queue=False)
- #Button to convert the model to ckpt format
- convert_button.click(fn=convert_to_ckpt, inputs=[], outputs=result, queue=False)
-
- #Checks if the training is running
- demo.load(fn=check_status, inputs=top_description, outputs=[top_description, try_your_model, push_to_hub, result, convert_button], queue=False, show_progress=False)
-
-demo.queue(default_enabled=False).launch(debug=True)
\ No newline at end of file
diff --git a/spaces/hylee/apdrawing/APDrawingGAN2/util/image_pool.py b/spaces/hylee/apdrawing/APDrawingGAN2/util/image_pool.py
deleted file mode 100644
index 52413e0f8a45a8c8511bf103d3aabd537fac97b9..0000000000000000000000000000000000000000
--- a/spaces/hylee/apdrawing/APDrawingGAN2/util/image_pool.py
+++ /dev/null
@@ -1,32 +0,0 @@
-import random
-import torch
-
-
-class ImagePool():
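-    """Buffer of previously generated images (the image pool trick used by pix2pix/CycleGAN).
-
-    While the pool is filling, query() stores and returns the incoming images.
-    Once full, each incoming image is returned as-is with probability 0.5, or
-    swapped for a randomly chosen stored image, so the discriminator also sees
-    older generator outputs rather than only the latest batch.
-    """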
- def __init__(self, pool_size):
- self.pool_size = pool_size
- if self.pool_size > 0:
- self.num_imgs = 0
- self.images = []
-
- def query(self, images):
- if self.pool_size == 0:
- return images
- return_images = []
- for image in images:
- image = torch.unsqueeze(image.data, 0)
- if self.num_imgs < self.pool_size:
- self.num_imgs = self.num_imgs + 1
- self.images.append(image)
- return_images.append(image)
- else:
- p = random.uniform(0, 1)
- if p > 0.5:
- random_id = random.randint(0, self.pool_size - 1) # randint is inclusive
- tmp = self.images[random_id].clone()
- self.images[random_id] = image
- return_images.append(tmp)
- else:
- return_images.append(image)
- return_images = torch.cat(return_images, 0)
- return return_images
diff --git a/spaces/hysts/DDNM-HQ/style.css b/spaces/hysts/DDNM-HQ/style.css
deleted file mode 100644
index c4739b4ea5fc35e774a049e3dacc443f7f0eac19..0000000000000000000000000000000000000000
--- a/spaces/hysts/DDNM-HQ/style.css
+++ /dev/null
@@ -1,3 +0,0 @@
-h1 {
- text-align: center;
-}
diff --git a/spaces/hysts/Kandinsky-2-1/app.py b/spaces/hysts/Kandinsky-2-1/app.py
deleted file mode 100644
index 7dd489cfc466c261d54a58265befffe3747de3f1..0000000000000000000000000000000000000000
--- a/spaces/hysts/Kandinsky-2-1/app.py
+++ /dev/null
@@ -1,202 +0,0 @@
-#!/usr/bin/env python
-
-from __future__ import annotations
-
-import os
-import random
-
-import gradio as gr
-import numpy as np
-import PIL.Image
-import spaces
-import torch
-from diffusers import DDPMScheduler, DiffusionPipeline
-
-DESCRIPTION = "# Kandinsky 2.1"
-if not torch.cuda.is_available():
- DESCRIPTION += "\n
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Krfv 008 Rapidshare Full Version.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Krfv 008 Rapidshare Full Version.md
deleted file mode 100644
index 04b1259e0c56313a33911135f7df53e1dff84836..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Krfv 008 Rapidshare Full Version.md
+++ /dev/null
@@ -1,9 +0,0 @@
-Krfv 008 Rapidshare Full Version
-
-Krfv 008 Rapidshare Full version xforce keygen AutoCAD Mechanical 2013 Portable Letasoft Sound Booster Activation key Download via torrent. AutoCAD Mechanical 2013 x86 and x64 Autodesk Design Suite 2013.
-Autodesk AutoCAD Mechanical 2013 x86 and x64 Released: 2013 Version: 2013 x86 and x64 Developer: Autodesk Inc. Platform: Windows® X86/X64 Compatibility with Seven Interface language: Russian Tablet: Present Autodesk AutoCAD Mechanical 2013 x86 and x64 Autodesk Design Suite 2013 Release Year/Date: 2013 Version: 2013.
-Interface: Russian, English License: FPP.
-Autodesk AutoCAD Mechanical 2013 x86 and x64 Year/Date. 8a78ff9644
-
-
-
diff --git a/spaces/inreVtussa/clothingai/Examples/Aptron Portaria 2009.md b/spaces/inreVtussa/clothingai/Examples/Aptron Portaria 2009.md
deleted file mode 100644
index 21c8c3c45c1cbd20e89797e30f7ae203fbe30e6e..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Aptron Portaria 2009.md
+++ /dev/null
@@ -1,18 +0,0 @@
-Aptron portaria 2009
-
-. 7383628160 . coub.com/stories/3146267-new-autodesk-robot-structural-analysis-2009-crack. html
-1xbet mirror working for free right now right now.
-Download free song blue frost lay on the wires.
-Series Chernobyl 2019 watch online all series in a row for free.
-In deo, you don't need it in the ass.
-Volcano million registration.
-Gdz Russian language 5th grade Shmelev part 1.
-How to make money with photoshop online.
-Play slot machines for money
-Download video of the terrorist attack in Volgodonsk on September 16.
-Mathematics 1st grade gdz.
-Gdz mathematics grade 1.
-Porn stories about mature women. 8a78ff9644
-
-
-
diff --git a/spaces/inreVtussa/clothingai/Examples/Auto Keyboard Murgee Full Crack Kid.md b/spaces/inreVtussa/clothingai/Examples/Auto Keyboard Murgee Full Crack Kid.md
deleted file mode 100644
index 04248370b9e42a61a476bc9f26cf7858044291f5..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Auto Keyboard Murgee Full Crack Kid.md
+++ /dev/null
@@ -1,6 +0,0 @@
-auto keyboard murgee full crack kid
-
-Murgee Auto Clicker Crack is used for the automatically clicking of the Left Mouse Button by the usage of the Keyboard Shortcut. Users can ... 4d29de3e1b
-
-
-
diff --git a/spaces/inreVtussa/clothingai/Examples/Bangalore Days 1080p Movie Download.md b/spaces/inreVtussa/clothingai/Examples/Bangalore Days 1080p Movie Download.md
deleted file mode 100644
index cee16f2d5368f7f1b8e3fa93315e9d249aa97888..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Bangalore Days 1080p Movie Download.md
+++ /dev/null
@@ -1,10 +0,0 @@
-bangalore days 1080p movie download
-
-Bangalore Days (2021) HDRip Hindi Dubbed Full Movie Watch Online HD Print Free Download - TodayPk Movies, TodayPkBangalore Days Dubbed in Hindi, . 🔥Watch Bangalore Days (2012) full movie in good HD 1080 720p quality.
-Bhandarkar (Rajamurthy) is a doctor who does not want to work in the hospital where he was forced to work in the past.
-But after his wife dies from an unknown illness, he is forced to work in a hospital to help the sick. .
-Watch Bangalore Days in Russian.
-Watch online movie "Bhandarkar (Rajamurthy) - a doctor who does not want to work in the hospital where he was forced to work in the past. 8a78ff9644
-
-
-
diff --git a/spaces/jackli888/stable-diffusion-webui/extensions-builtin/SwinIR/swinir_model_arch_v2.py b/spaces/jackli888/stable-diffusion-webui/extensions-builtin/SwinIR/swinir_model_arch_v2.py
deleted file mode 100644
index a1255881a13b480d1b7564d7474e8bbb5fd7ee76..0000000000000000000000000000000000000000
--- a/spaces/jackli888/stable-diffusion-webui/extensions-builtin/SwinIR/swinir_model_arch_v2.py
+++ /dev/null
@@ -1,1017 +0,0 @@
-# -----------------------------------------------------------------------------------
-# Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration, https://arxiv.org/abs/2209.11345
-# Written by Conde and Choi et al.
-# -----------------------------------------------------------------------------------
-
-import math
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.utils.checkpoint as checkpoint
-from timm.models.layers import DropPath, to_2tuple, trunc_normal_
-
-
-class Mlp(nn.Module):
- def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Linear(in_features, hidden_features)
- self.act = act_layer()
- self.fc2 = nn.Linear(hidden_features, out_features)
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- x = self.drop(x)
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-
-def window_partition(x, window_size):
- """
- Args:
- x: (B, H, W, C)
- window_size (int): window size
- Returns:
- windows: (num_windows*B, window_size, window_size, C)
- """
- B, H, W, C = x.shape
- x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
- windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
- return windows
-
-
-def window_reverse(windows, window_size, H, W):
- """
- Args:
- windows: (num_windows*B, window_size, window_size, C)
- window_size (int): Window size
- H (int): Height of image
- W (int): Width of image
- Returns:
- x: (B, H, W, C)
- """
- B = int(windows.shape[0] / (H * W / window_size / window_size))
- x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
- x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
- return x
-
-class WindowAttention(nn.Module):
- r""" Window based multi-head self attention (W-MSA) module with relative position bias.
- It supports both of shifted and non-shifted window.
- Args:
- dim (int): Number of input channels.
- window_size (tuple[int]): The height and width of the window.
- num_heads (int): Number of attention heads.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
- proj_drop (float, optional): Dropout ratio of output. Default: 0.0
- pretrained_window_size (tuple[int]): The height and width of the window in pre-training.
- """
-
- def __init__(self, dim, window_size, num_heads, qkv_bias=True, attn_drop=0., proj_drop=0.,
- pretrained_window_size=[0, 0]):
-
- super().__init__()
- self.dim = dim
- self.window_size = window_size # Wh, Ww
- self.pretrained_window_size = pretrained_window_size
- self.num_heads = num_heads
-
- self.logit_scale = nn.Parameter(torch.log(10 * torch.ones((num_heads, 1, 1))), requires_grad=True)
-
- # mlp to generate continuous relative position bias
- self.cpb_mlp = nn.Sequential(nn.Linear(2, 512, bias=True),
- nn.ReLU(inplace=True),
- nn.Linear(512, num_heads, bias=False))
-
- # get relative_coords_table
- relative_coords_h = torch.arange(-(self.window_size[0] - 1), self.window_size[0], dtype=torch.float32)
- relative_coords_w = torch.arange(-(self.window_size[1] - 1), self.window_size[1], dtype=torch.float32)
- relative_coords_table = torch.stack(
- torch.meshgrid([relative_coords_h,
- relative_coords_w])).permute(1, 2, 0).contiguous().unsqueeze(0) # 1, 2*Wh-1, 2*Ww-1, 2
- if pretrained_window_size[0] > 0:
- relative_coords_table[:, :, :, 0] /= (pretrained_window_size[0] - 1)
- relative_coords_table[:, :, :, 1] /= (pretrained_window_size[1] - 1)
- else:
- relative_coords_table[:, :, :, 0] /= (self.window_size[0] - 1)
- relative_coords_table[:, :, :, 1] /= (self.window_size[1] - 1)
- relative_coords_table *= 8 # normalize to -8, 8
- relative_coords_table = torch.sign(relative_coords_table) * torch.log2(
- torch.abs(relative_coords_table) + 1.0) / np.log2(8)
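-        # The sign-log transform above gives log-spaced coordinates (Swin V2),
-        # compressing large offsets so the learned bias extrapolates better to
-        # window sizes larger than those used in pre-training.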
-
- self.register_buffer("relative_coords_table", relative_coords_table)
-
- # get pair-wise relative position index for each token inside the window
- coords_h = torch.arange(self.window_size[0])
- coords_w = torch.arange(self.window_size[1])
- coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
- coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
- relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
- relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
- relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0
- relative_coords[:, :, 1] += self.window_size[1] - 1
- relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
- relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
- self.register_buffer("relative_position_index", relative_position_index)
-
- self.qkv = nn.Linear(dim, dim * 3, bias=False)
- if qkv_bias:
- self.q_bias = nn.Parameter(torch.zeros(dim))
- self.v_bias = nn.Parameter(torch.zeros(dim))
- else:
- self.q_bias = None
- self.v_bias = None
- self.attn_drop = nn.Dropout(attn_drop)
- self.proj = nn.Linear(dim, dim)
- self.proj_drop = nn.Dropout(proj_drop)
- self.softmax = nn.Softmax(dim=-1)
-
- def forward(self, x, mask=None):
- """
- Args:
- x: input features with shape of (num_windows*B, N, C)
- mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
- """
- B_, N, C = x.shape
- qkv_bias = None
- if self.q_bias is not None:
- qkv_bias = torch.cat((self.q_bias, torch.zeros_like(self.v_bias, requires_grad=False), self.v_bias))
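-            # Swin V2 drops the learnable key bias: q and v keep biases while k's slot is filled with zeros.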
- qkv = F.linear(input=x, weight=self.qkv.weight, bias=qkv_bias)
- qkv = qkv.reshape(B_, N, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4)
- q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
-
- # cosine attention
- attn = (F.normalize(q, dim=-1) @ F.normalize(k, dim=-1).transpose(-2, -1))
- logit_scale = torch.clamp(self.logit_scale, max=torch.log(torch.tensor(1. / 0.01)).to(self.logit_scale.device)).exp()
- attn = attn * logit_scale
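-        # Scaled cosine attention (Swin V2): dot products of L2-normalized q and k,
-        # multiplied by a learnable per-head temperature clamped at 1/0.01 = 100.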
-
- relative_position_bias_table = self.cpb_mlp(self.relative_coords_table).view(-1, self.num_heads)
- relative_position_bias = relative_position_bias_table[self.relative_position_index.view(-1)].view(
- self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH
- relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
- relative_position_bias = 16 * torch.sigmoid(relative_position_bias)
- attn = attn + relative_position_bias.unsqueeze(0)
-
- if mask is not None:
- nW = mask.shape[0]
- attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
- attn = attn.view(-1, self.num_heads, N, N)
- attn = self.softmax(attn)
- else:
- attn = self.softmax(attn)
-
- attn = self.attn_drop(attn)
-
- x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
- x = self.proj(x)
- x = self.proj_drop(x)
- return x
-
- def extra_repr(self) -> str:
- return f'dim={self.dim}, window_size={self.window_size}, ' \
- f'pretrained_window_size={self.pretrained_window_size}, num_heads={self.num_heads}'
-
- def flops(self, N):
- # calculate flops for 1 window with token length of N
- flops = 0
- # qkv = self.qkv(x)
- flops += N * self.dim * 3 * self.dim
- # attn = (q @ k.transpose(-2, -1))
- flops += self.num_heads * N * (self.dim // self.num_heads) * N
- # x = (attn @ v)
- flops += self.num_heads * N * N * (self.dim // self.num_heads)
- # x = self.proj(x)
- flops += N * self.dim * self.dim
- return flops
-
-class SwinTransformerBlock(nn.Module):
- r""" Swin Transformer Block.
- Args:
- dim (int): Number of input channels.
-        input_resolution (tuple[int]): Input resolution.
- num_heads (int): Number of attention heads.
- window_size (int): Window size.
- shift_size (int): Shift size for SW-MSA.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- drop (float, optional): Dropout rate. Default: 0.0
- attn_drop (float, optional): Attention dropout rate. Default: 0.0
- drop_path (float, optional): Stochastic depth rate. Default: 0.0
- act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- pretrained_window_size (int): Window size in pre-training.
- """
-
- def __init__(self, dim, input_resolution, num_heads, window_size=7, shift_size=0,
- mlp_ratio=4., qkv_bias=True, drop=0., attn_drop=0., drop_path=0.,
- act_layer=nn.GELU, norm_layer=nn.LayerNorm, pretrained_window_size=0):
- super().__init__()
- self.dim = dim
- self.input_resolution = input_resolution
- self.num_heads = num_heads
- self.window_size = window_size
- self.shift_size = shift_size
- self.mlp_ratio = mlp_ratio
- if min(self.input_resolution) <= self.window_size:
- # if window size is larger than input resolution, we don't partition windows
- self.shift_size = 0
- self.window_size = min(self.input_resolution)
-        assert 0 <= self.shift_size < self.window_size, "shift_size must be in [0, window_size)"
-
- self.norm1 = norm_layer(dim)
- self.attn = WindowAttention(
- dim, window_size=to_2tuple(self.window_size), num_heads=num_heads,
- qkv_bias=qkv_bias, attn_drop=attn_drop, proj_drop=drop,
- pretrained_window_size=to_2tuple(pretrained_window_size))
-
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
-
- if self.shift_size > 0:
- attn_mask = self.calculate_mask(self.input_resolution)
- else:
- attn_mask = None
-
- self.register_buffer("attn_mask", attn_mask)
-
- def calculate_mask(self, x_size):
- # calculate attention mask for SW-MSA
- H, W = x_size
- img_mask = torch.zeros((1, H, W, 1)) # 1 H W 1
- h_slices = (slice(0, -self.window_size),
- slice(-self.window_size, -self.shift_size),
- slice(-self.shift_size, None))
- w_slices = (slice(0, -self.window_size),
- slice(-self.window_size, -self.shift_size),
- slice(-self.shift_size, None))
- cnt = 0
- for h in h_slices:
- for w in w_slices:
- img_mask[:, h, w, :] = cnt
- cnt += 1
-
- mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1
- mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
- attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
- attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
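-        # -100 acts as -inf under softmax, blocking attention between tokens that
-        # belong to different windows after the cyclic shift.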
-
- return attn_mask
-
- def forward(self, x, x_size):
- H, W = x_size
- B, L, C = x.shape
- #assert L == H * W, "input feature has wrong size"
-
- shortcut = x
- x = x.view(B, H, W, C)
-
- # cyclic shift
- if self.shift_size > 0:
- shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
- else:
- shifted_x = x
-
- # partition windows
- x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C
- x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C
-
-        # W-MSA/SW-MSA (recompute the mask when testing on image sizes that differ from the training resolution)
- if self.input_resolution == x_size:
- attn_windows = self.attn(x_windows, mask=self.attn_mask) # nW*B, window_size*window_size, C
- else:
- attn_windows = self.attn(x_windows, mask=self.calculate_mask(x_size).to(x.device))
-
- # merge windows
- attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
- shifted_x = window_reverse(attn_windows, self.window_size, H, W) # B H' W' C
-
- # reverse cyclic shift
- if self.shift_size > 0:
- x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
- else:
- x = shifted_x
- x = x.view(B, H * W, C)
- x = shortcut + self.drop_path(self.norm1(x))
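-        # Res-post-norm (Swin V2): LayerNorm is applied to each sub-block's output
-        # before the residual add, rather than to its input.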
-
- # FFN
- x = x + self.drop_path(self.norm2(self.mlp(x)))
-
- return x
-
- def extra_repr(self) -> str:
- return f"dim={self.dim}, input_resolution={self.input_resolution}, num_heads={self.num_heads}, " \
- f"window_size={self.window_size}, shift_size={self.shift_size}, mlp_ratio={self.mlp_ratio}"
-
- def flops(self):
- flops = 0
- H, W = self.input_resolution
- # norm1
- flops += self.dim * H * W
- # W-MSA/SW-MSA
- nW = H * W / self.window_size / self.window_size
- flops += nW * self.attn.flops(self.window_size * self.window_size)
- # mlp
- flops += 2 * H * W * self.dim * self.dim * self.mlp_ratio
- # norm2
- flops += self.dim * H * W
- return flops
-
-class PatchMerging(nn.Module):
- r""" Patch Merging Layer.
- Args:
- input_resolution (tuple[int]): Resolution of input feature.
- dim (int): Number of input channels.
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- """
-
- def __init__(self, input_resolution, dim, norm_layer=nn.LayerNorm):
- super().__init__()
- self.input_resolution = input_resolution
- self.dim = dim
- self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
- self.norm = norm_layer(2 * dim)
-
- def forward(self, x):
- """
- x: B, H*W, C
- """
- H, W = self.input_resolution
- B, L, C = x.shape
- assert L == H * W, "input feature has wrong size"
-        assert H % 2 == 0 and W % 2 == 0, f"x size ({H}*{W}) is not even."
-
- x = x.view(B, H, W, C)
-
- x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C
- x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C
- x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C
- x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C
- x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C
- x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C
-
- x = self.reduction(x)
- x = self.norm(x)
-
- return x
-
- def extra_repr(self) -> str:
- return f"input_resolution={self.input_resolution}, dim={self.dim}"
-
- def flops(self):
- H, W = self.input_resolution
- flops = (H // 2) * (W // 2) * 4 * self.dim * 2 * self.dim
- flops += H * W * self.dim // 2
- return flops
-
-class BasicLayer(nn.Module):
- """ A basic Swin Transformer layer for one stage.
- Args:
- dim (int): Number of input channels.
- input_resolution (tuple[int]): Input resolution.
- depth (int): Number of blocks.
- num_heads (int): Number of attention heads.
- window_size (int): Local window size.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- drop (float, optional): Dropout rate. Default: 0.0
- attn_drop (float, optional): Attention dropout rate. Default: 0.0
- drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
- use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
- pretrained_window_size (int): Local window size in pre-training.
- """
-
- def __init__(self, dim, input_resolution, depth, num_heads, window_size,
- mlp_ratio=4., qkv_bias=True, drop=0., attn_drop=0.,
- drop_path=0., norm_layer=nn.LayerNorm, downsample=None, use_checkpoint=False,
- pretrained_window_size=0):
-
- super().__init__()
- self.dim = dim
- self.input_resolution = input_resolution
- self.depth = depth
- self.use_checkpoint = use_checkpoint
-
- # build blocks
- self.blocks = nn.ModuleList([
- SwinTransformerBlock(dim=dim, input_resolution=input_resolution,
- num_heads=num_heads, window_size=window_size,
- shift_size=0 if (i % 2 == 0) else window_size // 2,
- mlp_ratio=mlp_ratio,
- qkv_bias=qkv_bias,
- drop=drop, attn_drop=attn_drop,
- drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
- norm_layer=norm_layer,
- pretrained_window_size=pretrained_window_size)
- for i in range(depth)])
-
- # patch merging layer
- if downsample is not None:
- self.downsample = downsample(input_resolution, dim=dim, norm_layer=norm_layer)
- else:
- self.downsample = None
-
- def forward(self, x, x_size):
- for blk in self.blocks:
- if self.use_checkpoint:
- x = checkpoint.checkpoint(blk, x, x_size)
- else:
- x = blk(x, x_size)
- if self.downsample is not None:
- x = self.downsample(x)
- return x
-
- def extra_repr(self) -> str:
- return f"dim={self.dim}, input_resolution={self.input_resolution}, depth={self.depth}"
-
- def flops(self):
- flops = 0
- for blk in self.blocks:
- flops += blk.flops()
- if self.downsample is not None:
- flops += self.downsample.flops()
- return flops
-
- def _init_respostnorm(self):
- for blk in self.blocks:
- nn.init.constant_(blk.norm1.bias, 0)
- nn.init.constant_(blk.norm1.weight, 0)
- nn.init.constant_(blk.norm2.bias, 0)
- nn.init.constant_(blk.norm2.weight, 0)
-
-class PatchEmbed(nn.Module):
- r""" Image to Patch Embedding
- Args:
- img_size (int): Image size. Default: 224.
- patch_size (int): Patch token size. Default: 4.
- in_chans (int): Number of input image channels. Default: 3.
- embed_dim (int): Number of linear projection output channels. Default: 96.
- norm_layer (nn.Module, optional): Normalization layer. Default: None
- """
-
- def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
- super().__init__()
- img_size = to_2tuple(img_size)
- patch_size = to_2tuple(patch_size)
- patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]]
- self.img_size = img_size
- self.patch_size = patch_size
- self.patches_resolution = patches_resolution
- self.num_patches = patches_resolution[0] * patches_resolution[1]
-
- self.in_chans = in_chans
- self.embed_dim = embed_dim
-
- self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
- if norm_layer is not None:
- self.norm = norm_layer(embed_dim)
- else:
- self.norm = None
-
- def forward(self, x):
- B, C, H, W = x.shape
- # FIXME look at relaxing size constraints
- # assert H == self.img_size[0] and W == self.img_size[1],
- # f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."
- x = self.proj(x).flatten(2).transpose(1, 2) # B Ph*Pw C
- if self.norm is not None:
- x = self.norm(x)
- return x
-
- def flops(self):
- Ho, Wo = self.patches_resolution
- flops = Ho * Wo * self.embed_dim * self.in_chans * (self.patch_size[0] * self.patch_size[1])
- if self.norm is not None:
- flops += Ho * Wo * self.embed_dim
- return flops
-
-class RSTB(nn.Module):
- """Residual Swin Transformer Block (RSTB).
-
- Args:
- dim (int): Number of input channels.
- input_resolution (tuple[int]): Input resolution.
- depth (int): Number of blocks.
- num_heads (int): Number of attention heads.
- window_size (int): Local window size.
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
- qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
- drop (float, optional): Dropout rate. Default: 0.0
- attn_drop (float, optional): Attention dropout rate. Default: 0.0
- drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
- norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
- downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
- use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
- img_size: Input image size.
- patch_size: Patch size.
- resi_connection: The convolutional block before residual connection.
- """
-
- def __init__(self, dim, input_resolution, depth, num_heads, window_size,
- mlp_ratio=4., qkv_bias=True, drop=0., attn_drop=0.,
- drop_path=0., norm_layer=nn.LayerNorm, downsample=None, use_checkpoint=False,
- img_size=224, patch_size=4, resi_connection='1conv'):
- super(RSTB, self).__init__()
-
- self.dim = dim
- self.input_resolution = input_resolution
-
- self.residual_group = BasicLayer(dim=dim,
- input_resolution=input_resolution,
- depth=depth,
- num_heads=num_heads,
- window_size=window_size,
- mlp_ratio=mlp_ratio,
- qkv_bias=qkv_bias,
- drop=drop, attn_drop=attn_drop,
- drop_path=drop_path,
- norm_layer=norm_layer,
- downsample=downsample,
- use_checkpoint=use_checkpoint)
-
- if resi_connection == '1conv':
- self.conv = nn.Conv2d(dim, dim, 3, 1, 1)
- elif resi_connection == '3conv':
- # to save parameters and memory
- self.conv = nn.Sequential(nn.Conv2d(dim, dim // 4, 3, 1, 1), nn.LeakyReLU(negative_slope=0.2, inplace=True),
- nn.Conv2d(dim // 4, dim // 4, 1, 1, 0),
- nn.LeakyReLU(negative_slope=0.2, inplace=True),
- nn.Conv2d(dim // 4, dim, 3, 1, 1))
-
- self.patch_embed = PatchEmbed(
- img_size=img_size, patch_size=patch_size, in_chans=dim, embed_dim=dim,
- norm_layer=None)
-
- self.patch_unembed = PatchUnEmbed(
- img_size=img_size, patch_size=patch_size, in_chans=dim, embed_dim=dim,
- norm_layer=None)
-
- def forward(self, x, x_size):
- return self.patch_embed(self.conv(self.patch_unembed(self.residual_group(x, x_size), x_size))) + x
-
- def flops(self):
- flops = 0
- flops += self.residual_group.flops()
- H, W = self.input_resolution
- flops += H * W * self.dim * self.dim * 9
- flops += self.patch_embed.flops()
- flops += self.patch_unembed.flops()
-
- return flops
-
-class PatchUnEmbed(nn.Module):
- r""" Image to Patch Unembedding
-
- Args:
- img_size (int): Image size. Default: 224.
- patch_size (int): Patch token size. Default: 4.
- in_chans (int): Number of input image channels. Default: 3.
- embed_dim (int): Number of linear projection output channels. Default: 96.
- norm_layer (nn.Module, optional): Normalization layer. Default: None
- """
-
- def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
- super().__init__()
- img_size = to_2tuple(img_size)
- patch_size = to_2tuple(patch_size)
- patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]]
- self.img_size = img_size
- self.patch_size = patch_size
- self.patches_resolution = patches_resolution
- self.num_patches = patches_resolution[0] * patches_resolution[1]
-
- self.in_chans = in_chans
- self.embed_dim = embed_dim
-
- def forward(self, x, x_size):
- B, HW, C = x.shape
- x = x.transpose(1, 2).view(B, self.embed_dim, x_size[0], x_size[1]) # B Ph*Pw C
- return x
-
- def flops(self):
- flops = 0
- return flops
-
-
-class Upsample(nn.Sequential):
- """Upsample module.
-
- Args:
- scale (int): Scale factor. Supported scales: 2^n and 3.
- num_feat (int): Channel number of intermediate features.
- """
-
- def __init__(self, scale, num_feat):
- m = []
- if (scale & (scale - 1)) == 0: # scale = 2^n
- for _ in range(int(math.log(scale, 2))):
- m.append(nn.Conv2d(num_feat, 4 * num_feat, 3, 1, 1))
- m.append(nn.PixelShuffle(2))
- elif scale == 3:
- m.append(nn.Conv2d(num_feat, 9 * num_feat, 3, 1, 1))
- m.append(nn.PixelShuffle(3))
- else:
- raise ValueError(f'scale {scale} is not supported. ' 'Supported scales: 2^n and 3.')
- super(Upsample, self).__init__(*m)
-
-class Upsample_hf(nn.Sequential):
- """Upsample module.
-
- Args:
- scale (int): Scale factor. Supported scales: 2^n and 3.
- num_feat (int): Channel number of intermediate features.
- """
-
- def __init__(self, scale, num_feat):
- m = []
- if (scale & (scale - 1)) == 0: # scale = 2^n
- for _ in range(int(math.log(scale, 2))):
- m.append(nn.Conv2d(num_feat, 4 * num_feat, 3, 1, 1))
- m.append(nn.PixelShuffle(2))
- elif scale == 3:
- m.append(nn.Conv2d(num_feat, 9 * num_feat, 3, 1, 1))
- m.append(nn.PixelShuffle(3))
- else:
- raise ValueError(f'scale {scale} is not supported. ' 'Supported scales: 2^n and 3.')
- super(Upsample_hf, self).__init__(*m)
-
-
-class UpsampleOneStep(nn.Sequential):
- """UpsampleOneStep module (the difference with Upsample is that it always only has 1conv + 1pixelshuffle)
- Used in lightweight SR to save parameters.
-
- Args:
- scale (int): Scale factor. Supported scales: 2^n and 3.
- num_feat (int): Channel number of intermediate features.
-
- """
-
- def __init__(self, scale, num_feat, num_out_ch, input_resolution=None):
- self.num_feat = num_feat
- self.input_resolution = input_resolution
- m = []
- m.append(nn.Conv2d(num_feat, (scale ** 2) * num_out_ch, 3, 1, 1))
- m.append(nn.PixelShuffle(scale))
- super(UpsampleOneStep, self).__init__(*m)
-
- def flops(self):
- H, W = self.input_resolution
- flops = H * W * self.num_feat * 3 * 9
- return flops
-
-
-
-class Swin2SR(nn.Module):
- r""" Swin2SR
- A PyTorch impl of : `Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration`.
-
- Args:
- img_size (int | tuple(int)): Input image size. Default 64
- patch_size (int | tuple(int)): Patch size. Default: 1
- in_chans (int): Number of input image channels. Default: 3
- embed_dim (int): Patch embedding dimension. Default: 96
- depths (tuple(int)): Depth of each Swin Transformer layer.
- num_heads (tuple(int)): Number of attention heads in different layers.
- window_size (int): Window size. Default: 7
- mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4
- qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
- drop_rate (float): Dropout rate. Default: 0
- attn_drop_rate (float): Attention dropout rate. Default: 0
- drop_path_rate (float): Stochastic depth rate. Default: 0.1
- norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
- ape (bool): If True, add absolute position embedding to the patch embedding. Default: False
- patch_norm (bool): If True, add normalization after patch embedding. Default: True
- use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False
- upscale: Upscale factor. 2/3/4/8 for image SR, 1 for denoising and compress artifact reduction
- img_range: Image range. 1. or 255.
-        upsampler: The reconstruction module. 'pixelshuffle'/'pixelshuffledirect'/'nearest+conv'/None
- resi_connection: The convolutional block before residual connection. '1conv'/'3conv'
- """
-
- def __init__(self, img_size=64, patch_size=1, in_chans=3,
- embed_dim=96, depths=[6, 6, 6, 6], num_heads=[6, 6, 6, 6],
- window_size=7, mlp_ratio=4., qkv_bias=True,
- drop_rate=0., attn_drop_rate=0., drop_path_rate=0.1,
- norm_layer=nn.LayerNorm, ape=False, patch_norm=True,
- use_checkpoint=False, upscale=2, img_range=1., upsampler='', resi_connection='1conv',
- **kwargs):
- super(Swin2SR, self).__init__()
- num_in_ch = in_chans
- num_out_ch = in_chans
- num_feat = 64
- self.img_range = img_range
- if in_chans == 3:
- rgb_mean = (0.4488, 0.4371, 0.4040)
- self.mean = torch.Tensor(rgb_mean).view(1, 3, 1, 1)
- else:
- self.mean = torch.zeros(1, 1, 1, 1)
- self.upscale = upscale
- self.upsampler = upsampler
- self.window_size = window_size
-
- #####################################################################################################
- ################################### 1, shallow feature extraction ###################################
- self.conv_first = nn.Conv2d(num_in_ch, embed_dim, 3, 1, 1)
-
- #####################################################################################################
- ################################### 2, deep feature extraction ######################################
- self.num_layers = len(depths)
- self.embed_dim = embed_dim
- self.ape = ape
- self.patch_norm = patch_norm
- self.num_features = embed_dim
- self.mlp_ratio = mlp_ratio
-
- # split image into non-overlapping patches
- self.patch_embed = PatchEmbed(
- img_size=img_size, patch_size=patch_size, in_chans=embed_dim, embed_dim=embed_dim,
- norm_layer=norm_layer if self.patch_norm else None)
- num_patches = self.patch_embed.num_patches
- patches_resolution = self.patch_embed.patches_resolution
- self.patches_resolution = patches_resolution
-
- # merge non-overlapping patches into image
- self.patch_unembed = PatchUnEmbed(
- img_size=img_size, patch_size=patch_size, in_chans=embed_dim, embed_dim=embed_dim,
- norm_layer=norm_layer if self.patch_norm else None)
-
- # absolute position embedding
- if self.ape:
- self.absolute_pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
- trunc_normal_(self.absolute_pos_embed, std=.02)
-
- self.pos_drop = nn.Dropout(p=drop_rate)
-
- # stochastic depth
- dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule
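-        # Drop-path probabilities increase linearly from 0 to drop_path_rate across all blocks.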
-
- # build Residual Swin Transformer blocks (RSTB)
- self.layers = nn.ModuleList()
- for i_layer in range(self.num_layers):
- layer = RSTB(dim=embed_dim,
- input_resolution=(patches_resolution[0],
- patches_resolution[1]),
- depth=depths[i_layer],
- num_heads=num_heads[i_layer],
- window_size=window_size,
- mlp_ratio=self.mlp_ratio,
- qkv_bias=qkv_bias,
- drop=drop_rate, attn_drop=attn_drop_rate,
- drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])], # no impact on SR results
- norm_layer=norm_layer,
- downsample=None,
- use_checkpoint=use_checkpoint,
- img_size=img_size,
- patch_size=patch_size,
- resi_connection=resi_connection
-
- )
- self.layers.append(layer)
-
- if self.upsampler == 'pixelshuffle_hf':
- self.layers_hf = nn.ModuleList()
- for i_layer in range(self.num_layers):
- layer = RSTB(dim=embed_dim,
- input_resolution=(patches_resolution[0],
- patches_resolution[1]),
- depth=depths[i_layer],
- num_heads=num_heads[i_layer],
- window_size=window_size,
- mlp_ratio=self.mlp_ratio,
- qkv_bias=qkv_bias,
- drop=drop_rate, attn_drop=attn_drop_rate,
- drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])], # no impact on SR results
- norm_layer=norm_layer,
- downsample=None,
- use_checkpoint=use_checkpoint,
- img_size=img_size,
- patch_size=patch_size,
- resi_connection=resi_connection
-
- )
- self.layers_hf.append(layer)
-
- self.norm = norm_layer(self.num_features)
-
- # build the last conv layer in deep feature extraction
- if resi_connection == '1conv':
- self.conv_after_body = nn.Conv2d(embed_dim, embed_dim, 3, 1, 1)
- elif resi_connection == '3conv':
- # to save parameters and memory
- self.conv_after_body = nn.Sequential(nn.Conv2d(embed_dim, embed_dim // 4, 3, 1, 1),
- nn.LeakyReLU(negative_slope=0.2, inplace=True),
- nn.Conv2d(embed_dim // 4, embed_dim // 4, 1, 1, 0),
- nn.LeakyReLU(negative_slope=0.2, inplace=True),
- nn.Conv2d(embed_dim // 4, embed_dim, 3, 1, 1))
-
- #####################################################################################################
- ################################ 3, high quality image reconstruction ################################
- if self.upsampler == 'pixelshuffle':
- # for classical SR
- self.conv_before_upsample = nn.Sequential(nn.Conv2d(embed_dim, num_feat, 3, 1, 1),
- nn.LeakyReLU(inplace=True))
- self.upsample = Upsample(upscale, num_feat)
- self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1)
- elif self.upsampler == 'pixelshuffle_aux':
- self.conv_bicubic = nn.Conv2d(num_in_ch, num_feat, 3, 1, 1)
- self.conv_before_upsample = nn.Sequential(
- nn.Conv2d(embed_dim, num_feat, 3, 1, 1),
- nn.LeakyReLU(inplace=True))
- self.conv_aux = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1)
- self.conv_after_aux = nn.Sequential(
- nn.Conv2d(3, num_feat, 3, 1, 1),
- nn.LeakyReLU(inplace=True))
- self.upsample = Upsample(upscale, num_feat)
- self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1)
-
- elif self.upsampler == 'pixelshuffle_hf':
- self.conv_before_upsample = nn.Sequential(nn.Conv2d(embed_dim, num_feat, 3, 1, 1),
- nn.LeakyReLU(inplace=True))
- self.upsample = Upsample(upscale, num_feat)
- self.upsample_hf = Upsample_hf(upscale, num_feat)
- self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1)
- self.conv_first_hf = nn.Sequential(nn.Conv2d(num_feat, embed_dim, 3, 1, 1),
- nn.LeakyReLU(inplace=True))
- self.conv_after_body_hf = nn.Conv2d(embed_dim, embed_dim, 3, 1, 1)
- self.conv_before_upsample_hf = nn.Sequential(
- nn.Conv2d(embed_dim, num_feat, 3, 1, 1),
- nn.LeakyReLU(inplace=True))
- self.conv_last_hf = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1)
-
- elif self.upsampler == 'pixelshuffledirect':
- # for lightweight SR (to save parameters)
- self.upsample = UpsampleOneStep(upscale, embed_dim, num_out_ch,
- (patches_resolution[0], patches_resolution[1]))
- elif self.upsampler == 'nearest+conv':
- # for real-world SR (less artifacts)
- assert self.upscale == 4, 'only support x4 now.'
- self.conv_before_upsample = nn.Sequential(nn.Conv2d(embed_dim, num_feat, 3, 1, 1),
- nn.LeakyReLU(inplace=True))
- self.conv_up1 = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
- self.conv_up2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
- self.conv_hr = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
- self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1)
- self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
- else:
- # for image denoising and JPEG compression artifact reduction
- self.conv_last = nn.Conv2d(embed_dim, num_out_ch, 3, 1, 1)
-
- self.apply(self._init_weights)
-
- def _init_weights(self, m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.LayerNorm):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
-
- @torch.jit.ignore
- def no_weight_decay(self):
- return {'absolute_pos_embed'}
-
- @torch.jit.ignore
- def no_weight_decay_keywords(self):
- return {'relative_position_bias_table'}
-
- def check_image_size(self, x):
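-        # Reflect-pad H and W up to the next multiple of window_size so the image tiles exactly into windows.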
- _, _, h, w = x.size()
- mod_pad_h = (self.window_size - h % self.window_size) % self.window_size
- mod_pad_w = (self.window_size - w % self.window_size) % self.window_size
- x = F.pad(x, (0, mod_pad_w, 0, mod_pad_h), 'reflect')
- return x
-
- def forward_features(self, x):
- x_size = (x.shape[2], x.shape[3])
- x = self.patch_embed(x)
- if self.ape:
- x = x + self.absolute_pos_embed
- x = self.pos_drop(x)
-
- for layer in self.layers:
- x = layer(x, x_size)
-
- x = self.norm(x) # B L C
- x = self.patch_unembed(x, x_size)
-
- return x
-
- def forward_features_hf(self, x):
- x_size = (x.shape[2], x.shape[3])
- x = self.patch_embed(x)
- if self.ape:
- x = x + self.absolute_pos_embed
- x = self.pos_drop(x)
-
- for layer in self.layers_hf:
- x = layer(x, x_size)
-
- x = self.norm(x) # B L C
- x = self.patch_unembed(x, x_size)
-
- return x
-
- def forward(self, x):
- H, W = x.shape[2:]
- x = self.check_image_size(x)
-
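-        # Normalize the input: subtract the dataset RGB mean and scale by img_range; this is undone at the end of forward.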
- self.mean = self.mean.type_as(x)
- x = (x - self.mean) * self.img_range
-
- if self.upsampler == 'pixelshuffle':
- # for classical SR
- x = self.conv_first(x)
- x = self.conv_after_body(self.forward_features(x)) + x
- x = self.conv_before_upsample(x)
- x = self.conv_last(self.upsample(x))
- elif self.upsampler == 'pixelshuffle_aux':
- bicubic = F.interpolate(x, size=(H * self.upscale, W * self.upscale), mode='bicubic', align_corners=False)
- bicubic = self.conv_bicubic(bicubic)
- x = self.conv_first(x)
- x = self.conv_after_body(self.forward_features(x)) + x
- x = self.conv_before_upsample(x)
- aux = self.conv_aux(x) # b, 3, LR_H, LR_W
- x = self.conv_after_aux(aux)
- x = self.upsample(x)[:, :, :H * self.upscale, :W * self.upscale] + bicubic[:, :, :H * self.upscale, :W * self.upscale]
- x = self.conv_last(x)
- aux = aux / self.img_range + self.mean
- elif self.upsampler == 'pixelshuffle_hf':
- # for classical SR with HF
- x = self.conv_first(x)
- x = self.conv_after_body(self.forward_features(x)) + x
- x_before = self.conv_before_upsample(x)
- x_out = self.conv_last(self.upsample(x_before))
-
- x_hf = self.conv_first_hf(x_before)
- x_hf = self.conv_after_body_hf(self.forward_features_hf(x_hf)) + x_hf
- x_hf = self.conv_before_upsample_hf(x_hf)
- x_hf = self.conv_last_hf(self.upsample_hf(x_hf))
- x = x_out + x_hf
- x_hf = x_hf / self.img_range + self.mean
-
- elif self.upsampler == 'pixelshuffledirect':
- # for lightweight SR
- x = self.conv_first(x)
- x = self.conv_after_body(self.forward_features(x)) + x
- x = self.upsample(x)
- elif self.upsampler == 'nearest+conv':
- # for real-world SR
- x = self.conv_first(x)
- x = self.conv_after_body(self.forward_features(x)) + x
- x = self.conv_before_upsample(x)
- x = self.lrelu(self.conv_up1(torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest')))
- x = self.lrelu(self.conv_up2(torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest')))
- x = self.conv_last(self.lrelu(self.conv_hr(x)))
- else:
- # for image denoising and JPEG compression artifact reduction
- x_first = self.conv_first(x)
- res = self.conv_after_body(self.forward_features(x_first)) + x_first
- x = x + self.conv_last(res)
-
- x = x / self.img_range + self.mean
- if self.upsampler == "pixelshuffle_aux":
- return x[:, :, :H*self.upscale, :W*self.upscale], aux
-
- elif self.upsampler == "pixelshuffle_hf":
- x_out = x_out / self.img_range + self.mean
- return x_out[:, :, :H*self.upscale, :W*self.upscale], x[:, :, :H*self.upscale, :W*self.upscale], x_hf[:, :, :H*self.upscale, :W*self.upscale]
-
- else:
- return x[:, :, :H*self.upscale, :W*self.upscale]
-
- def flops(self):
- flops = 0
- H, W = self.patches_resolution
- flops += H * W * 3 * self.embed_dim * 9
- flops += self.patch_embed.flops()
- for i, layer in enumerate(self.layers):
- flops += layer.flops()
- flops += H * W * 3 * self.embed_dim * self.embed_dim
- flops += self.upsample.flops()
- return flops
-
-
-if __name__ == '__main__':
- upscale = 4
- window_size = 8
- height = (1024 // upscale // window_size + 1) * window_size
- width = (720 // upscale // window_size + 1) * window_size
- model = Swin2SR(upscale=2, img_size=(height, width),
- window_size=window_size, img_range=1., depths=[6, 6, 6, 6],
- embed_dim=60, num_heads=[6, 6, 6, 6], mlp_ratio=2, upsampler='pixelshuffledirect')
- print(model)
- print(height, width, model.flops() / 1e9)
-
- x = torch.randn((1, 3, height, width))
- x = model(x)
- print(x.shape)
\ No newline at end of file
diff --git a/spaces/jackli888/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/rich.py b/spaces/jackli888/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/rich.py
deleted file mode 100644
index 745d8c8bc41116fe1ead73e18569c075a03450e1..0000000000000000000000000000000000000000
--- a/spaces/jackli888/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/rich.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from rich.console import Console
-console = Console()
\ No newline at end of file
diff --git a/spaces/jackli888/stable-diffusion-webui/modules/extras.py b/spaces/jackli888/stable-diffusion-webui/modules/extras.py
deleted file mode 100644
index 6a9af2d8e641fdf1ebd29045078d29b5aeae3d6f..0000000000000000000000000000000000000000
--- a/spaces/jackli888/stable-diffusion-webui/modules/extras.py
+++ /dev/null
@@ -1,258 +0,0 @@
-import os
-import re
-import shutil
-
-
-import torch
-import tqdm
-
-from modules import shared, images, sd_models, sd_vae, sd_models_config
-from modules.ui_common import plaintext_to_html
-import gradio as gr
-import safetensors.torch
-
-
-def run_pnginfo(image):
- if image is None:
- return '', '', ''
-
- geninfo, items = images.read_info_from_image(image)
- items = {**{'parameters': geninfo}, **items}
-
- info = ''
- for key, text in items.items():
- info += f"""
-
-Since the other platforms are currently being blocked by New Bing, you will run into many problems with them, so they are no longer recommended; look into them yourself if you need to.
-
-
-#### Deploy to Netlify
-[](https://app.netlify.com/start/deploy?repository=https://github.com/weaigc/bingo)
-
-#### Deploy to Vercel
-If you are a paid Vercel user, you can use the link below for one-click deployment to Vercel. The free tier has an [API timeout limit](https://vercel.com/docs/concepts/limits/overview) and is not recommended.
-
-[](https://vercel.com/new/clone?demo-title=bingo&demo-description=bingo&demo-url=https%3A%2F%2Fbing.github1s.tk%2F&project-name=bingo&repository-name=bingo&repository-url=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo&from=templates&skippable-integrations=1&env=BING_HEADER&envDescription=%E5%A6%82%E6%9E%9C%E4%B8%8D%E7%9F%A5%E9%81%93%E6%80%8E%E4%B9%88%E9%85%8D%E7%BD%AE%E8%AF%B7%E7%82%B9%E5%8F%B3%E4%BE%A7Learn+More&envLink=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo%2Fblob%2Fmain%2F.env.example)
-
-#### Deploy to Render
-
-[](https://render.com/deploy?repo=https://github.com/weaigc/bingo)
-Plain format / the format saved from the web page (for reference only)
-
-```
-curl 'https://www.bing.com/turing/captcha/challenge' \
- -H 'authority: www.bing.com' \
- -H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7' \
- -H 'accept-language: zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6' \
- -H 'cache-control: max-age=0' \
- -H 'cookie: MicrosoftApplicationsTelemetryDeviceId=3399c004-fd0e-48ec-bb92-d82a27b2bbd4; _EDGE_V=1; SRCHD=AF=NOFORM; SRCHUID=V=2&GUID=29EBDDA4E6674329ACCF1A0A423C3E98&dmnchg=1; _UR=QS=0&TQS=0; _HPVN=CS=eyJQbiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiUCJ9LCJTYyI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiSCJ9LCJReiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiVCJ9LCJBcCI6dHJ1ZSwiTXV0ZSI6dHJ1ZSwiTGFkIjoiMjAyMy0wNy0yNVQwMDowMDowMFoiLCJJb3RkIjowLCJHd2IiOjAsIkRmdCI6bnVsbCwiTXZzIjowLCJGbHQiOjAsIkltcCI6Mn0=; _RwBf=ilt=1&ihpd=1&ispd=0&rc=0&rb=0&gb=0&rg=200&pc=0&mtu=0&rbb=0&g=0&cid=&clo=0&v=1&l=2023-07-25T07:00:00.0000000Z&lft=0001-01-01T00:00:00.0000000&aof=0&o=2&p=&c=&t=0&s=0001-01-01T00:00:00.0000000+00:00&ts=2023-07-25T11:00:31.7111548+00:00&rwred=0&wls=&lka=0&lkt=0&TH=&dci=0; ANON=A=0043C6590EA808ED6E395059FFFFFFFF&E=1c8b&W=1; NAP=V=1.9&E=1c31&C=DnaMSbDN_4efZ_xXqBF3Daorjr53kYqYoaP8YHsupjmiXnysX7a37A&W=1; PPLState=1; KievRPSSecAuth=FABSBBRaTOJILtFsMkpLVWSG6AN6C/svRwNmAAAEgAAACMGUA7EGVSjGEAQBGHtNsc5sNL7unmJsfPJ2t6imfo4BeUJlAia3IpMTtMUy4PU/C5QAzRI5pODtsIee0+blgllXt/5IiWwGjwmdhivsFM597pRPkjARPfwsPhNLPNbJrCPNPHdje4Is78MnCADXw6/NBq2FL8V2/byw2fH6IuAMD2MvN/VvqpEa9ZxiDjZtENj4HEj0mO2SgzjfyEhVAkjvznJqU2rw/Q2tHmX94NAM2kzlzKF/hWPhCCUmu8IHLvCnHDS6mSptvJDDP/sp3ovtzOXkP1mlM/Xju5ftesUvccVEQGffXORa1dE5hEMbKIiKXz1tDdduSXE19g9/+mRMAjaQhpwhI8XmilCTx1adb1Ll5qK+VjC9GNfEZzcbsGBPVaOl+anG8rEMq+Xnhjo7J+NqTNolavHgcuV8kJsCeJZIged33UA8eOZeFo+wAECMguxMoSqgpGH+sthqynvD/FJD6r/tiU2N3uqVq8NE8V37asrN6T14Z0FGBJOe6ET1+PGApm3s11OY9/xhFEB9T5BEPUGEbvRcLcW2ncFQX0EU+xweiPqo1Q1hNUg/dCtSI+lZ7c2H8XheePZavZ0TJQ8oNCSAuKiTqJmI0fVGpwbXwfaADkEipuawz3fIuMJBNgMU0OtA7Hm59v2fGLIBuvi6YeKS6GgVk3BIPf+P/eKahwozrxQZaFnoHTSqMkvct7xCP4atBROfXKf5Ww0CcFKp+2WX9BIskTOo2jjk6bAyyYJ+ElUB1fgLKNk5m/YSMc9iYCLIBMIGN8F0Yvy3tZ7cvh7Ue5Klo98US/I+nW1G7ZJMHRgUO8h8lpneHqEMegKd8gynO4VF7RpCjJkunDmW0Ta+RkXAP619pg0dqHMFkoOgknN78oBbGTV6fJUKotv+vi61kLhAeXZGWoHGCRXh2wUC6YgfPgKA6ESRNHtFn7E5B3HHpLc5rVMDSNhKZYfdhupV4Ezf6+5DhMcZLZhi0kk+ivDiN1gdHlVtSN55xpvf+c+XZDzR0uhgcvgy0LAbmzgk6y4WbYH+LQsMpzNNj+aC72vMiWovWrKh9jY4MYCmdgxsS/skPtLdp18muiEIRXTbZQGUmhxFpJAIbBIsCscMpzL0BgeujxUwM5wr79Sd9r4xwbgSMwmBlBfUHRVBdNyg8feepeJbCS63nD6eHOuLqMRsPIio3w/ki/EAa92UUEiZeavLsMUD/y/qAvWUdzdP5Y+C/TM+CMGS/kGL4LEdY/28MQeTvU1qv1X21kQt2aiaj3pPVL36hAzxbcLgqcMo9oymDRy87kdCXW/+g4oKLtMh6fm/G6W6Y/B01JlxohyyvueHQIG557uzkEkTJ3FnOVODSKBKpb3WZ65rExfV71zSZa25F3GmpaIG6HiYrX2YYhQAkIE9pKEQBHbnwHuwNDGottZTXZw=; WLS=C=9df3f9d8518fae19&N=wen; WLID=pGY8HgWCu4p5XYCOk2oa0+DBdftkMUfmNIn8XtSjSTKsgv/Il7GUlYs0Jpjf/E12jZMgV7x44Dy3fXOgjjUoJx7Y/ClLrLhsk20THksJJoI=; _EDGE_S=F=1&SID=17CF6EE006426448213C7DB907436588&mkt=zh-CN; MUID=225621093D8A6C27301632413C0E6D08; MUIDB=225621093D8A6C27301632413C0E6D08; SUID=A; SNRHOP=I=&TS=; _U=nGyzKQruEsDwLiu65fZFIG6e12hf2lwTJmroW__k8joUJIKmG3OIjayXKGW9dCVR3sNhF76mEVxyW6yjUGPodOfjtSa3s3J_DxMOrEK1BqXCOBI9bC66spAIASV7prsYFlVAJz73jVNENp_tBubLHJy6EbT0BKRe4AjrYkH-9uMnmCKB8Zmyg; _SS=SID=17CF6EE006426448213C7DB907436588&R=0&RB=0&GB=0&RG=200&RP=0&PC=U531; SRCHS=PC=U531; USRLOC=HS=1&ELOC=LAT=22.501529693603516|LON=113.9263687133789|N=%E5%8D%97%E5%B1%B1%E5%8C%BA%EF%BC%8C%E5%B9%BF%E4%B8%9C%E7%9C%81|ELT=2|&CLOC=LAT=22.50153029046461|LON=113.92637070632928|A=733.4464586120832|TS=230726151034|SRC=W; SRCHUSR=DOB=20230725&T=1690384908000&POEX=W; ipv6=hit=1690388509974&t=6; SRCHHPGUSR=HV=1690384945&SRCHLANG=zh-Hans&PV=15.0.0&BRW=MW&BRH=MT&CW=410&CH=794&SCW=410&SCH=794&DPR=1.5&UTC=480&DM=0&WTS=63825879627&PRVCW=410&PRVCH=794&PR=1.5; cct=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpny6Y_CVyi_MSyM94VyMWnjdYkkccVtm3czoIAtXUGQA; 
GC=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpR3Y_D9Ytcks4Ht6XhadXk75dvhzP4YOUS0UmoEyqyxw' \
- -H 'dnt: 1' \
- -H 'sec-ch-ua: "Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"' \
- -H 'sec-ch-ua-arch: "x86"' \
- -H 'sec-ch-ua-bitness: "64"' \
- -H 'sec-ch-ua-full-version: "116.0.1938.29"' \
- -H 'sec-ch-ua-full-version-list: "Chromium";v="116.0.5845.42", "Not)A;Brand";v="24.0.0.0", "Microsoft Edge";v="116.0.1938.29"' \
- -H 'sec-ch-ua-mobile: ?0' \
- -H 'sec-ch-ua-model: ""' \
- -H 'sec-ch-ua-platform: "Windows"' \
- -H 'sec-ch-ua-platform-version: "15.0.0"' \
- -H 'sec-fetch-dest: document' \
- -H 'sec-fetch-mode: navigate' \
- -H 'sec-fetch-site: none' \
- -H 'sec-fetch-user: ?1' \
- -H 'sec-ms-gec: B3F47AD4A283CAB374C0451C46AAFD147C6A4DACAFF6A1C13F34B2C72B024494' \
- -H 'sec-ms-gec-version: 1-116.0.1938.29' \
- -H 'upgrade-insecure-requests: 1' \
- -H 'user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.0.0' \
- -H 'x-client-data: eyIxIjoiMiIsIjEwIjoiXCJTMGg3R05HOTF2aDQ1TUZSUnZ5NHN2akRmMWdlaVJKenNxNlA3aU1WbnF3PVwiIiwiMiI6IjEiLCIzIjoiMSIsIjQiOiIyMTU4ODQ5NTM4MjY4OTM5NTA3IiwiNSI6IlwiSm9GUWpPTDk3OS9MbkRRZnlCd2N1M2FsOUN3eTZTQmdaMGNYMXBtOWVMZz1cIiIsIjYiOiJiZXRhIiwiNyI6IjE4MDM4ODYyNjQzNSIsIjkiOiJkZXNrdG9wIn0=' \
- -H 'x-edge-shopping-flag: 1' \
- --compressed
-```
-The base64-encoded form (BING_HEADER only accepts the base64-encoded form)
-
-```
-[base64 blob elided: it decodes to a curl request to https://www.bing.com/turing/conversation/create carrying Bing session cookies and Microsoft Edge client-hint headers]
-```
-acid music studio 7.0 serial keygen codes
-
-cracker Activation Code:- 10372602/10372603/10372604/10372605/10372606/10372607/10372608/10372609/10372610/10372611/10372612/10372613/10372614/10372615/10372616/10372617/10372618/10372619/10372620/10372621/10372622/10372623/10372624/10372625/10372626/10372627/10372628/10372629/10372630/10372631/10372632/10372633/10372634/10372635/10372636/10372637/10372638/10372639/10372640/10372641/10372642/10372643/10372644/10372645/10372646/10372647/10372648/10372649/10372650/10372651/10372652/10372653/10372654/10372655/10372656/10372657/10372658/10372659/10372660/10372661/10372662/10372663/10372664/10372665/10372666/10372667/10372668/10372669/10372670/10372671/10372672/10372673/10372674/10372675/10372676/10372677/10372678/10372679/10372680/10372681/10372682/10372683/10372684/10372685/10372686/10372687/10372688/10372689/10372690/10372691/10372692/10372693/10372694/10372695/10372696/10372697/10372698/10372699/10372700/10372701/10372702/10 4fefd39f24
-
-
-
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/CorelDraw X4 [torrent By R0bY90] Working TOP Keygen.md b/spaces/lincquiQcaudo/Top-20-Diffusion/CorelDraw X4 [torrent By R0bY90] Working TOP Keygen.md
deleted file mode 100644
index 1ba4cde7aa51f83f4826b94dd8c3df7b2c8344fa..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/CorelDraw X4 [torrent By R0bY90] Working TOP Keygen.md
+++ /dev/null
@@ -1,6 +0,0 @@
-CorelDraw X4 [torrent by R0bY90] Working Keygen
-
-A CorelDRAW X4 keygen RAR often looks like an ordinary archive, ... Use all the ... CorelDraw X4 [torrent By R0bY90] Working Keygen … CorelDraw X4 [torrent ... 1fdad05405
-
-
-
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Counter-Strike Global Offensive V1.33.1.0 AutoUpdate Download For Computer !LINK!.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Counter-Strike Global Offensive V1.33.1.0 AutoUpdate Download For Computer !LINK!.md
deleted file mode 100644
index 9b7d0408a573ab2788384692b45d7329b25f25c7..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Counter-Strike Global Offensive V1.33.1.0 AutoUpdate Download For Computer !LINK!.md
+++ /dev/null
@@ -1,45 +0,0 @@
-
-How to Download and Install Counter-Strike: Global Offensive V1.33.1.0 AutoUpdate for PC
-
-Counter-Strike Global Offensive V1.33.1.0 AutoUpdate Download For Computer
-
-Method 1: Torrent
-
-
-
-
-Method 2: Launcher
-
-
-
-
-Conclusion
-
-
-
-
\ No newline at end of file
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Download The Odyssey Full Movie 1997 176 !!BETTER!!.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Download The Odyssey Full Movie 1997 176 !!BETTER!!.md
deleted file mode 100644
index 7cd1135579c1fd4e2d5ecb6d7a27804382f40d7a..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Download The Odyssey Full Movie 1997 176 !!BETTER!!.md
+++ /dev/null
@@ -1,20 +0,0 @@
-
-How to Download The Odyssey Full Movie 1997 176
-download the odyssey full movie 1997 176
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Khelar Putul Bangla Movie Download.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Khelar Putul Bangla Movie Download.md
deleted file mode 100644
index ccc061ee4d5a52dbf7258740434733daa2c16753..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Khelar Putul Bangla Movie Download.md
+++ /dev/null
@@ -1,6 +0,0 @@
-khelar putul bangla movie download
-
-Khelar Putul is a Bengali-language film directed by Tarun Majumder, starring Sandhya Roy and Satya Banerjee, and released on July 02, 1981. The plot revolves around two friends: Narayan, a young doctor, and Vikram, a journalist. Vikram, an aspiring newspaper reporter, asks Narayan for help publishing an exposé on the criminal activities of local politicians, but political rivals block the story. Narayan then comes to his friend's aid, telling Vikram the story of how he once abandoned his girlfriend Rati. 8a78ff9644
-
-
-
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Microsoft Flight Simulator Download Free Full Version.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Microsoft Flight Simulator Download Free Full Version.md
deleted file mode 100644
index ba3c63cc1c81be416441fad65365ee22c2af06b4..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Microsoft Flight Simulator Download Free Full Version.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Microsoft Flight Simulator Download Free Full Version
-
-Start adding free planes, skins, scenery, and other mods to Microsoft Flight Simulator. ... Here are the best mods for Microsoft Flight Simulator, and below we'll ... There aren't a lot of different liveries in the standard version of the game, ... Download the mod you want, and extract it to that Community folder. 1fdad05405
-
-
-
diff --git a/spaces/lindeberg/whisper-webui/src/source.py b/spaces/lindeberg/whisper-webui/src/source.py
deleted file mode 100644
index a612bc92d0f60f9b89bbb01ec651e9dcd9146a03..0000000000000000000000000000000000000000
--- a/spaces/lindeberg/whisper-webui/src/source.py
+++ /dev/null
@@ -1,70 +0,0 @@
-# Gradio seems to truncate files without keeping the extension, so we need to truncate the file prefix ourselves
-import pathlib
-from typing import List
-
-import ffmpeg
-
-from src.download import ExceededMaximumDuration, download_url
-
-MAX_FILE_PREFIX_LENGTH = 17
-
-class AudioSource:
- def __init__(self, source_path, source_name = None):
- self.source_path = source_path
- self.source_name = source_name
-
- # Load source name if not provided
- if (self.source_name is None):
- file_path = pathlib.Path(self.source_path)
- self.source_name = file_path.name
-
- def get_full_name(self):
- return self.source_name
-
- def get_short_name(self, max_length: int = MAX_FILE_PREFIX_LENGTH):
- file_path = pathlib.Path(self.source_name)
- short_name = file_path.stem[:max_length] + file_path.suffix
-
- return short_name
-
- def __str__(self) -> str:
- return self.source_path
-
-class AudioSourceCollection:
- def __init__(self, sources: List[AudioSource]):
- self.sources = sources
-
- def __iter__(self):
- return iter(self.sources)
-
-def get_audio_source_collection(urlData: str, multipleFiles: List, microphoneData: str, input_audio_max_duration: float = -1) -> List[AudioSource]:
- output: List[AudioSource] = []
-
- if urlData:
- # Download from YouTube. This could also be a playlist or a channel.
- output.extend([ AudioSource(x) for x in download_url(urlData, input_audio_max_duration, playlistItems=None) ])
- else:
- # Add input files
- if (multipleFiles is not None):
- output.extend([ AudioSource(x.name) for x in multipleFiles ])
- if (microphoneData is not None):
- output.append(AudioSource(microphoneData))
-
- total_duration = 0
-
- # Calculate total audio length. We do this even if input_audio_max_duration
- # is disabled to ensure that all the audio files are valid.
- for source in output:
- audioDuration = ffmpeg.probe(source.source_path)["format"]["duration"]
- total_duration += float(audioDuration)
-
- # Ensure the total duration of the audio is not too long
- if input_audio_max_duration > 0:
- if float(total_duration) > input_audio_max_duration:
-            raise ExceededMaximumDuration(videoDuration=total_duration, maxDuration=input_audio_max_duration, message="Video(s) are too long")
-
- # Return a list of audio sources
- return output
\ No newline at end of file
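As a quick illustration of the deleted `src/source.py` helpers above, here is a minimal, hedged usage sketch; the file path is a placeholder invented for the example:

```python
# Minimal usage sketch for AudioSource; the path below is hypothetical.
from src.source import AudioSource

source = AudioSource("/tmp/a_very_long_recording_name_2023.mp3")
print(source.get_full_name())   # a_very_long_recording_name_2023.mp3
# get_short_name() keeps the extension but clips the stem to 17 characters
# (MAX_FILE_PREFIX_LENGTH), working around Gradio's prefix truncation:
print(source.get_short_name())  # a_very_long_recor.mp3

# get_audio_source_collection additionally probes each file with ffmpeg to
# enforce input_audio_max_duration, so it needs ffmpeg and real audio files.
```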
diff --git a/spaces/lingluoACE/bingbyd/Dockerfile b/spaces/lingluoACE/bingbyd/Dockerfile
deleted file mode 100644
index 139c333a3bba5ac3680d42b6f356824207f05255..0000000000000000000000000000000000000000
--- a/spaces/lingluoACE/bingbyd/Dockerfile
+++ /dev/null
@@ -1,33 +0,0 @@
-# Build Stage
-# Use golang:alpine as the base image for the build stage
-FROM golang:alpine AS builder
-
-# Install git, then remove it to keep the layer small
-RUN apk --no-cache add git && \
- git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app && \
- apk del git
-
-# Set the working directory
-WORKDIR /workspace/app
-
-# Build the Go project
-RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go
-
-# Runtime Stage
-# Use a lightweight alpine image for the runtime stage
-FROM alpine
-
-# Set the working directory
-WORKDIR /workspace/app
-
-# Copy the compiled binary from the build stage
-COPY --from=builder /workspace/app/go-proxy-bingai .
-
-# (Optional) Set environment variables
-ENV Go_Proxy_BingAI_USER_TOKEN_1="G4hJ9k544565uhjjhjlkjh6356223p3EaYc0FvIjHmLzXeRfAq"
-
-# Expose the port
-EXPOSE 8080
-
-# Run the container
-CMD ["/workspace/app/go-proxy-bingai"]
diff --git a/spaces/litagin/rvc_okiba_TTS/README.md b/spaces/litagin/rvc_okiba_TTS/README.md
deleted file mode 100644
index c4b696294bdc4f4c61dce83dee059f468ec71dad..0000000000000000000000000000000000000000
--- a/spaces/litagin/rvc_okiba_TTS/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: RVC okiba TTS
-emoji: 😊🎙️
-colorFrom: red
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.47.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/liuyuan-pal/SyncDreamer/ldm/modules/x_transformer.py b/spaces/liuyuan-pal/SyncDreamer/ldm/modules/x_transformer.py
deleted file mode 100644
index 5fc15bf9cfe0111a910e7de33d04ffdec3877576..0000000000000000000000000000000000000000
--- a/spaces/liuyuan-pal/SyncDreamer/ldm/modules/x_transformer.py
+++ /dev/null
@@ -1,641 +0,0 @@
-"""shout-out to https://github.com/lucidrains/x-transformers/tree/main/x_transformers"""
-import torch
-from torch import nn, einsum
-import torch.nn.functional as F
-from functools import partial
-from inspect import isfunction
-from collections import namedtuple
-from einops import rearrange, repeat, reduce
-
-# constants
-
-DEFAULT_DIM_HEAD = 64
-
-Intermediates = namedtuple('Intermediates', [
- 'pre_softmax_attn',
- 'post_softmax_attn'
-])
-
-LayerIntermediates = namedtuple('LayerIntermediates', [
- 'hiddens',
- 'attn_intermediates'
-])
-
-
-class AbsolutePositionalEmbedding(nn.Module):
- def __init__(self, dim, max_seq_len):
- super().__init__()
- self.emb = nn.Embedding(max_seq_len, dim)
- self.init_()
-
- def init_(self):
- nn.init.normal_(self.emb.weight, std=0.02)
-
- def forward(self, x):
- n = torch.arange(x.shape[1], device=x.device)
- return self.emb(n)[None, :, :]
-
-
-class FixedPositionalEmbedding(nn.Module):
- def __init__(self, dim):
- super().__init__()
- inv_freq = 1. / (10000 ** (torch.arange(0, dim, 2).float() / dim))
- self.register_buffer('inv_freq', inv_freq)
-
- def forward(self, x, seq_dim=1, offset=0):
- t = torch.arange(x.shape[seq_dim], device=x.device).type_as(self.inv_freq) + offset
- sinusoid_inp = torch.einsum('i , j -> i j', t, self.inv_freq)
- emb = torch.cat((sinusoid_inp.sin(), sinusoid_inp.cos()), dim=-1)
- return emb[None, :, :]
-
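Both embedding modules above return a `(1, seq_len, dim)` tensor that broadcasts over the batch. A small shape sketch, assuming the two classes above are in scope (this is not part of the original file):

```python
# Shape sketch for the two positional-embedding modules above.
import torch

dim, seq_len = 8, 5
x = torch.zeros(2, seq_len, dim)  # only x.shape[1] and x.device are used

print(AbsolutePositionalEmbedding(dim, max_seq_len=16)(x).shape)  # (1, 5, 8)
print(FixedPositionalEmbedding(dim)(x).shape)                     # (1, 5, 8)
```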
-
-# helpers
-
-def exists(val):
- return val is not None
-
-
-def default(val, d):
- if exists(val):
- return val
- return d() if isfunction(d) else d
-
-
-def always(val):
- def inner(*args, **kwargs):
- return val
- return inner
-
-
-def not_equals(val):
- def inner(x):
- return x != val
- return inner
-
-
-def equals(val):
- def inner(x):
- return x == val
- return inner
-
-
-def max_neg_value(tensor):
- return -torch.finfo(tensor.dtype).max
-
-
-# keyword argument helpers
-
-def pick_and_pop(keys, d):
- values = list(map(lambda key: d.pop(key), keys))
- return dict(zip(keys, values))
-
-
-def group_dict_by_key(cond, d):
- return_val = [dict(), dict()]
- for key in d.keys():
- match = bool(cond(key))
- ind = int(not match)
- return_val[ind][key] = d[key]
- return (*return_val,)
-
-
-def string_begins_with(prefix, s):
-    return s.startswith(prefix)
-
-
-def group_by_key_prefix(prefix, d):
- return group_dict_by_key(partial(string_begins_with, prefix), d)
-
-
-def groupby_prefix_and_trim(prefix, d):
- kwargs_with_prefix, kwargs = group_dict_by_key(partial(string_begins_with, prefix), d)
- kwargs_without_prefix = dict(map(lambda x: (x[0][len(prefix):], x[1]), tuple(kwargs_with_prefix.items())))
- return kwargs_without_prefix, kwargs
-
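These keyword-argument helpers let one constructor fan prefixed kwargs out to sub-modules, as `AttentionLayers` does below with `ff_` and `attn_`. A small sketch of `groupby_prefix_and_trim` with invented values:

```python
# Invented example: split prefixed kwargs and strip the prefix.
kwargs = {'attn_dropout': 0.1, 'ff_mult': 4, 'depth': 6}

ff_kwargs, rest = groupby_prefix_and_trim('ff_', kwargs)
print(ff_kwargs)    # {'mult': 4}
print(rest)         # {'attn_dropout': 0.1, 'depth': 6}

attn_kwargs, rest = groupby_prefix_and_trim('attn_', rest)
print(attn_kwargs)  # {'dropout': 0.1}
print(rest)         # {'depth': 6}
```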
-
-# classes
-class Scale(nn.Module):
- def __init__(self, value, fn):
- super().__init__()
- self.value = value
- self.fn = fn
-
- def forward(self, x, **kwargs):
- x, *rest = self.fn(x, **kwargs)
- return (x * self.value, *rest)
-
-
-class Rezero(nn.Module):
- def __init__(self, fn):
- super().__init__()
- self.fn = fn
- self.g = nn.Parameter(torch.zeros(1))
-
- def forward(self, x, **kwargs):
- x, *rest = self.fn(x, **kwargs)
- return (x * self.g, *rest)
-
-
-class ScaleNorm(nn.Module):
- def __init__(self, dim, eps=1e-5):
- super().__init__()
- self.scale = dim ** -0.5
- self.eps = eps
- self.g = nn.Parameter(torch.ones(1))
-
- def forward(self, x):
- norm = torch.norm(x, dim=-1, keepdim=True) * self.scale
- return x / norm.clamp(min=self.eps) * self.g
-
-
-class RMSNorm(nn.Module):
- def __init__(self, dim, eps=1e-8):
- super().__init__()
- self.scale = dim ** -0.5
- self.eps = eps
- self.g = nn.Parameter(torch.ones(dim))
-
- def forward(self, x):
- norm = torch.norm(x, dim=-1, keepdim=True) * self.scale
- return x / norm.clamp(min=self.eps) * self.g
-
-
-class Residual(nn.Module):
- def forward(self, x, residual):
- return x + residual
-
-
-class GRUGating(nn.Module):
- def __init__(self, dim):
- super().__init__()
- self.gru = nn.GRUCell(dim, dim)
-
- def forward(self, x, residual):
- gated_output = self.gru(
- rearrange(x, 'b n d -> (b n) d'),
- rearrange(residual, 'b n d -> (b n) d')
- )
-
- return gated_output.reshape_as(x)
-
-
-# feedforward
-
-class GEGLU(nn.Module):
- def __init__(self, dim_in, dim_out):
- super().__init__()
- self.proj = nn.Linear(dim_in, dim_out * 2)
-
- def forward(self, x):
- x, gate = self.proj(x).chunk(2, dim=-1)
- return x * F.gelu(gate)
-
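GEGLU projects to twice the output width, then gates one half with a GELU of the other, so the output keeps `dim_out` features. A quick shape sketch, assuming the class above is in scope:

```python
# Shape sketch for GEGLU: proj -> 2 * dim_out, then chunk and gate.
import torch

geglu = GEGLU(dim_in=16, dim_out=32)
x = torch.randn(2, 10, 16)
print(geglu(x).shape)  # torch.Size([2, 10, 32]) == x_half * F.gelu(gate_half)
```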
-
-class FeedForward(nn.Module):
- def __init__(self, dim, dim_out=None, mult=4, glu=False, dropout=0.):
- super().__init__()
- inner_dim = int(dim * mult)
- dim_out = default(dim_out, dim)
- project_in = nn.Sequential(
- nn.Linear(dim, inner_dim),
- nn.GELU()
- ) if not glu else GEGLU(dim, inner_dim)
-
- self.net = nn.Sequential(
- project_in,
- nn.Dropout(dropout),
- nn.Linear(inner_dim, dim_out)
- )
-
- def forward(self, x):
- return self.net(x)
-
-
-# attention.
-class Attention(nn.Module):
- def __init__(
- self,
- dim,
- dim_head=DEFAULT_DIM_HEAD,
- heads=8,
- causal=False,
- mask=None,
- talking_heads=False,
- sparse_topk=None,
- use_entmax15=False,
- num_mem_kv=0,
- dropout=0.,
- on_attn=False
- ):
- super().__init__()
- if use_entmax15:
- raise NotImplementedError("Check out entmax activation instead of softmax activation!")
- self.scale = dim_head ** -0.5
- self.heads = heads
- self.causal = causal
- self.mask = mask
-
- inner_dim = dim_head * heads
-
- self.to_q = nn.Linear(dim, inner_dim, bias=False)
- self.to_k = nn.Linear(dim, inner_dim, bias=False)
- self.to_v = nn.Linear(dim, inner_dim, bias=False)
- self.dropout = nn.Dropout(dropout)
-
- # talking heads
- self.talking_heads = talking_heads
- if talking_heads:
- self.pre_softmax_proj = nn.Parameter(torch.randn(heads, heads))
- self.post_softmax_proj = nn.Parameter(torch.randn(heads, heads))
-
- # explicit topk sparse attention
- self.sparse_topk = sparse_topk
-
- # entmax
- #self.attn_fn = entmax15 if use_entmax15 else F.softmax
- self.attn_fn = F.softmax
-
- # add memory key / values
- self.num_mem_kv = num_mem_kv
- if num_mem_kv > 0:
- self.mem_k = nn.Parameter(torch.randn(heads, num_mem_kv, dim_head))
- self.mem_v = nn.Parameter(torch.randn(heads, num_mem_kv, dim_head))
-
- # attention on attention
- self.attn_on_attn = on_attn
- self.to_out = nn.Sequential(nn.Linear(inner_dim, dim * 2), nn.GLU()) if on_attn else nn.Linear(inner_dim, dim)
-
- def forward(
- self,
- x,
- context=None,
- mask=None,
- context_mask=None,
- rel_pos=None,
- sinusoidal_emb=None,
- prev_attn=None,
- mem=None
- ):
- b, n, _, h, talking_heads, device = *x.shape, self.heads, self.talking_heads, x.device
- kv_input = default(context, x)
-
- q_input = x
- k_input = kv_input
- v_input = kv_input
-
- if exists(mem):
- k_input = torch.cat((mem, k_input), dim=-2)
- v_input = torch.cat((mem, v_input), dim=-2)
-
- if exists(sinusoidal_emb):
- # in shortformer, the query would start at a position offset depending on the past cached memory
- offset = k_input.shape[-2] - q_input.shape[-2]
- q_input = q_input + sinusoidal_emb(q_input, offset=offset)
- k_input = k_input + sinusoidal_emb(k_input)
-
- q = self.to_q(q_input)
- k = self.to_k(k_input)
- v = self.to_v(v_input)
-
- q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> b h n d', h=h), (q, k, v))
-
- input_mask = None
- if any(map(exists, (mask, context_mask))):
- q_mask = default(mask, lambda: torch.ones((b, n), device=device).bool())
- k_mask = q_mask if not exists(context) else context_mask
- k_mask = default(k_mask, lambda: torch.ones((b, k.shape[-2]), device=device).bool())
- q_mask = rearrange(q_mask, 'b i -> b () i ()')
- k_mask = rearrange(k_mask, 'b j -> b () () j')
- input_mask = q_mask * k_mask
-
- if self.num_mem_kv > 0:
- mem_k, mem_v = map(lambda t: repeat(t, 'h n d -> b h n d', b=b), (self.mem_k, self.mem_v))
- k = torch.cat((mem_k, k), dim=-2)
- v = torch.cat((mem_v, v), dim=-2)
- if exists(input_mask):
- input_mask = F.pad(input_mask, (self.num_mem_kv, 0), value=True)
-
- dots = einsum('b h i d, b h j d -> b h i j', q, k) * self.scale
- mask_value = max_neg_value(dots)
-
- if exists(prev_attn):
- dots = dots + prev_attn
-
- pre_softmax_attn = dots
-
- if talking_heads:
- dots = einsum('b h i j, h k -> b k i j', dots, self.pre_softmax_proj).contiguous()
-
- if exists(rel_pos):
- dots = rel_pos(dots)
-
- if exists(input_mask):
- dots.masked_fill_(~input_mask, mask_value)
- del input_mask
-
- if self.causal:
- i, j = dots.shape[-2:]
- r = torch.arange(i, device=device)
- mask = rearrange(r, 'i -> () () i ()') < rearrange(r, 'j -> () () () j')
- mask = F.pad(mask, (j - i, 0), value=False)
- dots.masked_fill_(mask, mask_value)
- del mask
-
- if exists(self.sparse_topk) and self.sparse_topk < dots.shape[-1]:
- top, _ = dots.topk(self.sparse_topk, dim=-1)
- vk = top[..., -1].unsqueeze(-1).expand_as(dots)
- mask = dots < vk
- dots.masked_fill_(mask, mask_value)
- del mask
-
- attn = self.attn_fn(dots, dim=-1)
- post_softmax_attn = attn
-
- attn = self.dropout(attn)
-
- if talking_heads:
- attn = einsum('b h i j, h k -> b k i j', attn, self.post_softmax_proj).contiguous()
-
- out = einsum('b h i j, b h j d -> b h i d', attn, v)
- out = rearrange(out, 'b h n d -> b n (h d)')
-
- intermediates = Intermediates(
- pre_softmax_attn=pre_softmax_attn,
- post_softmax_attn=post_softmax_attn
- )
-
- return self.to_out(out), intermediates
-
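A minimal self-attention call for the module above (dimensions are arbitrary, and this sketch assumes the class is in scope). `forward` returns the projected output together with the pre- and post-softmax attention maps:

```python
# Usage sketch for the Attention module above; sizes are arbitrary.
import torch

attn = Attention(dim=64, dim_head=16, heads=4, causal=True)
x = torch.randn(2, 12, 64)               # (batch, seq, dim)
out, inter = attn(x)                     # context defaults to x (self-attention)
print(out.shape)                         # torch.Size([2, 12, 64])
print(inter.pre_softmax_attn.shape)      # torch.Size([2, 4, 12, 12])
```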
-
-class AttentionLayers(nn.Module):
- def __init__(
- self,
- dim,
- depth,
- heads=8,
- causal=False,
- cross_attend=False,
- only_cross=False,
- use_scalenorm=False,
- use_rmsnorm=False,
- use_rezero=False,
- rel_pos_num_buckets=32,
- rel_pos_max_distance=128,
- position_infused_attn=False,
- custom_layers=None,
- sandwich_coef=None,
- par_ratio=None,
- residual_attn=False,
- cross_residual_attn=False,
- macaron=False,
- pre_norm=True,
- gate_residual=False,
- **kwargs
- ):
- super().__init__()
- ff_kwargs, kwargs = groupby_prefix_and_trim('ff_', kwargs)
- attn_kwargs, _ = groupby_prefix_and_trim('attn_', kwargs)
-
- dim_head = attn_kwargs.get('dim_head', DEFAULT_DIM_HEAD)
-
- self.dim = dim
- self.depth = depth
- self.layers = nn.ModuleList([])
-
- self.has_pos_emb = position_infused_attn
- self.pia_pos_emb = FixedPositionalEmbedding(dim) if position_infused_attn else None
- self.rotary_pos_emb = always(None)
-
-        assert rel_pos_num_buckets <= rel_pos_max_distance, 'number of relative position buckets must not exceed the relative position max distance'
- self.rel_pos = None
-
- self.pre_norm = pre_norm
-
- self.residual_attn = residual_attn
- self.cross_residual_attn = cross_residual_attn
-
- norm_class = ScaleNorm if use_scalenorm else nn.LayerNorm
- norm_class = RMSNorm if use_rmsnorm else norm_class
- norm_fn = partial(norm_class, dim)
-
- norm_fn = nn.Identity if use_rezero else norm_fn
- branch_fn = Rezero if use_rezero else None
-
- if cross_attend and not only_cross:
- default_block = ('a', 'c', 'f')
- elif cross_attend and only_cross:
- default_block = ('c', 'f')
- else:
- default_block = ('a', 'f')
-
- if macaron:
- default_block = ('f',) + default_block
-
- if exists(custom_layers):
- layer_types = custom_layers
- elif exists(par_ratio):
- par_depth = depth * len(default_block)
- assert 1 < par_ratio <= par_depth, 'par ratio out of range'
- default_block = tuple(filter(not_equals('f'), default_block))
- par_attn = par_depth // par_ratio
- depth_cut = par_depth * 2 // 3 # 2 / 3 attention layer cutoff suggested by PAR paper
- par_width = (depth_cut + depth_cut // par_attn) // par_attn
- assert len(default_block) <= par_width, 'default block is too large for par_ratio'
- par_block = default_block + ('f',) * (par_width - len(default_block))
- par_head = par_block * par_attn
- layer_types = par_head + ('f',) * (par_depth - len(par_head))
- elif exists(sandwich_coef):
-            assert sandwich_coef > 0 and sandwich_coef <= depth, 'sandwich coefficient should be positive and no greater than the depth'
- layer_types = ('a',) * sandwich_coef + default_block * (depth - sandwich_coef) + ('f',) * sandwich_coef
- else:
- layer_types = default_block * depth
-
- self.layer_types = layer_types
- self.num_attn_layers = len(list(filter(equals('a'), layer_types)))
-
- for layer_type in self.layer_types:
- if layer_type == 'a':
- layer = Attention(dim, heads=heads, causal=causal, **attn_kwargs)
- elif layer_type == 'c':
- layer = Attention(dim, heads=heads, **attn_kwargs)
- elif layer_type == 'f':
- layer = FeedForward(dim, **ff_kwargs)
- layer = layer if not macaron else Scale(0.5, layer)
- else:
-                raise ValueError(f'invalid layer type {layer_type}')
-
- if isinstance(layer, Attention) and exists(branch_fn):
- layer = branch_fn(layer)
-
- if gate_residual:
- residual_fn = GRUGating(dim)
- else:
- residual_fn = Residual()
-
- self.layers.append(nn.ModuleList([
- norm_fn(),
- layer,
- residual_fn
- ]))
-
- def forward(
- self,
- x,
- context=None,
- mask=None,
- context_mask=None,
- mems=None,
- return_hiddens=False
- ):
- hiddens = []
- intermediates = []
- prev_attn = None
- prev_cross_attn = None
-
- mems = mems.copy() if exists(mems) else [None] * self.num_attn_layers
-
- for ind, (layer_type, (norm, block, residual_fn)) in enumerate(zip(self.layer_types, self.layers)):
- is_last = ind == (len(self.layers) - 1)
-
- if layer_type == 'a':
- hiddens.append(x)
- layer_mem = mems.pop(0)
-
- residual = x
-
- if self.pre_norm:
- x = norm(x)
-
- if layer_type == 'a':
- out, inter = block(x, mask=mask, sinusoidal_emb=self.pia_pos_emb, rel_pos=self.rel_pos,
- prev_attn=prev_attn, mem=layer_mem)
- elif layer_type == 'c':
- out, inter = block(x, context=context, mask=mask, context_mask=context_mask, prev_attn=prev_cross_attn)
- elif layer_type == 'f':
- out = block(x)
-
- x = residual_fn(out, residual)
-
- if layer_type in ('a', 'c'):
- intermediates.append(inter)
-
- if layer_type == 'a' and self.residual_attn:
- prev_attn = inter.pre_softmax_attn
- elif layer_type == 'c' and self.cross_residual_attn:
- prev_cross_attn = inter.pre_softmax_attn
-
- if not self.pre_norm and not is_last:
- x = norm(x)
-
- if return_hiddens:
- intermediates = LayerIntermediates(
- hiddens=hiddens,
- attn_intermediates=intermediates
- )
-
- return x, intermediates
-
- return x
-
-
-class Encoder(AttentionLayers):
- def __init__(self, **kwargs):
- assert 'causal' not in kwargs, 'cannot set causality on encoder'
- super().__init__(causal=False, **kwargs)
-
-
-
-class TransformerWrapper(nn.Module):
- def __init__(
- self,
- *,
- num_tokens,
- max_seq_len,
- attn_layers,
- emb_dim=None,
-            max_mem_len=0,
- emb_dropout=0.,
- num_memory_tokens=None,
- tie_embedding=False,
- use_pos_emb=True
- ):
- super().__init__()
- assert isinstance(attn_layers, AttentionLayers), 'attention layers must be one of Encoder or Decoder'
-
- dim = attn_layers.dim
- emb_dim = default(emb_dim, dim)
-
- self.max_seq_len = max_seq_len
- self.max_mem_len = max_mem_len
- self.num_tokens = num_tokens
-
- self.token_emb = nn.Embedding(num_tokens, emb_dim)
- self.pos_emb = AbsolutePositionalEmbedding(emb_dim, max_seq_len) if (
- use_pos_emb and not attn_layers.has_pos_emb) else always(0)
- self.emb_dropout = nn.Dropout(emb_dropout)
-
- self.project_emb = nn.Linear(emb_dim, dim) if emb_dim != dim else nn.Identity()
- self.attn_layers = attn_layers
- self.norm = nn.LayerNorm(dim)
-
- self.init_()
-
- self.to_logits = nn.Linear(dim, num_tokens) if not tie_embedding else lambda t: t @ self.token_emb.weight.t()
-
- # memory tokens (like [cls]) from Memory Transformers paper
- num_memory_tokens = default(num_memory_tokens, 0)
- self.num_memory_tokens = num_memory_tokens
- if num_memory_tokens > 0:
- self.memory_tokens = nn.Parameter(torch.randn(num_memory_tokens, dim))
-
- # let funnel encoder know number of memory tokens, if specified
- if hasattr(attn_layers, 'num_memory_tokens'):
- attn_layers.num_memory_tokens = num_memory_tokens
-
- def init_(self):
- nn.init.normal_(self.token_emb.weight, std=0.02)
-
- def forward(
- self,
- x,
- return_embeddings=False,
- mask=None,
- return_mems=False,
- return_attn=False,
- mems=None,
- **kwargs
- ):
- b, n, device, num_mem = *x.shape, x.device, self.num_memory_tokens
- x = self.token_emb(x)
- x += self.pos_emb(x)
- x = self.emb_dropout(x)
-
- x = self.project_emb(x)
-
- if num_mem > 0:
- mem = repeat(self.memory_tokens, 'n d -> b n d', b=b)
- x = torch.cat((mem, x), dim=1)
-
- # auto-handle masking after appending memory tokens
- if exists(mask):
- mask = F.pad(mask, (num_mem, 0), value=True)
-
- x, intermediates = self.attn_layers(x, mask=mask, mems=mems, return_hiddens=True, **kwargs)
- x = self.norm(x)
-
- mem, x = x[:, :num_mem], x[:, num_mem:]
-
- out = self.to_logits(x) if not return_embeddings else x
-
- if return_mems:
- hiddens = intermediates.hiddens
- new_mems = list(map(lambda pair: torch.cat(pair, dim=-2), zip(mems, hiddens))) if exists(mems) else hiddens
- new_mems = list(map(lambda t: t[..., -self.max_mem_len:, :].detach(), new_mems))
- return out, new_mems
-
- if return_attn:
- attn_maps = list(map(lambda t: t.post_softmax_attn, intermediates.attn_intermediates))
- return out, attn_maps
-
- return out
-
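To tie the pieces together, a hedged end-to-end sketch (not part of the original file): an `Encoder` stack wrapped by `TransformerWrapper`, run on random token ids.

```python
# End-to-end sketch: encoder-only transformer over random token ids.
import torch

model = TransformerWrapper(
    num_tokens=1000,
    max_seq_len=128,
    attn_layers=Encoder(dim=64, depth=2, heads=4),
)
tokens = torch.randint(0, 1000, (2, 16))
logits = model(tokens)
print(logits.shape)  # torch.Size([2, 16, 1000])
```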
diff --git a/spaces/lotrlol/Spotify-Recommendation-System/model.py b/spaces/lotrlol/Spotify-Recommendation-System/model.py
deleted file mode 100644
index 649dbbba097d74e57339969c8e67b445b558100e..0000000000000000000000000000000000000000
--- a/spaces/lotrlol/Spotify-Recommendation-System/model.py
+++ /dev/null
@@ -1,481 +0,0 @@
-import pandas as pd
-import spotipy
-from spotipy.oauth2 import SpotifyOAuth, SpotifyClientCredentials
-import yaml
-import re
-from sklearn.feature_extraction.text import TfidfVectorizer
-from sklearn.metrics.pairwise import cosine_similarity
-from sklearn.preprocessing import MinMaxScaler
-import pickle
-
-def playlist_model(url, model, max_gen=3, same_art=5):
- log = []
- Fresult = []
- try:
- log.append('Start logging')
- uri = url.split('/')[-1].split('?')[0]
- stream = open("Spotify/Spotify.yaml")
- spotify_details = yaml.safe_load(stream)
- auth_manager = SpotifyClientCredentials(
- client_id=spotify_details['Client_id'], client_secret=spotify_details['client_secret'])
- sp = spotipy.client.Spotify(auth_manager=auth_manager)
-
- if model == 'Spotify Model':
- def get_IDs(user, playlist_id):
- try:
- log.append('start playlist extraction')
- track_ids = []
- playlist = sp.user_playlist(user, playlist_id)
- for item in playlist['tracks']['items']:
- track = item['track']
- track_ids.append(track['id'])
- return track_ids
- except Exception as e:
- log.append('Failed to load the playlist')
- log.append(e)
-
- track_ids = get_IDs('Ruby', uri)
- track_ids_uni = list(set(track_ids))
- log.append('Starting Spotify Model')
- Spotifyresult = pd.DataFrame()
- for i in range(len(track_ids_uni)-5):
- if len(Spotifyresult) >= 50:
- break
- try:
- ff = sp.recommendations(seed_tracks=list(track_ids_uni[i:i+5]), limit=5)
- except Exception as e:
- log.append(e)
- continue
- for z in range(5):
- result = pd.DataFrame([z+(5*i)+1])
- result['uri'] = ff['tracks'][z]['id']
- Spotifyresult = pd.concat([Spotifyresult, result], axis=0)
- Spotifyresult.drop_duplicates(subset=['uri'], inplace=True,keep='first')
- Fresult = Spotifyresult.uri[:50]
-
- log.append('Model run successfully')
- return Fresult, log
-
- lendf=len(pd.read_csv('Data/streamlit.csv',usecols=['track_uri']))
- dtypes = {'track_uri': 'object', 'artist_uri': 'object', 'album_uri': 'object', 'danceability': 'float16', 'energy': 'float16', 'key': 'float16',
- 'loudness': 'float16', 'mode': 'float16', 'speechiness': 'float16', 'acousticness': 'float16', 'instrumentalness': 'float16',
- 'liveness': 'float16', 'valence': 'float16', 'tempo': 'float16', 'duration_ms': 'float32', 'time_signature': 'float16',
- 'Track_release_date': 'int8', 'Track_pop': 'int8', 'Artist_pop': 'int8', 'Artist_genres': 'object'}
- col_name= ['track_uri', 'artist_uri', 'album_uri', 'danceability', 'energy', 'key',
- 'loudness', 'mode', 'speechiness', 'acousticness', 'instrumentalness',
- 'liveness', 'valence', 'tempo', 'duration_ms', 'time_signature',
- 'Track_release_date', 'Track_pop', 'Artist_pop', 'Artist_genres']
-
-        def get_IDs(user, playlist_id):
-            log.append('start playlist extraction')
-            track_ids = []
-            artist_id = []
-            playlist = sp.user_playlist(user, playlist_id)
-            for item in playlist['tracks']['items']:
-                track = item['track']
-                track_ids.append(track['id'])
-                artist = item['track']['artists']
-                artist_id.append(artist[0]['id'])
-            return track_ids, artist_id
-
-        # Wrap the call (not the definition) so playlist-loading errors are caught
-        try:
-            track_ids, artist_id = get_IDs('Ruby', uri)
-        except Exception as e:
-            log.append('Failed to load the playlist')
-            log.append(e)
-            return Fresult, log
-        log.append("Number of Tracks : {}".format(len(track_ids)))
-
- artist_id_uni = list(set(artist_id))
- track_ids_uni = list(set(track_ids))
- log.append("Number of unique Artists : {}".format(len(artist_id_uni)))
- log.append("Number of unique Tracks : {}".format(len(track_ids_uni)))
-
- def extract(track_ids_uni, artist_id_uni):
- err = []
- err.append('Start audio features extraction')
- audio_features = pd.DataFrame()
- for i in range(0, len(track_ids_uni), 25):
- try:
- track_feature = sp.audio_features(track_ids_uni[i:i+25])
- track_df = pd.DataFrame(track_feature)
- audio_features = pd.concat([audio_features, track_df], axis=0)
- except Exception as e:
- err.append(e)
- continue
- err.append('Start track features extraction')
- track_ = pd.DataFrame()
- for i in range(0, len(track_ids_uni), 25):
- try:
- track_features = sp.tracks(track_ids_uni[i:i+25])
- for x in range(25):
- track_pop = pd.DataFrame([track_ids_uni[i+x]], columns=['Track_uri'])
- track_pop['Track_release_date'] = track_features['tracks'][x]['album']['release_date']
- track_pop['Track_pop'] = track_features['tracks'][x]["popularity"]
- track_pop['Artist_uri'] = track_features['tracks'][x]['artists'][0]['id']
- track_pop['Album_uri'] = track_features['tracks'][x]['album']['id']
- track_ = pd.concat([track_, track_pop], axis=0)
- except Exception as e:
- err.append(e)
- continue
- err.append('Start artist features extraction')
- artist_ = pd.DataFrame()
- for i in range(0, len(artist_id_uni), 25):
- try:
- artist_features = sp.artists(artist_id_uni[i:i+25])
- for x in range(25):
- artist_df = pd.DataFrame([artist_id_uni[i+x]], columns=['Artist_uri'])
- artist_pop = artist_features['artists'][x]["popularity"]
- artist_genres = artist_features['artists'][x]["genres"]
- artist_df["Artist_pop"] = artist_pop
- if artist_genres:
- artist_df["genres"] = " ".join([re.sub(' ', '_', i) for i in artist_genres])
- else:
- artist_df["genres"] = "unknown"
- artist_ = pd.concat([artist_, artist_df], axis=0)
- except Exception as e:
- err.append(e)
- continue
- try:
- test = pd.DataFrame(
- track_, columns=['Track_uri', 'Artist_uri', 'Album_uri'])
-
- test.rename(columns={'Track_uri': 'track_uri',
- 'Artist_uri': 'artist_uri', 'Album_uri': 'album_uri'}, inplace=True)
-
- audio_features.drop(
- columns=['type', 'uri', 'track_href', 'analysis_url'], axis=1, inplace=True)
-
- test = pd.merge(test, audio_features,
- left_on="track_uri", right_on="id", how='outer')
- test = pd.merge(test, track_, left_on="track_uri",
- right_on="Track_uri", how='outer')
- test = pd.merge(test, artist_, left_on="artist_uri",
- right_on="Artist_uri", how='outer')
-
- test.rename(columns={'genres': 'Artist_genres'}, inplace=True)
-
- test.drop(columns=['Track_uri', 'Artist_uri_x',
- 'Artist_uri_y', 'Album_uri', 'id'], axis=1, inplace=True)
-
- test.dropna(axis=0, inplace=True)
- test['Track_pop'] = test['Track_pop'].apply(lambda x: int(x/5))
- test['Artist_pop'] = test['Artist_pop'].apply(lambda x: int(x/5))
- test['Track_release_date'] = test['Track_release_date'].apply(lambda x: x.split('-')[0])
- test['Track_release_date'] = test['Track_release_date'].astype('int16')
- test['Track_release_date'] = test['Track_release_date'].apply(lambda x: int(x/50))
-
- test[['danceability', 'energy', 'key', 'loudness', 'mode', 'speechiness', 'acousticness', 'instrumentalness', 'liveness', 'valence', 'tempo', 'time_signature']] = test[[
- 'danceability', 'energy', 'key', 'loudness', 'mode', 'speechiness', 'acousticness', 'instrumentalness', 'liveness', 'valence', 'tempo', 'time_signature']].astype('float16')
- test[['duration_ms']] = test[['duration_ms']].astype('float32')
- test[['Track_release_date', 'Track_pop', 'Artist_pop']] = test[[
- 'Track_release_date', 'Track_pop', 'Artist_pop']].astype('int8')
- except Exception as e:
- err.append(e)
- err.append('Finish extraction')
- return test, err
- test, err = extract(track_ids_uni, artist_id_uni)
-
- for i in err:
- log.append(i)
- del err
- grow = test.copy()
- test['Artist_genres'] = test['Artist_genres'].apply(lambda x: x.split(" "))
- tfidf = TfidfVectorizer(max_features=max_gen)
- tfidf_matrix = tfidf.fit_transform(test['Artist_genres'].apply(lambda x: " ".join(x)))
- genre_df = pd.DataFrame(tfidf_matrix.toarray())
- genre_df.columns = ['genre' + "|" +i for i in tfidf.get_feature_names_out()]
- genre_df = genre_df.astype('float16')
- test.drop(columns=['Artist_genres'], axis=1, inplace=True)
- test = pd.concat([test.reset_index(drop=True),genre_df.reset_index(drop=True)], axis=1)
- Fresult = pd.DataFrame()
- x = 1
- for i in range(int(lendf/2), lendf+1, int(lendf/2)):
- try:
- df = pd.read_csv('Data/streamlit.csv',names= col_name,dtype=dtypes,skiprows=x,nrows=i)
- log.append('reading data frame chunks from {} to {}'.format(x,i))
- except Exception as e:
-                log.append('Failed to load the data chunk')
- log.append(e)
- grow = grow[~grow['track_uri'].isin(df['track_uri'].values)]
- df = df[~df['track_uri'].isin(test['track_uri'].values)]
- df['Artist_genres'] = df['Artist_genres'].apply(lambda x: x.split(" "))
- tfidf_matrix = tfidf.transform(df['Artist_genres'].apply(lambda x: " ".join(x)))
- genre_df = pd.DataFrame(tfidf_matrix.toarray())
- genre_df.columns = ['genre' + "|" +i for i in tfidf.get_feature_names_out()]
- genre_df = genre_df.astype('float16')
- df.drop(columns=['Artist_genres'], axis=1, inplace=True)
- df = pd.concat([df.reset_index(drop=True),
- genre_df.reset_index(drop=True)], axis=1)
- del genre_df
- try:
- df.drop(columns=['genre|unknown'], axis=1, inplace=True)
- test.drop(columns=['genre|unknown'], axis=1, inplace=True)
- except:
- log.append('genre|unknown not found')
- log.append('Scaling the data .....')
- if x == 1:
- sc = pickle.load(open('Data/sc.sav','rb'))
- df.iloc[:, 3:19] = sc.transform(df.iloc[:, 3:19])
- test.iloc[:, 3:19] = sc.transform(test.iloc[:, 3:19])
- log.append("Creating playlist vector")
- playvec = pd.DataFrame(test.sum(axis=0)).T
- else:
- df.iloc[:, 3:19] = sc.transform(df.iloc[:, 3:19])
- x = i
- if model == 'Model 1':
- df['sim']=cosine_similarity(df.drop(['track_uri', 'artist_uri', 'album_uri'], axis = 1),playvec.drop(['track_uri', 'artist_uri', 'album_uri'], axis = 1))
- df['sim2']=cosine_similarity(df.iloc[:,16:-1],playvec.iloc[:,16:])
- df['sim3']=cosine_similarity(df.iloc[:,19:-2],playvec.iloc[:,19:])
- df = df.sort_values(['sim3','sim2','sim'],ascending = False,kind='stable').groupby('artist_uri').head(same_art).head(50)
- Fresult = pd.concat([Fresult, df], axis=0)
- Fresult = Fresult.sort_values(['sim3', 'sim2', 'sim'],ascending=False,kind='stable')
- Fresult.drop_duplicates(subset=['track_uri'], inplace=True,keep='first')
- Fresult = Fresult.groupby('artist_uri').head(same_art).head(50)
- elif model == 'Model 2':
- df['sim'] = cosine_similarity(df.iloc[:, 3:16], playvec.iloc[:, 3:16])
- df['sim2'] = cosine_similarity(df.loc[:, df.columns.str.startswith('T') | df.columns.str.startswith('A')], playvec.loc[:, playvec.columns.str.startswith('T') | playvec.columns.str.startswith('A')])
- df['sim3'] = cosine_similarity(df.loc[:, df.columns.str.startswith('genre')], playvec.loc[:, playvec.columns.str.startswith('genre')])
- df['sim4'] = (df['sim']+df['sim2']+df['sim3'])/3
- df = df.sort_values(['sim4'], ascending=False,kind='stable').groupby('artist_uri').head(same_art).head(50)
- Fresult = pd.concat([Fresult, df], axis=0)
- Fresult = Fresult.sort_values(['sim4'], ascending=False,kind='stable')
- Fresult.drop_duplicates(subset=['track_uri'], inplace=True,keep='first')
- Fresult = Fresult.groupby('artist_uri').head(same_art).head(50)
- del test
- try:
- del df
- log.append('Getting Result')
- except:
- log.append('Getting Result')
- if model == 'Model 1':
- Fresult = Fresult.sort_values(['sim3', 'sim2', 'sim'],ascending=False,kind='stable')
- Fresult.drop_duplicates(subset=['track_uri'], inplace=True,keep='first')
- Fresult = Fresult.groupby('artist_uri').head(same_art).track_uri.head(50)
- elif model == 'Model 2':
- Fresult = Fresult.sort_values(['sim4'], ascending=False,kind='stable')
- Fresult.drop_duplicates(subset=['track_uri'], inplace=True,keep='first')
- Fresult = Fresult.groupby('artist_uri').head(same_art).track_uri.head(50)
- log.append('{} New Tracks Found'.format(len(grow)))
- if(len(grow)>=1):
- try:
- new=pd.read_csv('Data/new_tracks.csv',dtype=dtypes)
- new=pd.concat([new, grow], axis=0)
- new=new[new.Track_pop >0]
- new.drop_duplicates(subset=['track_uri'], inplace=True,keep='last')
- new.to_csv('Data/new_tracks.csv',index=False)
- except:
- grow.to_csv('Data/new_tracks.csv', index=False)
- log.append('Model run successfully')
- except Exception as e:
- log.append("Model Failed")
- log.append(e)
- return Fresult, log
-
-
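At its core, `playlist_model` builds TF-IDF features from artist-genre tokens, sums the playlist's feature rows into a single "playlist vector", and ranks candidate tracks by cosine similarity against it. A stripped-down sketch of that ranking step; the column names and data here are invented for illustration:

```python
# Stripped-down sketch of the ranking core used above; all data is invented.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

playlist = pd.DataFrame({'genres': ['indie_rock dream_pop', 'dream_pop'],
                         'energy': [0.70, 0.50]})
candidates = pd.DataFrame({'genres': ['dream_pop shoegaze', 'trap'],
                           'energy': [0.55, 0.90]})

tfidf = TfidfVectorizer(max_features=3)
play_genres = pd.DataFrame(tfidf.fit_transform(playlist['genres']).toarray())
cand_genres = pd.DataFrame(tfidf.transform(candidates['genres']).toarray())

play_feats = pd.concat([play_genres, playlist[['energy']]], axis=1)
cand_feats = pd.concat([cand_genres, candidates[['energy']]], axis=1)

playvec = play_feats.sum(axis=0).to_frame().T  # one vector for the whole playlist
candidates['sim'] = cosine_similarity(cand_feats.values, playvec.values).ravel()
print(candidates.sort_values('sim', ascending=False))
```

The real function applies the same idea in chunks over `Data/streamlit.csv`, with scaled audio features and per-artist caps on the result list.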
-
-def top_tracks(url,region):
- uri = url.split('/')[-1].split('?')[0]
- stream= open("Spotify/Spotify.yaml")
- spotify_details = yaml.safe_load(stream)
- auth_manager = SpotifyClientCredentials(client_id=spotify_details['Client_id'],client_secret=spotify_details['client_secret'])
- sp = spotipy.client.Spotify(auth_manager=auth_manager)
- log = []
- Fresult = []
- try:
- log.append('Starting Spotify Model')
- top=sp.artist_top_tracks(uri,country=region)
- for i in range(10) :
- Fresult.append(top['tracks'][i]['id'])
- log.append('Model run successfully')
- except Exception as e:
- log.append("Model Failed")
- log.append(e)
- return Fresult,log
-
-def song_model(url, model, max_gen=3, same_art=5):
- log = []
- Fresult = []
- try:
- log.append('Start logging')
- uri = url.split('/')[-1].split('?')[0]
- stream = open("Spotify/Spotify.yaml")
- spotify_details = yaml.safe_load(stream)
- auth_manager = SpotifyClientCredentials(
- client_id=spotify_details['Client_id'], client_secret=spotify_details['client_secret'])
- sp = spotipy.client.Spotify(auth_manager=auth_manager)
-
- if model == 'Spotify Model':
- log.append('Starting Spotify Model')
- aa=sp.recommendations(seed_tracks=[uri], limit=25)
- for i in range(25):
- Fresult.append(aa['tracks'][i]['id'])
- log.append('Model run successfully')
- return Fresult, log
- lendf=len(pd.read_csv('Data/streamlit.csv',usecols=['track_uri']))
- dtypes = {'track_uri': 'object', 'artist_uri': 'object', 'album_uri': 'object', 'danceability': 'float16', 'energy': 'float16', 'key': 'float16',
- 'loudness': 'float16', 'mode': 'float16', 'speechiness': 'float16', 'acousticness': 'float16', 'instrumentalness': 'float16',
- 'liveness': 'float16', 'valence': 'float16', 'tempo': 'float16', 'duration_ms': 'float32', 'time_signature': 'float16',
- 'Track_release_date': 'int8', 'Track_pop': 'int8', 'Artist_pop': 'int8', 'Artist_genres': 'object'}
- col_name= ['track_uri', 'artist_uri', 'album_uri', 'danceability', 'energy', 'key',
- 'loudness', 'mode', 'speechiness', 'acousticness', 'instrumentalness',
- 'liveness', 'valence', 'tempo', 'duration_ms', 'time_signature',
- 'Track_release_date', 'Track_pop', 'Artist_pop', 'Artist_genres']
- log.append('Start audio features extraction')
- audio_features = pd.DataFrame(sp.audio_features([uri]))
- log.append('Start track features extraction')
- track_ = pd.DataFrame()
- track_features = sp.tracks([uri])
- track_pop = pd.DataFrame([uri], columns=['Track_uri'])
- track_pop['Track_release_date'] = track_features['tracks'][0]['album']['release_date']
- track_pop['Track_pop'] = track_features['tracks'][0]["popularity"]
- track_pop['Artist_uri'] = track_features['tracks'][0]['artists'][0]['id']
- track_pop['Album_uri'] = track_features['tracks'][0]['album']['id']
- track_ = pd.concat([track_, track_pop], axis=0)
- log.append('Start artist features extraction')
- artist_id_uni=list(track_['Artist_uri'])
- artist_ = pd.DataFrame()
- artist_features = sp.artists(artist_id_uni)
- artist_df = pd.DataFrame(artist_id_uni, columns=['Artist_uri'])
- artist_pop = artist_features['artists'][0]["popularity"]
- artist_genres = artist_features['artists'][0]["genres"]
- artist_df["Artist_pop"] = artist_pop
- if artist_genres:
- artist_df["genres"] = " ".join([re.sub(' ', '_', i) for i in artist_genres])
- else:
- artist_df["genres"] = "unknown"
- artist_ = pd.concat([artist_, artist_df], axis=0)
- try:
- test = pd.DataFrame(track_, columns=['Track_uri', 'Artist_uri', 'Album_uri'])
- test.rename(columns={'Track_uri': 'track_uri','Artist_uri': 'artist_uri', 'Album_uri': 'album_uri'}, inplace=True)
- audio_features.drop(columns=['type', 'uri', 'track_href', 'analysis_url'], axis=1, inplace=True)
- test = pd.merge(test, audio_features,left_on="track_uri", right_on="id", how='outer')
- test = pd.merge(test, track_, left_on="track_uri",right_on="Track_uri", how='outer')
- test = pd.merge(test, artist_, left_on="artist_uri",right_on="Artist_uri", how='outer')
- test.rename(columns={'genres': 'Artist_genres'}, inplace=True)
- test.drop(columns=['Track_uri', 'Artist_uri_x','Artist_uri_y', 'Album_uri', 'id'], axis=1, inplace=True)
- test.dropna(axis=0, inplace=True)
- test['Track_pop'] = test['Track_pop'].apply(lambda x: int(x/5))
- test['Artist_pop'] = test['Artist_pop'].apply(lambda x: int(x/5))
- test['Track_release_date'] = test['Track_release_date'].apply(lambda x: x.split('-')[0])
- test['Track_release_date'] = test['Track_release_date'].astype('int16')
- test['Track_release_date'] = test['Track_release_date'].apply(lambda x: int(x/50))
- test[['danceability', 'energy', 'key', 'loudness', 'mode', 'speechiness', 'acousticness', 'instrumentalness', 'liveness', 'valence', 'tempo', 'time_signature']] = test[['danceability', 'energy', 'key', 'loudness', 'mode', 'speechiness', 'acousticness', 'instrumentalness', 'liveness', 'valence', 'tempo', 'time_signature']].astype('float16')
- test[['duration_ms']] = test[['duration_ms']].astype('float32')
- test[['Track_release_date', 'Track_pop', 'Artist_pop']] = test[['Track_release_date', 'Track_pop', 'Artist_pop']].astype('int8')
- except Exception as e:
- log.append(e)
- log.append('Finish extraction')
- grow = test.copy()
- test['Artist_genres'] = test['Artist_genres'].apply(lambda x: x.split(" "))
- tfidf = TfidfVectorizer(max_features=max_gen)
- tfidf_matrix = tfidf.fit_transform(test['Artist_genres'].apply(lambda x: " ".join(x)))
- genre_df = pd.DataFrame(tfidf_matrix.toarray())
- genre_df.columns = ['genre' + "|" +i for i in tfidf.get_feature_names_out()]
- genre_df = genre_df.astype('float16')
- test.drop(columns=['Artist_genres'], axis=1, inplace=True)
- test = pd.concat([test.reset_index(drop=True),genre_df.reset_index(drop=True)], axis=1)
- Fresult = pd.DataFrame()
- x = 1
- for i in range(int(lendf/2), lendf+1, int(lendf/2)):
- try:
- df = pd.read_csv('Data/streamlit.csv',names= col_name,dtype=dtypes,skiprows=x,nrows=i)
- log.append('reading data frame chunks from {} to {}'.format(x,i))
- except Exception as e:
-                log.append('Failed to load the data chunk')
- log.append(e)
- grow = grow[~grow['track_uri'].isin(df['track_uri'].values)]
- df = df[~df['track_uri'].isin(test['track_uri'].values)]
- df['Artist_genres'] = df['Artist_genres'].apply(lambda x: x.split(" "))
- tfidf_matrix = tfidf.transform(df['Artist_genres'].apply(lambda x: " ".join(x)))
- genre_df = pd.DataFrame(tfidf_matrix.toarray())
- genre_df.columns = ['genre' + "|" +i for i in tfidf.get_feature_names_out()]
- genre_df = genre_df.astype('float16')
- df.drop(columns=['Artist_genres'], axis=1, inplace=True)
- df = pd.concat([df.reset_index(drop=True),
- genre_df.reset_index(drop=True)], axis=1)
- del genre_df
- try:
- df.drop(columns=['genre|unknown'], axis=1, inplace=True)
- test.drop(columns=['genre|unknown'], axis=1, inplace=True)
- except:
- log.append('genre|unknown not found')
- log.append('Scaling the data .....')
- if x == 1:
- sc = pickle.load(open('Data/sc.sav','rb'))
- df.iloc[:, 3:19] = sc.transform(df.iloc[:, 3:19])
- test.iloc[:, 3:19] = sc.transform(test.iloc[:, 3:19])
- log.append("Creating playlist vector")
- playvec = pd.DataFrame(test.sum(axis=0)).T
- else:
- df.iloc[:, 3:19] = sc.transform(df.iloc[:, 3:19])
- x = i
- if model == 'Model 1':
- df['sim']=cosine_similarity(df.drop(['track_uri', 'artist_uri', 'album_uri'], axis = 1),playvec.drop(['track_uri', 'artist_uri', 'album_uri'], axis = 1))
- df['sim2']=cosine_similarity(df.iloc[:,16:-1],playvec.iloc[:,16:])
- df['sim3']=cosine_similarity(df.iloc[:,19:-2],playvec.iloc[:,19:])
- df = df.sort_values(['sim3','sim2','sim'],ascending = False,kind='stable').groupby('artist_uri').head(same_art).head(50)
- Fresult = pd.concat([Fresult, df], axis=0)
- Fresult = Fresult.sort_values(['sim3', 'sim2', 'sim'],ascending=False,kind='stable')
- Fresult.drop_duplicates(subset=['track_uri'], inplace=True,keep='first')
- Fresult = Fresult.groupby('artist_uri').head(same_art).head(50)
- elif model == 'Model 2':
- df['sim'] = cosine_similarity(df.iloc[:, 3:16], playvec.iloc[:, 3:16])
- df['sim2'] = cosine_similarity(df.loc[:, df.columns.str.startswith('T') | df.columns.str.startswith('A')], playvec.loc[:, playvec.columns.str.startswith('T') | playvec.columns.str.startswith('A')])
- df['sim3'] = cosine_similarity(df.loc[:, df.columns.str.startswith('genre')], playvec.loc[:, playvec.columns.str.startswith('genre')])
- df['sim4'] = (df['sim']+df['sim2']+df['sim3'])/3
- df = df.sort_values(['sim4'], ascending=False,kind='stable').groupby('artist_uri').head(same_art).head(50)
- Fresult = pd.concat([Fresult, df], axis=0)
- Fresult = Fresult.sort_values(['sim4'], ascending=False,kind='stable')
- Fresult.drop_duplicates(subset=['track_uri'], inplace=True,keep='first')
- Fresult = Fresult.groupby('artist_uri').head(same_art).head(50)
- del test
- try:
- del df
- log.append('Getting Result')
- except:
- log.append('Getting Result')
- if model == 'Model 1':
- Fresult = Fresult.sort_values(['sim3', 'sim2', 'sim'],ascending=False,kind='stable')
- Fresult.drop_duplicates(subset=['track_uri'], inplace=True,keep='first')
- Fresult = Fresult.groupby('artist_uri').head(same_art).track_uri.head(50)
- elif model == 'Model 2':
- Fresult = Fresult.sort_values(['sim4'], ascending=False,kind='stable')
- Fresult.drop_duplicates(subset=['track_uri'], inplace=True,keep='first')
- Fresult = Fresult.groupby('artist_uri').head(same_art).track_uri.head(50)
- log.append('{} New Tracks Found'.format(len(grow)))
- if(len(grow)>=1):
- try:
- new=pd.read_csv('Data/new_tracks.csv',dtype=dtypes)
- new=pd.concat([new, grow], axis=0)
- new=new[new.Track_pop >0]
- new.drop_duplicates(subset=['track_uri'], inplace=True,keep='last')
- new.to_csv('Data/new_tracks.csv',index=False)
- except:
- grow.to_csv('Data/new_tracks.csv', index=False)
- log.append('Model run successfully')
- except Exception as e:
- log.append("Model Failed")
- log.append(e)
- return Fresult, log
-
-def update_dataset():
- col_name= ['track_uri', 'artist_uri', 'album_uri', 'danceability', 'energy', 'key',
- 'loudness', 'mode', 'speechiness', 'acousticness', 'instrumentalness',
- 'liveness', 'valence', 'tempo', 'duration_ms', 'time_signature',
- 'Track_release_date', 'Track_pop', 'Artist_pop', 'Artist_genres']
- dtypes = {'track_uri': 'object', 'artist_uri': 'object', 'album_uri': 'object', 'danceability': 'float16', 'energy': 'float16', 'key': 'float16',
- 'loudness': 'float16', 'mode': 'float16', 'speechiness': 'float16', 'acousticness': 'float16', 'instrumentalness': 'float16',
- 'liveness': 'float16', 'valence': 'float16', 'tempo': 'float16', 'duration_ms': 'float32', 'time_signature': 'float16',
- 'Track_release_date': 'int8', 'Track_pop': 'int8', 'Artist_pop': 'int8', 'Artist_genres': 'object'}
- df = pd.read_csv('Data/streamlit.csv',dtype=dtypes)
- grow = pd.read_csv('Data/new_tracks.csv',dtype=dtypes)
- cur = len(df)
- df=pd.concat([df,grow],axis=0)
- grow=pd.DataFrame(columns=col_name)
- grow.to_csv('Data/new_tracks.csv',index=False)
- df=df[df.Track_pop >0]
- df.drop_duplicates(subset=['track_uri'],inplace=True,keep='last')
- df.dropna(axis=0,inplace=True)
- df.to_csv('Data/streamlit.csv',index=False)
- return (len(df)-cur)
-
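`update_dataset` above folds the accumulated `new_tracks.csv` back into the main dataset, dropping zero-popularity tracks and keeping the newest row per `track_uri`. The merge-and-dedup step, illustrated with toy frames:

```python
# Toy illustration of the merge/dedup pattern in update_dataset above.
import pandas as pd

main = pd.DataFrame({'track_uri': ['a', 'b'], 'Track_pop': [10, 0]})
new = pd.DataFrame({'track_uri': ['b', 'c'], 'Track_pop': [7, 3]})

merged = pd.concat([main, new], axis=0)
merged = merged[merged.Track_pop > 0]      # drop unpopular rows, including stale 'b'
merged = merged.drop_duplicates(subset=['track_uri'], keep='last')
print(merged)   # a (pop 10), b taken from `new` (pop 7), c (pop 3)
```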
diff --git a/spaces/lyf/faster-whisper-webui/src/languages.py b/spaces/lyf/faster-whisper-webui/src/languages.py
deleted file mode 100644
index fbad66e4d34119d27d12e3dfecbe99b6fdde4db7..0000000000000000000000000000000000000000
--- a/spaces/lyf/faster-whisper-webui/src/languages.py
+++ /dev/null
@@ -1,147 +0,0 @@
-class Language():
- def __init__(self, code, name):
- self.code = code
- self.name = name
-
- def __str__(self):
- return "Language(code={}, name={})".format(self.code, self.name)
-
-LANGUAGES = [
- Language('en', 'English'),
- Language('zh', 'Chinese'),
- Language('de', 'German'),
- Language('es', 'Spanish'),
- Language('ru', 'Russian'),
- Language('ko', 'Korean'),
- Language('fr', 'French'),
- Language('ja', 'Japanese'),
- Language('pt', 'Portuguese'),
- Language('tr', 'Turkish'),
- Language('pl', 'Polish'),
- Language('ca', 'Catalan'),
- Language('nl', 'Dutch'),
- Language('ar', 'Arabic'),
- Language('sv', 'Swedish'),
- Language('it', 'Italian'),
- Language('id', 'Indonesian'),
- Language('hi', 'Hindi'),
- Language('fi', 'Finnish'),
- Language('vi', 'Vietnamese'),
- Language('he', 'Hebrew'),
- Language('uk', 'Ukrainian'),
- Language('el', 'Greek'),
- Language('ms', 'Malay'),
- Language('cs', 'Czech'),
- Language('ro', 'Romanian'),
- Language('da', 'Danish'),
- Language('hu', 'Hungarian'),
- Language('ta', 'Tamil'),
- Language('no', 'Norwegian'),
- Language('th', 'Thai'),
- Language('ur', 'Urdu'),
- Language('hr', 'Croatian'),
- Language('bg', 'Bulgarian'),
- Language('lt', 'Lithuanian'),
- Language('la', 'Latin'),
- Language('mi', 'Maori'),
- Language('ml', 'Malayalam'),
- Language('cy', 'Welsh'),
- Language('sk', 'Slovak'),
- Language('te', 'Telugu'),
- Language('fa', 'Persian'),
- Language('lv', 'Latvian'),
- Language('bn', 'Bengali'),
- Language('sr', 'Serbian'),
- Language('az', 'Azerbaijani'),
- Language('sl', 'Slovenian'),
- Language('kn', 'Kannada'),
- Language('et', 'Estonian'),
- Language('mk', 'Macedonian'),
- Language('br', 'Breton'),
- Language('eu', 'Basque'),
- Language('is', 'Icelandic'),
- Language('hy', 'Armenian'),
- Language('ne', 'Nepali'),
- Language('mn', 'Mongolian'),
- Language('bs', 'Bosnian'),
- Language('kk', 'Kazakh'),
- Language('sq', 'Albanian'),
- Language('sw', 'Swahili'),
- Language('gl', 'Galician'),
- Language('mr', 'Marathi'),
- Language('pa', 'Punjabi'),
- Language('si', 'Sinhala'),
- Language('km', 'Khmer'),
- Language('sn', 'Shona'),
- Language('yo', 'Yoruba'),
- Language('so', 'Somali'),
- Language('af', 'Afrikaans'),
- Language('oc', 'Occitan'),
- Language('ka', 'Georgian'),
- Language('be', 'Belarusian'),
- Language('tg', 'Tajik'),
- Language('sd', 'Sindhi'),
- Language('gu', 'Gujarati'),
- Language('am', 'Amharic'),
- Language('yi', 'Yiddish'),
- Language('lo', 'Lao'),
- Language('uz', 'Uzbek'),
- Language('fo', 'Faroese'),
- Language('ht', 'Haitian creole'),
- Language('ps', 'Pashto'),
- Language('tk', 'Turkmen'),
- Language('nn', 'Nynorsk'),
- Language('mt', 'Maltese'),
- Language('sa', 'Sanskrit'),
- Language('lb', 'Luxembourgish'),
- Language('my', 'Myanmar'),
- Language('bo', 'Tibetan'),
- Language('tl', 'Tagalog'),
- Language('mg', 'Malagasy'),
- Language('as', 'Assamese'),
- Language('tt', 'Tatar'),
- Language('haw', 'Hawaiian'),
- Language('ln', 'Lingala'),
- Language('ha', 'Hausa'),
- Language('ba', 'Bashkir'),
- Language('jw', 'Javanese'),
- Language('su', 'Sundanese')
-]
-
-_CODE_TO_LANGUAGE = {language.code: language for language in LANGUAGES}
-
-# Alternative names resolve to the same Language objects as their canonical codes,
-# so lookups always return a Language rather than a bare code string.
-_TO_LANGUAGE_CODE = {
-    **_CODE_TO_LANGUAGE,
-    "burmese": _CODE_TO_LANGUAGE["my"],
-    "valencian": _CODE_TO_LANGUAGE["ca"],
-    "flemish": _CODE_TO_LANGUAGE["nl"],
-    "haitian": _CODE_TO_LANGUAGE["ht"],
-    "letzeburgesch": _CODE_TO_LANGUAGE["lb"],
-    "pushto": _CODE_TO_LANGUAGE["ps"],
-    "panjabi": _CODE_TO_LANGUAGE["pa"],
-    "moldavian": _CODE_TO_LANGUAGE["ro"],
-    "moldovan": _CODE_TO_LANGUAGE["ro"],
-    "sinhalese": _CODE_TO_LANGUAGE["si"],
-    "castilian": _CODE_TO_LANGUAGE["es"],
-}
-
-_FROM_LANGUAGE_NAME = {
- **{language.name.lower(): language for language in LANGUAGES}
-}
-
-def get_language_from_code(language_code, default=None) -> Language:
-    """Return the Language for the given language code (or a known alias)."""
-    return _TO_LANGUAGE_CODE.get(language_code, default)
-
-def get_language_from_name(language, default=None) -> Language:
-    """Return the Language for the given language name (case-insensitive)."""
-    return _FROM_LANGUAGE_NAME.get(language.lower() if language else None, default)
-
-def get_language_names():
- """Return a list of language names."""
- return [language.name for language in LANGUAGES]
-
-if __name__ == "__main__":
- # Test lookup
- print(get_language_from_code('en'))
- print(get_language_from_name('English'))
-
- print(get_language_names())
\ No newline at end of file
diff --git a/spaces/lyf/faster-whisper-webui/src/modelCache.py b/spaces/lyf/faster-whisper-webui/src/modelCache.py
deleted file mode 100644
index 680a4b386fc37e17ed2353e72d04a646ece2c4a6..0000000000000000000000000000000000000000
--- a/spaces/lyf/faster-whisper-webui/src/modelCache.py
+++ /dev/null
@@ -1,17 +0,0 @@
-class ModelCache:
- def __init__(self):
- self._cache = dict()
-
- def get(self, model_key: str, model_factory):
- result = self._cache.get(model_key)
-
- if result is None:
- result = model_factory()
- self._cache[model_key] = result
- return result
-
- def clear(self):
- self._cache.clear()
-
-# A global cache of models. This is mainly used by the daemon processes to avoid loading the same model multiple times.
-GLOBAL_MODEL_CACHE = ModelCache()
\ No newline at end of file
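The comment above describes the intent; here is a minimal usage sketch (the import path matches this file, but the cache key and factory function are illustrative):

```python
from src.modelCache import GLOBAL_MODEL_CACHE

def load_whisper_model():
    # Stand-in for an expensive model constructor (e.g. loading faster-whisper weights).
    return object()

# The first call invokes the factory; subsequent calls with the same key
# return the cached instance instead of reloading the model.
model_a = GLOBAL_MODEL_CACHE.get("whisper-large-v2", load_whisper_model)
model_b = GLOBAL_MODEL_CACHE.get("whisper-large-v2", load_whisper_model)
assert model_a is model_b
```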
diff --git a/spaces/lysine/auscultate/src/app/heart/AuscultationTrack.tsx b/spaces/lysine/auscultate/src/app/heart/AuscultationTrack.tsx
deleted file mode 100644
index 182c5c4d72817fb1a018092d261687fbd352e563..0000000000000000000000000000000000000000
--- a/spaces/lysine/auscultate/src/app/heart/AuscultationTrack.tsx
+++ /dev/null
@@ -1,227 +0,0 @@
-import React, { useEffect, useRef } from 'react';
-import {
- AuscultationTrack,
- FullPatient,
- SoundWave,
- nameLocation,
- nameSoundWave,
-} from '../../heart-types';
-import WaveSurfer from 'wavesurfer.js';
-import HoverPlugin from 'wavesurfer.js/plugins/hover';
-import TimelinePlugin from 'wavesurfer.js/plugins/timeline';
-import RegionsPlugin from 'wavesurfer.js/plugins/regions';
-import SpectrogramPlugin from 'wavesurfer.js/plugins/spectrogram';
-import { getDataUrl } from './api';
-import { useAudio } from '../AudioContext';
-
-export interface AuscultationTrackProps {
- patient: FullPatient;
- track: AuscultationTrack;
- zoom: number;
- showAnswer: boolean;
- spectrogram: boolean;
- regionsLevel: RegionsLevel;
-}
-
-export enum RegionsLevel {
- None = 0,
- Markers = 1,
- HeartSounds = 2,
- Full = 3,
-}
-
-function getColorByWave(wave: SoundWave): string {
- switch (wave) {
- case SoundWave.S1:
- return '#f8727233';
- case SoundWave.S2:
- return '#3abff833';
- case SoundWave.Diastolic:
- return '#fbbd2333';
- case SoundWave.Systolic:
- return '#36d39933';
- default:
- return '#ffffff33';
- }
-}
-
-export default function AuscultationTrack({
- patient,
- track,
- zoom,
- showAnswer,
- spectrogram: showSpectrogram,
- regionsLevel,
-}: AuscultationTrackProps): JSX.Element {
- const waveformId = ('waveform' + track.audioFile).replaceAll('.', '_');
-
- const { nowPlaying, setNowPlaying } = useAudio();
-
-  const wavesurfer = useRef
\ No newline at end of file
diff --git a/spaces/miesnerjacob/Multi-task-NLP/app.py b/spaces/miesnerjacob/Multi-task-NLP/app.py
deleted file mode 100644
index 0d0704cce399e8311c98b41c660fef3c7ea737cb..0000000000000000000000000000000000000000
--- a/spaces/miesnerjacob/Multi-task-NLP/app.py
+++ /dev/null
@@ -1,254 +0,0 @@
-import pandas as pd
-import streamlit as st
-from annotated_text import annotated_text
-from streamlit_option_menu import option_menu
-from sentiment_analysis import SentimentAnalysis
-from keyword_extraction import KeywordExtractor
-from part_of_speech_tagging import POSTagging
-from emotion_detection import EmotionDetection
-from named_entity_recognition import NamedEntityRecognition
-
-hide_streamlit_style = """
-
- """
-st.markdown(hide_streamlit_style, unsafe_allow_html=True)
-
-
-@st.cache(allow_output_mutation=True)
-def load_sentiment_model():
- return SentimentAnalysis()
-
-@st.cache(allow_output_mutation=True)
-def load_keyword_model():
- return KeywordExtractor()
-
-@st.cache(allow_output_mutation=True)
-def load_pos_model():
- return POSTagging()
-
-@st.cache(allow_output_mutation=True)
-def load_emotion_model():
- return EmotionDetection()
-
-@st.cache(allow_output_mutation=True)
-def load_ner_model():
- return NamedEntityRecognition()
-
-
-sentiment_analyzer = load_sentiment_model()
-keyword_extractor = load_keyword_model()
-pos_tagger = load_pos_model()
-emotion_detector = load_emotion_model()
-ner = load_ner_model()
-
-example_text = "This is example text that contains both names of organizations like Hugging Face and cities like New York, all while portraying an upbeat attitude."
-
-with st.sidebar:
- page = option_menu(menu_title='Menu',
- menu_icon="robot",
- options=["Welcome!",
- "Sentiment Analysis",
- "Keyword Extraction",
- "Part of Speech Tagging",
- "Emotion Detection",
- "Named Entity Recognition"],
- icons=["house-door",
- "chat-dots",
- "key",
- "tag",
- "emoji-heart-eyes",
- "building"],
- default_index=0
- )
-
-st.title('Open-source NLP')
-
-if page == "Welcome!":
- st.header('Welcome!')
-
- st.markdown("")
- st.write(
- """
-
-
- """
- )
-
- st.subheader("Quickstart")
- st.write(
- """
- Replace the example text below and flip through the pages in the menu to perform NLP tasks on-demand!
- Feel free to use the example text for a test run.
- """
- )
-
- text = st.text_area("Paste text here", value=example_text)
-
- st.subheader("Introduction")
- st.write("""
-    Hello! This application is a celebration of open-source and the power that programmers have been granted today
-    by those who give back to the community. This tool was constructed using Streamlit, Huggingface Transformers,
-    Transformers-Interpret, NLTK, and Spacy, among other open-source Python libraries and models.
-
-    Using this tool, you can perform a multitude of Natural Language Processing tasks on any text you provide.
-    All you need to do is paste your input, select your task, and hit the start button!
-
- * This application currently supports:
- * Sentiment Analysis
- * Keyword Extraction
- * Part of Speech Tagging
- * Emotion Detection
- * Named Entity Recognition
-
-    More features may be added in the future, including article/tweet/youtube input, improved text annotation, and
-    model quality improvements, depending on community feedback. Please reach out to me at miesner.jacob@gmail.com or on my Linkedin page listed
-    below if you have ideas or suggestions for improvement.
-
- If you would like to contribute yourself, feel free to fork the Github repository listed below and submit a merge request.
- """
- )
- st.subheader("Notes")
- st.write(
- """
-    * This dashboard was constructed by me, but every resource used is open-source! If you are interested in my other works you can view them here:
-
- [Project Github](https://github.com/MiesnerJacob/Multi-task-NLP-dashboard)
-
- [Jacob Miesner's Github](https://github.com/MiesnerJacob)
-
- [Jacob Miesner's Linkedin](https://www.linkedin.com/in/jacob-miesner-885050125/)
-
- [Jacob Miesner's Website](https://www.jacobmiesner.com)
-
-    * The prediction justification for some of the tasks is printed as the model views the text. For this reason the output may contain special tokens like [CLS] or [SEP], or hash marks splitting words into subwords. If you are familiar with language models you will recognize these; if not, you can safely ignore them.
- """
- )
-
-elif page == "Sentiment Analysis":
- st.header('Sentiment Analysis')
- st.markdown("")
- st.write(
- """
-
-
- """
- )
-
- text = st.text_area("Paste text here", value=example_text)
-
- if st.button('🔥 Run!'):
- with st.spinner("Loading..."):
- preds, html = sentiment_analyzer.run(text)
- st.success('All done!')
- st.write("")
- st.subheader("Sentiment Predictions")
- st.bar_chart(data=preds, width=0, height=0, use_container_width=True)
- st.write("")
- st.subheader("Sentiment Justification")
- raw_html = html._repr_html_()
- st.components.v1.html(raw_html, height=500)
-
-elif page == "Keyword Extraction":
- st.header('Keyword Extraction')
- st.markdown("")
- st.write(
- """
-
-
- """
- )
-
- text = st.text_area("Paste text here", value=example_text)
-
- max_keywords = st.slider('# of Keywords Max Limit', min_value=1, max_value=10, value=5, step=1)
-
- if st.button('🔥 Run!'):
- with st.spinner("Loading..."):
- annotation, keywords = keyword_extractor.generate(text, max_keywords)
- st.success('All done!')
-
- if annotation:
- st.subheader("Keyword Annotation")
- st.write("")
- annotated_text(*annotation)
- st.text("")
-
- st.subheader("Extracted Keywords")
- st.write("")
- df = pd.DataFrame(keywords, columns=['Extracted Keywords'])
- csv = df.to_csv(index=False).encode('utf-8')
- st.download_button('Download Keywords to CSV', csv, file_name='news_intelligence_keywords.csv')
-
- data_table = st.table(df)
-
-elif page == "Part of Speech Tagging":
- st.header('Part of Speech Tagging')
- st.markdown("")
- st.write(
- """
-
-
- """
- )
-
- text = st.text_area("Paste text here", value=example_text)
-
- if st.button('🔥 Run!'):
- with st.spinner("Loading..."):
- preds = pos_tagger.classify(text)
- st.success('All done!')
- st.write("")
- st.subheader("Part of Speech tags")
- annotated_text(*preds)
- st.write("")
- st.components.v1.iframe('https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html', height=1000)
-
-elif page == "Emotion Detection":
- st.header('Emotion Detection')
- st.markdown("")
- st.write(
- """
-
-
- """
- )
-
- text = st.text_area("Paste text here", value=example_text)
-
- if st.button('🔥 Run!'):
- with st.spinner("Loading..."):
- preds, html = emotion_detector.run(text)
- st.success('All done!')
- st.write("")
- st.subheader("Emotion Predictions")
- st.bar_chart(data=preds, width=0, height=0, use_container_width=True)
- raw_html = html._repr_html_()
- st.write("")
- st.subheader("Emotion Justification")
- st.components.v1.html(raw_html, height=500)
-
-elif page == "Named Entity Recognition":
- st.header('Named Entity Recognition')
- st.markdown("")
- st.write(
- """
-
-
- """
- )
-
- text = st.text_area("Paste text here", value=example_text)
-
- if st.button('🔥 Run!'):
- with st.spinner("Loading..."):
- preds, ner_annotation = ner.classify(text)
- st.success('All done!')
- st.write("")
- st.subheader("NER Predictions")
- annotated_text(*ner_annotation)
- st.write("")
- st.subheader("NER Prediction Metadata")
- st.write(preds)
diff --git a/spaces/mikkoar/marco/src/components/voice.tsx b/spaces/mikkoar/marco/src/components/voice.tsx
deleted file mode 100644
index 074d0e145229947282a472bd84f6578cf0b3c71c..0000000000000000000000000000000000000000
--- a/spaces/mikkoar/marco/src/components/voice.tsx
+++ /dev/null
@@ -1,52 +0,0 @@
-import React, { useEffect } from 'react'
-import { useSetAtom } from 'jotai'
-import { useBing } from '@/lib/hooks/use-bing'
-import Image from 'next/image'
-import VoiceIcon from '@/assets/images/voice.svg'
-import VoiceButton from './ui/voice'
-import { SR } from '@/lib/bots/bing/sr'
-import { voiceListenAtom } from '@/state'
-
-const sr = new SR(['发送', '清空', '退出'])
-
-const Voice = ({ setInput, input, sendMessage, isSpeaking }: Pick
-
-| Visual Encoder | Text Decoder | METEOR | ROUGE-L | CIDEr | Pre-trained Vis. Encoder (md5) | checkpoint (md5) |
-| :------------: | :----------: | :----: | :-----: | :---: | :-------------------------------: | :--------: |
-| TSF-B | GPT-2 | 0.282 | 0.517 | 0.833 | [download](https://dl.fbaipublicfiles.com/lavila/checkpoints/dual_encoders/ego4d/clip_openai_timesformer_base.baseline.ep_0003.pth) (dbcc4d) | [download](https://dl.fbaipublicfiles.com/lavila/checkpoints/narrator/vclm_openai_timesformer_base_gpt2_base.pt_ego4d.jobid_319630.ep_0002.md5sum_68a71f.pth) (68a71f) |
-| TSF-L@HR | GPT-2 XL | 0.298 | 0.539 | 0.977 | [download](https://dl.fbaipublicfiles.com/lavila/checkpoints/dual_encoders/ego4d/clip_openai_timesformer_large_336px_distilbert_base.baseline.ep_0003.pth) (5c69b8) | [download](https://dl.fbaipublicfiles.com/lavila/checkpoints/narrator/vclm_openai_timesformer_large_336px_gpt2_xl.pt_ego4d.jobid_246897.ep_0003.md5sum_443263.pth) (443263) |
-
-Ego4D val split
-
-| Method | Backbone | EK-100 MIR avg. mAP | EK-100 MIR avg. nDCG | Charades-Ego mAP^ | EGTEA mean acc. | EgoMCQ intra-video acc. | checkpoint |
-| :----------: | :------: | :--------------------: | :---------------------: | :------------------: | :-----------------: | :------------------------: | :----------: |
-| Prev. SOTA^^ | TSF-B | 22.1/23.3 | 22.1/27.9 | 25.2 | 17.6 | 57.2 | [Epoch 1](https://dl.fbaipublicfiles.com/lavila/checkpoints/dual_encoders/ego4d/egovlp_epo1_converted_f16.md5sum_7a3d3b.pth), [best epoch](https://dl.fbaipublicfiles.com/lavila/checkpoints/dual_encoders/ego4d/egovlp_converted_f16.md5sum_c33363.pth) |
-| LAVILA | TSF-B | 29.7/30.9 | 31.5/32.0 | 26.8 | 28.9 | 59.9 | [Epoch 1](https://dl.fbaipublicfiles.com/lavila/checkpoints/dual_encoders/ego4d/clip_openai_timesformer_base.narrator_rephraser.ep_0001.md5sum_02dbb9.pth)^, [Epoch 5](https://dl.fbaipublicfiles.com/lavila/checkpoints/dual_encoders/ego4d/clip_openai_timesformer_base.narrator_rephraser.ep_0005.md5sum_d73a9c.pth) |
-| LAVILA | TSF-L | 35.0/36.1 | 34.2/34.6 | 28.9 | 34.1 | 63.1 | [Epoch 1](https://dl.fbaipublicfiles.com/lavila/checkpoints/dual_encoders/ego4d/clip_openai_timesformer_large.narrator_rephraser.ep_0001.md5sum_9a25de.pth)^, [Epoch 3](https://dl.fbaipublicfiles.com/lavila/checkpoints/dual_encoders/ego4d/clip_openai_timesformer_large.narrator_rephraser.ep_0003.md5sum_c89337.pth) |
-
-1. EK-100 MIR
-2. EK-100 CLS
-3. Charades-Ego
-4. EGTEA
-5. EgoMCQ
-Training and evaluating scripts
-
-Red Gate Sql Compare 11 Crack: What Is It and How to Avoid It?
- Introduction
-What is Red Gate Sql Compare 11 and what does it do?
-
-
-What is a crack and why do some people use it?
-
-
-What are the risks and consequences of using a crack?
-
-
-Features and Benefits of Red Gate Sql Compare 11
-How does Red Gate Sql Compare 11 help you compare and deploy SQL Server schemas?
-
-
-What are the main features and benefits of Red Gate Sql Compare 11?
-
-
-How can you get a free trial or buy a license for Red Gate Sql Compare 11?
-
-
-How to Detect and Remove a Crack for Red Gate Sql Compare 11
-How can you tell if you have a crack for Red Gate Sql Compare 11 installed on your computer?
-
-
-How can you remove a crack for Red Gate Sql Compare 11 from your computer?
-
-
-How can you protect your computer from cracks and malware in the future?
-
-
-Alternatives to Using a Crack for Red Gate Sql Compare 11
-What are some free or low-cost alternatives to using a crack for Red Gate Sql Compare 11?
-
-
-| Name | Description | Price |
-| --- | --- | --- |
-| ApexSQL Diff | A tool that compares and synchronizes SQL Server database schemas | $399 per user per year |
-| dbForge Schema Compare for SQL Server | A tool that compares and synchronizes SQL Server database schemas | $199.95 per user per year |
-| IDERA DB Schema Compare | A tool that compares SQL Server database schemas | Free |
-| SQL Workbench/J Database Comparison Tool | A tool that compares database schemas for various DBMSs, including SQL Server | Free |
-| SchemaCrawler | A tool that compares database schemas for various DBMSs, including SQL Server | Free |
-
-What are some advantages and disadvantages of using these alternatives?
-
-| Name | Advantages | Disadvantages |
-| --- | --- | --- |
-| ApexSQL Diff | Supports SQL Server 2019 and Azure SQL Database; supports various types of sources and targets; supports command line interface and API; supports integration with source control systems; supports backup comparison and synchronization; supports data comparison and synchronization | More expensive than Red Gate Sql Compare 11; less user-friendly and intuitive, less accurate and reliable, less flexible and adaptable, and less secure than Red Gate Sql Compare 11 |
-| dbForge Schema Compare for SQL Server | Supports SQL Server 2019 and Azure SQL Database; supports various types of sources and targets; supports command line interface and API; supports integration with source control systems; supports backup comparison and synchronization; supports data comparison and synchronization | More expensive than Red Gate Sql Compare 11; less user-friendly and intuitive, less accurate and reliable, less flexible and adaptable, and less secure than Red Gate Sql Compare 11 |
-| IDERA DB Schema Compare | Free to use; supports SQL Server 2019 and Azure SQL Database; supports live database comparison only | Less user-friendly and intuitive, less accurate and reliable, less flexible and adaptable, and less secure than Red Gate Sql Compare 11; does not support command line interface or API; does not support integration with source control systems; does not support backup comparison or synchronization; does not support data comparison or synchronization |
-| SQL Workbench/J Database Comparison Tool | Free to use; supports various DBMSs, including SQL Server; supports live database comparison only | Less user-friendly and intuitive, less accurate and reliable, less flexible and adaptable, and less secure than Red Gate Sql Compare 11; does not support command line interface or API; does not support integration with source control systems; does not support backup comparison or synchronization; does not support data comparison or synchronization |
-| SchemaCrawler | Free to use; supports various DBMSs, including SQL Server; supports live database comparison only | Less user-friendly and intuitive, less accurate and reliable, less flexible and adaptable, and less secure than Red Gate Sql Compare 11; does not support command line interface or API; does not support integration with source control systems; does not support backup comparison or synchronization; does not support data comparison or synchronization |
-
-How can you choose the best alternative for your needs and budget?
-
-
-Conclusion
-FAQs
-What is the difference between a crack and a keygen?
-Is it illegal to use a crack for Red Gate Sql Compare 11?
-Can a crack for Red Gate Sql Compare 11 damage my database or data?
-How can I contact Red Gate Software for support or feedback?
-Where can I find more information about Red Gate Sql Compare 11?
-
-
-
\ No newline at end of file
diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/tools/lightning_train_net.py b/spaces/nikitaPDL2023/assignment4/detectron2/tools/lightning_train_net.py
deleted file mode 100644
index 7a8c5d851649d05710b128b13d1d339fb0b7b125..0000000000000000000000000000000000000000
--- a/spaces/nikitaPDL2023/assignment4/detectron2/tools/lightning_train_net.py
+++ /dev/null
@@ -1,239 +0,0 @@
-#!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates.
-# Lightning Trainer should be considered beta at this point
-# We have confirmed that training and validation run correctly and produce correct results
-# Depending on how you launch the trainer, there are issues with processes terminating correctly
-# This module is still dependent on D2 logging, but could be transferred to use Lightning logging
-
-import logging
-import os
-import time
-import weakref
-from collections import OrderedDict
-from typing import Any, Dict, List
-import pytorch_lightning as pl # type: ignore
-from pytorch_lightning import LightningDataModule, LightningModule
-
-import detectron2.utils.comm as comm
-from detectron2.checkpoint import DetectionCheckpointer
-from detectron2.config import get_cfg
-from detectron2.data import build_detection_test_loader, build_detection_train_loader
-from detectron2.engine import (
- DefaultTrainer,
- SimpleTrainer,
- default_argument_parser,
- default_setup,
- default_writers,
- hooks,
-)
-from detectron2.evaluation import print_csv_format
-from detectron2.evaluation.testing import flatten_results_dict
-from detectron2.modeling import build_model
-from detectron2.solver import build_lr_scheduler, build_optimizer
-from detectron2.utils.events import EventStorage
-from detectron2.utils.logger import setup_logger
-
-from train_net import build_evaluator
-
-logging.basicConfig(level=logging.INFO)
-logger = logging.getLogger("detectron2")
-
-
-class TrainingModule(LightningModule):
- def __init__(self, cfg):
- super().__init__()
- if not logger.isEnabledFor(logging.INFO): # setup_logger is not called for d2
- setup_logger()
- self.cfg = DefaultTrainer.auto_scale_workers(cfg, comm.get_world_size())
- self.storage: EventStorage = None
- self.model = build_model(self.cfg)
-
- self.start_iter = 0
- self.max_iter = cfg.SOLVER.MAX_ITER
-
- def on_save_checkpoint(self, checkpoint: Dict[str, Any]) -> None:
- checkpoint["iteration"] = self.storage.iter
-
- def on_load_checkpoint(self, checkpointed_state: Dict[str, Any]) -> None:
- self.start_iter = checkpointed_state["iteration"]
- self.storage.iter = self.start_iter
-
- def setup(self, stage: str):
- if self.cfg.MODEL.WEIGHTS:
- self.checkpointer = DetectionCheckpointer(
- # Assume you want to save checkpoints together with logs/statistics
- self.model,
- self.cfg.OUTPUT_DIR,
- )
- logger.info(f"Load model weights from checkpoint: {self.cfg.MODEL.WEIGHTS}.")
- # Only load weights, use lightning checkpointing if you want to resume
- self.checkpointer.load(self.cfg.MODEL.WEIGHTS)
-
- self.iteration_timer = hooks.IterationTimer()
- self.iteration_timer.before_train()
- self.data_start = time.perf_counter()
- self.writers = None
-
- def training_step(self, batch, batch_idx):
- data_time = time.perf_counter() - self.data_start
- # Need to manually enter/exit since trainer may launch processes
- # This ideally belongs in setup, but setup seems to run before processes are spawned
- if self.storage is None:
- self.storage = EventStorage(0)
- self.storage.__enter__()
- self.iteration_timer.trainer = weakref.proxy(self)
- self.iteration_timer.before_step()
- self.writers = (
- default_writers(self.cfg.OUTPUT_DIR, self.max_iter)
- if comm.is_main_process()
- else {}
- )
-
- loss_dict = self.model(batch)
- SimpleTrainer.write_metrics(loss_dict, data_time)
-
- opt = self.optimizers()
- self.storage.put_scalar(
- "lr", opt.param_groups[self._best_param_group_id]["lr"], smoothing_hint=False
- )
- self.iteration_timer.after_step()
- self.storage.step()
- # A little odd to put before step here, but it's the best way to get a proper timing
- self.iteration_timer.before_step()
-
- if self.storage.iter % 20 == 0:
- for writer in self.writers:
- writer.write()
- return sum(loss_dict.values())
-
-    def training_step_end(self, training_step_outputs):
-        self.data_start = time.perf_counter()
-        return training_step_outputs
-
- def training_epoch_end(self, training_step_outputs):
- self.iteration_timer.after_train()
- if comm.is_main_process():
- self.checkpointer.save("model_final")
- for writer in self.writers:
- writer.write()
- writer.close()
- self.storage.__exit__(None, None, None)
-
- def _process_dataset_evaluation_results(self) -> OrderedDict:
- results = OrderedDict()
- for idx, dataset_name in enumerate(self.cfg.DATASETS.TEST):
- results[dataset_name] = self._evaluators[idx].evaluate()
- if comm.is_main_process():
- print_csv_format(results[dataset_name])
-
- if len(results) == 1:
- results = list(results.values())[0]
- return results
-
- def _reset_dataset_evaluators(self):
- self._evaluators = []
- for dataset_name in self.cfg.DATASETS.TEST:
- evaluator = build_evaluator(self.cfg, dataset_name)
- evaluator.reset()
- self._evaluators.append(evaluator)
-
-    def on_validation_epoch_start(self):
-        self._reset_dataset_evaluators()
-
-    def validation_epoch_end(self, _outputs):
-        results = self._process_dataset_evaluation_results()
-
- flattened_results = flatten_results_dict(results)
- for k, v in flattened_results.items():
- try:
- v = float(v)
- except Exception as e:
- raise ValueError(
- "[EvalHook] eval_function should return a nested dict of float. "
- "Got '{}: {}' instead.".format(k, v)
- ) from e
- self.storage.put_scalars(**flattened_results, smoothing_hint=False)
-
- def validation_step(self, batch, batch_idx: int, dataloader_idx: int = 0) -> None:
- if not isinstance(batch, List):
- batch = [batch]
- outputs = self.model(batch)
- self._evaluators[dataloader_idx].process(batch, outputs)
-
- def configure_optimizers(self):
- optimizer = build_optimizer(self.cfg, self.model)
- self._best_param_group_id = hooks.LRScheduler.get_best_param_group_id(optimizer)
- scheduler = build_lr_scheduler(self.cfg, optimizer)
- return [optimizer], [{"scheduler": scheduler, "interval": "step"}]
-
-
-class DataModule(LightningDataModule):
- def __init__(self, cfg):
- super().__init__()
- self.cfg = DefaultTrainer.auto_scale_workers(cfg, comm.get_world_size())
-
- def train_dataloader(self):
- return build_detection_train_loader(self.cfg)
-
- def val_dataloader(self):
- dataloaders = []
- for dataset_name in self.cfg.DATASETS.TEST:
- dataloaders.append(build_detection_test_loader(self.cfg, dataset_name))
- return dataloaders
-
-
-def main(args):
- cfg = setup(args)
- train(cfg, args)
-
-
-def train(cfg, args):
- trainer_params = {
- # training loop is bounded by max steps, use a large max_epochs to make
- # sure max_steps is met first
- "max_epochs": 10**8,
- "max_steps": cfg.SOLVER.MAX_ITER,
- "val_check_interval": cfg.TEST.EVAL_PERIOD if cfg.TEST.EVAL_PERIOD > 0 else 10**8,
- "num_nodes": args.num_machines,
- "gpus": args.num_gpus,
- "num_sanity_val_steps": 0,
- }
- if cfg.SOLVER.AMP.ENABLED:
- trainer_params["precision"] = 16
-
- last_checkpoint = os.path.join(cfg.OUTPUT_DIR, "last.ckpt")
- if args.resume:
- # resume training from checkpoint
- trainer_params["resume_from_checkpoint"] = last_checkpoint
- logger.info(f"Resuming training from checkpoint: {last_checkpoint}.")
-
- trainer = pl.Trainer(**trainer_params)
- logger.info(f"start to train with {args.num_machines} nodes and {args.num_gpus} GPUs")
-
- module = TrainingModule(cfg)
- data_module = DataModule(cfg)
- if args.eval_only:
- logger.info("Running inference")
- trainer.validate(module, data_module)
- else:
- logger.info("Running training")
- trainer.fit(module, data_module)
-
-
-def setup(args):
- """
- Create configs and perform basic setups.
- """
- cfg = get_cfg()
- cfg.merge_from_file(args.config_file)
- cfg.merge_from_list(args.opts)
- cfg.freeze()
- default_setup(cfg, args)
- return cfg
-
-
-if __name__ == "__main__":
- parser = default_argument_parser()
- args = parser.parse_args()
- logger.info("Command Line Args:", args)
- main(args)
diff --git a/spaces/niks-salodkar/Age-Prediction-Demo/README.md b/spaces/niks-salodkar/Age-Prediction-Demo/README.md
deleted file mode 100644
index 5ad84ac094d98269efdd72863312dd55a207de9c..0000000000000000000000000000000000000000
--- a/spaces/niks-salodkar/Age-Prediction-Demo/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Age Prediction Demo
-emoji: 📉
-colorFrom: blue
-colorTo: red
-sdk: streamlit
-sdk_version: 1.15.2
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/nosdigitalmedia/dutch-youth-comment-classifier/src/start_up/start_up_rbs.py b/spaces/nosdigitalmedia/dutch-youth-comment-classifier/src/start_up/start_up_rbs.py
deleted file mode 100644
index e8aa879b42649117c4309023ab9a3f141315fd5e..0000000000000000000000000000000000000000
--- a/spaces/nosdigitalmedia/dutch-youth-comment-classifier/src/start_up/start_up_rbs.py
+++ /dev/null
@@ -1,50 +0,0 @@
-import re
-
-from urlextract import URLExtract
-
-from src.start_up.start_up_bad_words_rule import create_bad_word_rule
-from src.config import config
-from src.rule_based_system.HTMLRule import HTMLRule
-from src.rule_based_system.PersonalDetailsRule import PersonalDetailsRule
-from src.rule_based_system.RuleBasedSystem import RuleBasedSystem
-from src.rule_based_system.TextLengthRule import TextLengthRule
-from src.rule_based_system.UrlRule import UrlRule
-
-
-def create_strong_rbs() -> RuleBasedSystem:
- text_length_rule = TextLengthRule()
-
- url_rule = UrlRule(URLExtract())
-
- mail_rule = PersonalDetailsRule([r'[\w.+-]+@[\w-]+\.[\w.-]+'], True)
-
- strict_bad_word_rule = create_bad_word_rule(config['bad_words_strict'], True)
-
- return RuleBasedSystem([
-        text_length_rule,  # todo: check if this makes sense to add here; 500 was our own chosen max length
- url_rule,
- mail_rule,
- strict_bad_word_rule
- ])
-
-
-def create_weak_rbs() -> RuleBasedSystem:
- phone_regex = r"(^\+[0-9]{2}|^\+[0-9]{2}\(0\)|^\(\+[0-9]{2}\)\(0\)|^00[0-9]{2}|^0)([0-9]{9}$|[0-9\-\s]{10}$)"
- phone_home_local = re.compile(r".*?(\(?\d{3}\D{0,3}\d{2}\D{0,3}\d{2}).*?", re.S)
- phone_home = re.compile(r".*?(\(?\d{3}\D{0,3}\d{3}\D{0,3}\d{2}\D{0,3}\d{2}).*?", re.S)
- phone_mobile = re.compile(r".*?(\(?\d{2}\D{0,3}\d{3}\D{0,3}\d{3}\D{0,3}\d{2}).*?", re.S)
- phone_mobile_international = re.compile(r".*?(\(?\d{3}\D{0,3}\d{3}\D{0,3}\d{3}\D{0,3}\d{2}).*?", re.S)
-
- phone_regexes = [phone_regex, phone_home_local, phone_home, phone_mobile, phone_mobile_international]
- phone_number_rule = PersonalDetailsRule(phone_regexes, False)
-
- html_rule = HTMLRule()
-
- ambiguous_bad_word_rule = create_bad_word_rule(config['bad_words_ambiguous'], False)
-
- # rule systems
- return RuleBasedSystem([
- phone_number_rule,
- html_rule,
- ambiguous_bad_word_rule
- ])
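The rule sets above lean on plain regular expressions; below is a small self-contained check of the e-mail and mobile-number patterns used in this module (the sample strings are made up):

```python
import re

# E-mail pattern from create_strong_rbs.
EMAIL_RE = re.compile(r'[\w.+-]+@[\w-]+\.[\w.-]+')
# Mobile-number pattern from create_weak_rbs (phone_mobile).
PHONE_MOBILE = re.compile(r".*?(\(?\d{2}\D{0,3}\d{3}\D{0,3}\d{3}\D{0,3}\d{2}).*?", re.S)

print(bool(EMAIL_RE.search("mail me: jan.jansen@example.nl")))  # True
print(bool(PHONE_MOBILE.match("bel me op 06-123 456 78")))      # True
print(bool(EMAIL_RE.search("geen contactgegevens hier")))       # False
```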
diff --git a/spaces/nugrahatheo/Credit_Card_Fraud_Detection/prediction.py b/spaces/nugrahatheo/Credit_Card_Fraud_Detection/prediction.py
deleted file mode 100644
index 10036f76e708fe87197284efec6b4c95906003cb..0000000000000000000000000000000000000000
--- a/spaces/nugrahatheo/Credit_Card_Fraud_Detection/prediction.py
+++ /dev/null
@@ -1,171 +0,0 @@
-import streamlit as st
-import pandas as pd
-import numpy as np
-import pickle
-import json
-
-# Load all files
-
-with open('list_num_cols.txt', 'r') as file_1:
- list_num_cols = json.load(file_1)
-
-with open('list_cat_cols_1.txt', 'r') as file_2:
- list_cat_cols_1 = json.load(file_2)
-
-with open('list_cat_cols_2.txt', 'r') as file_3:
- list_cat_cols_2 = json.load(file_3)
-
-with open('list_cat_cols_3.txt', 'r') as file_4:
- list_cat_cols_3 = json.load(file_4)
-
-with open('model_scaler.pkl', 'rb') as file_5:
- scaler = pickle.load(file_5)
-
-with open('model_encoder_1.pkl', 'rb') as file_6:
- OHE_1 = pickle.load(file_6)
-
-with open('model_encoder_2.pkl', 'rb') as file_7:
- OHE_2 = pickle.load(file_7)
-
-with open('model_encoder_3.pkl', 'rb') as file_8:
- OHE_3 = pickle.load(file_8)
-
-with open('model_logreg.pkl', 'rb') as file_9:
- model_logreg = pickle.load(file_9)
-
-with open('model_dtc.pkl', 'rb') as file_10:
- model_dtc = pickle.load(file_10)
-
-with open('model_rfc.pkl', 'rb') as file_11:
- model_rfc = pickle.load(file_11)
-
-with open('model_gbc.pkl', 'rb') as file_12:
- model_gbc = pickle.load(file_12)
-
-def run():
- st.write('##### Form Credit Card Fraud Detection')
- # Making Form
- with st.form(key='Form Credit Card Fraud Detection'):
- cc_num = st.text_input('CC Number')
- merchant = st.text_input('Merchant')
- category = st.selectbox('Category', ('misc_net', 'grocery_pos', 'entertainment', 'gas_transport', 'misc_pos',
- 'grocery_net', 'shopping_net', 'shopping_pos', 'food_dining', 'personal_care',
- 'health_fitness', 'travel', 'kids_pets', 'home'), index=0)
- amt = st.number_input('Amount', min_value=1, max_value=999999999, value=1)
- first = st.text_input('First Name')
- last = st.text_input('Last Name')
- gender = st.selectbox('Gender', ('M','F'), index=0)
- street = st.text_input('Street')
- city = st.text_input('City')
- state = st.selectbox('State', ('NC','WA','ID','MT','VA','PA','KS','TN','IA','WV','FL','CA','NM','NJ',
- 'OK','IN','MA','TX','WI','MI','WY','HI','NE','OR','LA','DC','KY','NY',
- 'MS','UT','AL','AR','MD','GA','ME','AZ','MN','OH','CO','VT','MO','SC',
- 'NV','IL','NH','SD','AK','ND','CT','RI','DE'), index=0)
- zip = st.number_input('ZIP', min_value=10000, max_value=99999, value=25456)
- city_pop = st.number_input('City Population', min_value=100, max_value=9999999, value=1000)
- job = st.text_input('Job')
- dob = st.text_input('Date Of Birth', help=('YYYY-MM-dd'))
-
- st.markdown('---')
-
- submited_1 = st.form_submit_button('Detection using Logistic Regression')
- submited_2 = st.form_submit_button('Detection using Decision Tree Classifier')
- submited_3 = st.form_submit_button('Detection using Random Forest Classifier')
- submited_4 = st.form_submit_button('Detection using Gradient Boosting Classifier')
-
- data_inf = {
- 'cc_num' : cc_num,
- 'merchant' : merchant,
- 'category' : category,
- 'amt' : amt,
- 'first' : first,
- 'last' : last,
- 'gender' : gender,
- 'street' : street,
- 'city' : city,
- 'state' : state,
- 'zip' : zip,
- 'city_pop' : city_pop,
- 'job' : job,
- 'dob' : dob
- }
-
- data_inf = pd.DataFrame([data_inf])
- st.dataframe(data_inf)
-
-    def preprocess(df):
-        # Split between numerical columns and categorical columns
-        df_num = df[list_num_cols]
-        df_cat_1 = df[list_cat_cols_1]
-        df_cat_2 = df[list_cat_cols_2]
-        df_cat_3 = df[list_cat_cols_3]
-        # Feature scaling and feature encoding
-        df_num_scaled = scaler.transform(df_num)
-        df_cat_encoded_1 = OHE_1.transform(df_cat_1)
-        df_cat_encoded_2 = OHE_2.transform(df_cat_2)
-        df_cat_encoded_3 = OHE_3.transform(df_cat_3)
-        return np.concatenate([df_num_scaled, df_cat_encoded_1, df_cat_encoded_2, df_cat_encoded_3], axis=1)
-
-    def show_result(prediction):
-        st.write('# Non-Fraud' if prediction[0] == 0 else '# Fraud')
-
-    # All four buttons share the same preprocessing; only the model differs.
-    if submited_1:
-        # Predict using Logistic Regression
-        show_result(model_logreg.predict(preprocess(data_inf)))
-    elif submited_2:
-        # Predict using Decision Tree Classifier
-        show_result(model_dtc.predict(preprocess(data_inf)))
-    elif submited_3:
-        # Predict using Random Forest Classifier
-        show_result(model_rfc.predict(preprocess(data_inf)))
-    elif submited_4:
-        # Predict using Gradient Boosting Classifier
-        show_result(model_gbc.predict(preprocess(data_inf)))
-
-
-if __name__ == '__main__':
- run()
\ No newline at end of file
diff --git a/spaces/oliver2023/chatgpt-on-wechat/docker/build.debian.sh b/spaces/oliver2023/chatgpt-on-wechat/docker/build.debian.sh
deleted file mode 100644
index e7caa15dc1796f0f2e4d0d1e5709fc196794f803..0000000000000000000000000000000000000000
--- a/spaces/oliver2023/chatgpt-on-wechat/docker/build.debian.sh
+++ /dev/null
@@ -1,15 +0,0 @@
-#!/bin/bash
-
-# fetch latest release tag
-CHATGPT_ON_WECHAT_TAG=`curl -sL "https://api.github.com/repos/zhayujie/chatgpt-on-wechat/releases/latest" | \
- grep '"tag_name":' | \
- sed -E 's/.*"([^"]+)".*/\1/'`
-
-# build image
-docker build -f Dockerfile.debian \
- --build-arg CHATGPT_ON_WECHAT_VER=$CHATGPT_ON_WECHAT_TAG \
- -t zhayujie/chatgpt-on-wechat .
-
-# tag image
-docker tag zhayujie/chatgpt-on-wechat zhayujie/chatgpt-on-wechat:debian
-docker tag zhayujie/chatgpt-on-wechat zhayujie/chatgpt-on-wechat:$CHATGPT_ON_WECHAT_TAG-debian
\ No newline at end of file
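For reference, a rough Python equivalent of the tag lookup in the script above, using the same GitHub API endpoint (assumes network access; error handling omitted):

```python
import json
import urllib.request

URL = "https://api.github.com/repos/zhayujie/chatgpt-on-wechat/releases/latest"

# The "latest release" endpoint returns JSON with a "tag_name" field,
# which the shell script extracts with grep/sed.
with urllib.request.urlopen(URL) as resp:
    tag = json.load(resp)["tag_name"]
print(tag)
```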
diff --git a/spaces/ondrejbiza/isa/invariant_slot_attention/configs/clevrtex/resnet/baseline.py b/spaces/ondrejbiza/isa/invariant_slot_attention/configs/clevrtex/resnet/baseline.py
deleted file mode 100644
index fab9b2f23244de5ca43185481da8eabb4ee130e3..0000000000000000000000000000000000000000
--- a/spaces/ondrejbiza/isa/invariant_slot_attention/configs/clevrtex/resnet/baseline.py
+++ /dev/null
@@ -1,198 +0,0 @@
-# coding=utf-8
-# Copyright 2023 The Google Research Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-r"""Config for unsupervised training on CLEVRTex."""
-
-import ml_collections
-
-
-def get_config():
- """Get the default hyperparameter configuration."""
- config = ml_collections.ConfigDict()
-
- config.seed = 42
- config.seed_data = True
-
- config.batch_size = 64
- config.num_train_steps = 500000 # from the original Slot Attention
- config.init_checkpoint = ml_collections.ConfigDict()
- config.init_checkpoint.xid = 0 # Disabled by default.
- config.init_checkpoint.wid = 1
-
- config.optimizer_configs = ml_collections.ConfigDict()
- config.optimizer_configs.optimizer = "adam"
-
- config.optimizer_configs.grad_clip = ml_collections.ConfigDict()
- config.optimizer_configs.grad_clip.clip_method = "clip_by_global_norm"
- config.optimizer_configs.grad_clip.clip_value = 0.05
-
- config.lr_configs = ml_collections.ConfigDict()
- config.lr_configs.learning_rate_schedule = "compound"
- config.lr_configs.factors = "constant * cosine_decay * linear_warmup"
- config.lr_configs.warmup_steps = 10000 # from the original Slot Attention
- config.lr_configs.steps_per_cycle = config.get_ref("num_train_steps")
- # from the original Slot Attention
- config.lr_configs.base_learning_rate = 4e-4
-
- config.eval_pad_last_batch = False # True
- config.log_loss_every_steps = 50
- config.eval_every_steps = 5000
- config.checkpoint_every_steps = 5000
-
- config.train_metrics_spec = {
- "loss": "loss",
- "ari": "ari",
- "ari_nobg": "ari_nobg",
- }
- config.eval_metrics_spec = {
- "eval_loss": "loss",
- "eval_ari": "ari",
- "eval_ari_nobg": "ari_nobg",
- }
-
- config.data = ml_collections.ConfigDict({
- "dataset_name": "tfds",
- # The TFDS dataset will be created in the directory below
- # if you follow the README in datasets/clevrtex.
- "data_dir": "~/tensorflow_datasets",
- "tfds_name": "clevr_tex",
- "shuffle_buffer_size": config.batch_size * 8,
- "resolution": (128, 128)
- })
-
- config.max_instances = 11
- config.num_slots = config.max_instances # Only used for metrics.
- config.logging_min_n_colors = config.max_instances
-
- config.preproc_train = [
- "tfds_image_to_tfds_video",
- "video_from_tfds",
- "central_crop(height=192,width=192)",
- "resize_small({size})".format(size=min(*config.data.resolution))
- ]
-
- config.preproc_eval = [
- "tfds_image_to_tfds_video",
- "video_from_tfds",
- "central_crop(height=192,width=192)",
- "resize_small({size})".format(size=min(*config.data.resolution))
- ]
-
- config.eval_slice_size = 1
- config.eval_slice_keys = ["video", "segmentations_video"]
-
- # Dictionary of targets and corresponding channels. Losses need to match.
- targets = {"video": 3}
- config.losses = {"recon": {"targets": list(targets)}}
- config.losses = ml_collections.ConfigDict({
- f"recon_{target}": {"loss_type": "recon", "key": target}
- for target in targets})
-
- config.model = ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.SAVi",
-
- # Encoder.
- "encoder": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.FrameEncoder",
- "reduction": "spatial_flatten",
- "backbone": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.ResNet34",
- "num_classes": None,
- "axis_name": "time",
- "norm_type": "group",
- "small_inputs": True
- }),
- "pos_emb": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.PositionEmbedding",
- "embedding_type": "linear",
- "update_type": "project_add",
- "output_transform": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.MLP",
- "hidden_size": 128,
- "layernorm": "pre"
- }),
- }),
- }),
-
- # Corrector.
- "corrector": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.SlotAttention",
- "num_iterations": 3,
- "qkv_size": 64,
- "mlp_size": 128,
- }),
-
- # Predictor.
- # Removed since we are running a single frame.
- "predictor": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.Identity"
- }),
-
- # Initializer.
- "initializer": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.ParamStateInit",
- "shape": (11, 64), # (num_slots, slot_size)
- }),
-
- # Decoder.
- "decoder": ml_collections.ConfigDict({
- "module":
- "invariant_slot_attention.modules.SiameseSpatialBroadcastDecoder",
- "resolution": (16, 16), # Update if data resolution or strides change
- "backbone": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.CNN",
- "features": [64, 64, 64, 64, 64],
- "kernel_size": [(5, 5), (5, 5), (5, 5), (5, 5), (5, 5)],
- "strides": [(2, 2), (2, 2), (2, 2), (1, 1), (1, 1)],
- "max_pool_strides": [(1, 1), (1, 1), (1, 1), (1, 1), (1, 1)],
- "layer_transpose": [True, True, True, False, False]
- }),
- "target_readout": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.Readout",
- "keys": list(targets),
- "readout_modules": [ml_collections.ConfigDict({ # pylint: disable=g-complex-comprehension
- "module": "invariant_slot_attention.modules.MLP",
- "num_hidden_layers": 0,
- "hidden_size": 0,
- "output_size": targets[k]}) for k in targets],
- }),
- "pos_emb": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.PositionEmbedding",
- "embedding_type": "linear",
- "update_type": "project_add"
- }),
- }),
- "decode_corrected": True,
- "decode_predicted": False,
- })
-
- # Which video-shaped variables to visualize.
- config.debug_var_video_paths = {
- "recon_masks": "decoder/alphas_softmaxed/__call__/0", # pylint: disable=line-too-long
- }
-
- # Define which attention matrices to log/visualize.
- config.debug_var_attn_paths = {
- "corrector_attn": "corrector/InvertedDotProductAttention_0/GeneralizedDotProductAttention_0/attn" # pylint: disable=line-too-long
- }
-
- # Widths of attention matrices (for reshaping to image grid).
- config.debug_var_attn_widths = {
- "corrector_attn": 16,
- }
-
- return config
-
-
diff --git a/spaces/ondrejbiza/isa/invariant_slot_attention/configs/multishapenet_easy/equiv_transl.py b/spaces/ondrejbiza/isa/invariant_slot_attention/configs/multishapenet_easy/equiv_transl.py
deleted file mode 100644
index f5658b121974dc0c68cfb00b09b658eea066f17d..0000000000000000000000000000000000000000
--- a/spaces/ondrejbiza/isa/invariant_slot_attention/configs/multishapenet_easy/equiv_transl.py
+++ /dev/null
@@ -1,203 +0,0 @@
-# coding=utf-8
-# Copyright 2023 The Google Research Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-r"""Config for unsupervised training on MultiShapeNet-Easy."""
-
-import ml_collections
-
-
-def get_config():
- """Get the default hyperparameter configuration."""
- config = ml_collections.ConfigDict()
-
- config.seed = 42
- config.seed_data = True
-
- config.batch_size = 64
- config.num_train_steps = 500000 # from the original Slot Attention
- config.init_checkpoint = ml_collections.ConfigDict()
- config.init_checkpoint.xid = 0 # Disabled by default.
- config.init_checkpoint.wid = 1
-
- config.optimizer_configs = ml_collections.ConfigDict()
- config.optimizer_configs.optimizer = "adam"
-
- config.optimizer_configs.grad_clip = ml_collections.ConfigDict()
- config.optimizer_configs.grad_clip.clip_method = "clip_by_global_norm"
- config.optimizer_configs.grad_clip.clip_value = 0.05
-
- config.lr_configs = ml_collections.ConfigDict()
- config.lr_configs.learning_rate_schedule = "compound"
- config.lr_configs.factors = "constant * cosine_decay * linear_warmup"
- config.lr_configs.warmup_steps = 10000 # from the original Slot Attention
- config.lr_configs.steps_per_cycle = config.get_ref("num_train_steps")
- # from the original Slot Attention
- config.lr_configs.base_learning_rate = 4e-4
-
- config.eval_pad_last_batch = False # True
- config.log_loss_every_steps = 50
- config.eval_every_steps = 5000
- config.checkpoint_every_steps = 5000
-
- config.train_metrics_spec = {
- "loss": "loss",
- "ari": "ari",
- "ari_nobg": "ari_nobg",
- }
- config.eval_metrics_spec = {
- "eval_loss": "loss",
- "eval_ari": "ari",
- "eval_ari_nobg": "ari_nobg",
- }
-
- config.data = ml_collections.ConfigDict({
- "dataset_name": "multishapenet_easy",
- "shuffle_buffer_size": config.batch_size * 8,
- "resolution": (128, 128)
- })
-
- config.max_instances = 11
- config.num_slots = config.max_instances # Only used for metrics.
- config.logging_min_n_colors = config.max_instances
-
- config.preproc_train = [
- "sunds_to_tfds_video",
- "video_from_tfds",
- "subtract_one_from_segmentations",
- "central_crop(height=240, width=240)",
- "resize_small({size})".format(size=min(*config.data.resolution))
- ]
-
- config.preproc_eval = [
- "sunds_to_tfds_video",
- "video_from_tfds",
- "subtract_one_from_segmentations",
- "central_crop(height=240, width=240)",
- "resize_small({size})".format(size=min(*config.data.resolution))
- ]
-
- config.eval_slice_size = 1
- config.eval_slice_keys = ["video", "segmentations_video"]
-
- # Dictionary of targets and corresponding channels. Losses need to match.
- targets = {"video": 3}
- config.losses = {"recon": {"targets": list(targets)}}
- config.losses = ml_collections.ConfigDict({
- f"recon_{target}": {"loss_type": "recon", "key": target}
- for target in targets})
-
- config.model = ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.SAVi",
-
- # Encoder.
- "encoder": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.FrameEncoder",
- "reduction": "spatial_flatten",
- "backbone": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.SimpleCNN",
- "features": [64, 64, 64, 64],
- "kernel_size": [(5, 5), (5, 5), (5, 5), (5, 5)],
- "strides": [(2, 2), (2, 2), (2, 2), (1, 1)]
- }),
- "pos_emb": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.PositionEmbedding",
- "embedding_type": "linear",
- "update_type": "concat"
- }),
- }),
-
- # Corrector.
- "corrector": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.SlotAttentionTranslEquiv",
- "num_iterations": 3,
- "qkv_size": 64,
- "mlp_size": 128,
- "grid_encoder": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.MLP",
- "hidden_size": 128,
- "layernorm": "pre"
- }),
- "add_rel_pos_to_values": True, # V3
- "zero_position_init": False, # Random positions.
- }),
-
- # Predictor.
- # Removed since we are running a single frame.
- "predictor": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.Identity"
- }),
-
- # Initializer.
- "initializer": ml_collections.ConfigDict({
- "module":
- "invariant_slot_attention.modules.ParamStateInitRandomPositions",
- "shape":
- (11, 64), # (num_slots, slot_size)
- }),
-
- # Decoder.
- "decoder": ml_collections.ConfigDict({
- "module":
- "invariant_slot_attention.modules.SiameseSpatialBroadcastDecoder",
- "resolution": (16, 16), # Update if data resolution or strides change
- "backbone": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.CNN",
- "features": [64, 64, 64, 64, 64],
- "kernel_size": [(5, 5), (5, 5), (5, 5), (5, 5), (5, 5)],
- "strides": [(2, 2), (2, 2), (2, 2), (1, 1), (1, 1)],
- "max_pool_strides": [(1, 1), (1, 1), (1, 1), (1, 1), (1, 1)],
- "layer_transpose": [True, True, True, False, False]
- }),
- "target_readout": ml_collections.ConfigDict({
- "module": "invariant_slot_attention.modules.Readout",
- "keys": list(targets),
- "readout_modules": [ml_collections.ConfigDict({ # pylint: disable=g-complex-comprehension
- "module": "invariant_slot_attention.modules.MLP",
- "num_hidden_layers": 0,
- "hidden_size": 0,
- "output_size": targets[k]}) for k in targets],
- }),
- "relative_positions": True,
- "pos_emb": ml_collections.ConfigDict({
- "module":
- "invariant_slot_attention.modules.RelativePositionEmbedding",
- "embedding_type":
- "linear",
- "update_type":
- "project_add",
- }),
- }),
- "decode_corrected": True,
- "decode_predicted": False,
- })
-
- # Which video-shaped variables to visualize.
- config.debug_var_video_paths = {
- "recon_masks": "decoder/alphas_softmaxed/__call__/0", # pylint: disable=line-too-long
- }
-
- # Define which attention matrices to log/visualize.
- config.debug_var_attn_paths = {
- "corrector_attn": "corrector/InvertedDotProductAttentionKeyPerQuery_0/attn" # pylint: disable=line-too-long
- }
-
- # Widths of attention matrices (for reshaping to image grid).
- config.debug_var_attn_widths = {
- "corrector_attn": 16,
- }
-
- return config
-
-
diff --git a/spaces/onnx/sub_pixel_cnn_2016/app.py b/spaces/onnx/sub_pixel_cnn_2016/app.py
deleted file mode 100644
index fc04ec04eb940ea5b94d765ed4f22027014cca35..0000000000000000000000000000000000000000
--- a/spaces/onnx/sub_pixel_cnn_2016/app.py
+++ /dev/null
@@ -1,99 +0,0 @@
-import os
-
-import numpy as np
-import onnxruntime
-import torch
-import torch.nn as nn
-import torch.nn.init as init
-import torch.utils.model_zoo as model_zoo
-import gradio as gr
-from PIL import Image
-from resizeimage import resizeimage
-
-class SuperResolutionNet(nn.Module):
- def __init__(self, upscale_factor, inplace=False):
- super(SuperResolutionNet, self).__init__()
-
- self.relu = nn.ReLU(inplace=inplace)
- self.conv1 = nn.Conv2d(1, 64, (5, 5), (1, 1), (2, 2))
- self.conv2 = nn.Conv2d(64, 64, (3, 3), (1, 1), (1, 1))
- self.conv3 = nn.Conv2d(64, 32, (3, 3), (1, 1), (1, 1))
- self.conv4 = nn.Conv2d(32, upscale_factor ** 2, (3, 3), (1, 1), (1, 1))
- self.pixel_shuffle = nn.PixelShuffle(upscale_factor)
-
- self._initialize_weights()
-
- def forward(self, x):
- x = self.relu(self.conv1(x))
- x = self.relu(self.conv2(x))
- x = self.relu(self.conv3(x))
- x = self.pixel_shuffle(self.conv4(x))
- return x
-
- def _initialize_weights(self):
- init.orthogonal_(self.conv1.weight, init.calculate_gain('relu'))
- init.orthogonal_(self.conv2.weight, init.calculate_gain('relu'))
- init.orthogonal_(self.conv3.weight, init.calculate_gain('relu'))
- init.orthogonal_(self.conv4.weight)
-
-# Create the super-resolution model by using the above model definition.
-torch_model = SuperResolutionNet(upscale_factor=3)
-
-model_url = 'https://s3.amazonaws.com/pytorch/test_data/export/superres_epoch100-44c6958e.pth'
-batch_size = 1 # just a random number
-
-# Initialize model with the pretrained weights
-map_location = lambda storage, loc: storage
-if torch.cuda.is_available():
- map_location = None
-torch_model.load_state_dict(model_zoo.load_url(model_url, map_location=map_location))
-
-
-
-x = torch.randn(1, 1, 224, 224, requires_grad=True)
-torch_model.eval()
-
-
-
-os.system("wget https://github.com/AK391/models/raw/main/vision/super_resolution/sub_pixel_cnn_2016/model/super-resolution-10.onnx")
-
-# Start from ORT 1.10, ORT requires explicitly setting the providers parameter if you want to use execution providers
-# other than the default CPU provider (as opposed to the previous behavior of providers getting set/registered by default
-# based on the build flags) when instantiating InferenceSession.
-# For example, if NVIDIA GPU is available and ORT Python package is built with CUDA, then call API as following:
-# onnxruntime.InferenceSession(path/to/model, providers=['CUDAExecutionProvider'])
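-# A hedged sketch (not part of the original app): enumerate the providers this
-# build actually supports and pass them explicitly, e.g.
-#   ort_session = onnxruntime.InferenceSession(
-#       "super-resolution-10.onnx", providers=onnxruntime.get_available_providers())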
-ort_session = onnxruntime.InferenceSession("super-resolution-10.onnx")
-
-
-def inference(img):
- orig_img = Image.open(img)
- img = resizeimage.resize_cover(orig_img, [224,224], validate=False)
- img_ycbcr = img.convert('YCbCr')
- img_y_0, img_cb, img_cr = img_ycbcr.split()
- img_ndarray = np.asarray(img_y_0)
-
- img_4 = np.expand_dims(np.expand_dims(img_ndarray, axis=0), axis=0)
- img_5 = img_4.astype(np.float32) / 255.0
-
- ort_inputs = {ort_session.get_inputs()[0].name: img_5}
- ort_outs = ort_session.run(None, ort_inputs)
- img_out_y = ort_outs[0]
-
- img_out_y = Image.fromarray(np.uint8((img_out_y[0] * 255.0).clip(0, 255)[0]), mode='L')
- final_img = Image.merge(
- "YCbCr", [
- img_out_y,
- img_cb.resize(img_out_y.size, Image.BICUBIC),
- img_cr.resize(img_out_y.size, Image.BICUBIC),
- ]).convert("RGB")
- return final_img
-
-title="sub_pixel_cnn_2016"
-description="The Super Resolution machine learning model sharpens and upscales the input image to refine the details and improve quality."
-gr.Interface(inference,gr.inputs.Image(type="filepath"),gr.outputs.Image(type="pil"),title=title,description=description).launch()
\ No newline at end of file
diff --git a/spaces/osanseviero/hugging_eats/app.py b/spaces/osanseviero/hugging_eats/app.py
deleted file mode 100644
index 7efb362fc755be345cf9ca778657fe2f15bdbf63..0000000000000000000000000000000000000000
--- a/spaces/osanseviero/hugging_eats/app.py
+++ /dev/null
@@ -1,95 +0,0 @@
-import csv
-import os
-
-import googlemaps
-import gradio as gr
-from datasets import load_dataset
-from huggingface_hub import Repository
-
-from bokeh.plotting import gmap
-from bokeh.models import GMapOptions, ColumnDataSource, HoverTool
-from bokeh.embed import json_item
-
-
-
-MAPS_API = os.environ['MAPS_API']
-OS_API_KEY = os.environ['OS_API_KEY']
-HF_TOKEN = os.environ['HF_TOKEN']
-
-google_maps_client = googlemaps.Client(key=MAPS_API)
-
-DATASET_REPO_URL = "https://huggingface.co/datasets/osanseviero/hugging_eats"
-DATA_FILENAME = "data.csv"
-DATA_FILE = os.path.join("data", DATA_FILENAME)
-
-repo = Repository(
- local_dir="data", clone_from=DATASET_REPO_URL, use_auth_token=HF_TOKEN
-)
-
-def predict(place, hugging_secret):
- if hugging_secret != OS_API_KEY:
- return "INVALID SECRET - you cannot save places"
-
- geocode_result = google_maps_client.geocode(place)
-    # geocode returns an empty list (not None) when nothing matches
-    if not geocode_result:
-        return "PLACE NOT FOUND"
-
- print("Saving place")
- lat = geocode_result[0]["geometry"]["location"]["lat"]
- lng = geocode_result[0]["geometry"]["location"]["lng"]
-
- repo.git_pull(rebase=True)
- with open(DATA_FILE, "a") as csvfile:
- writer = csv.DictWriter(csvfile, fieldnames=["name", "lat", "lng"])
- writer.writerow(
- {"name": place, "lat": lat, "lng": lng}
- )
- print("Pushing place")
-
- repo.push_to_hub()
- return "PLACE SAVED!"
-
-iface_submit = gr.Interface(
- predict,
- inputs=[
- gr.inputs.Textbox(label="Address or place name"),
- gr.inputs.Textbox(label="Hugging Secret"),
- ],
- outputs="text"
-)
-
-
-def plot_map():
- dataset = load_dataset('osanseviero/hugging_eats')
- data = dataset["train"].to_pandas()
- data = data.drop_duplicates()
-
- gmap_options = GMapOptions(lat=data["lat"][0], lng=data["lng"][0],
- map_type="satellite", zoom=12)
- # the tools are defined below:
- p = gmap(MAPS_API, gmap_options, title='Pays de Gex',
- tools=['reset', 'wheel_zoom', 'pan', 'zoom_in'])
-
-
- data_source = ColumnDataSource(data)
-
- center = p.circle('lng', 'lat', size=10, alpha=0.5,
- color='yellow', source=data_source)
-
- TOOLTIPS = [
- ("name", "@name"),
- ]
- p.add_tools(HoverTool(tooltips = TOOLTIPS))
-
- return json_item(p)
-
-
-iface_display = gr.Interface(
- plot_map,
- inputs=None,
- outputs=gr.Plot(type="bokeh")
-)
-
-demo = gr.TabbedInterface([iface_display, iface_submit], ["Browse Places", "Submit Places (HF only)"]).launch()
\ No newline at end of file
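For context on `predict` in the hunk above: `googlemaps.Client.geocode` returns a list of result dicts, and the code indexes into the first one. A minimal illustration of the response shape it relies on, with hypothetical values:

```python
# Hypothetical shape of google_maps_client.geocode("Pays de Gex")
geocode_result = [
    {
        "formatted_address": "Pays de Gex, France",
        "geometry": {"location": {"lat": 46.3323, "lng": 6.0586}},
    }
]

lat = geocode_result[0]["geometry"]["location"]["lat"]  # 46.3323
lng = geocode_result[0]["geometry"]["location"]["lng"]  # 6.0586
```

Since an unmatched query yields an empty list rather than `None`, the truthiness check used in `predict` is the safer guard.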
diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/self_attention_guidance.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/self_attention_guidance.md
deleted file mode 100644
index 854505f182021bd0630d537e86494e7c1638d373..0000000000000000000000000000000000000000
--- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/pipelines/self_attention_guidance.md
+++ /dev/null
@@ -1,35 +0,0 @@
-
-
-# Self-Attention Guidance
-
-[Improving Sample Quality of Diffusion Models Using Self-Attention Guidance](https://huggingface.co/papers/2210.00939) is by Susung Hong et al.
-
-The abstract from the paper is:
-
-*Denoising diffusion models (DDMs) have attracted attention for their exceptional generation quality and diversity. This success is largely attributed to the use of class- or text-conditional diffusion guidance methods, such as classifier and classifier-free guidance. In this paper, we present a more comprehensive perspective that goes beyond the traditional guidance methods. From this generalized perspective, we introduce novel condition- and training-free strategies to enhance the quality of generated images. As a simple solution, blur guidance improves the suitability of intermediate samples for their fine-scale information and structures, enabling diffusion models to generate higher quality samples with a moderate guidance scale. Improving upon this, Self-Attention Guidance (SAG) uses the intermediate self-attention maps of diffusion models to enhance their stability and efficacy. Specifically, SAG adversarially blurs only the regions that diffusion models attend to at each iteration and guides them accordingly. Our experimental results show that our SAG improves the performance of various diffusion models, including ADM, IDDPM, Stable Diffusion, and DiT. Moreover, combining SAG with conventional guidance methods leads to further improvement.*
-
-You can find additional information about Self-Attention Guidance on the [project page](https://ku-cvlab.github.io/Self-Attention-Guidance), [original codebase](https://github.com/KU-CVLAB/Self-Attention-Guidance), and try it out in a [demo](https://huggingface.co/spaces/susunghong/Self-Attention-Guidance) or [notebook](https://colab.research.google.com/github/SusungHong/Self-Attention-Guidance/blob/main/SAG_Stable.ipynb).
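As a concrete illustration of the API this page documents, SAG is exposed in diffusers through `StableDiffusionSAGPipeline`; the model id and scale below are illustrative choices, not values from the deleted page:

```python
import torch
from diffusers import StableDiffusionSAGPipeline

pipe = StableDiffusionSAGPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# sag_scale > 0 enables Self-Attention Guidance; it composes with
# classifier-free guidance via guidance_scale.
image = pipe(
    "a photo of an astronaut riding a horse",
    sag_scale=0.75,
    guidance_scale=7.5,
).images[0]
image.save("astronaut_sag.png")
```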
-
-
-
-
-    text = '<td>More info at the <a href="https://github.com/pyparsing/pyparsing/wiki">pyparsing</a> wiki page</td>'
- # make_html_tags returns pyparsing expressions for the opening and
- # closing tags as a 2-tuple
- a, a_end = make_html_tags("A")
- link_expr = a + SkipTo(a_end)("link_text") + a_end
-
- for link in link_expr.search_string(text):
- # attributes in the tag (like "href" shown here) are
- # also accessible as named results
- print(link.link_text, '->', link.href)
-
- prints::
-
- pyparsing -> https://github.com/pyparsing/pyparsing/wiki
- """
- return _makeTags(tag_str, False)
-
-
-def make_xml_tags(
- tag_str: Union[str, ParserElement]
-) -> Tuple[ParserElement, ParserElement]:
- """Helper to construct opening and closing tag expressions for XML,
- given a tag name. Matches tags only in the given upper/lower case.
-
- Example: similar to :class:`make_html_tags`
- """
- return _makeTags(tag_str, True)
-
-
-any_open_tag: ParserElement
-any_close_tag: ParserElement
-any_open_tag, any_close_tag = make_html_tags(
- Word(alphas, alphanums + "_:").set_name("any tag")
-)
-
-_htmlEntityMap = {k.rstrip(";"): v for k, v in html.entities.html5.items()}
-common_html_entity = Regex("&(?P<entity>" + "|".join(_htmlEntityMap) + ");").set_name("common HTML entity")
-
-Du Meter Portable
-
-The best free DU Meter alternative is GlassWire. ... Other interesting free alternatives to DU Meter are NetSpeedMonitor (Free) ... Installer & portable.
-Other interesting free DU Meter alternatives are NetSpeedMonitor (Free) and BlueByte (Free).
-NetSpeedMonitor (Free) is a free utility for monitoring Internet connection.
-It allows you to monitor the download and upload speed of all ports connected to the PC, for each connection individually and as a whole. 8a78ff9644
-
-
-
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Excelrecoverytoolboxcrackserialkeygen __FULL__.md b/spaces/quidiaMuxgu/Expedit-SAM/Excelrecoverytoolboxcrackserialkeygen __FULL__.md
deleted file mode 100644
index ab12e71837e25be9382f97c5393ee38c4b35cbab..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/Excelrecoverytoolboxcrackserialkeygen __FULL__.md
+++ /dev/null
@@ -1,6 +0,0 @@
-excelrecoverytoolboxcrackserialkeygen
-
-MENTAL AND/OR SUBSTANCE USE DISORDERS: FAST FACTS Excel Recovery Toolbox Crack Serial Keygen Smartphone Eye Tracking Toolbox: Accurate ... 4d29de3e1b
-
-
-
diff --git a/spaces/quidiaMuxgu/Expedit-SAM/IVT BlueSoleil 12.0.492.1 Multilingual Fix Free Download [UPDATED].md b/spaces/quidiaMuxgu/Expedit-SAM/IVT BlueSoleil 12.0.492.1 Multilingual Fix Free Download [UPDATED].md
deleted file mode 100644
index 65314ab7bb65bdce79ab2752ec053bb2d5585d1b..0000000000000000000000000000000000000000
--- a/spaces/quidiaMuxgu/Expedit-SAM/IVT BlueSoleil 12.0.492.1 Multilingual Fix Free Download [UPDATED].md
+++ /dev/null
@@ -1,6 +0,0 @@
-IVT BlueSoleil 12.0.492.1 Multilingual Fix Free Download
-
-BlueSoleil is offered as a free download with limitations. Faster PC? ... BlueSoleil support is available ONLY from its developer IVT Corporation. Popular in ... 4d29de3e1b
-
-
-
diff --git a/spaces/radames/MusicGen-Continuation/tests/quantization/test_vq.py b/spaces/radames/MusicGen-Continuation/tests/quantization/test_vq.py
deleted file mode 100644
index c215099fedacae35c6798fdd9b8420a447aa16bb..0000000000000000000000000000000000000000
--- a/spaces/radames/MusicGen-Continuation/tests/quantization/test_vq.py
+++ /dev/null
@@ -1,18 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-from audiocraft.quantization.vq import ResidualVectorQuantizer
-
-
-class TestResidualVectorQuantizer:
-
- def test_rvq(self):
- x = torch.randn(1, 16, 2048)
- vq = ResidualVectorQuantizer(n_q=8, dimension=16, bins=8)
- res = vq(x, 1.)
- assert res.x.shape == torch.Size([1, 16, 2048])
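A standalone sketch of what the test above exercises, assuming audiocraft is installed (the second positional argument to the quantizer is the frame rate used for bandwidth accounting):

```python
import torch
from audiocraft.quantization.vq import ResidualVectorQuantizer

# 8 residual codebooks of 8 entries each over 16-dimensional frames
vq = ResidualVectorQuantizer(n_q=8, dimension=16, bins=8)
res = vq(torch.randn(1, 16, 2048), 1.)
print(res.x.shape)  # torch.Size([1, 16, 2048]) -- the quantized reconstruction
```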
diff --git a/spaces/radames/candle-segment-anything-wasm/style.css b/spaces/radames/candle-segment-anything-wasm/style.css
deleted file mode 100644
index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000
--- a/spaces/radames/candle-segment-anything-wasm/style.css
+++ /dev/null
@@ -1,28 +0,0 @@
-body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
- font-size: 16px;
- margin-top: 0;
-}
-
-p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
-}
-
-.card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
-}
-
-.card p:last-child {
- margin-bottom: 0;
-}
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Command and conquer red alert 3 registration code Where to find it if you bought the game legally.md b/spaces/raedeXanto/academic-chatgpt-beta/Command and conquer red alert 3 registration code Where to find it if you bought the game legally.md
deleted file mode 100644
index 30e5e0706523f7fd4bb0009bb06b9017e3fe83f8..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/Command and conquer red alert 3 registration code Where to find it if you bought the game legally.md
+++ /dev/null
@@ -1,119 +0,0 @@
-
-Command and Conquer Red Alert 3 Registration Code Crack: How to Play the Game for Free
-command and conquer red alert 3 registration code crack
-What is Command and Conquer Red Alert 3?
-Why do you need a registration code?
-
-command and conquer 3 activation code free
-red alert 3 crack no cd download
-command and conquer red alert 3 keygen
-red alert 3 registration bypass
-command and conquer 3 license code
-red alert 3 crack only
-command and conquer red alert 3 serial number
-red alert 3 activation fix
-command and conquer 3 product key
-red alert 3 crack file
-command and conquer red alert 3 reloaded
-red alert 3 registration code not working
-command and conquer 3 registration key
-red alert 3 crack skidrow
-command and conquer red alert 3 patch
-red alert 3 activation code generator
-command and conquer 3 cd key crack
-red alert 3 crack razor1911
-command and conquer red alert 3 mods
-red alert 3 registration code generator online
-command and conquer 3 serial number generator
-red alert 3 crack download free
-command and conquer red alert 3 cheats
-red alert 3 registration code bypasser.exe download
-command and conquer 3 activation code generator online
-red alert 3 crack offline mode
-command and conquer red alert 3 trainer
-red alert 3 registration code invalid fix
-command and conquer 3 cd key generator online
-red alert 3 crack multiplayer lan
-command and conquer red alert 3 uprising keygen
-red alert 3 registration code already in use fix
-command and conquer 3 product key generator online
-red alert 3 crack windows 10
-command and conquer red alert 3 steam key
-red alert 3 activation code not working fix
-command and conquer 3 cd key changer
-red alert 3 crack no survey no password download
-command and conquer red alert 3 origin key
-red alert 3 registration code recovery tool download
-command and conquer 3 serial number recovery tool download
-red alert 3 crack direct download link
-command and conquer red alert 3 gameplay video download link
-red alert 3 registration code generator offline mode
-command and conquer 3 activation code generator offline mode
-red alert 3 crack for mac os x download
-command and conquer red alert 3 mac os x keygen
-red alert 3 registration code for mac os x
-command and conquer red alert 3 mac os x patch
-What are the risks of using a crack?
-
-
-How to get a registration code crack for Command and Conquer Red Alert 3
-Method 1: Use a serial number generator
-Step 1: Download a serial number generator
-Step 2: Run the generator and copy a serial number
-Step 3: Enter the serial number when prompted by the game
-Method 2: Use a pre-cracked version of the game
-Step 1: Download a pre-cracked version of the game from a reputable source
-Step 2: Install the game and follow the instructions
-Step 3: Enjoy the game without needing a registration code
-Conclusion
-FAQs
-
-
- 0a6ba089eb
-
-
\ No newline at end of file
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/DFX Audio Enhancer 10.113.0.0 MASTER PACK [32 64] CORE Setup Free.md b/spaces/raedeXanto/academic-chatgpt-beta/DFX Audio Enhancer 10.113.0.0 MASTER PACK [32 64] CORE Setup Free.md
deleted file mode 100644
index 7ae0eca5134ae7b3252445f9f2a24e7567309abe..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/DFX Audio Enhancer 10.113.0.0 MASTER PACK [32 64] CORE Setup Free.md
+++ /dev/null
@@ -1,97 +0,0 @@
-
-
-
- How to troubleshoot and optimize DFX Audio Enhancer
- DFX Audio Enhancer 10.113.0.0 MASTER PACK [32 64] CORE setup free
- The common issues and solutions of DFX Audio Enhancer
-
-
- The best practices and tips for using DFX Audio Enhancer
-
-
- Conclusion
- Frequently Asked Questions
-
-DFX Audio Enhancer free trial[^1^]
-DFX Audio Enhancer features and benefits[^2^]
-DFX Audio Enhancer review and rating[^2^]
-DFX Audio Enhancer 12.023 latest version[^1^]
-DFX Audio Enhancer 3D surround sound[^2^]
-DFX Audio Enhancer booming hyperbass[^2^]
-DFX Audio Enhancer customizable audio presets[^2^]
-DFX Audio Enhancer dynamic audio boost[^2^]
-DFX Audio Enhancer headphones optimization[^2^]
-DFX Audio Enhancer high fidelity restoration[^2^]
-DFX Audio Enhancer multiple processing modes[^2^]
-DFX Audio Enhancer preset to song association[^2^]
-DFX Audio Enhancer spectrum analyzer[^2^]
-DFX Audio Enhancer stereo ambiance[^2^]
-DFX Audio Enhancer compatibility and license[^2^]
-DFX Audio Enhancer Amazon integration[^2^]
-DFX Audio Enhancer DailyMotion integration[^2^]
-DFX Audio Enhancer Facebook integration[^2^]
-DFX Audio Enhancer Flickr integration[^2^]
-DFX Audio Enhancer Google Play integration[^2^]
-DFX Audio Enhancer Pandora integration[^2^]
-DFX Audio Enhancer Spotify integration[^2^]
-DFX Audio Enhancer Vimeo integration[^2^]
-DFX Audio Enhancer YouTube integration[^2^]
-DFX Audio Enhancer iTunes integration[^2^]
-DFX Audio Enhancer audio enhancer plugin app for Windows[^1^] [^2^]
-DFX Audio Enhancer sound quality of music, movies and other audio content played on a PC[^1^] [^2^]
-DFX Audio Enhancer free, but ad-supported audio enhancement software download[^1^] [^2^]
-DFX Audio Enhancer enhances your soundcard output with 3D surround support[^1^] [^2^]
-DFX Audio Enhancer marginally improves sound quality coming from your PC[^1^] [^2^]
-DFX Audio Enhancer supports all sound produced by your PC[^1^] [^2^]
-DFX Audio Enhancer similar to sound enhancements provided by your sound card manufacturer[^1^] [^2^]
-DFX Audio Enhancer emulates 3D surround sound, Bass Boost, HD restoration and sound-type optimizations[^1^] [^2^]
-DFX Audio Enhancer included with this package is a spectrum analyzer which is a graphical display of the sound currently being played[^1^]
-
- 0a6ba089eb
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Download Lagu India Mann Mp3 PATCHED.md b/spaces/raedeXanto/academic-chatgpt-beta/Download Lagu India Mann Mp3 PATCHED.md
deleted file mode 100644
index 97ab1387bf28009f0ad39e43e768edb2c4872d9f..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/Download Lagu India Mann Mp3 PATCHED.md
+++ /dev/null
@@ -1,20 +0,0 @@
-
-Download Lagu India Mann Mp3
-Download Lagu India Mann Mp3
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/raedeXanto/academic-chatgpt-beta/HD Online Player (Dammu Telugu Movie Download Dvdrip 2) The Fastest and Easiest Way to Watch the Awesome Film.md b/spaces/raedeXanto/academic-chatgpt-beta/HD Online Player (Dammu Telugu Movie Download Dvdrip 2) The Fastest and Easiest Way to Watch the Awesome Film.md
deleted file mode 100644
index 2e1934c48d188d5620c923f2f07bbf49bf6ac452..0000000000000000000000000000000000000000
--- a/spaces/raedeXanto/academic-chatgpt-beta/HD Online Player (Dammu Telugu Movie Download Dvdrip 2) The Fastest and Easiest Way to Watch the Awesome Film.md
+++ /dev/null
@@ -1,127 +0,0 @@
-
-HD Online Player (Dammu Telugu Movie Download Dvdrip 2)
- HD Online Player (Dammu Telugu Movie Download Dvdrip 2)
- What is Dammu Telugu movie?
- Why should you watch Dammu Telugu movie?
-
-Dammu Telugu Movie Download in High Quality Dvdrip Format
-How to Stream Dammu Telugu Movie Online with HD Player
-Dammu Telugu Movie Dvdrip Download Link
-HD Online Player for Dammu Telugu Movie Streaming
-Dammu Telugu Full Movie Watch Online HD Quality
-Download Dammu Telugu Movie Dvdrip with Subtitles
-Best HD Online Player for Telugu Movies like Dammu
-Dammu Telugu Movie Online HD Streaming Sites
-Dammu Telugu Movie Dvdrip Torrent Download
-Watch Dammu Telugu Movie Online in HD without Buffering
-Dammu Telugu Movie Download Dvdrip 2 Parts
-HD Online Player for Dammu Telugu Movie on Mobile
-Dammu Telugu Full Movie Online HD 1080p
-Download Dammu Telugu Movie Dvdrip from Google Drive
-HD Online Player for Dammu Telugu Movie with English Subtitles
-Dammu Telugu Movie Watch Online HD Free Download
-Dammu Telugu Movie Download Dvdrip 2 GB Size
-HD Online Player for Dammu Telugu Movie on Laptop
-Dammu Telugu Full Movie Online HD 720p
-Download Dammu Telugu Movie Dvdrip from Mega.nz
-HD Online Player for Dammu Telugu Movie with Hindi Dubbing
-Dammu Telugu Movie Online HD Streaming without Ads
-Dammu Telugu Movie Download Dvdrip 2 CD Format
-HD Online Player for Dammu Telugu Movie on Smart TV
-Dammu Telugu Full Movie Online HD with Dolby Sound
-Download Dammu Telugu Movie Dvdrip from Mediafire
-HD Online Player for Dammu Telugu Movie with Tamil Dubbing
-Dammu Telugu Movie Online HD Streaming with Fast Speed
-Dammu Telugu Movie Download Dvdrip 2 in 1 File
-HD Online Player for Dammu Telugu Movie on Firestick
-Dammu Telugu Full Movie Online HD with Surround Sound
-Download Dammu Telugu Movie Dvdrip from Rapidgator
-HD Online Player for Dammu Telugu Movie with Malayalam Dubbing
-Dammu Telugu Movie Online HD Streaming with Subtitles Option
-Dammu Telugu Movie Download Dvdrip 2 Hours Duration
-HD Online Player for Dammu Telugu Movie on Chromecast
-Dammu Telugu Full Movie Online HD with Commentary Track
-Download Dammu Telugu Movie Dvdrip from Uploaded.net
-HD Online Player for Dammu Telugu Movie with Kannada Dubbing
-Dammu Telugu Movie Online HD Streaming with Resume Feature
-Dammu Telugu Movie Download Dvdrip 2 Different Resolutions
-HD Online Player for Dammu Telugu Movie on Roku
-Dammu Telugu Full Movie Online HD with Behind the Scenes Footage
-Download Dammu Telugu Movie Dvdrip from Zippyshare
-HD Online Player for Dammu Telugu Movie with Bengali Dubbing
-Dammu Telugu Movie Online HD Streaming with Rating Feature
-Dammu Telugu Movie Download DVDrip 2 Disc Set
-
- How to download Dammu Telugu movie in HD quality?
-
-
- What is an online player?
- What is dvdrip format?
- What are the benefits of downloading movies in dvdrip format?
-
-
- What are the drawbacks of downloading movies in dvdrip format?
-
-
- What are some tips to download movies safely and legally?
-
-
- Where can you watch Dammu Telugu movie online?
- ZEE5
- YouTube
- Other options
- Conclusion
- FAQs
-
-
- 0a6ba089eb
-The director of Dammu Telugu movie is Boyapati Srinu, who is known for his action films such as Simha, Legend, Sarrainodu, Jaya Janaki Nayaka, Vinaya Vidheya Rama and Ala Vaikunthapurramuloo.
-The music director of Dammu Telugu movie is M.M. Keeravani, who is one of the most acclaimed composers in Telugu cinema. He has composed music for films such as Baahubali, Eega, Magadheera, Sye, Vikramarkudu, Chatrapathi, etc.
-Some of the hit songs from Dammu Telugu movie are Ruler, Sound Of Vel, Vaasthu Bagunde, Raja Vasi Reddy, O Lilly, etc.
-No, Dammu Telugu movie is not available on Netflix as of now. However, you can watch it on ZEE5, YouTube or other platforms mentioned above.
-Yes, Dammu Telugu movie is dubbed in Hindi as Dhammu, which was released in 2016 by Goldmines Telefilms.
-
-
\ No newline at end of file
diff --git a/spaces/ramiin2/AutoGPT/autogpt/json_utils/json_fix_llm.py b/spaces/ramiin2/AutoGPT/autogpt/json_utils/json_fix_llm.py
deleted file mode 100644
index 869aed125cfb8cd7a69ed02eeb389cc72a3e296b..0000000000000000000000000000000000000000
--- a/spaces/ramiin2/AutoGPT/autogpt/json_utils/json_fix_llm.py
+++ /dev/null
@@ -1,220 +0,0 @@
-"""This module contains functions to fix JSON strings generated by LLM models, such as ChatGPT, using the assistance
-of the ChatGPT API or LLM models."""
-from __future__ import annotations
-
-import contextlib
-import json
-from typing import Any, Dict
-
-from colorama import Fore
-from regex import regex
-
-from autogpt.config import Config
-from autogpt.json_utils.json_fix_general import correct_json
-from autogpt.llm_utils import call_ai_function
-from autogpt.logs import logger
-from autogpt.speech import say_text
-
-JSON_SCHEMA = """
-{
- "command": {
- "name": "command name",
- "args": {
- "arg name": "value"
- }
- },
- "thoughts":
- {
- "text": "thought",
- "reasoning": "reasoning",
- "plan": "- short bulleted\n- list that conveys\n- long-term plan",
- "criticism": "constructive self-criticism",
- "speak": "thoughts summary to say to user"
- }
-}
-"""
-
-CFG = Config()
-
-
-def auto_fix_json(json_string: str, schema: str) -> str:
- """Fix the given JSON string to make it parseable and fully compliant with
- the provided schema using GPT-3.
-
- Args:
- json_string (str): The JSON string to fix.
- schema (str): The schema to use to fix the JSON.
- Returns:
- str: The fixed JSON string.
- """
- # Try to fix the JSON using GPT:
- function_string = "def fix_json(json_string: str, schema:str=None) -> str:"
- args = [f"'''{json_string}'''", f"'''{schema}'''"]
- description_string = (
- "This function takes a JSON string and ensures that it"
- " is parseable and fully compliant with the provided schema. If an object"
- " or field specified in the schema isn't contained within the correct JSON,"
- " it is omitted. The function also escapes any double quotes within JSON"
- " string values to ensure that they are valid. If the JSON string contains"
- " any None or NaN values, they are replaced with null before being parsed."
- )
-
- # If it doesn't already start with a "`", add one:
- if not json_string.startswith("`"):
- json_string = "```json\n" + json_string + "\n```"
- result_string = call_ai_function(
- function_string, args, description_string, model=CFG.fast_llm_model
- )
- logger.debug("------------ JSON FIX ATTEMPT ---------------")
- logger.debug(f"Original JSON: {json_string}")
- logger.debug("-----------")
- logger.debug(f"Fixed JSON: {result_string}")
- logger.debug("----------- END OF FIX ATTEMPT ----------------")
-
- try:
- json.loads(result_string) # just check the validity
- return result_string
-    except json.JSONDecodeError:
- # Get the call stack:
- # import traceback
- # call_stack = traceback.format_exc()
- # print(f"Failed to fix JSON: '{json_string}' "+call_stack)
- return "failed"
-
-
-def fix_json_using_multiple_techniques(assistant_reply: str) -> Dict[Any, Any]:
- """Fix the given JSON string to make it parseable and fully compliant with two techniques.
-
- Args:
- json_string (str): The JSON string to fix.
-
- Returns:
- str: The fixed JSON string.
- """
-
- # Parse and print Assistant response
- assistant_reply_json = fix_and_parse_json(assistant_reply)
- if assistant_reply_json == {}:
- assistant_reply_json = attempt_to_fix_json_by_finding_outermost_brackets(
- assistant_reply
- )
-
- if assistant_reply_json != {}:
- return assistant_reply_json
-
- logger.error(
- "Error: The following AI output couldn't be converted to a JSON:\n",
- assistant_reply,
- )
- if CFG.speak_mode:
- say_text("I have received an invalid JSON response from the OpenAI API.")
-
- return {}
-
-
-def fix_and_parse_json(
- json_to_load: str, try_to_fix_with_gpt: bool = True
-) -> Dict[Any, Any]:
- """Fix and parse JSON string
-
- Args:
- json_to_load (str): The JSON string.
- try_to_fix_with_gpt (bool, optional): Try to fix the JSON with GPT.
- Defaults to True.
-
- Returns:
- str or dict[Any, Any]: The parsed JSON.
- """
-
- with contextlib.suppress(json.JSONDecodeError):
- json_to_load = json_to_load.replace("\t", "")
- return json.loads(json_to_load)
-
- with contextlib.suppress(json.JSONDecodeError):
- json_to_load = correct_json(json_to_load)
- return json.loads(json_to_load)
- # Let's do something manually:
- # sometimes GPT responds with something BEFORE the braces:
- # "I'm sorry, I don't understand. Please try again."
- # {"text": "I'm sorry, I don't understand. Please try again.",
- # "confidence": 0.0}
- # So let's try to find the first brace and then parse the rest
- # of the string
- try:
- brace_index = json_to_load.index("{")
- maybe_fixed_json = json_to_load[brace_index:]
- last_brace_index = maybe_fixed_json.rindex("}")
- maybe_fixed_json = maybe_fixed_json[: last_brace_index + 1]
- return json.loads(maybe_fixed_json)
- except (json.JSONDecodeError, ValueError) as e:
- return try_ai_fix(try_to_fix_with_gpt, e, json_to_load)
-
-
-def try_ai_fix(
- try_to_fix_with_gpt: bool, exception: Exception, json_to_load: str
-) -> Dict[Any, Any]:
- """Try to fix the JSON with the AI
-
- Args:
- try_to_fix_with_gpt (bool): Whether to try to fix the JSON with the AI.
- exception (Exception): The exception that was raised.
- json_to_load (str): The JSON string to load.
-
- Raises:
- exception: If try_to_fix_with_gpt is False.
-
- Returns:
- str or dict[Any, Any]: The JSON string or dictionary.
- """
- if not try_to_fix_with_gpt:
- raise exception
- if CFG.debug_mode:
- logger.warn(
- "Warning: Failed to parse AI output, attempting to fix."
- "\n If you see this warning frequently, it's likely that"
- " your prompt is confusing the AI. Try changing it up"
- " slightly."
- )
- # Now try to fix this up using the ai_functions
- ai_fixed_json = auto_fix_json(json_to_load, JSON_SCHEMA)
-
- if ai_fixed_json != "failed":
- return json.loads(ai_fixed_json)
- # This allows the AI to react to the error message,
- # which usually results in it correcting its ways.
- # logger.error("Failed to fix AI output, telling the AI.")
- return {}
-
-
-def attempt_to_fix_json_by_finding_outermost_brackets(json_string: str):
- if CFG.speak_mode and CFG.debug_mode:
- say_text(
- "I have received an invalid JSON response from the OpenAI API. "
- "Trying to fix it now."
- )
- logger.error("Attempting to fix JSON by finding outermost brackets\n")
-
- try:
- json_pattern = regex.compile(r"\{(?:[^{}]|(?R))*\}")
- json_match = json_pattern.search(json_string)
-
- if json_match:
- # Extract the valid JSON object from the string
- json_string = json_match.group(0)
- logger.typewriter_log(
- title="Apparently json was fixed.", title_color=Fore.GREEN
- )
- if CFG.speak_mode and CFG.debug_mode:
- say_text("Apparently json was fixed.")
- else:
- return {}
-
- except (json.JSONDecodeError, ValueError):
- if CFG.debug_mode:
- logger.error(f"Error: Invalid JSON: {json_string}\n")
- if CFG.speak_mode:
- say_text("Didn't work. I will have to ignore this response then.")
-        logger.error("Error: Invalid JSON, returning an empty dict instead.\n")
-        # Return directly rather than recursing with a dict:
-        # fix_and_parse_json expects a string.
-        return {}
-
- return fix_and_parse_json(json_string)
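The outermost-brackets fallback above relies on the third-party `regex` module's recursion syntax `(?R)`, which the standard library `re` does not support. An isolated sketch of that extraction step:

```python
from regex import regex  # same import style as the module above

noisy = 'Sure! Here is the JSON: {"command": {"name": "noop", "args": {}}} Hope it helps.'
match = regex.compile(r"\{(?:[^{}]|(?R))*\}").search(noisy)
if match:
    # the recursive alternation matches balanced braces, so nested
    # objects are captured as one outermost span
    print(match.group(0))  # {"command": {"name": "noop", "args": {}}}
```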
diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/content-disposition/README.md b/spaces/rayan-saleh/whisper2notion/server/node_modules/content-disposition/README.md
deleted file mode 100644
index 3a0bb055949cdaed008f0f85e111624214213873..0000000000000000000000000000000000000000
--- a/spaces/rayan-saleh/whisper2notion/server/node_modules/content-disposition/README.md
+++ /dev/null
@@ -1,142 +0,0 @@
-# content-disposition
-
-[![NPM Version][npm-image]][npm-url]
-[![NPM Downloads][downloads-image]][downloads-url]
-[![Node.js Version][node-version-image]][node-version-url]
-[![Build Status][github-actions-ci-image]][github-actions-ci-url]
-[![Test Coverage][coveralls-image]][coveralls-url]
-
-Create and parse HTTP `Content-Disposition` header
-
-## Installation
-
-```sh
-$ npm install content-disposition
-```
-
-## API
-
-```js
-var contentDisposition = require('content-disposition')
-```
-
-### contentDisposition(filename, options)
-
-Create an attachment `Content-Disposition` header value using the given file name,
-if supplied. The `filename` is optional and if no file name is desired, but you
-want to specify `options`, set `filename` to `undefined`.
-
-```js
-res.setHeader('Content-Disposition', contentDisposition('∫ maths.pdf'))
-```
-
-**note** HTTP headers are of the ISO-8859-1 character set. If you are writing this
-header through a means different from `setHeader` in Node.js, you'll want to specify
-the `'binary'` encoding in Node.js.
-
-#### Options
-
-`contentDisposition` accepts these properties in the options object.
-
-##### fallback
-
-If the `filename` option is outside ISO-8859-1, then the file name is actually
-stored in a supplemental field for clients that support Unicode file names and
-an ISO-8859-1 version of the file name is automatically generated.
-
-This specifies the ISO-8859-1 file name to override the automatic generation or
-disables the generation all together, defaults to `true`.
-
- - A string will specify the ISO-8859-1 file name to use in place of automatic
- generation.
-  - `false` will disable including an ISO-8859-1 file name and only include the
- Unicode version (unless the file name is already ISO-8859-1).
- - `true` will enable automatic generation if the file name is outside ISO-8859-1.
-
-If the `filename` option is ISO-8859-1 and this option is specified and has a
-different value, then the `filename` option is encoded in the extended field
-and this is set as the fallback field, even though they are both ISO-8859-1.
-
-##### type
-
-Specifies the disposition type, defaults to `"attachment"`. This can also be
-`"inline"`, or any other value (all values except inline are treated like
-`attachment`, but can convey additional information if both parties agree to
-it). The type is normalized to lower-case.
-
-### contentDisposition.parse(string)
-
-```js
-var disposition = contentDisposition.parse('attachment; filename="EURO rates.txt"; filename*=UTF-8\'\'%e2%82%ac%20rates.txt')
-```
-
-Parse a `Content-Disposition` header string. This automatically handles extended
-("Unicode") parameters by decoding them and providing them under the standard
-parameter name. This will return an object with the following properties (examples
-are shown for the string `'attachment; filename="EURO rates.txt"; filename*=UTF-8\'\'%e2%82%ac%20rates.txt'`):
-
- - `type`: The disposition type (always lower case). Example: `'attachment'`
-
- - `parameters`: An object of the parameters in the disposition (name of parameter
- always lower case and extended versions replace non-extended versions). Example:
- `{filename: "€ rates.txt"}`
-
-## Examples
-
-### Send a file for download
-
-```js
-var contentDisposition = require('content-disposition')
-var destroy = require('destroy')
-var fs = require('fs')
-var http = require('http')
-var onFinished = require('on-finished')
-
-var filePath = '/path/to/public/plans.pdf'
-
-http.createServer(function onRequest (req, res) {
- // set headers
- res.setHeader('Content-Type', 'application/pdf')
- res.setHeader('Content-Disposition', contentDisposition(filePath))
-
- // send file
- var stream = fs.createReadStream(filePath)
- stream.pipe(res)
- onFinished(res, function () {
- destroy(stream)
- })
-})
-```
-
-## Testing
-
-```sh
-$ npm test
-```
-
-## References
-
-- [RFC 2616: Hypertext Transfer Protocol -- HTTP/1.1][rfc-2616]
-- [RFC 5987: Character Set and Language Encoding for Hypertext Transfer Protocol (HTTP) Header Field Parameters][rfc-5987]
-- [RFC 6266: Use of the Content-Disposition Header Field in the Hypertext Transfer Protocol (HTTP)][rfc-6266]
-- [Test Cases for HTTP Content-Disposition header field (RFC 6266) and the Encodings defined in RFCs 2047, 2231 and 5987][tc-2231]
-
-[rfc-2616]: https://tools.ietf.org/html/rfc2616
-[rfc-5987]: https://tools.ietf.org/html/rfc5987
-[rfc-6266]: https://tools.ietf.org/html/rfc6266
-[tc-2231]: http://greenbytes.de/tech/tc2231/
-
-## License
-
-[MIT](LICENSE)
-
-[npm-image]: https://img.shields.io/npm/v/content-disposition.svg
-[npm-url]: https://npmjs.org/package/content-disposition
-[node-version-image]: https://img.shields.io/node/v/content-disposition.svg
-[node-version-url]: https://nodejs.org/en/download
-[coveralls-image]: https://img.shields.io/coveralls/jshttp/content-disposition.svg
-[coveralls-url]: https://coveralls.io/r/jshttp/content-disposition?branch=master
-[downloads-image]: https://img.shields.io/npm/dm/content-disposition.svg
-[downloads-url]: https://npmjs.org/package/content-disposition
-[github-actions-ci-image]: https://img.shields.io/github/workflow/status/jshttp/content-disposition/ci/master?label=ci
-[github-actions-ci-url]: https://github.com/jshttp/content-disposition/actions?query=workflow%3Aci
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Callofdutymodernwarfare3errorunabletocreatesteamappidtxt.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Callofdutymodernwarfare3errorunabletocreatesteamappidtxt.md
deleted file mode 100644
index 8f7b56f4bc66cd74662437dfc3f3e5b0a1f2cdc6..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Callofdutymodernwarfare3errorunabletocreatesteamappidtxt.md
+++ /dev/null
@@ -1,16 +0,0 @@
-
-How to Fix Call of Duty Modern Warfare 3 Error Unable to Create Steam_appid.txt
-callofdutymodernwarfare3errorunabletocreatesteamappidtxt
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Bullett Raja Movies 1080p Torrent.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Bullett Raja Movies 1080p Torrent.md
deleted file mode 100644
index ff5bc45e2f9b0285b69941e7262a757dd5f0efc3..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Download Bullett Raja Movies 1080p Torrent.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-Download Bullett Raja Movies 1080p Torrent
-
-He is a good actor and can act, no doubt. But in this movie he doesn't show any empathy or sentiment for his character. At the end of the day he is a villain.
-There is no suspense or anxiety during the first half of the movie. It is the second half that is interesting. Seeing Saif Ali Khan and Jimmy Shergil in action is interesting enough.
-Overall, this is a plain and simple movie.
-I doubt if people will go out to see this movie on its first day. Most actors who come out with their own movies tend to put their best foot forward. If they fail this time, it will be a big failure and won't do them any good.
-It is disappointing that Nana Patekar is back after so many years with one more disappointing movie. He has been disappointing since his mother's death.
-Disappointed with Bullet Raja (2013). This is not one of the better movies. He is a great actor but he failed this time.
-This time it is the other actor, Jimmy Shergil, who leads the movie. He is an actor who makes a movie better than its material.
-
-
\ No newline at end of file
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Frontschweine Vollversion Kostenlos Chip.epub.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Frontschweine Vollversion Kostenlos Chip.epub.md
deleted file mode 100644
index b90f8c88d2161e40b467298fc580482fc5a15a7b..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Frontschweine Vollversion Kostenlos Chip.epub.md
+++ /dev/null
@@ -1,18 +0,0 @@
-
-Frontschweine: A Strategy-Game Classic, Now Available as an E-Book
-Frontschweine Vollversion Kostenlos Chip.epub
-
-
-
\ No newline at end of file
diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (Yeh Jawaani Hai Deewani Full Movie Download In Hd Mp4golkes) Fixed.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (Yeh Jawaani Hai Deewani Full Movie Download In Hd Mp4golkes) Fixed.md
deleted file mode 100644
index c86b8f474ed7793e1baca8913afeb4c9d300b816..0000000000000000000000000000000000000000
--- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/HD Online Player (Yeh Jawaani Hai Deewani Full Movie Download In Hd Mp4golkes) Fixed.md
+++ /dev/null
@@ -1,25 +0,0 @@
-
-How to Watch Yeh Jawaani Hai Deewani Full Movie Online in HD Quality
-HD Online Player (Yeh Jawaani Hai Deewani full movie download in hd mp4golkes)
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/richardzhangy26/yandian_flow_classification/configs/liteflownet/liteflownet_pre_M5S5R5_8x1_flyingchairs_320x448.py b/spaces/richardzhangy26/yandian_flow_classification/configs/liteflownet/liteflownet_pre_M5S5R5_8x1_flyingchairs_320x448.py
deleted file mode 100644
index 6c4b0cea3e5a5c0c532490772450a590a2924273..0000000000000000000000000000000000000000
--- a/spaces/richardzhangy26/yandian_flow_classification/configs/liteflownet/liteflownet_pre_M5S5R5_8x1_flyingchairs_320x448.py
+++ /dev/null
@@ -1,23 +0,0 @@
-_base_ = [
- '../_base_/models/liteflownet/liteflownet_pre_M5S5R5.py',
- '../_base_/datasets/flyingchairs_320x448.py',
- '../_base_/default_runtime.py'
-]
-
-optimizer = dict(type='Adam', lr=1e-4, weight_decay=0.0004, betas=(0.9, 0.999))
-optimizer_config = dict(grad_clip=None)
-# learning policy
-lr_config = dict(
- policy='step', by_epoch=False, gamma=0.5, step=[120000, 160000, 200000])
-runner = dict(type='IterBasedRunner', max_iters=240000)
-checkpoint_config = dict(by_epoch=False, interval=40000)
-evaluation = dict(interval=40000, metric='EPE')
-custom_hooks = [
- dict(
- type='LiteFlowNetStageLoadHook',
- src_level='level6',
- dst_level='level5')
-]
-
-# Weights are initialized from model of previous stage
-load_from = 'https://download.openmmlab.com/mmflow/liteflownet/liteflownet_pre_M6S6R6_8x1_flyingchairs_320x448.pth' # noqa
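As a reading aid for the schedule in this config (the helper below is illustrative, not part of mmflow): with `gamma=0.5` and steps at 120k, 160k, and 200k iterations, the learning rate halves three times over the 240k-iteration run.

```python
# Illustrative helper mirroring the step policy above.
def lr_at(iteration, base_lr=1e-4, gamma=0.5, steps=(120_000, 160_000, 200_000)):
    return base_lr * gamma ** sum(iteration >= s for s in steps)

assert lr_at(0) == 1e-4           # iterations [0, 120k)
assert lr_at(130_000) == 5e-5     # after the first decay
assert lr_at(239_999) == 1.25e-5  # after all three decays
```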
diff --git a/spaces/rinme/vits-models/commons.py b/spaces/rinme/vits-models/commons.py
deleted file mode 100644
index 40fcc05364d4815971f5c6f9dbb8dcef8e3ec1e9..0000000000000000000000000000000000000000
--- a/spaces/rinme/vits-models/commons.py
+++ /dev/null
@@ -1,172 +0,0 @@
-import math
-import torch
-from torch.nn import functional as F
-import torch.jit
-
-
-def script_method(fn, _rcb=None):
-    return fn
-
-
-def script(obj, optimize=True, _frames_up=0, _rcb=None):
-    return obj
-
-
-# Replace TorchScript compilation with no-op passthroughs: this module is
-# used for inference only, where scripting is unnecessary overhead.
-torch.jit.script_method = script_method
-torch.jit.script = script
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size*dilation - dilation)/2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def intersperse(lst, item):
- result = [item] * (len(lst) * 2 + 1)
- result[1::2] = lst
- return result
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q)
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(
- length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = (
- math.log(float(max_timescale) / float(min_timescale)) /
- (num_timescales - 1))
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment)
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2,3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1. / norm_type)
- return total_norm
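One function in the hunk above whose broadcasting is easy to misread is `sequence_mask`; a self-contained illustration (the function body is copied verbatim from above):

```python
import torch

def sequence_mask(length, max_length=None):
    if max_length is None:
        max_length = length.max()
    x = torch.arange(max_length, dtype=length.dtype, device=length.device)
    return x.unsqueeze(0) < length.unsqueeze(1)

print(sequence_mask(torch.tensor([2, 4])))
# tensor([[ True,  True, False, False],
#         [ True,  True,  True,  True]])
```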
diff --git a/spaces/rng0x17/jupyterlab/Dockerfile b/spaces/rng0x17/jupyterlab/Dockerfile
deleted file mode 100644
index bb6076cd164b921a380a5f2ee44896c2f2c62ddd..0000000000000000000000000000000000000000
--- a/spaces/rng0x17/jupyterlab/Dockerfile
+++ /dev/null
@@ -1,123 +0,0 @@
-# FROM nvidia/cuda:11.3.1-base-ubuntu20.04
-FROM nvidia/cuda:12.2.2-base-ubuntu22.04
-
-ENV DEBIAN_FRONTEND=noninteractive \
- TZ=Europe/Paris
-
-# Remove any third-party apt sources to avoid issues with expiring keys.
-# Install some basic utilities
-RUN rm -f /etc/apt/sources.list.d/*.list && \
- apt-get update && apt-get install -y --no-install-recommends \
- curl \
- ca-certificates \
- sudo \
- git \
- git-lfs \
- zip \
- unzip \
- htop \
- bzip2 \
- libx11-6 \
- build-essential \
- libsndfile-dev \
- software-properties-common \
- wget \
- && rm -rf /var/lib/apt/lists/*
-
-RUN add-apt-repository ppa:flexiondotorg/nvtop && \
- apt-get upgrade -y && \
- apt-get install -y --no-install-recommends nvtop
-
-# RUN curl -sL https://deb.nodesource.com/setup_14.x | bash - && \
-# apt-get install -y nodejs && \
-# npm install -g configurable-http-proxy
-
-# Create a working directory
-WORKDIR /app
-
-# Create a non-root user and switch to it
-RUN adduser --disabled-password --gecos '' --shell /bin/bash user \
- && chown -R user:user /app
-RUN echo "user ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/90-user
-USER user
-
-# All users can use /home/user as their home directory
-ENV HOME=/home/user
-RUN mkdir $HOME/.cache $HOME/.config \
- && chmod -R 777 $HOME
-
-# Set up the Conda environment
-ENV CONDA_AUTO_UPDATE_CONDA=false \
- PATH=$HOME/miniconda/bin:$PATH
-RUN curl -sLo ~/miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-py39_4.10.3-Linux-x86_64.sh \
- && chmod +x ~/miniconda.sh \
- && ~/miniconda.sh -b -p ~/miniconda \
- && rm ~/miniconda.sh \
- && conda clean -ya
- # && conda install npm\
- # && npm install -g configurable-http-proxy
-WORKDIR $HOME/app
-
-#######################################
-# Start root user section
-#######################################
-
-USER root
-
-# User Debian packages
-## Security warning : Potential user code executed as root (build time)
-RUN --mount=target=/root/packages.txt,source=packages.txt \
- apt-get update && \
- xargs -r -a /root/packages.txt apt-get install -y --no-install-recommends \
- && rm -rf /var/lib/apt/lists/*
-
-RUN --mount=target=/root/on_startup.sh,source=on_startup.sh,readwrite \
- bash /root/on_startup.sh
-
-# install code cli
-RUN curl -o vscode_cli.tar.gz https://az764295.vo.msecnd.net/stable/f1b07bd25dfad64b0167beb15359ae573aecd2cc/vscode_cli_alpine_x64_cli.tar.gz
-RUN tar -xf vscode_cli.tar.gz
-# RUN curl -o vscode.deb https://az764295.vo.msecnd.net/stable/f1b07bd25dfad64b0167beb15359ae573aecd2cc/code_1.83.1-1696982868_amd64.deb
-# RUN apt-get install ./vscode.deb
-# RUN echo *
-RUN cp code /usr/local/bin
-RUN chmod +x /usr/local/bin/code
-# RUN apt-get install wget gpg && \
-# wget -qO- https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > packages.microsoft.gpg && \
-# install -D -o root -g root -m 644 packages.microsoft.gpg /etc/apt/keyrings/packages.microsoft.gpg && \
-# sh -c 'echo "deb [arch=amd64,arm64,armhf signed-by=/etc/apt/keyrings/packages.microsoft.gpg] https://packages.microsoft.com/repos/code stable main" > /etc/apt/sources.list.d/vscode.list' && \
-# rm -f packages.microsoft.gpg && \
-# apt-get install -y apt-transport-https && \
-# apt-get update && \
-# apt-get install code -y
-
-
-#######################################
-# End root user section
-#######################################
-
-USER user
-
-# Python packages
-RUN --mount=target=requirements.txt,source=requirements.txt \
- pip install --no-cache-dir --upgrade -r requirements.txt
-
-# Copy the current directory contents into the container at $HOME/app setting the owner to the user
-COPY --chown=user . $HOME/app
-
-RUN chmod +x start_server.sh
-
-COPY --chown=user login.html /home/user/miniconda/lib/python3.9/site-packages/jupyter_server/templates/login.html
-
-
-
-
-ENV PYTHONUNBUFFERED=1 \
- GRADIO_ALLOW_FLAGGING=never \
- GRADIO_NUM_PORTS=1 \
- GRADIO_SERVER_NAME=0.0.0.0 \
- GRADIO_THEME=huggingface \
- SYSTEM=spaces \
- SHELL=/bin/bash
-
-CMD ["./start_server.sh"]
diff --git a/spaces/rorallitri/biomedical-language-models/logs/Download Baaghi 2 Movie Songs In Hindi [EXCLUSIVE].md b/spaces/rorallitri/biomedical-language-models/logs/Download Baaghi 2 Movie Songs In Hindi [EXCLUSIVE].md
deleted file mode 100644
index 81af4f39f2a7e9fd1d38f1077614328b805eb1f1..0000000000000000000000000000000000000000
--- a/spaces/rorallitri/biomedical-language-models/logs/Download Baaghi 2 Movie Songs In Hindi [EXCLUSIVE].md
+++ /dev/null
@@ -1,6 +0,0 @@
-Download Baaghi 2 Movie Songs In Hindi
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/rubend18/ChatGPT-Prompt-Generator/README.md b/spaces/rubend18/ChatGPT-Prompt-Generator/README.md
deleted file mode 100644
index 9dcbc6fc065d7d557254234b687a0f4613b7a81f..0000000000000000000000000000000000000000
--- a/spaces/rubend18/ChatGPT-Prompt-Generator/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: ChatGPT Prompt Generator
-emoji: 📉
-colorFrom: blue
-colorTo: green
-sdk: gradio
-sdk_version: 3.29.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/russel0719/deepfake_detector/training/zoo/classifiers.py b/spaces/russel0719/deepfake_detector/training/zoo/classifiers.py
deleted file mode 100644
index 8b2b91948822f8638dcce4e2719a58b643f52d04..0000000000000000000000000000000000000000
--- a/spaces/russel0719/deepfake_detector/training/zoo/classifiers.py
+++ /dev/null
@@ -1,172 +0,0 @@
-from functools import partial
-
-import numpy as np
-import torch
-from timm.models.efficientnet import tf_efficientnet_b4_ns, tf_efficientnet_b3_ns, \
- tf_efficientnet_b5_ns, tf_efficientnet_b2_ns, tf_efficientnet_b6_ns, tf_efficientnet_b7
-from torch import nn
-from torch.nn.modules.dropout import Dropout
-from torch.nn.modules.linear import Linear
-from torch.nn.modules.pooling import AdaptiveAvgPool2d
-
-encoder_params = {
- "tf_efficientnet_b3_ns": {
- "features": 1536,
- "init_op": partial(tf_efficientnet_b3_ns, pretrained=True, drop_path_rate=0.2)
- },
- "tf_efficientnet_b2_ns": {
- "features": 1408,
- "init_op": partial(tf_efficientnet_b2_ns, pretrained=False, drop_path_rate=0.2)
- },
- "tf_efficientnet_b4_ns": {
- "features": 1792,
- "init_op": partial(tf_efficientnet_b4_ns, pretrained=True, drop_path_rate=0.5)
- },
- "tf_efficientnet_b5_ns": {
- "features": 2048,
- "init_op": partial(tf_efficientnet_b5_ns, pretrained=True, drop_path_rate=0.2)
- },
- "tf_efficientnet_b4_ns_03d": {
- "features": 1792,
- "init_op": partial(tf_efficientnet_b4_ns, pretrained=True, drop_path_rate=0.3)
- },
- "tf_efficientnet_b5_ns_03d": {
- "features": 2048,
- "init_op": partial(tf_efficientnet_b5_ns, pretrained=True, drop_path_rate=0.3)
- },
- "tf_efficientnet_b5_ns_04d": {
- "features": 2048,
- "init_op": partial(tf_efficientnet_b5_ns, pretrained=True, drop_path_rate=0.4)
- },
- "tf_efficientnet_b6_ns": {
- "features": 2304,
- "init_op": partial(tf_efficientnet_b6_ns, pretrained=True, drop_path_rate=0.2)
- },
- "tf_efficientnet_b7": {
- "features": 2560,
- "init_op": partial(tf_efficientnet_b7, pretrained=True, drop_path_rate=0.2)
- },
- "tf_efficientnet_b6_ns_04d": {
- "features": 2304,
- "init_op": partial(tf_efficientnet_b6_ns, pretrained=True, drop_path_rate=0.4)
- },
-}
-
-
-def setup_srm_weights(input_channels: int = 3) -> torch.Tensor:
- """Creates the SRM kernels for noise analysis."""
- # note: values taken from Zhou et al., "Learning Rich Features for Image Manipulation Detection", CVPR2018
- srm_kernel = torch.from_numpy(np.array([
- [ # srm 1/2 horiz
- [0., 0., 0., 0., 0.], # noqa: E241,E201
- [0., 0., 0., 0., 0.], # noqa: E241,E201
- [0., 1., -2., 1., 0.], # noqa: E241,E201
- [0., 0., 0., 0., 0.], # noqa: E241,E201
- [0., 0., 0., 0., 0.], # noqa: E241,E201
- ], [ # srm 1/4
- [0., 0., 0., 0., 0.], # noqa: E241,E201
- [0., -1., 2., -1., 0.], # noqa: E241,E201
- [0., 2., -4., 2., 0.], # noqa: E241,E201
- [0., -1., 2., -1., 0.], # noqa: E241,E201
- [0., 0., 0., 0., 0.], # noqa: E241,E201
- ], [ # srm 1/12
- [-1., 2., -2., 2., -1.], # noqa: E241,E201
- [2., -6., 8., -6., 2.], # noqa: E241,E201
- [-2., 8., -12., 8., -2.], # noqa: E241,E201
- [2., -6., 8., -6., 2.], # noqa: E241,E201
- [-1., 2., -2., 2., -1.], # noqa: E241,E201
- ]
- ])).float()
- srm_kernel[0] /= 2
- srm_kernel[1] /= 4
- srm_kernel[2] /= 12
- return srm_kernel.view(3, 1, 5, 5).repeat(1, input_channels, 1, 1)
-
-
-def setup_srm_layer(input_channels: int = 3) -> torch.nn.Module:
- """Creates a SRM convolution layer for noise analysis."""
- weights = setup_srm_weights(input_channels)
- conv = torch.nn.Conv2d(input_channels, out_channels=3, kernel_size=5, stride=1, padding=2, bias=False)
- with torch.no_grad():
- conv.weight = torch.nn.Parameter(weights, requires_grad=False)
- return conv
-
-
-class DeepFakeClassifierSRM(nn.Module):
- def __init__(self, encoder, dropout_rate=0.5) -> None:
- super().__init__()
- self.encoder = encoder_params[encoder]["init_op"]()
- self.avg_pool = AdaptiveAvgPool2d((1, 1))
- self.srm_conv = setup_srm_layer(3)
- self.dropout = Dropout(dropout_rate)
- self.fc = Linear(encoder_params[encoder]["features"], 1)
-
- def forward(self, x):
- noise = self.srm_conv(x)
- x = self.encoder.forward_features(noise)
- x = self.avg_pool(x).flatten(1)
- x = self.dropout(x)
- x = self.fc(x)
- return x
-
-
-class GlobalWeightedAvgPool2d(nn.Module):
- """
- Global Weighted Average Pooling from paper "Global Weighted Average
- Pooling Bridges Pixel-level Localization and Image-level Classification"
- """
-
- def __init__(self, features: int, flatten=False):
- super().__init__()
- self.conv = nn.Conv2d(features, 1, kernel_size=1, bias=True)
- self.flatten = flatten
-
- def fscore(self, x):
- m = self.conv(x)
- m = m.sigmoid().exp()
- return m
-
- def norm(self, x: torch.Tensor):
- return x / x.sum(dim=[2, 3], keepdim=True)
-
- def forward(self, x):
- input_x = x
- x = self.fscore(x)
- x = self.norm(x)
- x = x * input_x
- x = x.sum(dim=[2, 3], keepdim=not self.flatten)
- return x
-
-
-class DeepFakeClassifier(nn.Module):
- def __init__(self, encoder, dropout_rate=0.0) -> None:
- super().__init__()
- self.encoder = encoder_params[encoder]["init_op"]()
- self.avg_pool = AdaptiveAvgPool2d((1, 1))
- self.dropout = Dropout(dropout_rate)
- self.fc = Linear(encoder_params[encoder]["features"], 1)
-
- def forward(self, x):
- x = self.encoder.forward_features(x)
- x = self.avg_pool(x).flatten(1)
- x = self.dropout(x)
- x = self.fc(x)
- return x
-
-
-class DeepFakeClassifierGWAP(nn.Module):
- def __init__(self, encoder, dropout_rate=0.5) -> None:
- super().__init__()
- self.encoder = encoder_params[encoder]["init_op"]()
- self.avg_pool = GlobalWeightedAvgPool2d(encoder_params[encoder]["features"])
- self.dropout = Dropout(dropout_rate)
- self.fc = Linear(encoder_params[encoder]["features"], 1)
-
- def forward(self, x):
- x = self.encoder.forward_features(x)
- x = self.avg_pool(x).flatten(1)
- x = self.dropout(x)
- x = self.fc(x)
- return x
\ No newline at end of file
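A minimal smoke test for the classifier heads above, assuming timm provides the `tf_efficientnet_*_ns` entry points this module imports (the input resolution is an arbitrary choice for illustration):

```python
import torch

# Build the plain classifier head and run a dummy batch through it.
model = DeepFakeClassifier("tf_efficientnet_b3_ns")
model.eval()

x = torch.randn(2, 3, 300, 300)  # two RGB face crops
with torch.no_grad():
    logits = model(x)
print(logits.shape)  # torch.Size([2, 1]) -- one real/fake logit per image
```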
diff --git a/spaces/ryanjvi/MS-Image2Video/share_btn.py b/spaces/ryanjvi/MS-Image2Video/share_btn.py
deleted file mode 100644
index 52a200db1e71b0b5655bb7b61e4046baf945a224..0000000000000000000000000000000000000000
--- a/spaces/ryanjvi/MS-Image2Video/share_btn.py
+++ /dev/null
@@ -1,85 +0,0 @@
-community_icon_html = """"""
-
-loading_icon_html = """"""
-
-share_js = """async () => {
- async function uploadFile(file){
- const UPLOAD_URL = 'https://huggingface.co/uploads';
- const response = await fetch(UPLOAD_URL, {
- method: 'POST',
- headers: {
- 'Content-Type': file.type,
- 'X-Requested-With': 'XMLHttpRequest',
- },
- body: file, /// <- File inherits from Blob
- });
- const url = await response.text();
- return url;
- }
-
- async function getInputImgFile(imgEl){
- const res = await fetch(imgEl.src);
- const blob = await res.blob();
- const imgId = Date.now() % 200;
- const isPng = imgEl.src.startsWith(`data:image/png`);
- if(isPng){
-            const fileName = `sd-perception-${imgId}.png`;
- return new File([blob], fileName, { type: 'image/png' });
- }else{
-            const fileName = `sd-perception-${imgId}.jpg`;
- return new File([blob], fileName, { type: 'image/jpeg' });
- }
- }
-
- async function getVideoBlobFile(videoEL){
- const res = await fetch(videoEL.src);
- const blob = await res.blob();
- const videoId = Date.now() % 200;
-        const fileName = `ms-image2video-${videoId}.mp4`;
- const videoBlob = new File([blob], fileName, { type: 'video/mp4' });
- console.log(videoBlob);
- return videoBlob;
- }
-
- const gradioEl = document.querySelector("gradio-app").shadowRoot || document.querySelector('body > gradio-app');
- const inputImgEl = gradioEl.querySelector('#image-in img');
- const outputVideo = gradioEl.querySelector('#video-out video');
-
- const shareBtnEl = gradioEl.querySelector('#share-btn');
- const shareIconEl = gradioEl.querySelector('#share-btn-share-icon');
- const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon');
- if(!outputVideo){
- return;
- };
- shareBtnEl.style.pointerEvents = 'none';
- shareIconEl.style.display = 'none';
- loadingIconEl.style.removeProperty('display');
-
- const inputFile = await getInputImgFile(inputImgEl);
- const urlInputImg = await uploadFile(inputFile);
- const videoOutFile = await getVideoBlobFile(outputVideo);
- const dataOutputVid = await uploadFile(videoOutFile);
-
- const descriptionMd = `
-#### Image init:
-<img src="${urlInputImg}" />
-
-#### MS Image2Video result:
-${dataOutputVid}
-`;
- const params = new URLSearchParams({
- title: "Please provide a title :)",
- description: descriptionMd,
- });
- const paramsStr = params.toString();
- window.open(`https://huggingface.co/spaces/fffiloni/MS-Image2Video/discussions/new?${paramsStr}`, '_blank');
- shareBtnEl.style.removeProperty('pointer-events');
- shareIconEl.style.removeProperty('display');
- loadingIconEl.style.display = 'none';
-}"""
\ No newline at end of file
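For reference, the `uploadFile` helper above boils down to a raw POST whose response body is the hosted file's URL. A rough Python equivalent, assuming the same unofficial `https://huggingface.co/uploads` endpoint behaves as the JavaScript expects:

```python
import requests

def upload_file(path: str, content_type: str) -> str:
    """POST a local file and return the URL the endpoint responds with."""
    with open(path, "rb") as f:
        resp = requests.post(
            "https://huggingface.co/uploads",
            headers={
                "Content-Type": content_type,
                "X-Requested-With": "XMLHttpRequest",
            },
            data=f,  # streamed, mirroring `body: file` in the JS
        )
    resp.raise_for_status()
    return resp.text

# e.g. url = upload_file("result.mp4", "video/mp4")
```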
diff --git a/spaces/sahshd/ChuanhuChatGPT/modules/shared.py b/spaces/sahshd/ChuanhuChatGPT/modules/shared.py
deleted file mode 100644
index a9e72580aa7ae48f907e923a09099513570a9ad8..0000000000000000000000000000000000000000
--- a/spaces/sahshd/ChuanhuChatGPT/modules/shared.py
+++ /dev/null
@@ -1,55 +0,0 @@
-from modules.presets import COMPLETION_URL, BALANCE_API_URL, USAGE_API_URL, API_HOST
-import os
-import queue
-
-class State:
- interrupted = False
- multi_api_key = False
- completion_url = COMPLETION_URL
- balance_api_url = BALANCE_API_URL
- usage_api_url = USAGE_API_URL
-
- def interrupt(self):
- self.interrupted = True
-
- def recover(self):
- self.interrupted = False
-
- def set_api_host(self, api_host):
- self.completion_url = f"https://{api_host}/v1/chat/completions"
- self.balance_api_url = f"https://{api_host}/dashboard/billing/credit_grants"
- self.usage_api_url = f"https://{api_host}/dashboard/billing/usage"
- os.environ["OPENAI_API_BASE"] = f"https://{api_host}/v1"
-
- def reset_api_host(self):
- self.completion_url = COMPLETION_URL
- self.balance_api_url = BALANCE_API_URL
- self.usage_api_url = USAGE_API_URL
- os.environ["OPENAI_API_BASE"] = f"https://{API_HOST}/v1"
- return API_HOST
-
- def reset_all(self):
- self.interrupted = False
- self.completion_url = COMPLETION_URL
-
- def set_api_key_queue(self, api_key_list):
- self.multi_api_key = True
- self.api_key_queue = queue.Queue()
- for api_key in api_key_list:
- self.api_key_queue.put(api_key)
-
- def switching_api_key(self, func):
- if not hasattr(self, "api_key_queue"):
- return func
-
- def wrapped(*args, **kwargs):
- api_key = self.api_key_queue.get()
- args[0].api_key = api_key
- ret = func(*args, **kwargs)
- self.api_key_queue.put(api_key)
- return ret
-
- return wrapped
-
-
-state = State()
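The `switching_api_key` decorator above pops a key from the queue, writes it onto the first positional argument, and requeues it after the call. A minimal usage sketch; `DummyClient` and the key values are illustrative only:

```python
class DummyClient:
    api_key = None

    def ask(self, prompt):
        return f"[{self.api_key}] {prompt}"

state.set_api_key_queue(["sk-key-1", "sk-key-2"])

# note: wrapped() assigns to args[0].api_key, so pass the client explicitly
ask = state.switching_api_key(DummyClient.ask)
client = DummyClient()
print(ask(client, "hello"))  # answered with sk-key-1, which is then requeued
```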
diff --git a/spaces/samuelinferences/TabPFN/TabPFN/priors/flexible_categorical.py b/spaces/samuelinferences/TabPFN/TabPFN/priors/flexible_categorical.py
deleted file mode 100644
index b24a83d49018fc7ebd62f803ceec643de9bc206e..0000000000000000000000000000000000000000
--- a/spaces/samuelinferences/TabPFN/TabPFN/priors/flexible_categorical.py
+++ /dev/null
@@ -1,240 +0,0 @@
-import time
-import random
-
-import torch
-from torch import nn
-
-from .utils import get_batch_to_dataloader
-from utils import normalize_data, nan_handling_missing_for_unknown_reason_value, nan_handling_missing_for_no_reason_value, nan_handling_missing_for_a_reason_value, to_ranking_low_mem, remove_outliers
-from .utils import normalize_by_used_features_f, randomize_classes, CategoricalActivation
-from .utils import uniform_int_sampler_f
-
-time_it = False
-
-class BalancedBinarize(nn.Module):
- def __init__(self):
- super().__init__()
-
- def forward(self, x):
- return (x > torch.median(x)).float()
-
-def class_sampler_f(min_, max_):
- def s():
- if random.random() > 0.5:
- return uniform_int_sampler_f(min_, max_)()
- return 2
- return s
-
-class MulticlassRank(nn.Module):
- def __init__(self, num_classes, ordered_p=0.5):
- super().__init__()
- self.num_classes = class_sampler_f(2, num_classes)()
- self.ordered_p = ordered_p
-
- def forward(self, x):
- # x has shape (T,B,H)
-
- # CAUTION: This samples the same idx in sequence for each class boundary in a batch
- class_boundaries = torch.randint(0, x.shape[0], (self.num_classes - 1,))
- class_boundaries = x[class_boundaries].unsqueeze(1)
-
- d = (x > class_boundaries).sum(axis=0)
-
- randomized_classes = torch.rand((d.shape[1], )) > self.ordered_p
- d[:, randomized_classes] = randomize_classes(d[:, randomized_classes], self.num_classes)
- reverse_classes = torch.rand((d.shape[1],)) > 0.5
- d[:, reverse_classes] = self.num_classes - 1 - d[:, reverse_classes]
- return d
-
-class MulticlassValue(nn.Module):
- def __init__(self, num_classes, ordered_p=0.5):
- super().__init__()
- self.num_classes = class_sampler_f(2, num_classes)()
- self.classes = nn.Parameter(torch.randn(num_classes-1), requires_grad=False)
- self.ordered_p = ordered_p
-
- def forward(self, x):
- # x has shape (T,B,H)
- d = (x > (self.classes.unsqueeze(-1).unsqueeze(-1))).sum(axis=0)
-
- randomized_classes = torch.rand((d.shape[1],)) > self.ordered_p
- d[:, randomized_classes] = randomize_classes(d[:, randomized_classes], self.num_classes)
- reverse_classes = torch.rand((d.shape[1],)) > 0.5
- d[:, reverse_classes] = self.num_classes - 1 - d[:, reverse_classes]
- return d
-
-class MulticlassMultiNode(nn.Module):
- def __init__(self, num_classes, ordered_p=0.5):
- super().__init__()
- self.num_classes = class_sampler_f(2, num_classes)()
- self.classes = nn.Parameter(torch.randn(num_classes-1), requires_grad=False)
- self.alt_multi_class = MulticlassValue(num_classes, ordered_p)
-
- def forward(self, x):
- # x has shape T, B, H
- if len(x.shape) == 2:
- return self.alt_multi_class(x)
- T = 3
- x[torch.isnan(x)] = 0.00001
- d = torch.multinomial(torch.pow(0.00001+torch.sigmoid(x[:, :, 0:self.num_classes]).reshape(-1, self.num_classes), T), 1, replacement=True).reshape(x.shape[0], x.shape[1]).float()
- return d
-
-
-class FlexibleCategorical(torch.nn.Module):
- def __init__(self, get_batch, hyperparameters, args):
- super(FlexibleCategorical, self).__init__()
-
- self.h = {k: hyperparameters[k]() if callable(hyperparameters[k]) else hyperparameters[k] for k in
- hyperparameters.keys()}
- self.args = args
- self.args_passed = {**self.args}
- self.args_passed.update({'num_features': self.h['num_features_used']})
- self.get_batch = get_batch
-
- if self.h['num_classes'] > 1 and not self.h['balanced']:
- if self.h['multiclass_type'] == 'rank':
- self.class_assigner = MulticlassRank(self.h['num_classes']
- , ordered_p=self.h['output_multiclass_ordered_p']
- )
- elif self.h['multiclass_type'] == 'value':
- self.class_assigner = MulticlassValue(self.h['num_classes']
- , ordered_p=self.h['output_multiclass_ordered_p']
- )
- elif self.h['multiclass_type'] == 'multi_node':
- self.class_assigner = MulticlassMultiNode(self.h['num_classes'])
- else:
-                raise ValueError("Unknown multiclass type")
- elif self.h['num_classes'] == 2 and self.h['balanced']:
- self.class_assigner = BalancedBinarize()
- elif self.h['num_classes'] > 2 and self.h['balanced']:
- raise NotImplementedError("Balanced multiclass training is not possible")
- else:
- self.class_assigner = lambda x:x # Regression
-
- def drop_for_reason(self, x, v):
- nan_prob_sampler = CategoricalActivation(ordered_p=0.0
- , categorical_p=1.0
- , keep_activation_size=False,
- num_classes_sampler=lambda: 20)
- d = nan_prob_sampler(x)
- # TODO: Make a different ordering for each activation
- x[d < torch.rand((1,), device=x.device) * 20 * self.h['nan_prob_no_reason'] * random.random()] = v
- return x
-
- def drop_for_no_reason(self, x, v):
- x[torch.rand(x.shape, device=self.args['device']) < self.h['nan_prob_no_reason']] = v
- return x
-
- def forward(self, batch_size):
- start = time.time()
- x, y, y_ = self.get_batch(hyperparameters=self.h, **self.args_passed)
- if time_it:
- print('Flex Forward Block 1', round(time.time() - start, 3))
-
- start = time.time()
-
- if self.h['nan_prob_no_reason']+self.h['nan_prob_a_reason']+self.h['nan_prob_unknown_reason'] > 0 and random.random() > 0.5: # Only one out of two datasets should have nans
- if self.h['nan_prob_no_reason'] > 0 and random.random() > 0.5: # Missing for no reason
- x = self.drop_for_no_reason(x, nan_handling_missing_for_no_reason_value(self.h['set_value_to_nan']))
-
- if self.h['nan_prob_a_reason'] > 0 and random.random() > 0.5: # Missing for a reason
- x = self.drop_for_reason(x, nan_handling_missing_for_a_reason_value(self.h['set_value_to_nan']))
-
- if self.h['nan_prob_unknown_reason'] > 0: # Missing for unknown reason and random.random() > 0.5
- if random.random() < self.h['nan_prob_unknown_reason_reason_prior']:
- x = self.drop_for_no_reason(x, nan_handling_missing_for_unknown_reason_value(self.h['set_value_to_nan']))
- else:
- x = self.drop_for_reason(x, nan_handling_missing_for_unknown_reason_value(self.h['set_value_to_nan']))
-
- # Categorical features
- if 'categorical_feature_p' in self.h and random.random() > 1 - self.h['categorical_feature_p']:
- p = random.random()
- for col in range(x.shape[2]):
- m = MulticlassRank(10, ordered_p=0.3)
- if random.random() > p:
- x[:, :, col] = m(x[:, :, col])
-
- if time_it:
- print('Flex Forward Block 2', round(time.time() - start, 3))
- start = time.time()
-
- if self.h['normalize_to_ranking']:
- x = to_ranking_low_mem(x)
- else:
- x = remove_outliers(x)
- x, y = normalize_data(x), normalize_data(y)
-
- if time_it:
- print('Flex Forward Block 3', round(time.time() - start, 3))
- start = time.time()
-
- # Cast to classification if enabled
- y = self.class_assigner(y).float()
-
- if time_it:
- print('Flex Forward Block 4', round(time.time() - start, 3))
- start = time.time()
- if self.h['normalize_by_used_features']:
- x = normalize_by_used_features_f(x, self.h['num_features_used'], self.args['num_features'], normalize_with_sqrt=self.h.get('normalize_with_sqrt',False))
- if time_it:
- print('Flex Forward Block 5', round(time.time() - start, 3))
-
- start = time.time()
- # Append empty features if enabled
- x = torch.cat(
- [x, torch.zeros((x.shape[0], x.shape[1], self.args['num_features'] - self.h['num_features_used']),
- device=self.args['device'])], -1)
- if time_it:
- print('Flex Forward Block 6', round(time.time() - start, 3))
-
- return x, y, y # x.shape = (T,B,H)
-
-import torch.cuda as cutorch
-
-@torch.no_grad()
-def get_batch(batch_size, seq_len, num_features, get_batch, device, hyperparameters=None, batch_size_per_gp_sample=None, **kwargs):
- batch_size_per_gp_sample = batch_size_per_gp_sample or (min(32, batch_size))
- num_models = batch_size // batch_size_per_gp_sample
- assert num_models > 0, f'Batch size ({batch_size}) is too small for batch_size_per_gp_sample ({batch_size_per_gp_sample})'
- assert num_models * batch_size_per_gp_sample == batch_size, f'Batch size ({batch_size}) not divisible by batch_size_per_gp_sample ({batch_size_per_gp_sample})'
-
- # Sample one seq_len for entire batch
- seq_len = hyperparameters['seq_len_used']() if callable(hyperparameters['seq_len_used']) else seq_len
-
- args = {'device': device, 'seq_len': seq_len, 'num_features': num_features, 'batch_size': batch_size_per_gp_sample}
-
- models = [FlexibleCategorical(get_batch, hyperparameters, args).to(device) for _ in range(num_models)]
-
- start = time.time()
- sample = sum([[model(batch_size=batch_size_per_gp_sample)] for model in models], [])
- #print('sample', time.time() - start)
-
- x, y, y_ = zip(*sample)
- x, y, y_ = torch.cat(x, 1).detach(), torch.cat(y, 1).detach(), torch.cat(y_, 1).detach()
-
- # # TODO: Reintegrate this code (Doesn't work on batch dim), could be applied to each batch sample individually
- # if hyperparameters['is_binary_classification'] and hyperparameters['order_y']:
- # x, y = order_by_y(x, y)
-
- return x, y, y_
-
-# num_features_used = num_features_used_sampler()
-# prior_outputscale = prior_outputscale_sampler()
-# prior_lengthscale = prior_lengthscale_sampler()
-#
-# x, sample = normalize_data(x), normalize_data(sample)
-#
-# if is_binary_classification:
-# sample = (sample > torch.median(sample, dim=0)[0]).float()
-#
-# if normalize_by_used_features:
-# x = normalize_by_used_features_f(x, num_features_used, num_features)
-#
-# # # if is_binary_classification and order_y:
-# # # x, sample = order_by_y(x, sample)
-# #
-# # Append empty features if enabled
-# x = torch.cat([x, torch.zeros((x.shape[0], x.shape[1], num_features - num_features_used), device=device)], -1)
-
-DataLoader = get_batch_to_dataloader(get_batch)
-DataLoader.num_outputs = 1
\ No newline at end of file
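To see what the rank-based class assigner above does to a regression target, a small sketch on synthetic data (run inside this module so `randomize_classes` and the sampler helpers resolve; note the constructor re-samples the class count, so it may end up below `num_classes`):

```python
import torch

assigner = MulticlassRank(num_classes=5, ordered_p=0.5)
y = torch.randn(100, 8, 1)  # (T, B, H), matching the comment in forward()
labels = assigner(y)        # integer class ids derived from rank thresholds
print(labels.shape, labels.unique())
```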
diff --git a/spaces/scedlatioru/img-to-music/example/Www TamilRockers Cc Kanithan 2016www TamilRockers Cc Kanithan 2016 TOP.md b/spaces/scedlatioru/img-to-music/example/Www TamilRockers Cc Kanithan 2016www TamilRockers Cc Kanithan 2016 TOP.md
deleted file mode 100644
index feec8f3792ee2bba756fc0101eab82f1a9b3fe24..0000000000000000000000000000000000000000
--- a/spaces/scedlatioru/img-to-music/example/Www TamilRockers Cc Kanithan 2016www TamilRockers Cc Kanithan 2016 TOP.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-Www TamilRockers Cc Kanithan 2016www TamilRockers Cc Kanithan 2016
-
-AL. 49a0673df2. Cowboy Bebop English Dub Movie Torrent · Www TamilRockers Cc Kanithan 2016www TamilRockers Cc Kanithan 2016 4d29de3e1b
-
-
-
diff --git a/spaces/scedlatioru/img-to-music/example/Xilisoft Photo Slideshow Maker Serial Number VERIFIED.md b/spaces/scedlatioru/img-to-music/example/Xilisoft Photo Slideshow Maker Serial Number VERIFIED.md
deleted file mode 100644
index 928b2fb60fe63eec9c6787c974c89f6b6586da99..0000000000000000000000000000000000000000
--- a/spaces/scedlatioru/img-to-music/example/Xilisoft Photo Slideshow Maker Serial Number VERIFIED.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-xilisoft photo slideshow maker serial number
-
-
-
\ No newline at end of file
diff --git a/spaces/sczhou/CodeFormer/CodeFormer/basicsr/models/__init__.py b/spaces/sczhou/CodeFormer/CodeFormer/basicsr/models/__init__.py
deleted file mode 100644
index 00bde45f003698a5b15d3517ae47b59ef1d86e0c..0000000000000000000000000000000000000000
--- a/spaces/sczhou/CodeFormer/CodeFormer/basicsr/models/__init__.py
+++ /dev/null
@@ -1,30 +0,0 @@
-import importlib
-from copy import deepcopy
-from os import path as osp
-
-from basicsr.utils import get_root_logger, scandir
-from basicsr.utils.registry import MODEL_REGISTRY
-
-__all__ = ['build_model']
-
-# automatically scan and import model modules for registry
-# scan all the files under the 'models' folder and collect files ending with
-# '_model.py'
-model_folder = osp.dirname(osp.abspath(__file__))
-model_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(model_folder) if v.endswith('_model.py')]
-# import all the model modules
-_model_modules = [importlib.import_module(f'basicsr.models.{file_name}') for file_name in model_filenames]
-
-
-def build_model(opt):
- """Build model from options.
-
- Args:
-        opt (dict): Configuration. It must contain:
- model_type (str): Model type.
- """
- opt = deepcopy(opt)
- model = MODEL_REGISTRY.get(opt['model_type'])(opt)
- logger = get_root_logger()
- logger.info(f'Model [{model.__class__.__name__}] is created.')
- return model
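Stripped of the scandir/import machinery, `build_model` above is a registry lookup: copy the config, fetch the class by name, instantiate it. A self-contained toy version of the same pattern:

```python
from copy import deepcopy

class FakeModel:
    def __init__(self, opt):
        self.opt = opt

REGISTRY = {"FakeModel": FakeModel}

def build(opt):
    opt = deepcopy(opt)  # don't mutate the caller's config
    return REGISTRY[opt["model_type"]](opt)

model = build({"model_type": "FakeModel"})
print(type(model).__name__)  # FakeModel
```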
diff --git a/spaces/sczhou/CodeFormer/CodeFormer/basicsr/ops/dcn/src/deform_conv_ext.cpp b/spaces/sczhou/CodeFormer/CodeFormer/basicsr/ops/dcn/src/deform_conv_ext.cpp
deleted file mode 100644
index 41c6df6f721bd95a525fd6a03dd9882e863de042..0000000000000000000000000000000000000000
--- a/spaces/sczhou/CodeFormer/CodeFormer/basicsr/ops/dcn/src/deform_conv_ext.cpp
+++ /dev/null
@@ -1,164 +0,0 @@
-// modify from
-// https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/blob/mmdetection/mmdet/ops/dcn/src/deform_conv_cuda.c
-
-#include <torch/extension.h>
- How to Download Counter Strike 1.6 and Play Online
-download counter strike 1.6
-What is Counter Strike 1.6?
-A brief history of the game
-The main features of the game
-How to download Counter Strike 1.6 for free?
-The official way: Steam
-
-
-The alternative way: CS 1.6 website
-
-download cs 1.6 warzone
-download cs 1.6 non steam
-download cs 1.6 full version
-download cs 1.6 with bots
-download cs 1.6 free for pc
-download cs 1.6 online
-download cs 1.6 torrent
-download cs 1.6 no recoil
-download cs 1.6 windows 10
-download cs 1.6 zombie mod
-download cs 1.6 best version
-download cs 1.6 steam key
-download cs 1.6 direct link
-download cs 1.6 latest update
-download cs 1.6 portable
-download cs 1.6 maps pack
-download cs 1.6 skins pack
-download cs 1.6 cheats pack
-download cs 1.6 server creator
-download cs 1.6 dedicated server
-download cs 1.6 lan version
-download cs 1.6 hd graphics
-download cs 1.6 extreme v7
-download cs 1.6 final release
-download cs 1.6 source mod
-download cs 1.6 condition zero
-download cs 1.6 global offensive mod
-download cs 1.6 modern warfare mod
-download cs 1.6 counter strike nexon zombies mod
-download cs 1.6 reloaded edition
-download cs 1.6 revolution edition
-download cs 1.6 professional edition
-download cs 1.6 classic edition
-download cs 1.6 clean edition
-download cs 1.6 fun edition
-download cs 1.6 super edition
-download cs 1.6 ultimate edition
-download cs 1.6 gold edition
-download cs 1.6 pro gaming edition
-
-How to play Counter Strike 1.6 online?
-Joining a server from the game menu
-
-
-Finding a server from a website
-
-
-Creating your own server
-
-
-Tips and tricks for playing Counter Strike 1.6 online
-Learn the maps and the weapons
-Communicate with your teammates
-Practice your aim and reflexes
-Conclusion
-FAQs
-Is Counter Strike 1.6 still popular in 2023?
-What are the system requirements for Counter Strike 1.6?
-
-
-
-
-What are the differences between Counter Strike 1.6 and Counter Strike: Global Offensive?
-How can I improve my performance and FPS in Counter Strike 1.6?
-
-
-Where can I find more information and resources about Counter Strike 1.6?
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Score Hero APK Hack and Become a Football Legend.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Score Hero APK Hack and Become a Football Legend.md
deleted file mode 100644
index 5fb0f389000aaa6fe51aa693f54ba363ccedc241..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Score Hero APK Hack and Become a Football Legend.md
+++ /dev/null
@@ -1,100 +0,0 @@
-
-Score Hero APK Hack Download: How to Get Unlimited Money and Energy in Score Hero 2
- Introduction
- score hero apk hack download
- How to Download and Install the Score Hero APK Hack
-
-score hero hack apk download latest version
-score hero apk mod unlimited lives and stars download
-score hero hack apk download for android
-score hero mod apk download 2021
-score hero hack apk download no root
-score hero mod apk unlimited money and energy 2.75
-score hero hack apk download ios
-score hero mod apk unlimited everything download
-score hero hack apk download free
-score hero mod apk download rexdl
-score hero hack apk download 2020
-score hero mod apk download revdl
-score hero hack apk download uptodown
-score hero mod apk download an1
-score hero hack apk download android 1
-score hero mod apk download apkpure
-score hero hack apk download apkmody
-score hero mod apk download happymod
-score hero hack apk download latest version 2021
-score hero mod apk download for pc
-score hero hack apk download for ios
-score hero mod apk download android oyun club
-score hero hack apk download android oyun club
-score hero mod apk download 2.75
-score hero hack apk download 2.75
-score hero mod apk download 2.32
-score hero hack apk download 2.32
-score hero mod apk download 2.22
-score hero hack apk download 2.22
-score hero mod apk unlimited money and energy 2.32
-score hero hack apk unlimited money and energy 2.32
-score hero mod apk unlimited money and energy 2.22
-score hero hack apk unlimited money and energy 2.22
-score hero mod apk unlimited money and energy 1.77
-score hero hack apk unlimited money and energy 1.77
-score hero mod apk unlimited money and energy 1.76
-score hero hack apk unlimited money and energy 1.76
-score hero mod apk unlimited money and energy 1.75
-score hero hack apk unlimited money and energy 1.75
- How to Use the Score Hero APK Hack Features
- Unlimited Money
- Unlimited Energy
- Unlocked Levels and Stories
- Tips and Tricks for Playing Score Hero 2
- How to Score Amazing Goals and Win Awards
- How to Pass Wisely and Avoid Defenders
- How to Customize Your Hero and Improve Your Skills
- Conclusion
- FAQs
- Q1: Is the Score Hero APK hack safe to use?
- Q2: Do I need to root my device to use the Score Hero APK hack?
- Q3: Can I play online with the Score Hero APK hack?
- Q4: Will I get banned for using the Score Hero APK hack?
- Q5: How can I update the Score Hero APK hack?
-
-
-
\ No newline at end of file
diff --git a/spaces/sklearn-docs/Manifold-Learning-methods-on-a-severed-sphere/README.md b/spaces/sklearn-docs/Manifold-Learning-methods-on-a-severed-sphere/README.md
deleted file mode 100644
index 97c873b6f40647c2cc348251e0af1422662f6934..0000000000000000000000000000000000000000
--- a/spaces/sklearn-docs/Manifold-Learning-methods-on-a-severed-sphere/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Manifold Learning Methods On A Severed Sphere
-emoji: 🦀
-colorFrom: purple
-colorTo: green
-sdk: gradio
-sdk_version: 3.29.0
-app_file: app.py
-pinned: false
-license: bsd-3-clause
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/sky24h/Controllable_Multi-domain_Semantic_Artwork_Synthesis/app.py b/spaces/sky24h/Controllable_Multi-domain_Semantic_Artwork_Synthesis/app.py
deleted file mode 100644
index 26c63f03abdacef260b11a091048af5729b26b8c..0000000000000000000000000000000000000000
--- a/spaces/sky24h/Controllable_Multi-domain_Semantic_Artwork_Synthesis/app.py
+++ /dev/null
@@ -1,133 +0,0 @@
-import os
-import sys
-import time
-import json
-import torch
-import base64
-from PIL import Image
-from io import BytesIO
-
-# set CUDA_MODULE_LOADING=LAZY to speed up the serverless function
-os.environ["CUDA_MODULE_LOADING"] = "LAZY"
-# set SAFETENSORS_FAST_GPU=1 to speed up the serverless function
-os.environ["SAFETENSORS_FAST_GPU"] = "1"
-
-sys.path.append(os.path.join(os.path.dirname(__file__), "seg2art"))
-from seg2art.sstan_models.pix2pix_model import Pix2PixModel
-from seg2art.options.test_options import TestOptions
-from seg2art.inference_util import get_artwork
-
-import uvicorn
-from fastapi import FastAPI, Form
-from fastapi.templating import Jinja2Templates
-from fastapi.responses import PlainTextResponse, HTMLResponse
-from fastapi.requests import Request
-from fastapi.staticfiles import StaticFiles
-
-
-# declare constants
-HOST = "0.0.0.0"
-PORT = 7860
-# FastAPI
-app = FastAPI(root_path=os.path.abspath(os.path.dirname(__file__)))
-app.mount("/static", StaticFiles(directory="static"), name="static")
-templates = Jinja2Templates(directory="templates")
-
-
-# initialize SEAN model.
-opt = TestOptions().parse()
-opt.status = "test"
-model = Pix2PixModel(opt)
-model = model.half() if torch.cuda.is_available() else model
-model.eval()
-
-
-from utils.umap_utils import get_code, load_boundries, modify_code
-
-boundaries = load_boundries()
-global current_codes
-current_codes = {}
-max_user_num = 5
-
-initial_code_path = os.path.join(os.path.dirname(__file__), "static/init_code")
-initial_code = torch.load(initial_code_path) if torch.cuda.is_available() else torch.load(initial_code_path, map_location=torch.device("cpu"))
-
-
-def EncodeImage(img_pil):
- with BytesIO() as buffer:
- img_pil.save(buffer, "jpeg")
- image_data = base64.b64encode(buffer.getvalue())
- return image_data
-
-
-def DecodeImage(img_pil):
- img_pil = BytesIO(base64.urlsafe_b64decode(img_pil))
- img_pil = Image.open(img_pil).convert("RGB")
- return img_pil
-
-
-def process_input(body, random=False):
- global current_codes
- json_body = json.loads(body.decode("utf-8"))
- user_id = json_body["user_id"]
- start_time = time.time()
-
- # save current code for different users
- if user_id not in current_codes:
- current_codes[user_id] = initial_code.clone()
-    if len(current_codes) > max_user_num:
-        # bound memory: evict the oldest user's code (dicts keep insertion order)
-        current_codes.pop(next(iter(current_codes)))
-
- if random:
- # randomize code
- domain = json_body["model"]
- current_codes[user_id] = get_code(domain, boundaries)
-
- # get input
- input_img = DecodeImage(json_body["img"])
-
- try:
- move_range = float(json_body["move_range"])
- except:
-    except (KeyError, ValueError, TypeError):
-
- # set move range to 3 if random is True
- move_range = 3 if random else move_range
- # print("Input image was received")
- # get selected style
- domain = json_body["model"]
- if move_range != 0:
- modified_code = modify_code(current_codes[user_id], boundaries, domain, move_range)
- else:
-        modified_code = current_codes[user_id].clone()
-
- # inference
- result = get_artwork(model, input_img, modified_code)
- print("Time Cost: ", time.time() - start_time)
- return EncodeImage(result)
-
-
-@app.get("/", response_class=HTMLResponse)
-def root(request: Request):
- return templates.TemplateResponse("index.html", {"request": request})
-
-@app.get("/check_gpu")
-async def check_gpu():
- return torch.cuda.is_available()
-
-@app.post("/predict")
-async def predict(request: Request):
- body = await request.body()
- result = process_input(body, random=False)
- return result
-
-
-@app.post("/predict_random")
-async def predict_random(request: Request):
- body = await request.body()
- result = process_input(body, random=True)
- return result
-
-
-if __name__ == "__main__":
- uvicorn.run(app, host=HOST, port=PORT, log_level="info")
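The `EncodeImage`/`DecodeImage` helpers above are a base64 round trip over JPEG bytes (`urlsafe_b64decode` also accepts the standard alphabet, so the pair is compatible). A quick check, assuming both functions are in scope:

```python
from PIL import Image

img = Image.new("RGB", (64, 64), color=(200, 50, 50))
payload = EncodeImage(img)       # base64-encoded JPEG bytes
restored = DecodeImage(payload)  # back to a PIL image (JPEG is lossy)
print(restored.size)             # (64, 64)
```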
diff --git a/spaces/sneedium/dvatch_captcha_sneedium_old/modules/resnet.py b/spaces/sneedium/dvatch_captcha_sneedium_old/modules/resnet.py
deleted file mode 100644
index 3b53b137134f36b126fa5bdf68d619c2283d082f..0000000000000000000000000000000000000000
--- a/spaces/sneedium/dvatch_captcha_sneedium_old/modules/resnet.py
+++ /dev/null
@@ -1,104 +0,0 @@
-import math
-
-import torch.nn as nn
-import torch.nn.functional as F
-import torch.utils.model_zoo as model_zoo
-
-
-def conv1x1(in_planes, out_planes, stride=1):
- return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=False)
-
-
-def conv3x3(in_planes, out_planes, stride=1):
- "3x3 convolution with padding"
- return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
- padding=1, bias=False)
-
-
-class BasicBlock(nn.Module):
- expansion = 1
-
- def __init__(self, inplanes, planes, stride=1, downsample=None):
- super(BasicBlock, self).__init__()
- self.conv1 = conv1x1(inplanes, planes)
- self.bn1 = nn.BatchNorm2d(planes)
- self.relu = nn.ReLU(inplace=True)
- self.conv2 = conv3x3(planes, planes, stride)
- self.bn2 = nn.BatchNorm2d(planes)
- self.downsample = downsample
- self.stride = stride
-
- def forward(self, x):
- residual = x
-
- out = self.conv1(x)
- out = self.bn1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.bn2(out)
-
- if self.downsample is not None:
- residual = self.downsample(x)
-
- out += residual
- out = self.relu(out)
-
- return out
-
-
-class ResNet(nn.Module):
-
- def __init__(self, block, layers):
- self.inplanes = 32
- super(ResNet, self).__init__()
- self.conv1 = nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1,
- bias=False)
- self.bn1 = nn.BatchNorm2d(32)
- self.relu = nn.ReLU(inplace=True)
-
- self.layer1 = self._make_layer(block, 32, layers[0], stride=2)
- self.layer2 = self._make_layer(block, 64, layers[1], stride=1)
- self.layer3 = self._make_layer(block, 128, layers[2], stride=2)
- self.layer4 = self._make_layer(block, 256, layers[3], stride=1)
- self.layer5 = self._make_layer(block, 512, layers[4], stride=1)
-
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
- m.weight.data.normal_(0, math.sqrt(3. / n))
- elif isinstance(m, nn.BatchNorm2d):
- m.weight.data.fill_(1)
- m.bias.data.zero_()
-
- def _make_layer(self, block, planes, blocks, stride=1):
- downsample = None
- if stride != 1 or self.inplanes != planes * block.expansion:
- downsample = nn.Sequential(
- nn.Conv2d(self.inplanes, planes * block.expansion,
- kernel_size=1, stride=stride, bias=False),
- nn.BatchNorm2d(planes * block.expansion),
- )
-
- layers = []
- layers.append(block(self.inplanes, planes, stride, downsample))
- self.inplanes = planes * block.expansion
- for i in range(1, blocks):
- layers.append(block(self.inplanes, planes))
-
- return nn.Sequential(*layers)
-
- def forward(self, x):
- x = self.conv1(x)
- x = self.bn1(x)
- x = self.relu(x)
- x = self.layer1(x)
- x = self.layer2(x)
- x = self.layer3(x)
- x = self.layer4(x)
- x = self.layer5(x)
- return x
-
-
-def resnet45():
- return ResNet(BasicBlock, [3, 4, 6, 6, 3])
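A minimal shape check for the backbone above: only `layer1` and `layer3` stride by 2, so a 32x128 crop comes out as an 8x32 feature map with 512 channels:

```python
import torch

net = resnet45().eval()
with torch.no_grad():
    feat = net(torch.randn(1, 3, 32, 128))  # e.g. a text-recognition crop
print(feat.shape)  # torch.Size([1, 512, 8, 32])
```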
diff --git a/spaces/society-ethics/model-card-regulatory-check/tests/cards/sentence-transformers___all-MiniLM-L6-v2.md b/spaces/society-ethics/model-card-regulatory-check/tests/cards/sentence-transformers___all-MiniLM-L6-v2.md
deleted file mode 100644
index c2cf8c94e238937a7385a9ca84ecf5114adb550f..0000000000000000000000000000000000000000
--- a/spaces/society-ethics/model-card-regulatory-check/tests/cards/sentence-transformers___all-MiniLM-L6-v2.md
+++ /dev/null
@@ -1,142 +0,0 @@
-# all-MiniLM-L6-v2
-This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
-
-## Usage (Sentence-Transformers)
-Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
-
-```
-pip install -U sentence-transformers
-```
-
-Then you can use the model like this:
-```python
-from sentence_transformers import SentenceTransformer
-sentences = ["This is an example sentence", "Each sentence is converted"]
-
-model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
-embeddings = model.encode(sentences)
-print(embeddings)
-```
-
-## Usage (HuggingFace Transformers)
-Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
-
-```python
-from transformers import AutoTokenizer, AutoModel
-import torch
-import torch.nn.functional as F
-
-#Mean Pooling - Take attention mask into account for correct averaging
-def mean_pooling(model_output, attention_mask):
- token_embeddings = model_output[0] #First element of model_output contains all token embeddings
- input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
- return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
-
-
-# Sentences we want sentence embeddings for
-sentences = ['This is an example sentence', 'Each sentence is converted']
-
-# Load model from HuggingFace Hub
-tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
-model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
-
-# Tokenize sentences
-encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
-
-# Compute token embeddings
-with torch.no_grad():
- model_output = model(**encoded_input)
-
-# Perform pooling
-sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
-
-# Normalize embeddings
-sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
-
-print("Sentence embeddings:")
-print(sentence_embeddings)
-```
-
-## Evaluation Results
-
-For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/all-MiniLM-L6-v2)
-
-------
-
-## Background
-
-The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised
-contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a
-1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.
-
-We developed this model during the
-[Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
-organized by Hugging Face. We developed this model as part of the project:
-[Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
-
-## Intended uses
-
-Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
-the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
-
-By default, input text longer than 256 word pieces is truncated.
-
-
-## Training procedure
-
-### Pre-training
-
-We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
-
-### Fine-tuning
-
-We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch.
-We then apply the cross-entropy loss by comparing with the true pairs.
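A sketch of that in-batch objective (often called a multiple-negatives ranking loss): score every pair in the batch by cosine similarity and treat the diagonal as the true pairs. The scale factor here is an assumed value, not the card's documented setting:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(emb_a, emb_b, scale=20.0):
    # emb_a[i] / emb_b[i] form a positive pair; every other row is a negative
    emb_a = F.normalize(emb_a, p=2, dim=1)
    emb_b = F.normalize(emb_b, p=2, dim=1)
    scores = emb_a @ emb_b.T * scale       # (batch, batch) cosine similarities
    labels = torch.arange(scores.size(0))  # true pairs sit on the diagonal
    return F.cross_entropy(scores, labels)
```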
-
-#### Hyperparameters
-
-We trained our model on a TPU v3-8 for 100k steps using a batch size of 1024 (128 per TPU core).
-We used a learning-rate warm-up over the first 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
-a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.
-
-#### Training data
-
-We use the concatenation from multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion.
-We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file.
-
-
-| Dataset | Paper | Number of training tuples |
-|--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
-| [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
-| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
-| [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
-| [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
-| [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
-| [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
-| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
-| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
-| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
-| [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
-| [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
-| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
-| [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
-| [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395|
-| [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
-| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
-| [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
-| [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
-| [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
-| [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
-| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
-| AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
-| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
-| [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
-| [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
-| [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
-| [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
-| [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
-| [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
-| [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
-| [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
-| [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
-| **Total** | | **1,170,060,424** |
\ No newline at end of file
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/joint_alignment_translation/README.md b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/joint_alignment_translation/README.md
deleted file mode 100644
index cd9c0ea65f5292198296a8f427b42e01b584e2d9..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/joint_alignment_translation/README.md
+++ /dev/null
@@ -1,89 +0,0 @@
-# Jointly Learning to Align and Translate with Transformer Models (Garg et al., 2019)
-
-This page includes instructions for training models described in [Jointly Learning to Align and Translate with Transformer Models (Garg et al., 2019)](https://arxiv.org/abs/1909.02074).
-
-## Training a joint alignment-translation model on WMT'18 En-De
-
-##### 1. Extract and preprocess the WMT'18 En-De data
-```bash
-./prepare-wmt18en2de_no_norm_no_escape_no_agressive.sh
-```
-
-##### 2. Generate alignments from statistical alignment toolkits e.g. Giza++/FastAlign.
-In this example, we use FastAlign.
-```bash
-git clone git@github.com:clab/fast_align.git
-pushd fast_align
-mkdir build
-cd build
-cmake ..
-make
-popd
-ALIGN=fast_align/build/fast_align
-paste bpe.32k/train.en bpe.32k/train.de | awk -F '\t' '{print $1 " ||| " $2}' > bpe.32k/train.en-de
-$ALIGN -i bpe.32k/train.en-de -d -o -v > bpe.32k/train.align
-```
-
-##### 3. Preprocess the dataset with the above generated alignments.
-```bash
-fairseq-preprocess \
- --source-lang en --target-lang de \
- --trainpref bpe.32k/train \
- --validpref bpe.32k/valid \
- --testpref bpe.32k/test \
- --align-suffix align \
- --destdir binarized/ \
- --joined-dictionary \
- --workers 32
-```
-
-##### 4. Train a model
-```bash
-fairseq-train \
- binarized \
- --arch transformer_wmt_en_de_big_align --share-all-embeddings \
- --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 --activation-fn relu\
- --lr 0.0002 --lr-scheduler inverse_sqrt --warmup-updates 4000 --warmup-init-lr 1e-07 \
- --dropout 0.3 --attention-dropout 0.1 --weight-decay 0.0 \
- --max-tokens 3500 --label-smoothing 0.1 \
- --save-dir ./checkpoints --log-interval 1000 --max-update 60000 \
- --keep-interval-updates -1 --save-interval-updates 0 \
- --load-alignments --criterion label_smoothed_cross_entropy_with_alignment \
- --fp16
-```
-
-Note that the `--fp16` flag requires you have CUDA 9.1 or greater and a Volta GPU or newer.
-
-If you want to train the above model with big batches (assuming your machine has 8 GPUs):
-- add `--update-freq 8` to simulate training on 8x8=64 GPUs
-- increase the learning rate; 0.0007 works well for big batches
-
-##### 5. Evaluate and generate the alignments (BPE level)
-```bash
-fairseq-generate \
- binarized --gen-subset test --print-alignment \
- --source-lang en --target-lang de \
- --path checkpoints/checkpoint_best.pt --beam 5 --nbest 1
-```
-
-##### 6. Other resources.
-The code for:
-1. preparing alignment test sets
-2. converting BPE level alignments to token level alignments
-3. symmetrizing bidirectional alignments
-4. evaluating alignments using AER metric
-can be found [here](https://github.com/lilt/alignment-scripts)
-
-## Citation
-
-```bibtex
-@inproceedings{garg2019jointly,
- title = {Jointly Learning to Align and Translate with Transformer Models},
- author = {Garg, Sarthak and Peitz, Stephan and Nallasamy, Udhyakumar and Paulik, Matthias},
- booktitle = {Conference on Empirical Methods in Natural Language Processing (EMNLP)},
- address = {Hong Kong},
- month = {November},
- url = {https://arxiv.org/abs/1909.02074},
- year = {2019},
-}
-```
diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_recognition/w2l_decoder.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_recognition/w2l_decoder.py
deleted file mode 100644
index fbf2d3524ee40bd0d08b6a9560047d96e49b6045..0000000000000000000000000000000000000000
--- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_recognition/w2l_decoder.py
+++ /dev/null
@@ -1,486 +0,0 @@
-#!/usr/bin/env python3
-
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Flashlight decoders.
-"""
-
-import gc
-import itertools as it
-import os.path as osp
-from typing import List
-import warnings
-from collections import deque, namedtuple
-
-import numpy as np
-import torch
-from examples.speech_recognition.data.replabels import unpack_replabels
-from fairseq import tasks
-from fairseq.utils import apply_to_sample
-from omegaconf import open_dict
-from fairseq.dataclass.utils import convert_namespace_to_omegaconf
-
-
-try:
- from flashlight.lib.text.dictionary import create_word_dict, load_words
- from flashlight.lib.sequence.criterion import CpuViterbiPath, get_data_ptr_as_bytes
- from flashlight.lib.text.decoder import (
- CriterionType,
- LexiconDecoderOptions,
- KenLM,
- LM,
- LMState,
- SmearingMode,
- Trie,
- LexiconDecoder,
- )
-except Exception:
- warnings.warn(
- "flashlight python bindings are required to use this functionality. Please install from https://github.com/facebookresearch/flashlight/tree/master/bindings/python"
- )
- LM = object
- LMState = object
-
-
-class W2lDecoder(object):
- def __init__(self, args, tgt_dict):
- self.tgt_dict = tgt_dict
- self.vocab_size = len(tgt_dict)
- self.nbest = args.nbest
-
- # criterion-specific init
- self.criterion_type = CriterionType.CTC
- self.blank = (
-            tgt_dict.index("<ctc_blank>")
-            if "<ctc_blank>" in tgt_dict.indices
-            else tgt_dict.bos()
-        )
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Film Hindi Af Somali Mohabbat.md b/spaces/stomexserde/gpt4-ui/Examples/Film Hindi Af Somali Mohabbat.md
deleted file mode 100644
index 975144eb91ef70a8b04d88247d7f16c4f6ea9e45..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Film Hindi Af Somali Mohabbat.md
+++ /dev/null
@@ -1,17 +0,0 @@
-
-Mohabbat: A Romantic Hindi Film with Somali Subtitles
-film hindi af somali mohabbat
-
-
-
\ No newline at end of file
diff --git a/spaces/sub314xxl/MusicGen-Continuation/audiocraft/utils/autocast.py b/spaces/sub314xxl/MusicGen-Continuation/audiocraft/utils/autocast.py
deleted file mode 100644
index ed644843bb37cf8a92a20fbd51d6cebaa43b9a08..0000000000000000000000000000000000000000
--- a/spaces/sub314xxl/MusicGen-Continuation/audiocraft/utils/autocast.py
+++ /dev/null
@@ -1,40 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-
-class TorchAutocast:
- """TorchAutocast utility class.
-    Allows you to enable and disable autocast. This is especially useful
- when dealing with different architectures and clusters with different
- levels of support.
-
- Args:
- enabled (bool): Whether to enable torch.autocast or not.
- args: Additional args for torch.autocast.
- kwargs: Additional kwargs for torch.autocast
- """
- def __init__(self, enabled: bool, *args, **kwargs):
- self.autocast = torch.autocast(*args, **kwargs) if enabled else None
-
- def __enter__(self):
- if self.autocast is None:
- return
- try:
- self.autocast.__enter__()
- except RuntimeError:
- device = self.autocast.device
- dtype = self.autocast.fast_dtype
- raise RuntimeError(
- f"There was an error autocasting with dtype={dtype} device={device}\n"
- "If you are on the FAIR Cluster, you might need to use autocast_dtype=float16"
- )
-
- def __exit__(self, *args, **kwargs):
- if self.autocast is None:
- return
- self.autocast.__exit__(*args, **kwargs)
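A minimal usage sketch of the wrapper above, on CPU so it runs anywhere (CPU autocast with bfloat16 assumes a reasonably recent PyTorch build):

```python
import torch

a, b = torch.randn(4, 4), torch.randn(4, 4)

with TorchAutocast(enabled=True, device_type="cpu", dtype=torch.bfloat16):
    print((a @ b).dtype)  # torch.bfloat16 while autocast is active

with TorchAutocast(enabled=False):
    print((a @ b).dtype)  # torch.float32: the disabled context is a no-op
```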
diff --git a/spaces/subhajitmaji/MusicGen/tests/data/__init__.py b/spaces/subhajitmaji/MusicGen/tests/data/__init__.py
deleted file mode 100644
index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000
--- a/spaces/subhajitmaji/MusicGen/tests/data/__init__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
diff --git a/spaces/suchun/chatGPT_acdemic/crazy_functions/crazy_functions_test.py b/spaces/suchun/chatGPT_acdemic/crazy_functions/crazy_functions_test.py
deleted file mode 100644
index 2838e543977e94c13791a681a5a6b9bb8f4110dc..0000000000000000000000000000000000000000
--- a/spaces/suchun/chatGPT_acdemic/crazy_functions/crazy_functions_test.py
+++ /dev/null
@@ -1,92 +0,0 @@
-"""
-What is this?
-    This file contains unit tests for the function plugins.
-    How to run: python crazy_functions/crazy_functions_test.py
-"""
-
-def validate_path():
- import os, sys
- dir_name = os.path.dirname(__file__)
- root_dir_assume = os.path.abspath(os.path.dirname(__file__) + '/..')
- os.chdir(root_dir_assume)
- sys.path.append(root_dir_assume)
-
-validate_path() # validate path so you can run from base directory
-
-from toolbox import get_conf, ChatBotWithCookies
-proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT, LAYOUT, API_KEY = \
- get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT', 'LAYOUT', 'API_KEY')
-
-llm_kwargs = {
- 'api_key': API_KEY,
- 'llm_model': LLM_MODEL,
- 'top_p':1.0,
- 'max_length': None,
- 'temperature':1.0,
-}
-plugin_kwargs = { }
-chatbot = ChatBotWithCookies(llm_kwargs)
-history = []
-system_prompt = "Serve me as a writing and programming assistant."
-web_port = 1024
-
-
-def test_解析一个Python项目():
- from crazy_functions.解析项目源代码 import 解析一个Python项目
- txt = "crazy_functions/test_project/python/dqn"
- for cookies, cb, hist, msg in 解析一个Python项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- print(cb)
-
-def test_解析一个Cpp项目():
- from crazy_functions.解析项目源代码 import 解析一个C项目
- txt = "crazy_functions/test_project/cpp/cppipc"
- for cookies, cb, hist, msg in 解析一个C项目(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- print(cb)
-
-def test_Latex英文润色():
- from crazy_functions.Latex全文润色 import Latex英文润色
- txt = "crazy_functions/test_project/latex/attention"
- for cookies, cb, hist, msg in Latex英文润色(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- print(cb)
-
-def test_Markdown中译英():
- from crazy_functions.批量Markdown翻译 import Markdown中译英
- txt = "README.md"
- for cookies, cb, hist, msg in Markdown中译英(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- print(cb)
-
-def test_批量翻译PDF文档():
- from crazy_functions.批量翻译PDF文档_多线程 import 批量翻译PDF文档
- txt = "crazy_functions/test_project/pdf_and_word"
- for cookies, cb, hist, msg in 批量翻译PDF文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- print(cb)
-
-def test_谷歌检索小助手():
- from crazy_functions.谷歌检索小助手 import 谷歌检索小助手
- txt = "https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=auto+reinforcement+learning&btnG="
- for cookies, cb, hist, msg in 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- print(cb)
-
-def test_总结word文档():
- from crazy_functions.总结word文档 import 总结word文档
- txt = "crazy_functions/test_project/pdf_and_word"
- for cookies, cb, hist, msg in 总结word文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- print(cb)
-
-def test_下载arxiv论文并翻译摘要():
- from crazy_functions.下载arxiv论文翻译摘要 import 下载arxiv论文并翻译摘要
- txt = "1812.10695"
- for cookies, cb, hist, msg in 下载arxiv论文并翻译摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
- print(cb)
-
-test_解析一个Python项目()
-test_Latex英文润色()
-test_Markdown中译英()
-test_批量翻译PDF文档()
-test_谷歌检索小助手()
-test_总结word文档()
-test_下载arxiv论文并翻译摘要()
-test_解析一个Cpp项目()
-
-input("程序完成,回车退出。")
-print("退出。")
\ No newline at end of file
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Delcam Featurecam 2014 20 1 0 24 Torrent.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Delcam Featurecam 2014 20 1 0 24 Torrent.md
deleted file mode 100644
index 59e67dba8940ced80bbfb151a450ca271e53172f..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Delcam Featurecam 2014 20 1 0 24 Torrent.md
+++ /dev/null
@@ -1,40 +0,0 @@
-
-```
-Delcam Featurecam 2014 20 1 0 24 Torrent: A Comprehensive Review
-
-
-What is Delcam Featurecam 2014 20 1 0 24 Torrent?
-Delcam Featurecam 2014 20 1 0 24 Torrent
-What are the benefits of Delcam Featurecam 2014 20 1 0 24 Torrent?
-
-
-What are the drawbacks of Delcam Featurecam 2014 20 1 0 24 Torrent?
-
-
-How to download Delcam Featurecam 2014 20 1 0 24 Torrent for free?
-Conclusion
-
-
-
\ No newline at end of file
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Electromagnetic Waves By R Shevgaonkar Pdf.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Electromagnetic Waves By R Shevgaonkar Pdf.md
deleted file mode 100644
index 3d66b3ca04d619bf94bec1331416088fb5c24406..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Electromagnetic Waves By R Shevgaonkar Pdf.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-
-The original name for this course is: Electronics - Transmission Lines and EM Waves. This course has 42 video lectures on transmission lines and electromagnetic waves, by Professor R. Shevgaonkar.
-electromagnetic waves by r shevgaonkar pdf
-
-
-
\ No newline at end of file
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ford Transit Connect Workshop Manual Downloads Torrent.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ford Transit Connect Workshop Manual Downloads Torrent.md
deleted file mode 100644
index 13e44f11827bac06d1783cd1bf82f9b8a802d0ea..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Ford Transit Connect Workshop Manual Downloads Torrent.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Ford Transit Connect Workshop Manual Downloads Torrent
-
-Workshop Manual PDF Kindle, there is the 2002 Ford Expedition Owners Manual 189 ... IPhone and car not read 19 24mb Ford Transit Connect is a manual. 4d29de3e1b
-
-
-
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Steinberg - HALion V3.1.0.947 - H2O Serial Key Keygen !!INSTALL!!.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Steinberg - HALion V3.1.0.947 - H2O Serial Key Keygen !!INSTALL!!.md
deleted file mode 100644
index 9c4484998c4e850c5665eceb6db97bdfc37c80ba..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Steinberg - HALion V3.1.0.947 - H2O Serial Key Keygen !!INSTALL!!.md
+++ /dev/null
@@ -1,10 +0,0 @@
-Steinberg - HALion V3.1.0.947 - H2O Serial Key Keygen
-
-The CD is loaded with great professional samples . The CD is loaded with great professional samples . These samples were recorded in the Swedish E-MU Systems Studios, which were designed by Rick Chasen and Daniel 'Nelz' Nilsson in 1999. All the tracks were recorded at 24-bit resolution using a SSL 9000-series console, and sampled at 48 kHz and have been digitally remastered. The samples come with comprehensive tagging so they are ready for creative use immediately after download. Thanks to E-MU Systems, to Sweden's leading specialist sound supplier - Electro-Music Sweden, for their great service and assistance in providing samples. Thank you Electro-Music Sweden. They can be contacted on info@electro-music.se)
-
-Third in the series is "Music for Trance and Psychedelic Trance". The set includes House Music, Techno, Progressive Trance, Trance and Psychedelic Trance. It is designed to be used as a series of playlists that can be used in a DJ set or as an extra bonus when playing a set as a vinyl DJ. Again, this set breaks out with 3 )[sample] E-MU Systems Sound Library Vol 13 Dance 2000 [E-MU] 1CD (The CD is loaded with great professional samples . The CD is loaded with great professional samples . The CD is loaded with great professional samples . These samples were recorded in the Swedish E-MU Systems Studios, which were designed by Rick Chasen and Daniel 'Nelz' Nilsson in 1999. All the tracks were recorded at 24-bit resolution using a SSL 9000-series console, and sampled at 48 kHz and have been digitally remastered. The samples come with comprehensive tagging so they are ready for creative use immediately after download. Thanks to E-MU Systems, to Sweden's leading specialist sound supplier - Electro-Music Sweden, for their great service and assistance in providing samples. Thank you Electro-Music Sweden. They can be contacted on info@electro-music.se)
-
-Fourth in the series is "House Music Sampler". This set likewise breaks out of [sample] E-MU Systems Sound Library Vol 13 Dance 2000 [E-MU] 1CD (recorded and remastered as described above). 4fefd39f24
-
-
-
diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/runner/hooks/momentum_updater.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/runner/hooks/momentum_updater.py
deleted file mode 100644
index 60437756ceedf06055ec349df69a25465738d3f0..0000000000000000000000000000000000000000
--- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/runner/hooks/momentum_updater.py
+++ /dev/null
@@ -1,493 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import annotator.uniformer.mmcv as mmcv
-from .hook import HOOKS, Hook
-from .lr_updater import annealing_cos, annealing_linear, format_param
-
-
-class MomentumUpdaterHook(Hook):
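- # Base momentum scheduler hook: subclasses implement get_momentum() to
- # define the schedule, while this base class handles warmup and writes the
- # values into each optimizer param group ('momentum', or the first Adam beta).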
-
- def __init__(self,
- by_epoch=True,
- warmup=None,
- warmup_iters=0,
- warmup_ratio=0.9):
- # validate the "warmup" argument
- if warmup is not None:
- if warmup not in ['constant', 'linear', 'exp']:
- raise ValueError(
- f'"{warmup}" is not a supported type for warming up, valid'
- ' types are "constant", "linear" and "exp"')
- if warmup is not None:
- assert warmup_iters > 0, \
- '"warmup_iters" must be a positive integer'
- assert 0 < warmup_ratio <= 1.0, \
- '"warmup_momentum" must be in range (0,1]'
-
- self.by_epoch = by_epoch
- self.warmup = warmup
- self.warmup_iters = warmup_iters
- self.warmup_ratio = warmup_ratio
-
- self.base_momentum = [] # initial momentum for all param groups
- self.regular_momentum = [
- ] # expected momentum if no warming up is performed
-
- def _set_momentum(self, runner, momentum_groups):
- if isinstance(runner.optimizer, dict):
- for k, optim in runner.optimizer.items():
- for param_group, mom in zip(optim.param_groups,
- momentum_groups[k]):
- if 'momentum' in param_group.keys():
- param_group['momentum'] = mom
- elif 'betas' in param_group.keys():
- param_group['betas'] = (mom, param_group['betas'][1])
- else:
- for param_group, mom in zip(runner.optimizer.param_groups,
- momentum_groups):
- if 'momentum' in param_group.keys():
- param_group['momentum'] = mom
- elif 'betas' in param_group.keys():
- param_group['betas'] = (mom, param_group['betas'][1])
-
- def get_momentum(self, runner, base_momentum):
- raise NotImplementedError
-
- def get_regular_momentum(self, runner):
- if isinstance(runner.optimizer, dict):
- momentum_groups = {}
- for k in runner.optimizer.keys():
- _momentum_group = [
- self.get_momentum(runner, _base_momentum)
- for _base_momentum in self.base_momentum[k]
- ]
- momentum_groups.update({k: _momentum_group})
- return momentum_groups
- else:
- return [
- self.get_momentum(runner, _base_momentum)
- for _base_momentum in self.base_momentum
- ]
-
- def get_warmup_momentum(self, cur_iters):
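- # Momentum warmup divides by a factor in (0, 1], so momentum starts above
- # its regular value and decays back toward it (the inverse of lr warmup).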
-
- def _get_warmup_momentum(cur_iters, regular_momentum):
- if self.warmup == 'constant':
- warmup_momentum = [
- _momentum / self.warmup_ratio
- for _momentum in regular_momentum
- ]
- elif self.warmup == 'linear':
- k = (1 - cur_iters / self.warmup_iters) * (1 -
- self.warmup_ratio)
- warmup_momentum = [
- _momentum / (1 - k) for _momentum in regular_momentum
- ]
- elif self.warmup == 'exp':
- k = self.warmup_ratio**(1 - cur_iters / self.warmup_iters)
- warmup_momentum = [
- _momentum / k for _momentum in regular_momentum
- ]
- return warmup_momentum
-
- if isinstance(self.regular_momentum, dict):
- momentum_groups = {}
- for key, regular_momentum in self.regular_momentum.items():
- momentum_groups[key] = _get_warmup_momentum(
- cur_iters, regular_momentum)
- return momentum_groups
- else:
- return _get_warmup_momentum(cur_iters, self.regular_momentum)
-
- def before_run(self, runner):
- # NOTE: when resuming from a checkpoint,
- # if 'initial_momentum' is not saved,
- # it will be set according to the optimizer params
- if isinstance(runner.optimizer, dict):
- self.base_momentum = {}
- for k, optim in runner.optimizer.items():
- for group in optim.param_groups:
- if 'momentum' in group.keys():
- group.setdefault('initial_momentum', group['momentum'])
- else:
- group.setdefault('initial_momentum', group['betas'][0])
- _base_momentum = [
- group['initial_momentum'] for group in optim.param_groups
- ]
- self.base_momentum.update({k: _base_momentum})
- else:
- for group in runner.optimizer.param_groups:
- if 'momentum' in group.keys():
- group.setdefault('initial_momentum', group['momentum'])
- else:
- group.setdefault('initial_momentum', group['betas'][0])
- self.base_momentum = [
- group['initial_momentum']
- for group in runner.optimizer.param_groups
- ]
-
- def before_train_epoch(self, runner):
- if not self.by_epoch:
- return
- self.regular_momentum = self.get_regular_momentum(runner)
- self._set_momentum(runner, self.regular_momentum)
-
- def before_train_iter(self, runner):
- cur_iter = runner.iter
- if not self.by_epoch:
- self.regular_momentum = self.get_regular_momentum(runner)
- if self.warmup is None or cur_iter >= self.warmup_iters:
- self._set_momentum(runner, self.regular_momentum)
- else:
- warmup_momentum = self.get_warmup_momentum(cur_iter)
- self._set_momentum(runner, warmup_momentum)
- elif self.by_epoch:
- if self.warmup is None or cur_iter > self.warmup_iters:
- return
- elif cur_iter == self.warmup_iters:
- self._set_momentum(runner, self.regular_momentum)
- else:
- warmup_momentum = self.get_warmup_momentum(cur_iter)
- self._set_momentum(runner, warmup_momentum)
-
-
-@HOOKS.register_module()
-class StepMomentumUpdaterHook(MomentumUpdaterHook):
- """Step momentum scheduler with min value clipping.
-
- Args:
- step (int | list[int]): Step to decay the momentum. If an int value is
- given, regard it as the decay interval. If a list is given, decay
- momentum at these steps.
- gamma (float, optional): Decay momentum ratio. Default: 0.5.
- min_momentum (float, optional): Minimum momentum value to keep. If
- momentum after decay is lower than this value, it will be clipped
- accordingly. If None is given, we don't perform momentum clipping.
- Default: None.
- """
-
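- # Example (hypothetical mmcv config): halve momentum at epochs 8 and 11
- # via the standard momentum_config mechanism:
- #   momentum_config = dict(policy='step', step=[8, 11], gamma=0.5)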
- def __init__(self, step, gamma=0.5, min_momentum=None, **kwargs):
- if isinstance(step, list):
- assert mmcv.is_list_of(step, int)
- assert all([s > 0 for s in step])
- elif isinstance(step, int):
- assert step > 0
- else:
- raise TypeError('"step" must be a list or integer')
- self.step = step
- self.gamma = gamma
- self.min_momentum = min_momentum
- super(StepMomentumUpdaterHook, self).__init__(**kwargs)
-
- def get_momentum(self, runner, base_momentum):
- progress = runner.epoch if self.by_epoch else runner.iter
-
- # calculate exponential term
- if isinstance(self.step, int):
- exp = progress // self.step
- else:
- exp = len(self.step)
- for i, s in enumerate(self.step):
- if progress < s:
- exp = i
- break
-
- momentum = base_momentum * (self.gamma**exp)
- if self.min_momentum is not None:
- # clip to a minimum value
- momentum = max(momentum, self.min_momentum)
- return momentum
-
-
-@HOOKS.register_module()
-class CosineAnnealingMomentumUpdaterHook(MomentumUpdaterHook):
-
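- # Cosine-anneals momentum from its base value toward `min_momentum` (or
- # `base_momentum * min_momentum_ratio`) over the full training run.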
- def __init__(self, min_momentum=None, min_momentum_ratio=None, **kwargs):
- assert (min_momentum is None) ^ (min_momentum_ratio is None)
- self.min_momentum = min_momentum
- self.min_momentum_ratio = min_momentum_ratio
- super(CosineAnnealingMomentumUpdaterHook, self).__init__(**kwargs)
-
- def get_momentum(self, runner, base_momentum):
- if self.by_epoch:
- progress = runner.epoch
- max_progress = runner.max_epochs
- else:
- progress = runner.iter
- max_progress = runner.max_iters
- if self.min_momentum_ratio is not None:
- target_momentum = base_momentum * self.min_momentum_ratio
- else:
- target_momentum = self.min_momentum
- return annealing_cos(base_momentum, target_momentum,
- progress / max_progress)
-
-
-@HOOKS.register_module()
-class CyclicMomentumUpdaterHook(MomentumUpdaterHook):
- """Cyclic momentum Scheduler.
-
- Implement the cyclical momentum scheduler policy described in
- https://arxiv.org/pdf/1708.07120.pdf
-
- This momentum scheduler is usually used together with the CyclicLRUpdater
- to improve performance in the 3D detection area.
-
- Attributes:
- target_ratio (tuple[float]): Relative ratio of the lowest momentum and
- the highest momentum to the initial momentum.
- cyclic_times (int): Number of cycles during training
- step_ratio_up (float): The ratio of the increasing process of momentum
- in the total cycle.
- by_epoch (bool): Whether to update momentum by epoch.
- """
-
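- # With the default target_ratio=(0.85 / 0.95, 1), momentum first dips below
- # its base value while the learning rate rises, then recovers, matching the
- # inverse cycling described in the one-cycle paper.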
- def __init__(self,
- by_epoch=False,
- target_ratio=(0.85 / 0.95, 1),
- cyclic_times=1,
- step_ratio_up=0.4,
- **kwargs):
- if isinstance(target_ratio, float):
- target_ratio = (target_ratio, target_ratio / 1e5)
- elif isinstance(target_ratio, tuple):
- target_ratio = (target_ratio[0], target_ratio[0] / 1e5) \
- if len(target_ratio) == 1 else target_ratio
- else:
- raise ValueError('target_ratio should be either float '
- f'or tuple, got {type(target_ratio)}')
-
- assert len(target_ratio) == 2, \
- '"target_ratio" must be list or tuple of two floats'
- assert 0 <= step_ratio_up < 1.0, \
- '"step_ratio_up" must be in range [0,1)'
-
- self.target_ratio = target_ratio
- self.cyclic_times = cyclic_times
- self.step_ratio_up = step_ratio_up
- self.momentum_phases = [] # init momentum_phases
- # currently only support by_epoch=False
- assert not by_epoch, \
- 'currently only support "by_epoch" = False'
- super(CyclicMomentumUpdaterHook, self).__init__(by_epoch, **kwargs)
-
- def before_run(self, runner):
- super(CyclicMomentumUpdaterHook, self).before_run(runner)
- # initiate momentum_phases
- # total momentum_phases are separated as up and down
- max_iter_per_phase = runner.max_iters // self.cyclic_times
- iter_up_phase = int(self.step_ratio_up * max_iter_per_phase)
- self.momentum_phases.append(
- [0, iter_up_phase, max_iter_per_phase, 1, self.target_ratio[0]])
- self.momentum_phases.append([
- iter_up_phase, max_iter_per_phase, max_iter_per_phase,
- self.target_ratio[0], self.target_ratio[1]
- ])
-
- def get_momentum(self, runner, base_momentum):
- curr_iter = runner.iter
- for (start_iter, end_iter, max_iter_per_phase, start_ratio,
- end_ratio) in self.momentum_phases:
- curr_iter %= max_iter_per_phase
- if start_iter <= curr_iter < end_iter:
- progress = curr_iter - start_iter
- return annealing_cos(base_momentum * start_ratio,
- base_momentum * end_ratio,
- progress / (end_iter - start_iter))
-
-
-@HOOKS.register_module()
-class OneCycleMomentumUpdaterHook(MomentumUpdaterHook):
- """OneCycle momentum Scheduler.
-
- This momentum scheduler is usually used together with the OneCycleLrUpdater
- to improve performance.
-
- Args:
- base_momentum (float or list): Lower momentum boundaries in the cycle
- for each parameter group. Note that momentum is cycled inversely
- to learning rate; at the peak of a cycle, momentum is
- 'base_momentum' and learning rate is 'max_lr'.
- Default: 0.85
- max_momentum (float or list): Upper momentum boundaries in the cycle
- for each parameter group. Functionally,
- it defines the cycle amplitude (max_momentum - base_momentum).
- Note that momentum is cycled inversely
- to learning rate; at the start of a cycle, momentum is
- 'max_momentum' and learning rate is 'base_lr'
- Default: 0.95
- pct_start (float): The percentage of the cycle (in number of steps)
- spent increasing the learning rate.
- Default: 0.3
- anneal_strategy (str): {'cos', 'linear'}
- Specifies the annealing strategy: 'cos' for cosine annealing,
- 'linear' for linear annealing.
- Default: 'cos'
- three_phase (bool): If three_phase is True, use a third phase of the
- schedule to annihilate the learning rate according to
- final_div_factor instead of modifying the second phase (the first
- two phases will be symmetrical about the step indicated by
- pct_start).
- Default: False
- """
-
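- # Example (hypothetical mmcv config), paired with a OneCycle lr schedule:
- #   momentum_config = dict(policy='OneCycle', base_momentum=0.85,
- #                          max_momentum=0.95, pct_start=0.3)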
- def __init__(self,
- base_momentum=0.85,
- max_momentum=0.95,
- pct_start=0.3,
- anneal_strategy='cos',
- three_phase=False,
- **kwargs):
- # validate by_epoch, currently only support by_epoch=False
- if 'by_epoch' not in kwargs:
- kwargs['by_epoch'] = False
- else:
- assert not kwargs['by_epoch'], \
- 'currently only support "by_epoch" = False'
- if not isinstance(base_momentum, (float, list, dict)):
- raise ValueError('base_momentum must be of type float, '
- 'list or dict.')
- self._base_momentum = base_momentum
- if not isinstance(max_momentum, (float, list, dict)):
- raise ValueError('max_momentum must be of type float, '
- 'list or dict.')
- self._max_momentum = max_momentum
- # validate pct_start
- if pct_start < 0 or pct_start > 1 or not isinstance(pct_start, float):
- raise ValueError('Expected float between 0 and 1 for pct_start, but '
- f'got {pct_start}')
- self.pct_start = pct_start
- # validate anneal_strategy
- if anneal_strategy not in ['cos', 'linear']:
- raise ValueError('anneal_strategy must be one of "cos" or '
- f'"linear", instead got {anneal_strategy}')
- elif anneal_strategy == 'cos':
- self.anneal_func = annealing_cos
- elif anneal_strategy == 'linear':
- self.anneal_func = annealing_linear
- self.three_phase = three_phase
- self.momentum_phases = [] # init momentum_phases
- super(OneCycleMomentumUpdaterHook, self).__init__(**kwargs)
-
- def before_run(self, runner):
- if isinstance(runner.optimizer, dict):
- for k, optim in runner.optimizer.items():
- if ('momentum' not in optim.defaults
- and 'betas' not in optim.defaults):
- raise ValueError('optimizer must support momentum with '
- 'option enabled')
- self.use_beta1 = 'betas' in optim.defaults
- _base_momentum = format_param(k, optim, self._base_momentum)
- _max_momentum = format_param(k, optim, self._max_momentum)
- for group, b_momentum, m_momentum in zip(
- optim.param_groups, _base_momentum, _max_momentum):
- if self.use_beta1:
- _, beta2 = group['betas']
- group['betas'] = (m_momentum, beta2)
- else:
- group['momentum'] = m_momentum
- group['base_momentum'] = b_momentum
- group['max_momentum'] = m_momentum
- else:
- optim = runner.optimizer
- if ('momentum' not in optim.defaults
- and 'betas' not in optim.defaults):
- raise ValueError('optimizer must support momentum with '
- 'option enabled')
- self.use_beta1 = 'betas' in optim.defaults
- k = type(optim).__name__
- _base_momentum = format_param(k, optim, self._base_momentum)
- _max_momentum = format_param(k, optim, self._max_momentum)
- for group, b_momentum, m_momentum in zip(optim.param_groups,
- _base_momentum,
- _max_momentum):
- if self.use_beta1:
- _, beta2 = group['betas']
- group['betas'] = (m_momentum, beta2)
- else:
- group['momentum'] = m_momentum
- group['base_momentum'] = b_momentum
- group['max_momentum'] = m_momentum
-
- if self.three_phase:
- self.momentum_phases.append({
- 'end_iter':
- float(self.pct_start * runner.max_iters) - 1,
- 'start_momentum':
- 'max_momentum',
- 'end_momentum':
- 'base_momentum'
- })
- self.momentum_phases.append({
- 'end_iter':
- float(2 * self.pct_start * runner.max_iters) - 2,
- 'start_momentum':
- 'base_momentum',
- 'end_momentum':
- 'max_momentum'
- })
- self.momentum_phases.append({
- 'end_iter': runner.max_iters - 1,
- 'start_momentum': 'max_momentum',
- 'end_momentum': 'max_momentum'
- })
- else:
- self.momentum_phases.append({
- 'end_iter':
- float(self.pct_start * runner.max_iters) - 1,
- 'start_momentum':
- 'max_momentum',
- 'end_momentum':
- 'base_momentum'
- })
- self.momentum_phases.append({
- 'end_iter': runner.max_iters - 1,
- 'start_momentum': 'base_momentum',
- 'end_momentum': 'max_momentum'
- })
-
- def _set_momentum(self, runner, momentum_groups):
- if isinstance(runner.optimizer, dict):
- for k, optim in runner.optimizer.items():
- for param_group, mom in zip(optim.param_groups,
- momentum_groups[k]):
- if 'momentum' in param_group.keys():
- param_group['momentum'] = mom
- elif 'betas' in param_group.keys():
- param_group['betas'] = (mom, param_group['betas'][1])
- else:
- for param_group, mom in zip(runner.optimizer.param_groups,
- momentum_groups):
- if 'momentum' in param_group.keys():
- param_group['momentum'] = mom
- elif 'betas' in param_group.keys():
- param_group['betas'] = (mom, param_group['betas'][1])
-
- def get_momentum(self, runner, param_group):
- curr_iter = runner.iter
- start_iter = 0
- for i, phase in enumerate(self.momentum_phases):
- end_iter = phase['end_iter']
- if curr_iter <= end_iter or i == len(self.momentum_phases) - 1:
- pct = (curr_iter - start_iter) / (end_iter - start_iter)
- momentum = self.anneal_func(
- param_group[phase['start_momentum']],
- param_group[phase['end_momentum']], pct)
- break
- start_iter = end_iter
- return momentum
-
- def get_regular_momentum(self, runner):
- if isinstance(runner.optimizer, dict):
- momentum_groups = {}
- for k, optim in runner.optimizer.items():
- _momentum_group = [
- self.get_momentum(runner, param_group)
- for param_group in optim.param_groups
- ]
- momentum_groups.update({k: _momentum_group})
- return momentum_groups
- else:
- momentum_groups = []
- for param_group in runner.optimizer.param_groups:
- momentum_groups.append(self.get_momentum(runner, param_group))
- return momentum_groups
diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/utils/parrots_wrapper.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/utils/parrots_wrapper.py
deleted file mode 100644
index 93c97640d4b9ed088ca82cfe03e6efebfcfa9dbf..0000000000000000000000000000000000000000
--- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/utils/parrots_wrapper.py
+++ /dev/null
@@ -1,107 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
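-# Thin compatibility layer: resolves conv/pool/norm/dataloader/extension
-# classes from either the parrots framework or stock PyTorch at import time.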
-from functools import partial
-
-import torch
-
-TORCH_VERSION = torch.__version__
-
-
-def is_rocm_pytorch() -> bool:
- is_rocm = False
- if TORCH_VERSION != 'parrots':
- try:
- from torch.utils.cpp_extension import ROCM_HOME
- is_rocm = True if ((torch.version.hip is not None) and
- (ROCM_HOME is not None)) else False
- except ImportError:
- pass
- return is_rocm
-
-
-def _get_cuda_home():
- if TORCH_VERSION == 'parrots':
- from parrots.utils.build_extension import CUDA_HOME
- else:
- if is_rocm_pytorch():
- from torch.utils.cpp_extension import ROCM_HOME
- CUDA_HOME = ROCM_HOME
- else:
- from torch.utils.cpp_extension import CUDA_HOME
- return CUDA_HOME
-
-
-def get_build_config():
- if TORCH_VERSION == 'parrots':
- from parrots.config import get_build_info
- return get_build_info()
- else:
- return torch.__config__.show()
-
-
-def _get_conv():
- if TORCH_VERSION == 'parrots':
- from parrots.nn.modules.conv import _ConvNd, _ConvTransposeMixin
- else:
- from torch.nn.modules.conv import _ConvNd, _ConvTransposeMixin
- return _ConvNd, _ConvTransposeMixin
-
-
-def _get_dataloader():
- if TORCH_VERSION == 'parrots':
- from torch.utils.data import DataLoader, PoolDataLoader
- else:
- from torch.utils.data import DataLoader
- PoolDataLoader = DataLoader
- return DataLoader, PoolDataLoader
-
-
-def _get_extension():
- if TORCH_VERSION == 'parrots':
- from parrots.utils.build_extension import BuildExtension, Extension
- CppExtension = partial(Extension, cuda=False)
- CUDAExtension = partial(Extension, cuda=True)
- else:
- from torch.utils.cpp_extension import (BuildExtension, CppExtension,
- CUDAExtension)
- return BuildExtension, CppExtension, CUDAExtension
-
-
-def _get_pool():
- if TORCH_VERSION == 'parrots':
- from parrots.nn.modules.pool import (_AdaptiveAvgPoolNd,
- _AdaptiveMaxPoolNd, _AvgPoolNd,
- _MaxPoolNd)
- else:
- from torch.nn.modules.pooling import (_AdaptiveAvgPoolNd,
- _AdaptiveMaxPoolNd, _AvgPoolNd,
- _MaxPoolNd)
- return _AdaptiveAvgPoolNd, _AdaptiveMaxPoolNd, _AvgPoolNd, _MaxPoolNd
-
-
-def _get_norm():
- if TORCH_VERSION == 'parrots':
- from parrots.nn.modules.batchnorm import _BatchNorm, _InstanceNorm
- SyncBatchNorm_ = torch.nn.SyncBatchNorm2d
- else:
- from torch.nn.modules.instancenorm import _InstanceNorm
- from torch.nn.modules.batchnorm import _BatchNorm
- SyncBatchNorm_ = torch.nn.SyncBatchNorm
- return _BatchNorm, _InstanceNorm, SyncBatchNorm_
-
-
-_ConvNd, _ConvTransposeMixin = _get_conv()
-DataLoader, PoolDataLoader = _get_dataloader()
-BuildExtension, CppExtension, CUDAExtension = _get_extension()
-_BatchNorm, _InstanceNorm, SyncBatchNorm_ = _get_norm()
-_AdaptiveAvgPoolNd, _AdaptiveMaxPoolNd, _AvgPoolNd, _MaxPoolNd = _get_pool()
-
-
-class SyncBatchNorm(SyncBatchNorm_):
-
- def _check_input_dim(self, input):
- if TORCH_VERSION == 'parrots':
- if input.dim() < 2:
- raise ValueError(
- f'expected at least 2D input (got {input.dim()}D input)')
- else:
- super()._check_input_dim(input)
diff --git a/spaces/systash/hashtag_and_named_entity_generator/theme.css b/spaces/systash/hashtag_and_named_entity_generator/theme.css
deleted file mode 100644
index 56d9cfc145c394b65b9f4cdc95482afb982f2360..0000000000000000000000000000000000000000
--- a/spaces/systash/hashtag_and_named_entity_generator/theme.css
+++ /dev/null
@@ -1,26 +0,0 @@
-/* Force scrollbar to always display */
-::-webkit-scrollbar {
- -webkit-appearance: none;
- width: 10px;
-}
-
-::-webkit-scrollbar-thumb {
- border-radius: 5px;
- background-color: rgba(0, 0, 0, .5);
- -webkit-box-shadow: 0 0 1px rgba(255, 255, 255, .5);
-}
-
-/* Add scrollbar to body */
-body::-webkit-scrollbar {
- width: 10px;
-}
-
-body::-webkit-scrollbar-track {
- background-color: #F5F5F5;
-}
-
-body::-webkit-scrollbar-thumb {
- background-color: #000000;
- border-radius: 10px;
- border: 2px solid #F5F5F5;
-}
\ No newline at end of file
diff --git a/spaces/szukevin/VISOR-GPT/train/tencentpretrain/utils/misc.py b/spaces/szukevin/VISOR-GPT/train/tencentpretrain/utils/misc.py
deleted file mode 100644
index 46d7a61b4b605cb6409c3ae5b0ff9ceac5bac9ba..0000000000000000000000000000000000000000
--- a/spaces/szukevin/VISOR-GPT/train/tencentpretrain/utils/misc.py
+++ /dev/null
@@ -1,39 +0,0 @@
-import torch
-import sys
-
-
-def count_lines(file_path):
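- # Stream the file in 1 MiB binary chunks and count newline bytes, so large
- # corpora can be measured without loading them into memory.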
- lines_num = 0
- with open(file_path, 'rb') as f:
- while True:
- data = f.read(2 ** 20)
- if not data:
- break
- lines_num += data.count(b'\n')
- return lines_num
-
-
-def flip(x, dim):
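- # Reverse a tensor along `dim` via an index select (equivalent to
- # torch.flip(x, [dim])).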
- indices = [slice(None)] * x.dim()
- indices[dim] = torch.arange(x.size(dim) - 1, -1, -1,
- dtype=torch.long, device=x.device)
- return x[tuple(indices)]
-
-
-def pooling(memory_bank, seg, pooling_type):
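- # Pool per-token features into one vector per sequence. `seg` is a 0/1 mask
- # of valid positions: "mean" averages valid tokens, "last" takes the final
- # valid token, "max" pushes padded positions to -inf before the max, and any
- # other value returns the first ([CLS]-style) token.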
- seg = torch.unsqueeze(seg, dim=-1).type_as(memory_bank)
- memory_bank = memory_bank * seg
- if pooling_type == "mean":
- features = torch.sum(memory_bank, dim=1)
- features = torch.div(features, torch.sum(seg, dim=1))
- elif pooling_type == "last":
- features = memory_bank[torch.arange(memory_bank.shape[0]), torch.squeeze(torch.sum(seg, dim=1).type(torch.int64) - 1), :]
- elif pooling_type == "max":
- features = torch.max(memory_bank + (seg - 1) * sys.maxsize, dim=1)[0]
- else:
- features = memory_bank[:, 0, :]
- return features
-
-class ZeroOneNormalize(object):
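- # Scale a uint8 image tensor from [0, 255] to float [0, 1].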
- def __call__(self, img):
- return img.float().div(255)
\ No newline at end of file
diff --git a/spaces/tang155/bingo/src/lib/bots/bing/tts.ts b/spaces/tang155/bingo/src/lib/bots/bing/tts.ts
deleted file mode 100644
index cd10b7d1d7581bf9cf46ff6755fcca550c558c9b..0000000000000000000000000000000000000000
--- a/spaces/tang155/bingo/src/lib/bots/bing/tts.ts
+++ /dev/null
@@ -1,82 +0,0 @@
-import { sleep } from './utils'
-
-const synth = window.speechSynthesis
-
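-// Incrementally voices streamed text with the Web Speech API: speak() is
-// called repeatedly with the full text so far, and a background loop speaks
-// the not-yet-voiced span up to the last sentence boundary.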
-export class TTS {
- currentText = ''
- speakText = ''
- private controller = new AbortController()
- speaking = false
- get isSpeaking() {
- return this.speaking
- }
- finished = false
- constructor() {}
- abort = () => {
- this.controller.abort()
- }
-
- reset = () => {
- this.speaking = false
- this.finished = true
- this.currentText = ''
- this.speakText = ''
- this.abort()
- }
-
- speak = (text: string) => {
- if (!synth || text?.trim()?.length < 2) {
- return
- }
- this.currentText = text.replace(/[^\u4e00-\u9fa5_a-zA-Z0-9,。?,:;\.,:]+/g, '')
- this.finished = false
- this.loop()
- }
-
- private async doSpeak() {
- return new Promise((resolve) => {
- const endIndex = this.finished ? this.currentText.length :
- Math.max(
- this.currentText.lastIndexOf('。'),
- this.currentText.lastIndexOf(';'),
- this.currentText.lastIndexOf('、'),
- this.currentText.lastIndexOf('?'),
- this.currentText.lastIndexOf('\n')
- )
- const startIndex = this.speakText.length ? Math.max(0, this.currentText.lastIndexOf(this.speakText) + this.speakText.length) : 0
-
- if (startIndex >= endIndex) {
- return resolve(true)
- }
- const text = this.currentText.slice(startIndex, endIndex)
- this.speakText = text
- const utterThis = new SpeechSynthesisUtterance(text)
- this.controller.signal.onabort = () => {
- synth.cancel()
- this.finished = true
- resolve(false)
- }
-
- utterThis.onend = function (event) {
- resolve(true)
- }
-
- utterThis.onerror = function (event) {
- resolve(false)
- }
-
- const voice = synth.getVoices().find(v => v.name.includes('Microsoft Yunxi Online')) ?? null
- utterThis.voice = voice
- synth.speak(utterThis)
- })
- }
-
- private async loop() {
- if (this.speaking) return
- this.speaking = true
- while(!this.finished) {
- await Promise.all([sleep(1000), this.doSpeak()])
- }
- this.speaking = false
- }
-}
diff --git a/spaces/tang155/bingo/src/lib/isomorphic/index.ts b/spaces/tang155/bingo/src/lib/isomorphic/index.ts
deleted file mode 100644
index 738dc92f74079ab762d584fb7422a8c8c3b61547..0000000000000000000000000000000000000000
--- a/spaces/tang155/bingo/src/lib/isomorphic/index.ts
+++ /dev/null
@@ -1,17 +0,0 @@
-'use client'
-
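-// Isomorphic entry point: picks the browser or node implementation at
-// runtime while re-exporting the browser module's static types.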
-import Default from './browser'
-
-let exportsModel: any = {}
-
-if (process.browser) {
- Object.assign(exportsModel, require('./browser').default)
-} else {
- Object.assign(exportsModel, require('./node').default)
-}
-
-export default exportsModel! as typeof Default
-
-export const fetch: typeof Default.fetch = exportsModel!.fetch
-export const WebSocket: typeof Default.WebSocket = exportsModel!.WebSocket
-export const debug: typeof Default.debug = exportsModel!.debug
diff --git a/spaces/tcfly/Flowise/README.md b/spaces/tcfly/Flowise/README.md
deleted file mode 100644
index 66793dba8fdd6d0b2427b6cbbbab90aba95a9e89..0000000000000000000000000000000000000000
--- a/spaces/tcfly/Flowise/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Flowise
-emoji: 🦀
-colorFrom: pink
-colorTo: yellow
-sdk: docker
-app_port: 3000
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/terfces0erbo/CollegeProjectV2/AutoCAD LT 2010 32 Bit Free Download [TOP].md b/spaces/terfces0erbo/CollegeProjectV2/AutoCAD LT 2010 32 Bit Free Download [TOP].md
deleted file mode 100644
index df57651b06ae0a61e03696167380bdba00b5613b..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/AutoCAD LT 2010 32 Bit Free Download [TOP].md
+++ /dev/null
@@ -1,31 +0,0 @@
-
-How to Download AutoCAD LT 2010 32 Bit for Free
-AutoCAD LT 2010 32 bit free download
-
-
-Why Choose AutoCAD LT 2010 32 Bit?
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/terfces0erbo/CollegeProjectV2/CUR3D Maker Edition Torrent Download [Crack Serial Key.md b/spaces/terfces0erbo/CollegeProjectV2/CUR3D Maker Edition Torrent Download [Crack Serial Key.md
deleted file mode 100644
index 7b375a2c85d631ce6eeba648b2eff5cee40a418d..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/CUR3D Maker Edition Torrent Download [Crack Serial Key.md
+++ /dev/null
@@ -1,6 +0,0 @@
-CUR3D Maker Edition Torrent Download [Crack Serial Key
-
-CUR3D Maker Edition Torrent Download [Crack Serial Key - http://cinurl.com/16baae About This Software 3D printing as easy as printing on a ... 4d29de3e1b
-
-
-
diff --git a/spaces/terfces0erbo/CollegeProjectV2/Gtachennaicitypcgamedownload UPD.md b/spaces/terfces0erbo/CollegeProjectV2/Gtachennaicitypcgamedownload UPD.md
deleted file mode 100644
index 6e7b0f41ef9de7901ec69b72f74a2904a53f8b62..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/Gtachennaicitypcgamedownload UPD.md
+++ /dev/null
@@ -1,6 +0,0 @@
-gtachennaicitypcgamedownload
-
-Lil' Eddie City Of My Heart [iTunes Deluxe Edition] (2010) · gtachennaicitypcgamedownload. 1fdad05405
-
-
-
diff --git a/spaces/terfces0erbo/CollegeProjectV2/Ih8sn0w Ireb V3 1.2 For Windows.md b/spaces/terfces0erbo/CollegeProjectV2/Ih8sn0w Ireb V3 1.2 For Windows.md
deleted file mode 100644
index faba1be20e9f34cadd5f4dee3dfc43fa97048239..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/Ih8sn0w Ireb V3 1.2 For Windows.md
+++ /dev/null
@@ -1,45 +0,0 @@
-
-How to Use iH8sn0w's iREB v3.1.2 for Windows
-Ih8sn0w Ireb V3 1.2 For Windows
-Requirements
-
-
-Steps
-
-
-Benefits of Custom Firmware
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/text-generation-inference/chat-ui/README.md b/spaces/text-generation-inference/chat-ui/README.md
deleted file mode 100644
index 9b6f929ebdfd6077806faf4ede5959963102b5b9..0000000000000000000000000000000000000000
--- a/spaces/text-generation-inference/chat-ui/README.md
+++ /dev/null
@@ -1,48 +0,0 @@
----
-title: chat-ui
-emoji: 🔥
-colorFrom: purple
-colorTo: purple
-sdk: docker
-pinned: false
-license: other
----
-
-# create-svelte
-
-Everything you need to build a Svelte project, powered by [`create-svelte`](https://github.com/sveltejs/kit/tree/master/packages/create-svelte).
-
-## Creating a project
-
-If you're seeing this, you've probably already done this step. Congrats!
-
-```bash
-# create a new project in the current directory
-npm create svelte@latest
-
-# create a new project in my-app
-npm create svelte@latest my-app
-```
-
-## Developing
-
-Once you've created a project and installed dependencies with `npm install` (or `pnpm install` or `yarn`), start a development server:
-
-```bash
-npm run dev
-
-# or start the server and open the app in a new browser tab
-npm run dev -- --open
-```
-
-## Building
-
-To create a production version of your app:
-
-```bash
-npm run build
-```
-
-You can preview the production build with `npm run preview`.
-
-> To deploy your app, you may need to install an [adapter](https://kit.svelte.dev/docs/adapters) for your target environment.
diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Demon Hunter Shadow World Premium Mod APK and Become the Ultimate Hero.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Demon Hunter Shadow World Premium Mod APK and Become the Ultimate Hero.md
deleted file mode 100644
index 0c53e26b791d938e8fbe16ff9c4d2422ce8fbe16..0000000000000000000000000000000000000000
--- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Demon Hunter Shadow World Premium Mod APK and Become the Ultimate Hero.md
+++ /dev/null
@@ -1,123 +0,0 @@
-
-Download Demon Hunter: Shadow World Premium Mod APK
-download demon hunter shadow world premium mod apk
-Features of Demon Hunter: Shadow World
-Action-packed gameplay
-Multiple classes and customization options
-Diverse modes and locations
-
-* download demon hunter shadow world unlimited money mod apk
-* download demon hunter shadow world latest version mod apk
-* download demon hunter shadow world hack mod apk
-* download demon hunter shadow world full unlocked mod apk
-* download demon hunter shadow world premium mod apk for android
-* download demon hunter shadow world premium mod apk free
-* download demon hunter shadow world premium mod apk offline
-* download demon hunter shadow world premium mod apk no root
-* download demon hunter shadow world premium mod apk happymod
-* download demon hunter shadow world premium mod apk rexdl
-* download demon hunter shadow world premium mod apk revdl
-* download demon hunter shadow world premium mod apk android 1
-* download demon hunter shadow world premium mod apk apkpure
-* download demon hunter shadow world premium mod apk apkdone
-* download demon hunter shadow world premium mod apk obb
-* download demon hunter shadow world premium mod apk data
-* download demon hunter shadow world premium mod apk + data
-* download demon hunter shadow world premium mod apk + obb
-* download demon hunter shadow world premium mod apk 60.80.11.0
-* download demon hunter shadow world premium mod apk 2023
-* download demon hunter shadow world premium mod apk new update
-* download demon hunter shadow world premium mod apk latest update
-* download demon hunter shadow world premium mod apk mega mod
-* download demon hunter shadow world premium mod apk pro
-* download demon hunter shadow world premium cracked mod apk
-* download demon hunter shadow world premium patched mod apk
-* download demon hunter shadow world premium vip mod apk
-* download demon hunter shadow world adventure game mod apk
-* download demon hunter shadow world hidden object game mod apk
-* download demon hunter shadow world ea publishing game mod apk
-* how to download demon hunter shadow world premium mod apk
-* where to download demon hunter shadow world premium mod apk
-* best site to download demon hunter shadow world premium mod apk
-* best way to download demon hunter shadow world premium mod apk
-* safe site to download demon hunter shadow world premium mod apk
-* safe way to download demon hunter shadow world premium mod apk
-* easy way to download demon hunter shadow world premium mod apk
-* fast way to download demon hunter shadow world premium mod apk
-* free way to download demon hunter shadow world premium mod apk
-Thrilling PvP battles
-Benefits of Demon Hunter: Shadow World Premium Mod APK
-Access to premium features for free
-Removal of ads and other annoyances
-Unlimited resources and in-app purchases
-Offline functionality and compatibility
-How to Download and Install Demon Hunter: Shadow World Premium Mod APK
-Step 1: Find a reliable source
-Step 2: Download the mod apk file
-Step 3: Enable unknown sources on your device
-
-
-Step 4: Install the mod apk file
-
-
-Step 5: Enjoy the game
-
-
- Conclusion
-FAQs
- What is a mod apk?
-Is Demon Hunter: Shadow World mod apk safe to use?
-What are the minimum requirements to play Demon Hunter: Shadow World mod apk?
-
-
- How can I update Demon Hunter: Shadow World mod apk?
-How can I contact the developer of Demon Hunter: Shadow World mod apk?
-
-
-
\ No newline at end of file
diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download GTA San Andreas Multiplayer APK for Android 2021 - Play with Friends Online.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download GTA San Andreas Multiplayer APK for Android 2021 - Play with Friends Online.md
deleted file mode 100644
index 7fafa818a385de916e7aa9721911b5f2700b1b82..0000000000000000000000000000000000000000
--- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download GTA San Andreas Multiplayer APK for Android 2021 - Play with Friends Online.md
+++ /dev/null
@@ -1,70 +0,0 @@
-
-
-
-
- GTA San Andreas Multiplayer Android Download 2021 APK
-gta san andreas multiplayer android download 2021 apk
-Introduction
-
-gta san andreas online mobile android 2021
-gta sa multiplayer android mod apk 2021
-download gta san andreas for android with multiplayer 2021
-gta san andreas android ios multiplayer 2021
-gta san andreas samp android 2021 latest version
-gta sa online android apk download 2021
-gta san andreas multiplayer mod for android 2021
-how to play gta san andreas online on android 2021
-gta san andreas mobile multiplayer apk 2021
-gta sa android multiplayer server 2021
-gta san andreas online apk 2021 for android
-gta sa multiplayer mod android download 2021
-gta san andreas android multiplayer gameplay 2021
-gta san andreas samp apk download for android 2021
-gta sa online android 2021 free
-gta san andreas multiplayer android install 2021
-gta sa multiplayer android cheats 2021
-gta san andreas online android game 2021
-gta sa multiplayer android tutorial 2021
-gta san andreas samp android update 2021
-gta sa online android mod apk 2021
-gta san andreas multiplayer android offline 2021
-gta sa multiplayer android hack 2021
-gta san andreas online android app 2021
-How to download GTA SAMP Android APK 2021
-Requirements
-
-
-Steps
-
-
Features of GTA SAMP Android APK 2021
-Multiplayer mode
-Customization
-Compatibility
-
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Cakewalk Sonar X2 Producer.md b/spaces/tioseFevbu/cartoon-converter/scripts/Cakewalk Sonar X2 Producer.md
deleted file mode 100644
index 6234ee9b93dacc313353ae6c512e31674cdd3e8f..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Cakewalk Sonar X2 Producer.md
+++ /dev/null
@@ -1,171 +0,0 @@
-
-
-
-
- Cakewalk Sonar X2 Producer: A Comprehensive Review
-Features and Benefits of Cakewalk Sonar X2 Producer
-Cakewalk Sonar X2 Producer
-Skylight Interface
-ProChannel
-Virtual Instruments
-
-
-Audio and MIDI Effects
-
-
-Matrix View
-Smart Tool
-Console Emulator
-R-MIX SONAR
-FX Chains
-Automation and Take Lanes
-SoundCloud, MusicXML, and Export Options
-Pros and Cons of Cakewalk Sonar X2 Producer
-Pros
-
-
-Cons
-
-
-Comparison with Other DAWs
-
-
-
-
-| DAW | Differences |
-| --- | --- |
-| Pro Tools | Pro Tools is one of the most widely used DAWs in the professional music industry. It is known for its high-quality audio recording and editing capabilities, as well as its industry-standard format and compatibility. However, Pro Tools is also one of the most expensive DAWs in the market, and it requires a proprietary hardware device called an iLok to run. Pro Tools also has fewer virtual instruments and effects than Cakewalk Sonar X2 Producer, and it does not have features such as the Matrix View, the R-MIX SONAR, or the FX Chains. |
-| Logic Pro | Logic Pro is one of the most popular DAWs for Mac users. It is known for its intuitive and elegant interface, as well as its rich and diverse collection of virtual instruments and effects. However, Logic Pro is only available for Mac OS platforms, and it does not support Windows or Linux platforms. Logic Pro also has fewer audio editing features than Cakewalk Sonar X2 Producer, and it does not have features such as the ProChannel, the Console Emulator, or the R-MIX SONAR. |
-| Ableton Live | Ableton Live is one of the most innovative DAWs for live performance and electronic music production. It is known for its unique Session View that allows you to trigger and remix clips in real-time, as well as its powerful warping and time-stretching features that allow you to manipulate audio in creative ways. However, Ableton Live is also one of the most expensive DAWs in the market, and it has fewer audio recording and editing features than Cakewalk Sonar X2 Producer. Ableton Live also has fewer virtual instruments and effects than Cakewalk Sonar X2 Producer, and it does not have features such as the ProChannel, the Console Emulator, or the FX Chains. |
-| FL Studio | FL Studio is one of the most user-friendly and affordable DAWs for beginners and hobbyists. It is known for its easy-to-use and colorful interface, as well as its step sequencer and piano roll that allow you to create and edit beats and melodies quickly. However, FL Studio is also one of the most limited DAWs in terms of audio recording and editing features, and it does not have features such as the ProChannel, the Console Emulator, the R-MIX SONAR, or the FX Chains. FL Studio also has fewer virtual instruments and effects than Cakewalk Sonar X2 Producer, and it does not have features such as the Matrix View, the Smart Tool, or the Automation and Take Lanes. |
-Pricing and Availability of Cakewalk Sonar X2 Producer
-Conclusion
-
-
-FAQs
-
-
-
-
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Download 720p Maidan-E-Jung Movi TOP.md b/spaces/tioseFevbu/cartoon-converter/scripts/Download 720p Maidan-E-Jung Movi TOP.md
deleted file mode 100644
index 9ce02e3c7ab74f6db1dc037da65a33da14fec633..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Download 720p Maidan-E-Jung Movi TOP.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-Download 720p Maidan-E-Jung Movie
-Download 720p Maidan-E-Jung Movi
-
-
-
\ No newline at end of file
diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pygments/formatters/svg.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pygments/formatters/svg.py
deleted file mode 100644
index 075150a4b586d668c1666513fbf90463cdbb11ab..0000000000000000000000000000000000000000
--- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pygments/formatters/svg.py
+++ /dev/null
@@ -1,188 +0,0 @@
-"""
- pygments.formatters.svg
- ~~~~~~~~~~~~~~~~~~~~~~~
-
- Formatter for SVG output.
-
- :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-from pip._vendor.pygments.formatter import Formatter
-from pip._vendor.pygments.token import Comment
-from pip._vendor.pygments.util import get_bool_opt, get_int_opt
-
-__all__ = ['SvgFormatter']
-
-
-def escape_html(text):
- """Escape &, <, > as well as single and double quotes for HTML."""
- return text.replace('&', '&amp;'). \
- replace('<', '&lt;'). \
- replace('>', '&gt;'). \
- replace('"', '&quot;'). \
- replace("'", '&#39;')
-
-
-class2style = {}
-
-class SvgFormatter(Formatter):
- """
- Format tokens as an SVG graphics file. This formatter is still experimental.
- Each line of code is a ``