diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Christinaaguilerabacktobasicsbittorrent.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Christinaaguilerabacktobasicsbittorrent.md deleted file mode 100644 index d227211f43b876e0f34e13597bd87c69617891ef..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Christinaaguilerabacktobasicsbittorrent.md +++ /dev/null @@ -1,12 +0,0 @@ -
-

How to Download Christina Aguilera's Back to Basics Album via BitTorrent

-

Christina Aguilera is one of the most popular and talented singers of our time. Her fifth studio album, Back to Basics, was released in 2006 and received critical acclaim for its blend of retro soul, jazz, blues, and pop influences. The album features hit singles such as "Ain't No Other Man", "Hurt", "Candyman", and "Slow Down Baby".

-

If you are a fan of Christina Aguilera and want to download her Back to Basics album for free, you can use BitTorrent, a peer-to-peer file-sharing protocol that lets users download and share large files over the internet. BitTorrent itself is legal, but downloading copyrighted content without permission is not. Therefore, you should only download files that are in the public domain or that you have the right to use.

-

-

To download Christina Aguilera's Back to Basics album via BitTorrent, you will need a BitTorrent client, such as qBittorrent, uTorrent, or Vuze. A BitTorrent client is software that connects you to other users who have the files you want and downloads those files to your computer. You will also need a torrent file or a magnet link; both carry metadata about the download, such as the file names, sizes, and how to find the peers that share them.
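To make the magnet-link idea concrete, here is a short Python sketch that pulls apart a made-up link; the hash, name, and tracker below are placeholders, not a real torrent:

```python
from urllib.parse import urlparse, parse_qs

# A made-up magnet link, used only to illustrate the format.
magnet = (
    "magnet:?xt=urn:btih:0123456789abcdef0123456789abcdef01234567"
    "&dn=Example+Album&tr=udp://tracker.example.org:6969"
)

params = parse_qs(urlparse(magnet).query)
info_hash = params["xt"][0].split(":")[-1]  # unique ID of the torrent's contents
name = params.get("dn", ["(unnamed)"])[0]   # human-readable display name
trackers = params.get("tr", [])             # servers that help locate peers

print(info_hash, name, trackers)
```

The client identifies a download by its info-hash rather than its display name, which is why a magnet link keeps working even if the name changes.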

-

One place to find a torrent file or a magnet link for the album is a torrent indexing website. Such sites typically provide a magnet link that you can copy and paste into your BitTorrent client; alternatively, clicking the magnet link will open your client automatically and start the download.

-

Another possible source is a YouTube upload of the full album, which usually includes the track listing and release date. The video description often links to streaming platforms where you can listen to or buy the album legally, while the comments section sometimes contains torrent files or magnet links posted by other users. Treat links from comments with particular caution before using them with your BitTorrent client.

-

Before downloading any torrent file or magnet link from any source, always check its validity and safety. Read the reviews and ratings left by other users who have downloaded the same file, and scan the download with antivirus software before opening it. Additionally, consider using a VPN (virtual private network) to protect your privacy and security while downloading files via BitTorrent.
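When a source publishes a checksum alongside its download, you can add one more safety check by comparing digests. A minimal sketch, with the file name and expected digest as placeholders:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Both values below are placeholders for illustration.
expected = "sha256-digest-published-by-the-source"
if sha256_of("downloaded-file.zip") != expected:
    print("Checksum mismatch: do not open this file.")
```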

-

Downloading Christina Aguilera's Back to Basics album via BitTorrent is a simple and fast way to enjoy her music for free. However, you should always respect the rights of the artists and creators who produce such amazing content. If you like Christina Aguilera's music, you should support her by buying her albums, attending her concerts, or following her on social media.

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Dbms Book Pdf By Prateek Bhatia.md b/spaces/1gistliPinn/ChatGPT4/Examples/Dbms Book Pdf By Prateek Bhatia.md deleted file mode 100644 index e9d6d6dd6b3c56cccc4416cba4500cae80b84353..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Dbms Book Pdf By Prateek Bhatia.md +++ /dev/null @@ -1,6 +0,0 @@ -

-
-[Books] Database Management System By Prateek Bhatia Pdf. Recognizing the ...
-
-
-

diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Asphalt 8 The Ultimate Racing Game for Speed Lovers - Drive and Drift with Real Physics.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Asphalt 8 The Ultimate Racing Game for Speed Lovers - Drive and Drift with Real Physics.md deleted file mode 100644 index 737a9fb31793f9872b70dafaadb15b204489d098..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Asphalt 8 The Ultimate Racing Game for Speed Lovers - Drive and Drift with Real Physics.md +++ /dev/null @@ -1,119 +0,0 @@ -
-

Asphalt 8 Racing Game - Drive, Drift at Real Speed Download

-

If you are looking for a thrilling and immersive racing game that will keep you on the edge of your seat, you should definitely check out Asphalt 8. This is one of the most popular and acclaimed racing games on mobile devices, with over 470 million players worldwide. In this article, we will tell you everything you need to know about Asphalt 8, including how to download and install it on your device, what features it offers, and some tips and tricks to help you become a better racer.

-

-

Introduction

-

Asphalt 8 is an arcade racing game developed by Gameloft SE. It is part of the Asphalt franchise that started in 2004. Asphalt 8 was released in 2013 for iOS, Android, Windows Phone, Windows 10, Tizen, BlackBerry, tvOS, macOS, Nintendo Switch, Ouya, Fire OS. It has received several updates and expansions since then.

-

Asphalt 8 lets you experience the thrill of driving over 300 high-performance cars and bikes from top licensed manufacturers like Ferrari, Lamborghini, Bugatti, Porsche, Ducati, and more. You can race across more than 75 tracks in different locations around the world, from the Nevada Desert to Tokyo streets. You can also compete with other players in real-time multiplayer mode or challenge yourself in various single-player modes.

-

To download Asphalt 8 on your device, follow these steps:

1. Open your device's app store (for example, Google Play or the App Store).
2. Search for "Asphalt 8" and select the game from the results.
3. Tap Install (or Get) and wait for the download to complete.
4. Launch the game and follow the on-screen setup prompts.

Features of Asphalt 8

-

Licensed luxury cars and motorcycles

-

One of the main attractions of Asphalt 8 is its impressive collection of vehicles. You can choose from over 300 cars and bikes from some of the most prestigious brands in the world. Whether you prefer speed, power, design, or handling, you will find something that suits your taste among models from manufacturers such as Ferrari, Lamborghini, Bugatti, Porsche, and Ducati.

-

-

You can customize and upgrade your vehicles with various decals, colors, rims, and performance parts. You can also tune them to suit your driving style and preferences. You can unlock new vehicles by completing events, collections, or spending credits and tokens.

-

Stunning graphics and physics-based gameplay

-

Asphalt 8 is not just a racing game; it is also a visual spectacle. Stunning graphics and animations make you feel like you are in a real race, and a physics-based engine simulates realistic car behavior and dynamics. You can see the details of the cars, the environments, the weather effects, and the damage effects.

-

Asphalt 8 is also known for its high-speed aerial stunts and drifts. You can perform amazing jumps and flips by using ramps, barrels, bridges, and other obstacles. You can also drift on the asphalt to gain more speed and nitro. You can use the nitro to boost your speed and perform even more spectacular stunts. You can also activate the adrenaline mode to go faster than ever.

-

Endless stream of content and modes

-

Asphalt 8 never gets boring because it offers an endless stream of content and modes. Each of its 75+ tracks, set in locations from the Nevada Desert to the streets of Tokyo, has its own challenges, shortcuts, and secrets to discover.

-

You can also compete with other players in real-time multiplayer mode or challenge yourself in a variety of single-player modes.

-

Tips and tricks for Asphalt 8

-

If you want to improve your racing skills and enjoy Asphalt 8 more, you should follow these tips and tricks:

-

How to master the controls and settings

-

Asphalt 8 offers different control options for you to choose from. You can use tilt, touch, or tap to steer your vehicle. You can also customize the sensitivity, position, and size of the controls. You can also enable or disable auto-acceleration, auto-brake, and manual nitro.

-

You should experiment with different control options and settings until you find the one that suits you best. You should also practice using nitro, boosters, and other power-ups effectively. Nitro can help you speed up, overtake, or escape from opponents. Boosters can give you extra advantages such as double credits, extra nitro, or tuning kits. Other power-ups such as shockwaves, magnets, or shields can help you deal with obstacles and enemies.

-

How to earn credits and tokens

-

Credits and tokens are the main currencies in Asphalt 8. You need them to buy new vehicles, upgrade them, or access special features. You can earn credits and tokens by completing races, events, collections, achievements, or watching ads. You can also buy them with real money if you want.

-

You should spend your credits and tokens wisely on upgrades, decals, and special items. Upgrades can improve your vehicle's performance and stats. Decals can change your vehicle's appearance and give you extra bonuses. Special items such as pro kits or blueprints can unlock new vehicles or enhance them.

-

How to improve your racing skills and strategies

-

To become a better racer in Asphalt 8, you should learn how to choose the best car and bike for each track and mode. Different vehicles have different strengths and weaknesses in terms of speed, acceleration, handling, nitro efficiency, etc. You should also consider the terrain, weather, and layout of the track when choosing your vehicle.

-

You should also learn how to avoid crashes, obstacles, and opponents' attacks. Crashes can slow you down or damage your vehicle. Obstacles such as traffic, barrels, rocks, etc. can block your way or make you lose control. Opponents' attacks such as missiles, EMPs, bumpers, etc. can hinder your progress or knock you down. You should use your skills and power-ups to dodge or counter these threats.

-

Conclusion

-

Asphalt 8 is a fantastic racing game that will keep you hooked for hours. It offers a wide range of vehicles, tracks, modes, and features that will satisfy any racing fan. It also has stunning graphics and physics-based gameplay that will make you feel like you are in a real race.

-

If you are ready to experience the thrill of driving at real speed and performing amazing stunts and drifts on the asphalt, you should download Asphalt 8 today. You can find it in your device's app store, or visit the official page at https://gameloft.com/game/asphalt-8 for other platforms. You can also join the community of millions of players online and share your racing stories and tips.

-

Don't wait any longer. Download Asphalt 8 now and start your racing adventure!

-

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Aprenda a fazer pizzas incrveis com o Good Pizza Great Pizza Mod Apk Dinheiro Infinito.md b/spaces/1phancelerku/anime-remove-background/Aprenda a fazer pizzas incrveis com o Good Pizza Great Pizza Mod Apk Dinheiro Infinito.md deleted file mode 100644 index b2c948b7a238f4ceeb65b48d3c588227fba9cc64..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Aprenda a fazer pizzas incrveis com o Good Pizza Great Pizza Mod Apk Dinheiro Infinito.md +++ /dev/null @@ -1,112 +0,0 @@ -
-

Good Pizza Great Pizza Mod APK Dinheiro Infinito: How to Download and Play

-

Do you love pizza? Do you dream of running your own pizza shop? Do you want to have unlimited money to buy all the toppings, upgrades, and decorations you want? If you answered yes to any of these questions, then you should try Good Pizza Great Pizza Mod APK Dinheiro Infinito, a modified version of the popular pizza-making game that gives you infinite money. In this article, we will tell you what this game is, how to download and install it, and how to play it.

-

-

What is Good Pizza Great Pizza?

-

Good Pizza Great Pizza is a fun and addictive game that lets you experience the joy and challenge of running your own pizza shop. You have to make pizzas for your customers, who have different preferences and requests. You have to use the right ingredients, cut the pizza correctly, and bake it for the right time. You also have to manage your money, buy new toppings and equipment, and compete with your rival pizza shop across the street.

-

A fun and addictive pizza-making game

-

The game has simple but engaging gameplay that will keep you hooked for hours. You can use your finger to swipe, tap, drag, and drop the ingredients on the pizza dough, use a knife to cut the pizza into slices, and watch a timer to control the baking. The game has over 100 different ingredients, including cheese, pepperoni, mushrooms, pineapple, anchovies, olives, and more, and you can unlock special toppings like bacon, ham, chicken, shrimp, and even chocolate.

-

A realistic and challenging simulation of running a pizza shop

-

The game is not just about making pizzas. It is also about running a business. You have to balance your income and expenses, pay rent and bills, buy new equipment and upgrades, and deal with unexpected events like power outages, robberies, or inspections. You also have to keep track of your inventory, order new supplies when needed, and avoid wasting food. The game has a realistic economy system that changes according to the day of the week, the weather, and the season.
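As a rough illustration of the balancing act the game simulates, here is a toy daily ledger in Python; every figure is invented, and the game's real numbers will differ:

```python
# Toy daily ledger for a pizza shop; all figures are invented.
pizzas_sold = 40
price_per_pizza = 12
ingredient_cost_per_pizza = 4
daily_rent_and_bills = 150

income = pizzas_sold * price_per_pizza                                     # 480
expenses = pizzas_sold * ingredient_cost_per_pizza + daily_rent_and_bills  # 310
print(f"Daily profit: {income - expenses}")                                # 170
```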

-


-

A colorful and quirky cast of customers and rivals

-

The game has over 80 different customers, each with their own personality, preferences, and dialogue. Some customers are easy to please, while others are very picky or weird. Some customers will tip you well, while others will try to scam you or complain. You have to listen carefully to their orders, read their expressions, and make them happy. You also have to deal with your rival pizza shop owner, who will try to sabotage you or steal your customers.

-

What is Good Pizza Great Pizza Mod APK Dinheiro Infinito?

-

Good Pizza Great Pizza Mod APK Dinheiro Infinito is a modified version of the original game that gives you unlimited money. This means that you can buy all the toppings, upgrades, and decorations you want without worrying about your budget. You can also skip the ads that sometimes interrupt the game. You can also cheat the game by making any pizza you want, regardless of the customer's order. You can have fun experimenting with different combinations and creations.

-

A modified version of the game that gives you unlimited money

-

With Good Pizza Great Pizza Mod APK Dinheiro Infinito, you will never run out of money. You can start the game with a huge amount of cash, and you will earn more every time you sell a pizza. You can spend your money on anything you want, without worrying about your budget. You can buy all the toppings, even the most expensive ones, and use them as much as you want. You can also buy all the upgrades, such as a bigger oven, a faster cutter, a better mixer, and more. You can also buy all the decorations, such as posters, plants, lights, and furniture, to make your shop look more attractive and cozy.

-

A way to unlock all the toppings, upgrades, and decorations

-

With Good Pizza Great Pizza Mod APK Dinheiro Infinito, you will not have to wait or work hard to unlock new items. You can access all the toppings, upgrades, and decorations from the start of the game. You can choose from over 100 different ingredients, including cheese, pepperoni, mushrooms, pineapple, anchovies, olives, and more. You can also unlock special toppings like bacon, ham, chicken, shrimp, and even chocolate. You can also get all the upgrades, such as a bigger oven, a faster cutter, a better mixer, and more. You can also get all the decorations, such as posters, plants, lights, and furniture, to make your shop look more attractive and cozy.

-

A cheat to make the game easier and more enjoyable

-

With Good Pizza Great Pizza Mod APK Dinheiro Infinito, you will not have to worry about satisfying your customers or beating your rivals. You can make any pizza you want, regardless of the customer's order, using any ingredients in any quantity, cutting it however you like, and baking it for as long as you want. You can ignore customer complaints and your rival's taunts, and just have fun making pizzas with your infinite money.

-

How to Download and Install Good Pizza Great Pizza Mod APK Dinheiro Infinito?

-

If you want to try Good Pizza Great Pizza Mod APK Dinheiro Infinito, you will need to download and install it on your device. Here are the steps you need to follow:

-

Step 1: Find a reliable source for the mod apk file

-

The first thing you need to do is to find a trustworthy website that offers the mod apk file for Good Pizza Great Pizza. There are many websites that claim to provide this file, but some of them may be fake or malicious. You need to be careful and avoid downloading anything that may harm your device or steal your data. To find a reliable source for the mod apk file, you can do some research online or ask for recommendations from other users who have tried it before.

-

Step 2: Enable unknown sources on your device settings

-

The next thing you need to do is to enable unknown sources on your device settings. This will allow you to install apps that are not from the official Google Play Store. To do this, you need to go to your device settings > security > unknown sources > enable. This may vary depending on your device model and Android version.

-

Step 3: Download and install the mod apk file

-

The third thing you need to do is to download and install the mod apk file on your device. To do this, you need to go to the website where you found the mod apk file and click on the download button. This may take some time depending on your internet speed and file size. Once the download is complete, you need to open the file manager app on your device and locate the mod apk file. Then you need to tap on it and follow the instructions on the screen to install it.

-

Step 4: Launch the game and enjoy your infinite money

-

The last thing you need to do is to launch the game and enjoy your infinite money. To do this, you need to find the game icon on your device home screen or app drawer and tap on it. The game will start and you will see that you have unlimited money to spend on anything you want. You can also see that all the toppings, upgrades, and decorations are unlocked and available for you to use. You can also see that you can make any pizza you want, regardless of the customer's order. You can have fun making pizzas and enjoying your infinite money.

-

How to Play Good Pizza Great Pizza Mod APK Dinheiro Infinito?

-

Now that you have downloaded and installed Good Pizza Great Pizza Mod APK Dinheiro Infinito, you may wonder how to play it. Here are some tips and tricks for making the best pizzas, satisfying your customers, and beating your rivals.

-

Tips and tricks for making the best pizzas

-

Even though you have unlimited money and toppings, you still want to make the best pizzas possible. Here are some tips:

- Listen to each order and use the right ingredients in the right amounts.
- Cut the pizza into even slices so every customer request is met.
- Watch the oven timer so the pizza comes out neither undercooked nor burnt.

How to satisfy your customers and beat your rivals

-

Even though you can cheat the game by making any pizza you want, you may still want to satisfy your customers and beat your rivals. Here are some tips:

- Listen carefully to each customer's order and read their expressions before serving.
- Respond politely to feedback and fix mistakes quickly to keep customers coming back.
- Keep your inventory stocked so you never have to turn a customer away.

How to customize your shop and attract more business

-

Even though you have unlimited money and decorations, you may still want to customize your shop and attract more business. Here are some tips:

- Decorate your shop with posters, plants, lights, and furniture to make it look attractive and cozy.
- Upgrade your equipment so you can serve more customers, faster.
- Run special offers or events to draw in new customers.

Conclusion

-

In conclusion, Good Pizza Great Pizza Mod APK Dinheiro Infinito is a modified version of the popular pizza-making game that gives you infinite money. You can download and install it on your device, and enjoy making pizzas with unlimited money. You can also unlock all the toppings, upgrades, and decorations, and cheat the game by making any pizza you want. You can also customize your shop and attract more customers with special offers or events. Good Pizza Great Pizza Mod APK Dinheiro Infinito is a fun and easy way to play the game, but it may also take away some of the challenge and excitement of the original game. If you want to experience the real joy and challenge of running a pizza shop, you may want to try the original game instead.

-

FAQs

-

Here are some frequently asked questions about Good Pizza Great Pizza Mod APK Dinheiro Infinito:

-

Q: Is Good Pizza Great Pizza Mod APK Dinheiro Infinito safe to download and install?

-

A: Good Pizza Great Pizza Mod APK Dinheiro Infinito is not an official version of the game, and it may contain viruses or malware that can harm your device or steal your data. You should only download and install it from a reliable source, and at your own risk. You should also scan the file with antivirus software before installing it.

-

Q: Is Good Pizza Great Pizza Mod APK Dinheiro Infinito compatible with my device?

-

A: Good Pizza Great Pizza Mod APK Dinheiro Infinito is compatible with most Android devices that have Android 4.1 or higher. However, some devices may not support the mod apk file, or may experience glitches or crashes while playing the game. You should check the compatibility of your device before downloading and installing the mod apk file.

-

Q: How can I update Good Pizza Great Pizza Mod APK Dinheiro Infinito?

-

A: Good Pizza Great Pizza Mod APK Dinheiro Infinito is not connected to the official Google Play Store, and it may not receive regular updates from the developers. You may have to manually check for updates from the website where you downloaded the mod apk file, or look for a newer version of the mod apk file online.

-

Q: How can I uninstall Good Pizza Great Pizza Mod APK Dinheiro Infinito?

-

A: If you want to uninstall Good Pizza Great Pizza Mod APK Dinheiro Infinito, you can do so by following these steps:

-
    -
1. Go to your device settings > apps > Good Pizza Great Pizza > uninstall.
2. Delete the mod apk file from your device storage.
3. Clear your device cache and data.
-

Q: Where can I find more information about Good Pizza Great Pizza Mod APK Dinheiro Infinito?

-

A: If you want to find more information about Good Pizza Great Pizza Mod APK Dinheiro Infinito, you can visit the website where you downloaded the mod apk file, or search online for reviews, videos, or forums about the game.

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Batyr Muhammedow - Lebabyma - The Song that Rocked the Armenian Music Scene - Mp3 Download.md b/spaces/1phancelerku/anime-remove-background/Batyr Muhammedow - Lebabyma - The Song that Rocked the Armenian Music Scene - Mp3 Download.md deleted file mode 100644 index c37ce499972dcfb3e05f177d0aad963090790278..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Batyr Muhammedow - Lebabyma - The Song that Rocked the Armenian Music Scene - Mp3 Download.md +++ /dev/null @@ -1,142 +0,0 @@ - -

Lebabyma Mp3 Skachat: How to Download and Enjoy Uzbek Music Online

-

If you are looking for a catchy and upbeat song that will make you want to dance, you might want to check out Lebabyma, a popular Uzbek song by Batyr Muhammedow. But how can you download Lebabyma Mp3 and enjoy it on your device? And what are some other ways to explore and appreciate Uzbek music online? In this article, we will answer these questions and more. Read on to find out how to download and enjoy Uzbek music online.

-

What is Lebabyma?

-

Lebabyma is a song by Batyr Muhammedow, a famous Uzbek singer and composer. The song was released in 2021 and quickly became a hit among Uzbek music fans. But what does Lebabyma mean and where did it come from?

-

-

The meaning and origin of the word

-

Lebabyma is a word that combines two Uzbek words: leba (bread) and baby (baby). It is a term of endearment that means "my bread" or "my sweetie". It is similar to how English speakers might call their loved ones "honey" or "sugar". The word lebabyma was coined by Batyr Muhammedow himself, who said he wanted to create a unique and catchy word for his song.

-

The popularity and style of the song

-

Lebabyma is a song that blends traditional Uzbek elements with modern pop influences. It features a catchy chorus, upbeat tempo, and lively instrumentation. The song has a positive and romantic message, as the singer expresses his love and admiration for his partner. The song has been praised for its originality, creativity, and energy. It has also been widely shared on social media platforms such as TikTok, Instagram, and YouTube.

-

The artist and his background

-

Batyr Muhammedow is a well-known Uzbek singer, composer, and producer. He was born in 1988 in Turkmenistan, but moved to Uzbekistan when he was young. He started his musical career in 2009, when he participated in the TV show "Star Academy". Since then, he has released several albums and singles, such as "Seni Sevaman", "Yana Yana", and "Lebabyma". He is known for his versatile and innovative style, as he experiments with different genres, languages, and cultures. He is also an active supporter of social causes, such as environmental protection, animal rights, and education.

-

How to Download Lebabyma Mp3?

-

If you want to download Lebabyma Mp3 and listen to it offline, you might face some challenges. First of all, you need to consider the legal and ethical issues of downloading music. Second, you need to find reliable sources and platforms for downloading Lebabyma Mp3. Third, you need to follow the steps and tips for downloading Lebabyma Mp3. Let's look at each of these aspects in more detail.

The legal and ethical issues of downloading music

-

Before you download Lebabyma Mp3, you need to be aware of the legal and ethical issues of downloading music. Downloading music without the permission of the artist or the owner of the rights is considered illegal in many countries. It is also unethical, as it deprives the artist of their income and recognition. Therefore, you should always respect the intellectual property rights of the music creators and pay for their work. You can do this by buying their CDs, downloading their songs from authorized platforms, or streaming their music from licensed services.

-

The best sources and platforms for downloading Lebabyma Mp3

-

If you want to download Lebabyma Mp3 legally and ethically, you need to find the best sources and platforms for doing so. Many websites and apps offer free or cheap downloads, but not all of them are trustworthy or safe: some might contain viruses, malware, or spyware that can harm your device or compromise your privacy, and others might provide low-quality or incomplete files that ruin your listening experience. Therefore, you should always use reputable and reliable sources, such as official music stores and licensed streaming services.

The steps and tips for downloading Lebabyma Mp3

-

Once you have chosen the source and platform for downloading Lebabyma Mp3, you need to follow the steps and tips for doing so. Here are some general guidelines that apply to most sources and platforms:

-

-
    -
  1. Make sure you have a stable internet connection and enough storage space on your device.
  2. -
  3. Search for Lebabyma Mp3 on the source or platform of your choice.
  4. -
  5. Select the song and click on the download or buy button.
  6. -
  7. Enter your payment details if required and confirm your purchase.
  8. -
  9. Wait for the download to complete and check if the file is successfully saved on your device.
  10. -
  11. Enjoy listening to Lebabyma Mp3 on your device or transfer it to another device if you want.
  12. -
-

These steps apply to most platforms, though individual stores may differ slightly in their menus and button labels.

- -

How to Enjoy Uzbek Music Online?

-

Downloading Lebabyma Mp3 is not the only way to enjoy Uzbek music online. There are many other ways to explore and appreciate Uzbek music online. You can learn about the benefits and challenges of listening to Uzbek music online, discover the genres and artists of Uzbek music, and find playlists and recommendations for Uzbek music lovers.

The benefits and challenges of listening to Uzbek music online

Listening to Uzbek music online can be a rewarding and enjoyable experience: you can discover new artists and genres, deepen your appreciation of Uzbek culture, and support the musicians whose work you enjoy.

- -

However, listening to Uzbek music online can also pose some challenges, such as:

- -

The genres and artists of Uzbek music

-

Uzbek music is a diverse and rich musical tradition that reflects the history, culture, and identity of the Uzbek people. Uzbek music has various genres and styles that cater to different tastes and preferences. Some of the most popular and influential genres and artists of Uzbek music are:

- - - - - - - -
| Genre | Description | Examples of Artists |
| --- | --- | --- |
| Maqom | A classical genre of Uzbek music that consists of complex melodic and rhythmic patterns. It is usually performed by a solo singer accompanied by traditional instruments such as the tanbur, the doira, and the nay. | Munojot Yo'lchiyeva, Shavkat Mirziyoyev, Abdurashid Khamidov |
| Estrada | A modern genre of Uzbek pop music that incorporates elements of folk, jazz, rock, and disco. It is usually performed by a singer or a band with electronic instruments such as the keyboard, the guitar, and the drum machine. | Yulduz Usmonova, Sevara Nazarkhan, Rayhon Ganiyeva |
| Rap | A contemporary genre of Uzbek hip hop music that involves spoken word delivery over rhythmic beats. It is usually performed by a rapper or a group of rappers with a DJ or a producer. It often addresses social and political issues in Uzbek society. | Ozodbek Nazarbekov, Shoxrux Mirzo, Ziyoda Qobilova |
| Folk | A traditional genre of Uzbek music that reflects the regional and ethnic diversity of the country. It is usually performed by a solo singer or a group of singers with acoustic instruments such as the dutar, the surnay, and the chang. | Feruza Jumaniyozova, Matluba Ahmadova, Davron Ergashev |
| Rock | A progressive genre of Uzbek music that combines elements of western rock with local influences. It is usually performed by a band with electric instruments such as the guitar, the bass, and the drums. It often experiments with different sounds and styles. | Yalla, Bolalar, Qishloq Ovozi |
-

The playlists and recommendations for Uzbek music lovers

-

If you want to listen to more Uzbek music online, check out curated playlists on licensed streaming services and explore the work of the artists listed in the table above.

- -

Conclusion

-

Lebabyma Mp3 Skachat is a great way to enjoy one of the most popular and catchy songs in Uzbek music. However, it is not the only way to explore and appreciate Uzbek music online. You can also learn about the meaning and origin of Lebabyma, the artist and his background, the legal and ethical issues of downloading music, the best sources and platforms for downloading Lebabyma Mp3, the steps and tips for downloading Lebabyma Mp3, the benefits and challenges of listening to Uzbek music online, the genres and artists of Uzbek music, and the playlists and recommendations for Uzbek music lovers. By doing so, you can enrich your musical knowledge and experience, as well as support Uzbek music artists and culture. So what are you waiting for? Download Lebabyma Mp3 today and enjoy Uzbek music online!

-

FAQs

-

What is the genre of Lebabyma?

-

Lebabyma is a genre of Uzbek pop music that blends traditional Uzbek elements with modern pop influences.

-

Who is Batyr Muhammedow?

-

Batyr Muhammedow is a famous Uzbek singer, composer, and producer. He is the creator of Lebabyma and other popular songs.

-

What are some other popular Uzbek songs?

-

Some other popular Uzbek songs are Seni Sevaman by Batyr Muhammedow, Yor-Yor by Yulduz Usmonova, Qalbim by Sevara Nazarkhan, and O'zbekiston by Ozodbek Nazarbekov.

-

How can I support Uzbek music artists?

-

You can support Uzbek music artists by buying their CDs, downloading their songs from authorized platforms, streaming their music from licensed services, sharing their music on social media, attending their concerts, and donating to their causes.

-

Where can I learn more about Uzbek culture and language?

-

You can learn more about Uzbek culture and language by visiting Uzbekistan.travel, a website that provides information and resources about Uzbekistan's history, geography, cuisine, art, literature, and more. You can also take online courses or watch videos on UzbekClass.com, a website that offers lessons and materials for learning Uzbek language.

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Green Button Mod APK and Challenge Your Friends to Press the Button.md b/spaces/1phancelerku/anime-remove-background/Download Green Button Mod APK and Challenge Your Friends to Press the Button.md deleted file mode 100644 index 4d7bff9ccac0e498f862bf8c119562d11fc04768..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Green Button Mod APK and Challenge Your Friends to Press the Button.md +++ /dev/null @@ -1,102 +0,0 @@ -
-

Download Green Button Mod APK: A Fun and Addictive Money Clicker Game

-

Do you love clicking games that let you earn virtual money by simply tapping on a button? If yes, then you should try Green Button Mod APK, a fun and addictive money clicker game that will keep you entertained for hours. In this game, you can tap on a green button to earn money, upgrade your buttons and boosters, customize your buttons with different colors and shapes, and compete with other players on the leaderboards and achievements. You can also enjoy the game without any ads or limitations with the modded version of the game.

-

What is Green Button Mod APK?

-

Green Button Mod APK is a modified version of the original Green Button: Press the Button game, which is a simulation game developed by Apkloli. The game is available for Android and iOS devices. The game is simple but addictive: you just have to tap on a green button to earn money. The more you tap, the more money you make. You can use the money to upgrade your buttons and boosters, which will increase your earnings per tap. You can also customize your buttons with different colors and shapes, such as red, blue, yellow, square, circle, star, etc. You can also unlock new buttons with special effects and bonuses.

-

-

The modded version of the game gives you unlimited money to spend on upgrades and customizations. You can also enjoy the game without any ads or interruptions. You can also access all the features and levels of the game without any restrictions.

-

Features of Green Button Mod APK

-

Unlimited money

-

With Green Button Mod APK, you can get unlimited money to spend on upgrades and customizations. You don't have to worry about running out of money or waiting for it to accumulate. You can buy any button or booster you want and make your game more fun and exciting.

-

No ads

-

Another benefit of Green Button Mod APK is that it removes all the ads from the game. You don't have to watch any annoying or intrusive ads that pop up on your screen or interrupt your gameplay. You can enjoy the game without any distractions or delays.

-

-

Customizable buttons

-

Green Button Mod APK also allows you to customize your buttons with different colors and shapes. You can choose from a variety of options, such as red, blue, yellow, square, circle, star, etc. You can also unlock new buttons with special effects and bonuses, such as fire, ice, lightning, rainbow, etc. You can make your buttons look more appealing and unique.

-

Leaderboards and achievements

-

Green Button Mod APK also lets you compete with other players on the leaderboards and achievements. You can see how you rank among other players in terms of money earned, taps made, buttons unlocked, etc. You can also complete various achievements and earn rewards and trophies. You can challenge your friends and other players to see who is the best money clicker.

-

How to download and install Green Button Mod APK?

-

Steps to download and install Green Button Mod APK

-

If you want to download and install Green Button Mod APK on your Android device, you can follow these simple steps:

-
    -
1. Download the Green Button Mod APK file to your device from a source you trust.
2. Once the download is complete, go to your device settings and enable the installation of apps from unknown sources.
3. Locate the downloaded file and tap on it to start the installation process.
4. Follow the instructions on the screen and wait for the installation to finish.
5. Launch the game and enjoy the unlimited money and no-ads features.
-

Tips and tricks for playing Green Button Mod APK

-

If you want to make the most out of Green Button Mod APK, you can follow these tips and tricks:

-

Tap faster and smarter

-

The basic rule of the game is to tap on the green button as fast as you can to earn money. However, you can also tap smarter by using multiple fingers or tapping on different parts of the button. This will increase your tapping speed and efficiency, and help you earn more money in less time.

-

Upgrade your buttons and boosters

-

Another way to increase your earnings is to upgrade your buttons and boosters. You can use the money you earn to buy new buttons or improve the existing ones. You can also buy boosters that will multiply your earnings per tap, such as x2, x5, x10, etc. Upgrading your buttons and boosters will also unlock new levels and features in the game.
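To see how upgrades and boosters interact, here is a toy calculation; the numbers are invented for illustration, and the game's exact formula may differ:

```python
# Toy model: upgrades add to the base payout, boosters multiply it.
base_per_tap = 1          # money per tap before any upgrades
upgrade_bonus = 4         # extra money added by button upgrades
booster_multiplier = 10   # e.g. an active x10 booster

per_tap = (base_per_tap + upgrade_bonus) * booster_multiplier
taps = 500
print(f"{taps} taps earn {per_tap * taps}")  # 500 taps -> 25000 with these numbers
```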

-

Use the offline mode

-

Green Button Mod APK also has an offline mode that allows you to earn money even when you are not playing the game. You can activate the offline mode by tapping on the airplane icon on the top right corner of the screen. This will enable a passive income that will accumulate over time. You can collect the money when you return to the game.

-

Challenge your friends and other players

-

Green Button Mod APK also has a social aspect that lets you challenge your friends and other players on the leaderboards and achievements. You can connect your game account with Facebook or Google Play and see how you rank among other players in terms of money earned, taps made, buttons unlocked, etc. You can also invite your friends to play the game and compare your scores. You can also earn rewards and bonuses for playing with friends.

-

Conclusion

-

Green Button Mod APK is a fun and addictive money clicker game that will keep you entertained for hours. You can tap on a green button to earn money, upgrade your buttons and boosters, customize your buttons with different colors and shapes, and compete with other players on the leaderboards and achievements. You can also enjoy the game without any ads or limitations with the modded version of the game. If you want to download and install Green Button Mod APK on your Android device, you can follow the steps mentioned above. You can also use the tips and tricks to make the most out of the game.

-

-
-
\ No newline at end of file diff --git a/spaces/1toTree/lora_test/ppdiffusers/configuration_utils.py b/spaces/1toTree/lora_test/ppdiffusers/configuration_utils.py deleted file mode 100644 index a6224303819ecc252051f9460d3fd8d91741bd5c..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/configuration_utils.py +++ /dev/null @@ -1,591 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# Copyright 2022 The HuggingFace Team. All rights reserved. -# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" ConfigMixin base class and utilities.""" -import functools -import importlib -import inspect -import json -import os -import re -import tempfile -from collections import OrderedDict -from typing import Any, Dict, Optional, Tuple, Union - -import numpy as np -from huggingface_hub import ( - create_repo, - get_hf_file_metadata, - hf_hub_download, - hf_hub_url, - repo_type_and_id_from_hf_id, - upload_folder, -) -from huggingface_hub.utils import EntryNotFoundError -from requests import HTTPError - -from .download_utils import ppdiffusers_bos_download -from .utils import ( - DOWNLOAD_SERVER, - HF_CACHE, - PPDIFFUSERS_CACHE, - DummyObject, - deprecate, - logging, -) -from .version import VERSION as __version__ - -logger = logging.get_logger(__name__) - -_re_configuration_file = re.compile(r"config\.(.*)\.json") - - -class FrozenDict(OrderedDict): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - - for key, value in self.items(): - setattr(self, key, value) - - self.__frozen = True - - def __delitem__(self, *args, **kwargs): - raise Exception(f"You cannot use ``__delitem__`` on a {self.__class__.__name__} instance.") - - def setdefault(self, *args, **kwargs): - raise Exception(f"You cannot use ``setdefault`` on a {self.__class__.__name__} instance.") - - def pop(self, *args, **kwargs): - raise Exception(f"You cannot use ``pop`` on a {self.__class__.__name__} instance.") - - def update(self, *args, **kwargs): - raise Exception(f"You cannot use ``update`` on a {self.__class__.__name__} instance.") - - def __setattr__(self, name, value): - if hasattr(self, "__frozen") and self.__frozen: - raise Exception(f"You cannot use ``__setattr__`` on a {self.__class__.__name__} instance.") - super().__setattr__(name, value) - - def __setitem__(self, name, value): - if hasattr(self, "__frozen") and self.__frozen: - raise Exception(f"You cannot use ``__setattr__`` on a {self.__class__.__name__} instance.") - super().__setitem__(name, value) - - -class ConfigMixin: - r""" - Base class for all configuration classes. 
Stores all configuration parameters under `self.config` Also handles all - methods for loading/downloading/saving classes inheriting from [`ConfigMixin`] with - - [`~ConfigMixin.from_config`] - - [`~ConfigMixin.save_config`] - - Class attributes: - - **config_name** (`str`) -- A filename under which the config should stored when calling - [`~ConfigMixin.save_config`] (should be overridden by parent class). - - **ignore_for_config** (`List[str]`) -- A list of attributes that should not be saved in the config (should be - overridden by subclass). - - **has_compatibles** (`bool`) -- Whether the class has compatible classes (should be overridden by subclass). - - **_deprecated_kwargs** (`List[str]`) -- Keyword arguments that are deprecated. Note that the init function - should only have a `kwargs` argument if at least one argument is deprecated (should be overridden by - subclass). - """ - config_name = None - ignore_for_config = [] - has_compatibles = False - _deprecated_kwargs = [] - - def register_to_config(self, **kwargs): - if self.config_name is None: - raise NotImplementedError(f"Make sure that {self.__class__} has defined a class name `config_name`") - - # Special case for `kwargs` used in deprecation warning added to schedulers - # TODO: remove this when we remove the deprecation warning, and the `kwargs` argument, - # or solve in a more general way. - kwargs.pop("kwargs", None) - for key, value in kwargs.items(): - try: - setattr(self, key, value) - except AttributeError as err: - logger.error(f"Can't set {key} with value {value} for {self}") - raise err - - if not hasattr(self, "_internal_dict"): - internal_dict = kwargs - else: - previous_dict = dict(self._internal_dict) - internal_dict = {**self._internal_dict, **kwargs} - logger.debug(f"Updating config from {previous_dict} to {internal_dict}") - - self._internal_dict = FrozenDict(internal_dict) - - def save_config(self, save_directory: Union[str, os.PathLike], push_to_hub: bool = False, **kwargs): - """ - Save a configuration object to the directory `save_directory`, so that it can be re-loaded using the - [`~ConfigMixin.from_config`] class method. - - Args: - save_directory (`str` or `os.PathLike`): - Directory where the configuration JSON file will be saved (will be created if it does not exist). - """ - if os.path.isfile(save_directory): - raise AssertionError(f"Provided path ({save_directory}) should be a directory, not a file") - - os.makedirs(save_directory, exist_ok=True) - - # If we save using the predefined names, we can load using `from_config` - output_config_file = os.path.join(save_directory, self.config_name) - - self.to_json_file(output_config_file) - logger.info(f"Configuration saved in {output_config_file}") - - def save_to_hf_hub( - self, - repo_id: str, - private: Optional[bool] = None, - subfolder: Optional[str] = None, - commit_message: Optional[str] = None, - revision: Optional[str] = None, - create_pr: bool = False, - ): - """ - Uploads all elements of this config to a new HuggingFace Hub repository. - Args: - repo_id (str): Repository name for your model/tokenizer in the Hub. - private (bool, optional): Whether the model/tokenizer is set to private - subfolder (str, optional): Push to a subfolder of the repo instead of the root - commit_message (str, optional): The summary / title / first line of the generated commit. Defaults to: f"Upload {path_in_repo} with huggingface_hub" - revision (str, optional): The git revision to commit from. Defaults to the head of the "main" branch. 
- create_pr (boolean, optional): Whether or not to create a Pull Request with that commit. Defaults to False. - If revision is not set, PR is opened against the "main" branch. If revision is set and is a branch, PR is opened against this branch. - If revision is set and is not a branch name (example: a commit oid), an RevisionNotFoundError is returned by the server. - - Returns: The url of the commit of your model in the given repository. - """ - repo_url = create_repo(repo_id, private=private, exist_ok=True) - - # Infer complete repo_id from repo_url - # Can be different from the input `repo_id` if repo_owner was implicit - _, repo_owner, repo_name = repo_type_and_id_from_hf_id(repo_url) - - repo_id = f"{repo_owner}/{repo_name}" - - # Check if README file already exist in repo - try: - get_hf_file_metadata(hf_hub_url(repo_id=repo_id, filename="README.md", revision=revision)) - has_readme = True - except EntryNotFoundError: - has_readme = False - - with tempfile.TemporaryDirectory() as root_dir: - if subfolder is not None: - save_dir = os.path.join(root_dir, subfolder) - else: - save_dir = root_dir - # save config - self.save_config(save_dir) - # Add readme if does not exist - logger.info("README.md not found, adding the default README.md") - if not has_readme: - with open(os.path.join(root_dir, "README.md"), "w") as f: - f.write(f"---\nlibrary_name: ppdiffusers\n---\n# {repo_id}") - - # Upload model and return - logger.info(f"Pushing to the {repo_id}. This might take a while") - return upload_folder( - repo_id=repo_id, - repo_type="model", - folder_path=root_dir, - commit_message=commit_message, - revision=revision, - create_pr=create_pr, - ) - - @classmethod - def from_config(cls, config: Union[FrozenDict, Dict[str, Any]] = None, return_unused_kwargs=False, **kwargs): - r""" - Instantiate a Python class from a config dictionary - - Parameters: - config (`Dict[str, Any]`): - A config dictionary from which the Python class will be instantiated. Make sure to only load - configuration files of compatible classes. - return_unused_kwargs (`bool`, *optional*, defaults to `False`): - Whether kwargs that are not consumed by the Python class should be returned or not. - - kwargs (remaining dictionary of keyword arguments, *optional*): - Can be used to update the configuration object (after it being loaded) and initiate the Python class. - `**kwargs` will be directly passed to the underlying scheduler/model's `__init__` method and eventually - overwrite same named arguments of `config`. - - Examples: - - ```python - >>> from ppdiffusers import DDPMScheduler, DDIMScheduler, PNDMScheduler - - >>> # Download scheduler from BOS and cache. - >>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cifar10-32") - - >>> # Instantiate DDIM scheduler class with same config as DDPM - >>> scheduler = DDIMScheduler.from_config(scheduler.config) - - >>> # Instantiate PNDM scheduler class with same config as DDPM - >>> scheduler = PNDMScheduler.from_config(scheduler.config) - ``` - """ - # <===== TO BE REMOVED WITH DEPRECATION - # TODO(Patrick) - make sure to remove the following lines when config=="model_path" is deprecated - if "pretrained_model_name_or_path" in kwargs: - config = kwargs.pop("pretrained_model_name_or_path") - - if config is None: - raise ValueError("Please make sure to provide a config as the first positional argument.") - # ======> - - if not isinstance(config, dict): - deprecation_message = "It is deprecated to pass a pretrained model name or path to `from_config`." 
- if "Scheduler" in cls.__name__: - deprecation_message += ( - f"If you were trying to load a scheduler, please use {cls}.from_pretrained(...) instead." - " Otherwise, please make sure to pass a configuration dictionary instead. This functionality will" - " be removed in v1.0.0." - ) - elif "Model" in cls.__name__: - deprecation_message += ( - f"If you were trying to load a model, please use {cls}.load_config(...) followed by" - f" {cls}.from_config(...) instead. Otherwise, please make sure to pass a configuration dictionary" - " instead. This functionality will be removed in v1.0.0." - ) - deprecate("config-passed-as-path", "1.0.0", deprecation_message, standard_warn=False) - config, kwargs = cls.load_config(pretrained_model_name_or_path=config, return_unused_kwargs=True, **kwargs) - - init_dict, unused_kwargs, hidden_dict = cls.extract_init_dict(config, **kwargs) - - # Allow dtype to be specified on initialization - if "dtype" in unused_kwargs: - # (TODO junnyu, donot use dtype) - unused_kwargs.pop("dtype") - # init_dict["dtype"] = unused_kwargs.pop("dtype") - - # add possible deprecated kwargs - for deprecated_kwarg in cls._deprecated_kwargs: - if deprecated_kwarg in unused_kwargs: - init_dict[deprecated_kwarg] = unused_kwargs.pop(deprecated_kwarg) - - # Return model and optionally state and/or unused_kwargs - model = cls(**init_dict) - - # make sure to also save config parameters that might be used for compatible classes - model.register_to_config(**hidden_dict) - - # add hidden kwargs of compatible classes to unused_kwargs - unused_kwargs = {**unused_kwargs, **hidden_dict} - - if return_unused_kwargs: - return (model, unused_kwargs) - else: - return model - - @classmethod - def get_config_dict(cls, *args, **kwargs): - deprecation_message = ( - f" The function get_config_dict is deprecated. Please use {cls}.load_config instead. This function will be" - " removed in version v1.0.0" - ) - deprecate("get_config_dict", "1.0.0", deprecation_message, standard_warn=False) - return cls.load_config(*args, **kwargs) - - @classmethod - def load_config( - cls, pretrained_model_name_or_path: Union[str, os.PathLike], return_unused_kwargs=False, **kwargs - ) -> Tuple[Dict[str, Any], Dict[str, Any]]: - r""" - Instantiate a Python class from a config dictionary - - Parameters: - pretrained_model_name_or_path (`str` or `os.PathLike`, *optional*): - Can be either: - - - A string, the *model id* of a model repo on huggingface.co. Valid model ids should have an - organization name, like `google/ddpm-celebahq-256`. - - A path to a *directory* containing model weights saved using [`~ConfigMixin.save_config`], e.g., - `./my_model_directory/`. - - cache_dir (`Union[str, os.PathLike]`, *optional*): - Path to a directory in which a downloaded pretrained model configuration should be cached if the - standard cache should not be used. - output_loading_info(`bool`, *optional*, defaults to `False`): - Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. - subfolder (`str`, *optional*, defaults to `""`): - In case the relevant files are located inside a subfolder of the model repo (either remote in - huggingface.co or downloaded locally), you can specify the folder name here. - from_hf_hub (bool, *optional*): - Whether to load from Hugging Face Hub. 
Defaults to `False`. - """ - from_hf_hub = kwargs.pop("from_hf_hub", False) - if from_hf_hub: - cache_dir = kwargs.pop("cache_dir", HF_CACHE) - else: - cache_dir = kwargs.pop("cache_dir", PPDIFFUSERS_CACHE) - subfolder = kwargs.pop("subfolder", None) - - pretrained_model_name_or_path = str(pretrained_model_name_or_path) - - if cls.config_name is None: - raise ValueError( - "`self.config_name` is not defined. Note that one should not load a config from " - "`ConfigMixin`. Please make sure to define `config_name` in a class inheriting from `ConfigMixin`" - ) - - if os.path.isfile(pretrained_model_name_or_path): - config_file = pretrained_model_name_or_path - elif os.path.isdir(pretrained_model_name_or_path): - if os.path.isfile(os.path.join(pretrained_model_name_or_path, cls.config_name)): - # Load from a Paddle checkpoint - config_file = os.path.join(pretrained_model_name_or_path, cls.config_name) - elif subfolder is not None and os.path.isfile( - os.path.join(pretrained_model_name_or_path, subfolder, cls.config_name) - ): - config_file = os.path.join(pretrained_model_name_or_path, subfolder, cls.config_name) - else: - raise EnvironmentError( - f"Error: no file named {cls.config_name} found in directory {pretrained_model_name_or_path}." - ) - elif from_hf_hub: - config_file = hf_hub_download( - repo_id=pretrained_model_name_or_path, - filename=cls.config_name, - cache_dir=cache_dir, - subfolder=subfolder, - library_name="PPDiffusers", - library_version=__version__, - ) - else: - try: - config_file = ppdiffusers_bos_download( - pretrained_model_name_or_path, - filename=cls.config_name, - subfolder=subfolder, - cache_dir=cache_dir, - ) - except HTTPError as err: - raise EnvironmentError( - "There was a specific connection error when trying to load" - f" {pretrained_model_name_or_path}:\n{err}" - ) - except ValueError: - raise EnvironmentError( - f"We couldn't connect to '{DOWNLOAD_SERVER}' to load this model, couldn't find it" - f" in the cached files and it looks like {pretrained_model_name_or_path} is not the path to a" - f" directory containing a {cls.config_name} file.\nCheck your internet connection or see how to" - " run the library in offline mode at" - " 'https://huggingface.co/docs/diffusers/installation#offline-mode'." - ) - except EnvironmentError: - raise EnvironmentError( - f"Can't load config for '{pretrained_model_name_or_path}'. If you were trying to load it from " - "'https://huggingface.co/models', make sure you don't have a local directory with the same name. " - f"Otherwise, make sure '{pretrained_model_name_or_path}' is the correct path to a directory " - f"containing a {cls.config_name} file" - ) - - try: - # Load config dict - config_dict = cls._dict_from_json_file(config_file) - except (json.JSONDecodeError, UnicodeDecodeError): - raise EnvironmentError(f"It looks like the config file at '{config_file}' is not a valid JSON file.") - - if return_unused_kwargs: - return config_dict, kwargs - - return config_dict - - @staticmethod - def _get_init_keys(cls): - return set(dict(inspect.signature(cls.__init__).parameters).keys()) - - @classmethod - def extract_init_dict(cls, config_dict, **kwargs): - # 0. Copy original config dict - original_dict = {k: v for k, v in config_dict.items()} - - # 1. Retrieve expected config attributes from __init__ signature - expected_keys = cls._get_init_keys(cls) - expected_keys.remove("self") - # remove general kwargs if present in dict - if "kwargs" in expected_keys: - expected_keys.remove("kwargs") - - # 2. 
Remove attributes that cannot be expected from expected config attributes - # remove keys to be ignored - if len(cls.ignore_for_config) > 0: - expected_keys = expected_keys - set(cls.ignore_for_config) - - # load ppdiffusers library to import compatible and original scheduler - ppdiffusers_library = importlib.import_module(__name__.split(".")[0]) - - if cls.has_compatibles: - compatible_classes = [c for c in cls._get_compatibles() if not isinstance(c, DummyObject)] - else: - compatible_classes = [] - - expected_keys_comp_cls = set() - for c in compatible_classes: - expected_keys_c = cls._get_init_keys(c) - expected_keys_comp_cls = expected_keys_comp_cls.union(expected_keys_c) - expected_keys_comp_cls = expected_keys_comp_cls - cls._get_init_keys(cls) - config_dict = {k: v for k, v in config_dict.items() if k not in expected_keys_comp_cls} - - # remove attributes from orig class that cannot be expected - orig_cls_name = config_dict.pop("_class_name", cls.__name__) - if orig_cls_name != cls.__name__ and hasattr(ppdiffusers_library, orig_cls_name): - orig_cls = getattr(ppdiffusers_library, orig_cls_name) - unexpected_keys_from_orig = cls._get_init_keys(orig_cls) - expected_keys - config_dict = {k: v for k, v in config_dict.items() if k not in unexpected_keys_from_orig} - - # remove private attributes - config_dict = {k: v for k, v in config_dict.items() if not k.startswith("_")} - - # 3. Create keyword arguments that will be passed to __init__ from expected keyword arguments - init_dict = {} - for key in expected_keys: - # if config param is passed to kwarg and is present in config dict - # it should overwrite existing config dict key - if key in kwargs and key in config_dict: - config_dict[key] = kwargs.pop(key) - - if key in kwargs: - # overwrite key - init_dict[key] = kwargs.pop(key) - elif key in config_dict: - # use value from config dict - init_dict[key] = config_dict.pop(key) - - # 4. Give nice warning if unexpected values have been passed - if len(config_dict) > 0: - logger.warning( - f"The config attributes {config_dict} were passed to {cls.__name__}, " - "but are not expected and will be ignored. Please verify your " - f"{cls.config_name} configuration file." - ) - - # 5. Give nice info if config attributes are initialized to default because they have not been passed - passed_keys = set(init_dict.keys()) - if len(expected_keys - passed_keys) > 0: - logger.info( - f"{expected_keys - passed_keys} were not found in config. Values will be initialized to default values." - ) - - # 6. Define unused keyword arguments - unused_kwargs = {**config_dict, **kwargs} - - # 7. Define "hidden" config parameters that were saved for compatible classes - hidden_config_dict = {k: v for k, v in original_dict.items() if k not in init_dict} - - return init_dict, unused_kwargs, hidden_config_dict - - @classmethod - def _dict_from_json_file(cls, json_file: Union[str, os.PathLike]): - with open(json_file, "r", encoding="utf-8") as reader: - text = reader.read() - return json.loads(text) - - def __repr__(self): - return f"{self.__class__.__name__} {self.to_json_string()}" - - @property - def config(self) -> Dict[str, Any]: - """ - Returns the config of the class as a frozen dictionary. - - Returns: - `Dict[str, Any]`: Config of the class. - """ - return self._internal_dict - - def to_json_string(self) -> str: - """ - Serializes this instance to a JSON string. - - Returns: - `str`: String containing all the attributes that make up this configuration instance in JSON format. 
- """ - config_dict = self._internal_dict if hasattr(self, "_internal_dict") else {} - config_dict["_class_name"] = self.__class__.__name__ - config_dict["_ppdiffusers_version"] = __version__ - - def to_json_saveable(value): - if isinstance(value, np.ndarray): - value = value.tolist() - return value - - config_dict = {k: to_json_saveable(v) for k, v in config_dict.items()} - return json.dumps(config_dict, indent=2, sort_keys=True) + "\n" - - def to_json_file(self, json_file_path: Union[str, os.PathLike]): - """ - Save this instance to a JSON file. - - Args: - json_file_path (`str` or `os.PathLike`): - Path to the JSON file in which this configuration instance's parameters will be saved. - """ - with open(json_file_path, "w", encoding="utf-8") as writer: - writer.write(self.to_json_string()) - - -def register_to_config(init): - r""" - Decorator to apply on the init of classes inheriting from [`ConfigMixin`] so that all the arguments are - automatically sent to `self.register_for_config`. To ignore a specific argument accepted by the init but that - shouldn't be registered in the config, use the `ignore_for_config` class variable - - Warning: Once decorated, all private arguments (beginning with an underscore) are trashed and not sent to the init! - """ - - @functools.wraps(init) - def inner_init(self, *args, **kwargs): - # Ignore private kwargs in the init. - init_kwargs = {k: v for k, v in kwargs.items() if not k.startswith("_")} - config_init_kwargs = {k: v for k, v in kwargs.items() if k.startswith("_")} - - if not isinstance(self, ConfigMixin): - raise RuntimeError( - f"`@register_for_config` was applied to {self.__class__.__name__} init method, but this class does " - "not inherit from `ConfigMixin`." - ) - - ignore = getattr(self, "ignore_for_config", []) - # Get positional arguments aligned with kwargs - new_kwargs = {} - signature = inspect.signature(init) - parameters = { - name: p.default for i, (name, p) in enumerate(signature.parameters.items()) if i > 0 and name not in ignore - } - for arg, name in zip(args, parameters.keys()): - new_kwargs[name] = arg - - # Then add all kwargs - new_kwargs.update( - { - k: init_kwargs.get(k, default) - for k, default in parameters.items() - if k not in ignore and k not in new_kwargs - } - ) - new_kwargs = {**config_init_kwargs, **new_kwargs} - getattr(self, "register_to_config")(**new_kwargs) - init(self, *args, **init_kwargs) - - return inner_init diff --git a/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_k_dpm_2_discrete.py b/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_k_dpm_2_discrete.py deleted file mode 100644 index c5d3f836f30791024474b4212d8f9c575be7e3f2..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_k_dpm_2_discrete.py +++ /dev/null @@ -1,286 +0,0 @@ -# Copyright 2022 Katherine Crowson, The HuggingFace Team and hlky. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -from typing import List, Optional, Tuple, Union - -import numpy as np -import paddle - -from ..configuration_utils import ConfigMixin, register_to_config -from ..utils import _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS -from .scheduling_utils import SchedulerMixin, SchedulerOutput - - -class KDPM2DiscreteScheduler(SchedulerMixin, ConfigMixin): - """ - Scheduler created by @crowsonkb in [k_diffusion](https://github.com/crowsonkb/k-diffusion), see: - https://github.com/crowsonkb/k-diffusion/blob/5b3af030dd83e0297272d861c19477735d0317ec/k_diffusion/sampling.py#L188 - - Scheduler inspired by DPM-Solver-2 and Algorithm 2 from Karras et al. (2022). - - [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` - function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. - [`SchedulerMixin`] provides general loading and saving functionality via the [`~SchedulerMixin.save_pretrained`] and - [`~SchedulerMixin.from_pretrained`] functions. - - Args: - num_train_timesteps (`int`): number of diffusion steps used to train the model. - beta_start (`float`): the starting `beta` value of inference. - beta_end (`float`): the final `beta` value. - beta_schedule (`str`): - the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from - `linear` or `scaled_linear`. - trained_betas (`np.ndarray`, optional): - option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc. - prediction_type (`str`, default `epsilon`, optional): - prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion - process), `sample` (directly predicting the noisy sample) or `v_prediction` (see section 2.4 - https://imagen.research.google/video/paper.pdf) - """ - - _compatibles = _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS.copy() - order = 2 - - @register_to_config - def __init__( - self, - num_train_timesteps: int = 1000, - beta_start: float = 0.00085, # sensible defaults - beta_end: float = 0.012, - beta_schedule: str = "linear", - trained_betas: Optional[Union[np.ndarray, List[float]]] = None, - prediction_type: str = "epsilon", - ): - if trained_betas is not None: - self.betas = paddle.to_tensor(trained_betas, dtype="float32") - elif beta_schedule == "linear": - self.betas = paddle.linspace(beta_start, beta_end, num_train_timesteps, dtype="float32") - elif beta_schedule == "scaled_linear": - # this schedule is very specific to the latent diffusion model. - self.betas = paddle.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype="float32") ** 2 - else: - raise NotImplementedError(f"{beta_schedule} is not implemented for {self.__class__}") - - self.alphas = 1.0 - self.betas - self.alphas_cumprod = paddle.cumprod(self.alphas, 0) - - # set all values - self.set_timesteps(num_train_timesteps, num_train_timesteps) - - def index_for_timestep(self, timestep): - indices = (self.timesteps == timestep).nonzero() - if self.state_in_first_order: - pos = -1 - else: - pos = 0 - return indices[pos].item() - - def scale_model_input( - self, - sample: paddle.Tensor, - timestep: Union[float, paddle.Tensor], - ) -> paddle.Tensor: - """ - Ensures interchangeability with schedulers that need to scale the denoising model input depending on the - current timestep. 
- - Args: - sample (`paddle.Tensor`): the input sample - timestep (`int`, *optional*): the current timestep in the diffusion chain - - Returns: - `paddle.Tensor`: scaled input sample - """ - step_index = self.index_for_timestep(timestep) - - if self.state_in_first_order: - sigma = self.sigmas[step_index] - else: - sigma = self.sigmas_interpol[step_index] - - sample = sample / ((sigma**2 + 1) ** 0.5) - return sample - - def set_timesteps( - self, - num_inference_steps: int, - num_train_timesteps: Optional[int] = None, - ): - """ - Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. - - Args: - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. - num_train_timesteps (`int`, *optional*): - the number of timesteps used to train the model; defaults to `self.config.num_train_timesteps`. - """ - self.num_inference_steps = num_inference_steps - - num_train_timesteps = num_train_timesteps or self.config.num_train_timesteps - - timesteps = np.linspace(0, num_train_timesteps - 1, num_inference_steps, dtype=np.float32)[::-1].copy() - - sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5) - self.log_sigmas = paddle.to_tensor(np.log(sigmas), dtype="float32") - - sigmas = np.interp(timesteps, np.arange(0, len(sigmas)), sigmas) - sigmas = np.concatenate([sigmas, [0.0]]).astype(np.float32) - sigmas = paddle.to_tensor(sigmas) - - # interpolate sigmas - sigmas_interpol = sigmas.log().lerp(sigmas.roll(1).log(), 0.5).exp() - # must set to 0.0 - sigmas_interpol[-1] = 0.0 - - self.sigmas = paddle.concat([sigmas[:1], sigmas[1:].repeat_interleave(2), sigmas[-1:]]) - self.sigmas_interpol = paddle.concat( - [sigmas_interpol[:1], sigmas_interpol[1:].repeat_interleave(2), sigmas_interpol[-1:]] - ) - - # standard deviation of the initial noise distribution - self.init_noise_sigma = self.sigmas.max() - - timesteps = paddle.to_tensor(timesteps) - - # interpolate timesteps - timesteps_interpol = self.sigma_to_t(sigmas_interpol) - interleaved_timesteps = paddle.stack((timesteps_interpol[1:-1, None], timesteps[1:, None]), axis=-1).flatten() - timesteps = paddle.concat([timesteps[:1], interleaved_timesteps]) - - self.timesteps = timesteps - - self.sample = None - - def sigma_to_t(self, sigma): - # get log sigma - log_sigma = sigma.log() - - # get distribution - dists = log_sigma - self.log_sigmas[:, None] - - # get sigmas range - low_idx = (dists >= 0).cast("int64").cumsum(axis=0).argmax(axis=0).clip(max=self.log_sigmas.shape[0] - 2) - - high_idx = low_idx + 1 - - low = self.log_sigmas[low_idx] - high = self.log_sigmas[high_idx] - - # interpolate sigmas - w = (low - log_sigma) / (low - high) - w = w.clip(0, 1) - - # transform interpolation to time range - t = (1 - w) * low_idx + w * high_idx - t = t.reshape(sigma.shape) - return t - - @property - def state_in_first_order(self): - return self.sample is None - - def step( - self, - model_output: Union[paddle.Tensor, np.ndarray], - timestep: Union[float, paddle.Tensor], - sample: Union[paddle.Tensor, np.ndarray], - return_dict: bool = True, - ) -> Union[SchedulerOutput, Tuple]: - """ - Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion - process from the learned model outputs (most often the predicted noise). - - Args: - model_output (`paddle.Tensor` or `np.ndarray`): direct output from the learned diffusion model. - timestep (`int`): current discrete timestep in the diffusion chain. - sample (`paddle.Tensor` or `np.ndarray`): current instance of the sample being created by the diffusion process. 
- return_dict (`bool`): option for returning a tuple rather than a SchedulerOutput class - Returns: - [`~schedulers.scheduling_utils.SchedulerOutput`] or `tuple`: - [`~schedulers.scheduling_utils.SchedulerOutput`] if `return_dict` is True, otherwise a `tuple`. When - returning a tuple, the first element is the sample tensor. - """ - step_index = self.index_for_timestep(timestep) - - if self.state_in_first_order: - sigma = self.sigmas[step_index] - sigma_interpol = self.sigmas_interpol[step_index + 1] - sigma_next = self.sigmas[step_index + 1] - else: - # 2nd order / KDPM2's method - sigma = self.sigmas[step_index - 1] - sigma_interpol = self.sigmas_interpol[step_index] - sigma_next = self.sigmas[step_index] - - # currently only gamma=0 is supported. This usually works best anyway. - # We can support gamma in the future but then need to scale the timestep before - # passing it to the model which requires a change in API - gamma = 0 - sigma_hat = sigma * (gamma + 1) # Note: sigma_hat == sigma for now - - # 1. compute predicted original sample (x_0) from sigma-scaled predicted noise - if self.config.prediction_type == "epsilon": - sigma_input = sigma_hat if self.state_in_first_order else sigma_interpol - pred_original_sample = sample - sigma_input * model_output - elif self.config.prediction_type == "v_prediction": - sigma_input = sigma_hat if self.state_in_first_order else sigma_interpol - pred_original_sample = model_output * (-sigma_input / (sigma_input**2 + 1) ** 0.5) + ( - sample / (sigma_input**2 + 1) - ) - else: - raise ValueError( - f"prediction_type given as {self.config.prediction_type} must be one of `epsilon` or `v_prediction`" - ) - - if self.state_in_first_order: - # 2. Convert to an ODE derivative for 1st order - derivative = (sample - pred_original_sample) / sigma_hat - # 3. delta timestep - dt = sigma_interpol - sigma_hat - - # store for 2nd order step - self.sample = sample - else: - # DPM-Solver-2 - # 2. Convert to an ODE derivative for 2nd order - derivative = (sample - pred_original_sample) / sigma_interpol - - # 3. delta timestep - dt = sigma_next - sigma_hat - - sample = self.sample - self.sample = None - - prev_sample = sample + derivative * dt - - if not return_dict: - return (prev_sample,) - - return SchedulerOutput(prev_sample=prev_sample) - - def add_noise( - self, - original_samples: paddle.Tensor, - noise: paddle.Tensor, - timesteps: paddle.Tensor, - ) -> paddle.Tensor: - # Make sure sigmas and timesteps have the same dtype as original_samples - self.sigmas = self.sigmas.cast(original_samples.dtype) - - step_indices = [self.index_for_timestep(t) for t in timesteps] - - sigma = self.sigmas[step_indices].flatten() - while len(sigma.shape) < len(original_samples.shape): - sigma = sigma.unsqueeze(-1) - - noisy_samples = original_samples + noise * sigma - return noisy_samples - - def __len__(self): - return self.config.num_train_timesteps diff --git a/spaces/1toTree/lora_test/ppdiffusers/utils/import_utils.py b/spaces/1toTree/lora_test/ppdiffusers/utils/import_utils.py deleted file mode 100644 index a620a9f68a1eb02be935aa5732d8433a220ba032..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/utils/import_utils.py +++ /dev/null @@ -1,331 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" -Import utilities: Utilities related to imports and our lazy inits. -""" -import importlib.util -import operator as op -import os -import sys -from collections import OrderedDict -from typing import Union - -from packaging.version import Version, parse - -from . import logging - -# The package importlib_metadata is in a different place, depending on the python version. -if sys.version_info < (3, 8): - import importlib_metadata -else: - import importlib.metadata as importlib_metadata - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - -ENV_VARS_TRUE_VALUES = {"1", "ON", "YES", "TRUE"} -ENV_VARS_TRUE_AND_AUTO_VALUES = ENV_VARS_TRUE_VALUES.union({"AUTO"}) - -USE_PADDLE = os.environ.get("USE_PADDLE", "AUTO").upper() - -STR_OPERATION_TO_FUNC = {">": op.gt, ">=": op.ge, "==": op.eq, "!=": op.ne, "<=": op.le, "<": op.lt} - -_paddle_version = "N/A" -if USE_PADDLE in ENV_VARS_TRUE_AND_AUTO_VALUES: - _paddle_available = importlib.util.find_spec("paddle") is not None - if _paddle_available: - try: - import paddle - - _paddle_version = paddle.__version__ - logger.info(f"Paddle version {_paddle_version} available.") - except importlib_metadata.PackageNotFoundError: - _paddle_available = False -else: - logger.info("Disabling Paddle because USE_PADDLE is not set.") - _paddle_available = False - -_paddlenlp_available = importlib.util.find_spec("paddlenlp") is not None -try: - _paddlenlp_version = importlib_metadata.version("paddlenlp") - logger.debug(f"Successfully imported paddlenlp version {_paddlenlp_version}") -except importlib_metadata.PackageNotFoundError: - _paddlenlp_available = False - -_inflect_available = importlib.util.find_spec("inflect") is not None -try: - _inflect_version = importlib_metadata.version("inflect") - logger.debug(f"Successfully imported inflect version {_inflect_version}") -except importlib_metadata.PackageNotFoundError: - _inflect_available = False - -_unidecode_available = importlib.util.find_spec("unidecode") is not None -try: - _unidecode_version = importlib_metadata.version("unidecode") - logger.debug(f"Successfully imported unidecode version {_unidecode_version}") -except importlib_metadata.PackageNotFoundError: - _unidecode_available = False - -_modelcards_available = importlib.util.find_spec("modelcards") is not None -try: - _modelcards_version = importlib_metadata.version("modelcards") - logger.debug(f"Successfully imported modelcards version {_modelcards_version}") -except importlib_metadata.PackageNotFoundError: - _modelcards_available = False - -_onnxruntime_version = "N/A" -_onnx_available = importlib.util.find_spec("onnxruntime") is not None -if _onnx_available: - candidates = ( - "onnxruntime", - "onnxruntime-gpu", - "onnxruntime-directml", - "onnxruntime-openvino", - "ort_nightly_directml", - ) - _onnxruntime_version = None - # For the metadata, we have to look for both onnxruntime and onnxruntime-gpu - for pkg in candidates: - try: - _onnxruntime_version = importlib_metadata.version(pkg) - break - except importlib_metadata.PackageNotFoundError: - pass - _onnx_available = _onnxruntime_version is not None - if _onnx_available: 
- logger.debug(f"Successfully imported onnxruntime version {_onnxruntime_version}") - -_scipy_available = importlib.util.find_spec("scipy") is not None -try: - _scipy_version = importlib_metadata.version("scipy") - logger.debug(f"Successfully imported scipy version {_scipy_version}") -except importlib_metadata.PackageNotFoundError: - _scipy_available = False - -_librosa_available = importlib.util.find_spec("librosa") is not None -try: - _librosa_version = importlib_metadata.version("librosa") - logger.debug(f"Successfully imported librosa version {_librosa_version}") -except importlib_metadata.PackageNotFoundError: - _librosa_available = False - -_fastdeploy_available = importlib.util.find_spec("fastdeploy") is not None -if _fastdeploy_available: - candidates = ("fastdeploy_gpu_python", "fastdeploy_python") - _fastdeploy_version = None - # For the metadata, we have to look for both fastdeploy_python and fastdeploy_gpu_python - for pkg in candidates: - try: - _fastdeploy_version = importlib_metadata.version(pkg) - break - except importlib_metadata.PackageNotFoundError: - pass - _fastdeploy_available = _fastdeploy_version is not None - if _fastdeploy_available: - logger.debug(f"Successfully imported fastdeploy version {_fastdeploy_version}") - - -_k_diffusion_available = importlib.util.find_spec("k_diffusion") is not None -try: - _k_diffusion_version = importlib_metadata.version("k_diffusion") - logger.debug(f"Successfully imported k-diffusion version {_k_diffusion_version}") -except importlib_metadata.PackageNotFoundError: - _k_diffusion_available = True - -_wandb_available = importlib.util.find_spec("wandb") is not None -try: - _wandb_version = importlib_metadata.version("wandb") - logger.debug(f"Successfully imported wandb version {_wandb_version }") -except importlib_metadata.PackageNotFoundError: - _wandb_available = False - - -def is_paddle_available(): - return _paddle_available - - -def is_paddlenlp_available(): - return _paddlenlp_available - - -def is_inflect_available(): - return _inflect_available - - -def is_unidecode_available(): - return _unidecode_available - - -def is_modelcards_available(): - return _modelcards_available - - -def is_onnx_available(): - return _onnx_available - - -def is_scipy_available(): - return _scipy_available - - -def is_librosa_available(): - return _librosa_available - - -def is_fastdeploy_available(): - return _fastdeploy_available - - -def is_k_diffusion_available(): - return _k_diffusion_available - - -def is_wandb_available(): - return _wandb_available - - -# docstyle-ignore -FASTDEPLOY_IMPORT_ERROR = """ -{0} requires the fastdeploy library but it was not found in your environment. You can install it with pip: `pip install -fastdeploy-gpu-python -f https://www.paddlepaddle.org.cn/whl/fastdeploy.html` -""" - -# docstyle-ignore -INFLECT_IMPORT_ERROR = """ -{0} requires the inflect library but it was not found in your environment. You can install it with pip: `pip install -inflect` -""" - -# docstyle-ignore -PADDLE_IMPORT_ERROR = """ -{0} requires the Paddle library but it was not found in your environment. Checkout the instructions on the -installation page: https://www.paddlepaddle.org.cn/install/quick and follow the ones that match your environment. -""" - -# docstyle-ignore -LIBROSA_IMPORT_ERROR = """ -{0} requires the librosa library but it was not found in your environment. Checkout the instructions on the -installation page: https://librosa.org/doc/latest/install.html and follow the ones that match your environment. 
-""" - -# docstyle-ignore -ONNX_IMPORT_ERROR = """ -{0} requires the onnxruntime library but it was not found in your environment. You can install it with pip: `pip -install onnxruntime` -""" - -# docstyle-ignore -SCIPY_IMPORT_ERROR = """ -{0} requires the scipy library but it was not found in your environment. You can install it with pip: `pip install -scipy` -""" - -# docstyle-ignore -PADDLENLP_IMPORT_ERROR = """ -{0} requires the paddlenlp library but it was not found in your environment. You can install it with pip: `pip -install paddlenlp` -""" - -# docstyle-ignore -UNIDECODE_IMPORT_ERROR = """ -{0} requires the unidecode library but it was not found in your environment. You can install it with pip: `pip install -Unidecode` -""" - -# docstyle-ignore -K_DIFFUSION_IMPORT_ERROR = """ -{0} requires the k-diffusion library but it was not found in your environment. You can install it with pip: `pip -install k-diffusion` -""" - -# docstyle-ignore -WANDB_IMPORT_ERROR = """ -{0} requires the wandb library but it was not found in your environment. You can install it with pip: `pip -install wandb` -""" - -BACKENDS_MAPPING = OrderedDict( - [ - ("fastdeploy", (is_fastdeploy_available, FASTDEPLOY_IMPORT_ERROR)), - ("inflect", (is_inflect_available, INFLECT_IMPORT_ERROR)), - ("onnx", (is_onnx_available, ONNX_IMPORT_ERROR)), - ("scipy", (is_scipy_available, SCIPY_IMPORT_ERROR)), - ("paddle", (is_paddle_available, PADDLE_IMPORT_ERROR)), - ("paddlenlp", (is_paddlenlp_available, PADDLENLP_IMPORT_ERROR)), - ("unidecode", (is_unidecode_available, UNIDECODE_IMPORT_ERROR)), - ("librosa", (is_librosa_available, LIBROSA_IMPORT_ERROR)), - ("k_diffusion", (is_k_diffusion_available, K_DIFFUSION_IMPORT_ERROR)), - ("wandb", (is_wandb_available, WANDB_IMPORT_ERROR)), - ] -) - - -def requires_backends(obj, backends): - if not isinstance(backends, (list, tuple)): - backends = [backends] - - name = obj.__name__ if hasattr(obj, "__name__") else obj.__class__.__name__ - checks = (BACKENDS_MAPPING[backend] for backend in backends) - failed = [msg.format(name) for available, msg in checks if not available()] - if failed: - raise ImportError("".join(failed)) - - -class DummyObject(type): - """ - Metaclass for the dummy objects. Any class inheriting from it will return the ImportError generated by - `requires_backend` each time a user tries to access any method of that class. - """ - - def __getattr__(cls, key): - if key.startswith("_"): - return super().__getattr__(cls, key) - requires_backends(cls, cls._backends) - - -# This function was copied from: https://github.com/huggingface/accelerate/blob/874c4967d94badd24f893064cc3bef45f57cadf7/src/accelerate/utils/versions.py#L319 -def compare_versions(library_or_version: Union[str, Version], operation: str, requirement_version: str): - """ - Args: - Compares a library version to some requirement using a given operation. - library_or_version (`str` or `packaging.version.Version`): - A library name or a version to check. - operation (`str`): - A string representation of an operator, such as `">"` or `"<="`. 
- requirement_version (`str`): - The version to compare the library version against - """ - if operation not in STR_OPERATION_TO_FUNC.keys(): - raise ValueError(f"`operation` must be one of {list(STR_OPERATION_TO_FUNC.keys())}, received {operation}") - operation = STR_OPERATION_TO_FUNC[operation] - if isinstance(library_or_version, str): - library_or_version = parse(importlib_metadata.version(library_or_version)) - return operation(library_or_version, parse(requirement_version)) - - -# This function was copied from: https://github.com/huggingface/accelerate/blob/874c4967d94badd24f893064cc3bef45f57cadf7/src/accelerate/utils/versions.py#L338 -def is_paddle_version(operation: str, version: str): - """ - Args: - Compares the current Paddle version to a given reference with an operation. - operation (`str`): - A string representation of an operator, such as `">"` or `"<="` - version (`str`): - A string version of Paddle - """ - return compare_versions(parse(_paddle_version), operation, version) - - -class OptionalDependencyNotAvailable(BaseException): - """An error indicating that an optional dependency of Diffusers was not found in the environment.""" diff --git a/spaces/801artistry/RVC801/infer/lib/uvr5_pack/lib_v5/model_param_init.py b/spaces/801artistry/RVC801/infer/lib/uvr5_pack/lib_v5/model_param_init.py deleted file mode 100644 index b995c0bfb1194746187692e2ab1c2a6dbaaaec6c..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/infer/lib/uvr5_pack/lib_v5/model_param_init.py +++ /dev/null @@ -1,69 +0,0 @@ -import json -import os -import pathlib - -default_param = {} -default_param["bins"] = 768 -default_param["unstable_bins"] = 9 # training only -default_param["reduction_bins"] = 762 # training only -default_param["sr"] = 44100 -default_param["pre_filter_start"] = 757 -default_param["pre_filter_stop"] = 768 -default_param["band"] = {} - - -default_param["band"][1] = { - "sr": 11025, - "hl": 128, - "n_fft": 960, - "crop_start": 0, - "crop_stop": 245, - "lpf_start": 61, # inference only - "res_type": "polyphase", -} - -default_param["band"][2] = { - "sr": 44100, - "hl": 512, - "n_fft": 1536, - "crop_start": 24, - "crop_stop": 547, - "hpf_start": 81, # inference only - "res_type": "sinc_best", -} - - -def int_keys(d): - r = {} - for k, v in d: - if k.isdigit(): - k = int(k) - r[k] = v - return r - - -class ModelParameters(object): - def __init__(self, config_path=""): - if ".pth" == pathlib.Path(config_path).suffix: - import zipfile - - with zipfile.ZipFile(config_path, "r") as zip: - self.param = json.loads( - zip.read("param.json"), object_pairs_hook=int_keys - ) - elif ".json" == pathlib.Path(config_path).suffix: - with open(config_path, "r") as f: - self.param = json.loads(f.read(), object_pairs_hook=int_keys) - else: - self.param = default_param - - for k in [ - "mid_side", - "mid_side_b", - "mid_side_b2", - "stereo_w", - "stereo_n", - "reverse", - ]: - if not k in self.param: - self.param[k] = False diff --git a/spaces/AIWaves/Debate/src/agents/template.py b/spaces/AIWaves/Debate/src/agents/template.py deleted file mode 100644 index 194c9f2c3bad4be9589b72f520660971e2bc4e5a..0000000000000000000000000000000000000000 --- a/spaces/AIWaves/Debate/src/agents/template.py +++ /dev/null @@ -1,111 +0,0 @@ -## default { "temperature": 0.3, "model": "gpt-3.5-turbo-16k-0613","log_path": "logs/{your name}"} -LLM = { - "temperature": 0.0, - "model": "gpt-3.5-turbo-16k-0613", - "log_path": "logs/god" -} - - -Agents = { - "Lilong" : { - "style" : "professional", - "roles" : { - 
"company" : "coder", - "state2" : "role2", - }, - "name2" : { - "style" : "professional", - "roles" : { - "company" : "coder", - "state2" : "role2", - }, - } - } -} - -# indispensable parameter: "controller_type"("order","random","rule") -# default extract words: "end". You can choose not to fill in this parameter -controller = { - "controller_type": "order", - "max_chat_nums" : 12, - "judge_system_prompt": "", - "judge_last_prompt": "", - "judge_extract_words": "end", - "call_system_prompt" : "", - "call_last_prompt": "", - "call_extract_words": "" -} - -# -Agent_state = { - "role": { - "LLM_type": "OpenAI", - "LLM": LLM, - "style": { - "role": "Opening Advocate for the Affirmative", - "style": "professional" - }, - "task": { - "task": "" - }, - "rule": { - "rule": "" - } - }, -} - - -# indispensable parameter: "agent_states","controller" -# "roles" determines the speaking order when the rule is order. If not set, it is the default order. -# "begin_query" & "begin_role" determines the first speaker.It often determines the direction of the next speech. If you do not set it, it will default to the first agent. -# "environment_prompt" : Responsible for setting the scene for the current environment -State = { - "controller": controller, - "begin_role": "", - "begin_query": "", - "environment_prompt": "", - "roles": ["role1","role2"], - "LLM_type": "OpenAI", - "LLM": LLM, - "agent_state" : Agent_state, -} - - - -States = { - "end_state":{ - "agent_states":{} - }, - "state1" : State - -} - - -# default finish_state_name is "end_state" -# "environment_type" : "competive" : different states not share the memory; "cooperative":diffrent states share the memory -SOP = { - "config" : { - "API_KEY" : "Your key", - "PROXY" : "Your PROXY", - "MAX_CHAT_HISTORY" : "5", - "User_Names" : "[\"alexander\"]" - }, - "environment_type" : "competive", - "LLM_type": "OpenAI", - "LLM" :LLM, - "root": "state1", - "finish_state_name" : "end_state", - "relations": { - "state1": { - "0": "state1", - "1": "state2" - }, - "state2":{ - "0":"state2", - "1":"end_state" - } - }, - "agents": Agents, - "states": States, -} - diff --git a/spaces/AP123/ai-avatars/convertosd.py b/spaces/AP123/ai-avatars/convertosd.py deleted file mode 100644 index e4bec6cbe894dd74b24f633cc66346d687d3f802..0000000000000000000000000000000000000000 --- a/spaces/AP123/ai-avatars/convertosd.py +++ /dev/null @@ -1,226 +0,0 @@ -# Script for converting a HF Diffusers saved pipeline to a Stable Diffusion checkpoint. -# *Only* converts the UNet, VAE, and Text Encoder. -# Does not convert optimizer state or any other thing. 
-# Written by jachiam - -import argparse -import os.path as osp - -import torch -import gc - -# =================# -# UNet Conversion # -# =================# - -unet_conversion_map = [ - # (stable-diffusion, HF Diffusers) - ("time_embed.0.weight", "time_embedding.linear_1.weight"), - ("time_embed.0.bias", "time_embedding.linear_1.bias"), - ("time_embed.2.weight", "time_embedding.linear_2.weight"), - ("time_embed.2.bias", "time_embedding.linear_2.bias"), - ("input_blocks.0.0.weight", "conv_in.weight"), - ("input_blocks.0.0.bias", "conv_in.bias"), - ("out.0.weight", "conv_norm_out.weight"), - ("out.0.bias", "conv_norm_out.bias"), - ("out.2.weight", "conv_out.weight"), - ("out.2.bias", "conv_out.bias"), -] - -unet_conversion_map_resnet = [ - # (stable-diffusion, HF Diffusers) - ("in_layers.0", "norm1"), - ("in_layers.2", "conv1"), - ("out_layers.0", "norm2"), - ("out_layers.3", "conv2"), - ("emb_layers.1", "time_emb_proj"), - ("skip_connection", "conv_shortcut"), -] - -unet_conversion_map_layer = [] -# hardcoded number of downblocks and resnets/attentions... -# would need smarter logic for other networks. -for i in range(4): - # loop over downblocks/upblocks - - for j in range(2): - # loop over resnets/attentions for downblocks - hf_down_res_prefix = f"down_blocks.{i}.resnets.{j}." - sd_down_res_prefix = f"input_blocks.{3*i + j + 1}.0." - unet_conversion_map_layer.append((sd_down_res_prefix, hf_down_res_prefix)) - - if i < 3: - # no attention layers in down_blocks.3 - hf_down_atn_prefix = f"down_blocks.{i}.attentions.{j}." - sd_down_atn_prefix = f"input_blocks.{3*i + j + 1}.1." - unet_conversion_map_layer.append((sd_down_atn_prefix, hf_down_atn_prefix)) - - for j in range(3): - # loop over resnets/attentions for upblocks - hf_up_res_prefix = f"up_blocks.{i}.resnets.{j}." - sd_up_res_prefix = f"output_blocks.{3*i + j}.0." - unet_conversion_map_layer.append((sd_up_res_prefix, hf_up_res_prefix)) - - if i > 0: - # no attention layers in up_blocks.0 - hf_up_atn_prefix = f"up_blocks.{i}.attentions.{j}." - sd_up_atn_prefix = f"output_blocks.{3*i + j}.1." - unet_conversion_map_layer.append((sd_up_atn_prefix, hf_up_atn_prefix)) - - if i < 3: - # no downsample in down_blocks.3 - hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0.conv." - sd_downsample_prefix = f"input_blocks.{3*(i+1)}.0.op." - unet_conversion_map_layer.append((sd_downsample_prefix, hf_downsample_prefix)) - - # no upsample in up_blocks.3 - hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0." - sd_upsample_prefix = f"output_blocks.{3*i + 2}.{1 if i == 0 else 2}." - unet_conversion_map_layer.append((sd_upsample_prefix, hf_upsample_prefix)) - -hf_mid_atn_prefix = "mid_block.attentions.0." -sd_mid_atn_prefix = "middle_block.1." -unet_conversion_map_layer.append((sd_mid_atn_prefix, hf_mid_atn_prefix)) - -for j in range(2): - hf_mid_res_prefix = f"mid_block.resnets.{j}." - sd_mid_res_prefix = f"middle_block.{2*j}." - unet_conversion_map_layer.append((sd_mid_res_prefix, hf_mid_res_prefix)) - - -def convert_unet_state_dict(unet_state_dict): - # buyer beware: this is a *brittle* function, - # and correct output requires that all of these pieces interact in - # the exact order in which I have arranged them. 
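    # Example of the renaming performed below: the HF Diffusers key
    # "time_embedding.linear_1.weight" maps to the Stable Diffusion key
    # "time_embed.0.weight" via the direct-name table above, while keys containing
    # "resnets" are further rewritten by the substring substitutions.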
- mapping = {k: k for k in unet_state_dict.keys()} - for sd_name, hf_name in unet_conversion_map: - mapping[hf_name] = sd_name - for k, v in mapping.items(): - if "resnets" in k: - for sd_part, hf_part in unet_conversion_map_resnet: - v = v.replace(hf_part, sd_part) - mapping[k] = v - for k, v in mapping.items(): - for sd_part, hf_part in unet_conversion_map_layer: - v = v.replace(hf_part, sd_part) - mapping[k] = v - new_state_dict = {v: unet_state_dict[k] for k, v in mapping.items()} - return new_state_dict - - -# ================# -# VAE Conversion # -# ================# - -vae_conversion_map = [ - # (stable-diffusion, HF Diffusers) - ("nin_shortcut", "conv_shortcut"), - ("norm_out", "conv_norm_out"), - ("mid.attn_1.", "mid_block.attentions.0."), -] - -for i in range(4): - # down_blocks have two resnets - for j in range(2): - hf_down_prefix = f"encoder.down_blocks.{i}.resnets.{j}." - sd_down_prefix = f"encoder.down.{i}.block.{j}." - vae_conversion_map.append((sd_down_prefix, hf_down_prefix)) - - if i < 3: - hf_downsample_prefix = f"down_blocks.{i}.downsamplers.0." - sd_downsample_prefix = f"down.{i}.downsample." - vae_conversion_map.append((sd_downsample_prefix, hf_downsample_prefix)) - - hf_upsample_prefix = f"up_blocks.{i}.upsamplers.0." - sd_upsample_prefix = f"up.{3-i}.upsample." - vae_conversion_map.append((sd_upsample_prefix, hf_upsample_prefix)) - - # up_blocks have three resnets - # also, up blocks in hf are numbered in reverse from sd - for j in range(3): - hf_up_prefix = f"decoder.up_blocks.{i}.resnets.{j}." - sd_up_prefix = f"decoder.up.{3-i}.block.{j}." - vae_conversion_map.append((sd_up_prefix, hf_up_prefix)) - -# this part accounts for mid blocks in both the encoder and the decoder -for i in range(2): - hf_mid_res_prefix = f"mid_block.resnets.{i}." - sd_mid_res_prefix = f"mid.block_{i+1}." 
- vae_conversion_map.append((sd_mid_res_prefix, hf_mid_res_prefix)) - - -vae_conversion_map_attn = [ - # (stable-diffusion, HF Diffusers) - ("norm.", "group_norm."), - ("q.", "query."), - ("k.", "key."), - ("v.", "value."), - ("proj_out.", "proj_attn."), -] - - -def reshape_weight_for_sd(w): - # convert HF linear weights to SD conv2d weights - return w.reshape(*w.shape, 1, 1) - - -def convert_vae_state_dict(vae_state_dict): - mapping = {k: k for k in vae_state_dict.keys()} - for k, v in mapping.items(): - for sd_part, hf_part in vae_conversion_map: - v = v.replace(hf_part, sd_part) - mapping[k] = v - for k, v in mapping.items(): - if "attentions" in k: - for sd_part, hf_part in vae_conversion_map_attn: - v = v.replace(hf_part, sd_part) - mapping[k] = v - new_state_dict = {v: vae_state_dict[k] for k, v in mapping.items()} - weights_to_convert = ["q", "k", "v", "proj_out"] - print("Converting to CKPT ...") - for k, v in new_state_dict.items(): - for weight_name in weights_to_convert: - if f"mid.attn_1.{weight_name}.weight" in k: - new_state_dict[k] = reshape_weight_for_sd(v) - return new_state_dict - - -# =========================# -# Text Encoder Conversion # -# =========================# -# pretty much a no-op - - -def convert_text_enc_state_dict(text_enc_dict): - return text_enc_dict - - -def convert(model_path, checkpoint_path): - unet_path = osp.join(model_path, "unet", "diffusion_pytorch_model.bin") - vae_path = osp.join(model_path, "vae", "diffusion_pytorch_model.bin") - text_enc_path = osp.join(model_path, "text_encoder", "pytorch_model.bin") - - # Convert the UNet model - unet_state_dict = torch.load(unet_path, map_location='cpu') - unet_state_dict = convert_unet_state_dict(unet_state_dict) - unet_state_dict = {"model.diffusion_model." + k: v for k, v in unet_state_dict.items()} - - # Convert the VAE model - vae_state_dict = torch.load(vae_path, map_location='cpu') - vae_state_dict = convert_vae_state_dict(vae_state_dict) - vae_state_dict = {"first_stage_model." + k: v for k, v in vae_state_dict.items()} - - # Convert the text encoder model - text_enc_dict = torch.load(text_enc_path, map_location='cpu') - text_enc_dict = convert_text_enc_state_dict(text_enc_dict) - text_enc_dict = {"cond_stage_model.transformer." 
+ k: v for k, v in text_enc_dict.items()} - - # Put together new checkpoint - state_dict = {**unet_state_dict, **vae_state_dict, **text_enc_dict} - - state_dict = {k:v.half() for k,v in state_dict.items()} - state_dict = {"state_dict": state_dict} - torch.save(state_dict, checkpoint_path) - del state_dict, text_enc_dict, vae_state_dict, unet_state_dict - torch.cuda.empty_cache() - gc.collect() diff --git a/spaces/AUBADA-ALARABI/AraPoet/README.md b/spaces/AUBADA-ALARABI/AraPoet/README.md deleted file mode 100644 index 2a3094cbd474e5ab6e37587a2d49cc58af8e5518..0000000000000000000000000000000000000000 --- a/spaces/AUBADA-ALARABI/AraPoet/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: AraPoet -emoji: ✍️ -colorFrom: green -colorTo: blue -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false -license: gpl-3.0 -duplicated_from: Abdllh/AraPoet ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Abhilashvj/planogram-compliance/yolo_inference_util.py b/spaces/Abhilashvj/planogram-compliance/yolo_inference_util.py deleted file mode 100644 index bda33c48c4502d695bcc00110d0ad12d4b3308a1..0000000000000000000000000000000000000000 --- a/spaces/Abhilashvj/planogram-compliance/yolo_inference_util.py +++ /dev/null @@ -1,369 +0,0 @@ -import argparse -import sys -from pathlib import Path - -import cv2 -import numpy as np -import torch -import torch.backends.cudnn as cudnn - -from models.experimental import attempt_load -from utils.datasets import LoadImages, LoadStreams -from utils.general import ( - apply_classifier, - check_img_size, - check_imshow, - check_requirements, - check_suffix, - colorstr, - increment_path, - is_ascii, - non_max_suppression, - save_one_box, - scale_coords, - set_logging, - strip_optimizer, - xyxy2xywh, -) -from utils.plots import Annotator, colors -from utils.torch_utils import load_classifier, select_device, time_sync - -# FILE = Path(__file__).resolve() -# ROOT = FILE.parents[0] # YOLOv5 root directory -# if str(ROOT) not in sys.path: -# sys.path.append(str(ROOT)) # add ROOT to PATH - - - -@torch.no_grad() -def run_yolo_v5( - weights="yolov5s.pt", # model.pt path(s) - source="data/images", # file/dir/URL/glob, 0 for webcam - imgsz=640, # inference size (pixels) - conf_thres=0.25, # confidence threshold - iou_thres=0.45, # NMS IOU threshold - max_det=1000, # maximum detections per image - device="", # cuda device, i.e. 
0 or 0,1,2,3 or cpu - view_img=False, # show results - save_txt=False, # save results to *.txt - save_conf=False, # save confidences in --save-txt labels - save_crop=False, # save cropped prediction boxes - nosave=False, # do not save images/videos - classes=None, # filter by class: --class 0, or --class 0 2 3 - agnostic_nms=False, # class-agnostic NMS - augment=False, # augmented inference - visualize=False, # visualize features - update=False, # update all models - project="runs/detect", # save results to project/name - name="exp", # save results to project/name - exist_ok=False, # existing project/name ok, do not increment - line_thickness=3, # bounding box thickness (pixels) - hide_labels=False, # hide labels - hide_conf=False, # hide confidences - half=False, # use FP16 half-precision inference -): - save_img = not nosave and not source.endswith( - ".txt" - ) # save inference images - webcam = ( - source.isnumeric() - or source.endswith(".txt") - or source.lower().startswith( - ("rtsp://", "rtmp://", "http://", "https://") - ) - ) - - # Directories - save_dir = increment_path( - Path(project) / name, exist_ok=exist_ok - ) # increment run - (save_dir / "labels" if save_txt else save_dir).mkdir( - parents=True, exist_ok=True - ) # make dir - - # Initialize - set_logging() - device = select_device(device) - half &= device.type != "cpu" # half precision only supported on CUDA - - # Load model - w = weights[0] if isinstance(weights, list) else weights - classify, suffix, suffixes = ( - False, - Path(w).suffix.lower(), - [".pt", ".onnx", ".tflite", ".pb", ""], - ) - check_suffix(w, suffixes) # check weights have acceptable suffix - pt, onnx, tflite, pb, saved_model = ( - suffix == x for x in suffixes - ) # backend booleans - stride, names = 64, [f"class{i}" for i in range(1000)] # assign defaults - if pt: - model = attempt_load(weights, map_location=device) # load FP32 model - stride = int(model.stride.max()) # model stride - names = ( - model.module.names if hasattr(model, "module") else model.names - ) # get class names - if half: - model.half() # to FP16 - if classify: # second-stage classifier - modelc = load_classifier(name="resnet50", n=2) # initialize - modelc.load_state_dict( - torch.load("resnet50.pt", map_location=device)["model"] - ).to(device).eval() - elif onnx: - check_requirements(("onnx", "onnxruntime")) - import onnxruntime - - session = onnxruntime.InferenceSession(w, None) - else: # TensorFlow models - check_requirements(("tensorflow>=2.4.1",)) - import tensorflow as tf - - if ( - pb - ): # https://www.tensorflow.org/guide/migrate#a_graphpb_or_graphpbtxt - - def wrap_frozen_graph(gd, inputs, outputs): - x = tf.compat.v1.wrap_function( - lambda: tf.compat.v1.import_graph_def(gd, name=""), [] - ) # wrapped import - return x.prune( - tf.nest.map_structure(x.graph.as_graph_element, inputs), - tf.nest.map_structure(x.graph.as_graph_element, outputs), - ) - - graph_def = tf.Graph().as_graph_def() - graph_def.ParseFromString(open(w, "rb").read()) - frozen_func = wrap_frozen_graph( - gd=graph_def, inputs="x:0", outputs="Identity:0" - ) - elif saved_model: - model = tf.keras.models.load_model(w) - elif tflite: - interpreter = tf.lite.Interpreter( - model_path=w - ) # load TFLite model - interpreter.allocate_tensors() # allocate - input_details = interpreter.get_input_details() # inputs - output_details = interpreter.get_output_details() # outputs - int8 = ( - input_details[0]["dtype"] == np.uint8 - ) # is TFLite quantized uint8 model - imgsz = check_img_size(imgsz, s=stride) # 
check image size - ascii = is_ascii(names) # names are ascii (use PIL for UTF-8) - - # Dataloader - print("Loading data from the source", source) - if webcam: - view_img = check_imshow() - cudnn.benchmark = ( - True # set True to speed up constant image size inference - ) - dataset = LoadStreams(source, img_size=imgsz, stride=stride, auto=pt) - bs = len(dataset) # batch_size - else: - dataset = LoadImages(source, img_size=imgsz, stride=stride, auto=pt) - bs = 1 # batch_size - vid_path, vid_writer = [None] * bs, [None] * bs - - # Run inference - if pt and device.type != "cpu": - model( - torch.zeros(1, 3, *imgsz) - .to(device) - .type_as(next(model.parameters())) - ) # run once - dt, seen = [0.0, 0.0, 0.0], 0 - results = [] - for path, img, im0s, vid_cap in dataset: - t1 = time_sync() - if onnx: - img = img.astype("float32") - else: - img = torch.from_numpy(img).to(device) - img = img.half() if half else img.float() # uint8 to fp16/32 - img = img / 255.0 # 0 - 255 to 0.0 - 1.0 - if len(img.shape) == 3: - img = img[None] # expand for batch dim - t2 = time_sync() - dt[0] += t2 - t1 - - # Inference - if pt: - visualize = ( - increment_path(save_dir / Path(path).stem, mkdir=True) - if visualize - else False - ) - pred = model(img, augment=augment, visualize=visualize)[0] - elif onnx: - pred = torch.tensor( - session.run( - [session.get_outputs()[0].name], - {session.get_inputs()[0].name: img}, - ) - ) - else: # tensorflow model (tflite, pb, saved_model) - imn = img.permute(0, 2, 3, 1).cpu().numpy() # image in numpy - if pb: - pred = frozen_func(x=tf.constant(imn)).numpy() - elif saved_model: - pred = model(imn, training=False).numpy() - elif tflite: - if int8: - scale, zero_point = input_details[0]["quantization"] - imn = (imn / scale + zero_point).astype( - np.uint8 - ) # de-scale - interpreter.set_tensor(input_details[0]["index"], imn) - interpreter.invoke() - pred = interpreter.get_tensor(output_details[0]["index"]) - if int8: - scale, zero_point = output_details[0]["quantization"] - pred = ( - pred.astype(np.float32) - zero_point - ) * scale # re-scale - pred[..., 0] *= imgsz[1] # x - pred[..., 1] *= imgsz[0] # y - pred[..., 2] *= imgsz[1] # w - pred[..., 3] *= imgsz[0] # h - pred = torch.tensor(pred) - t3 = time_sync() - dt[1] += t3 - t2 - - # NMS - pred = non_max_suppression( - pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det - ) - dt[2] += time_sync() - t3 - - # Second-stage classifier (optional) - if classify: - pred = apply_classifier(pred, modelc, img, im0s) - - # Process predictions - for i, det in enumerate(pred): # per image - seen += 1 - if webcam: # batch_size >= 1 - p, s, im0, frame = ( - path[i], - f"{i}: ", - im0s[i].copy(), - dataset.count, - ) - else: - p, s, im0, frame = ( - path, - "", - im0s.copy(), - getattr(dataset, "frame", 0), - ) - - p = Path(p) # to Path - save_path = str(save_dir / p.name) # img.jpg - txt_path = str(save_dir / "labels" / p.stem) + ( - "" if dataset.mode == "image" else f"_{frame}" - ) # img.txt - s += "%gx%g " % img.shape[2:] # print string - gn = torch.tensor(im0.shape)[ - [1, 0, 1, 0] - ] # normalization gain whwh - imc = im0.copy() if save_crop else im0 # for save_crop - annotator = Annotator( - im0, line_width=line_thickness, pil=not ascii - ) - if len(det): - # Rescale boxes from img_size to im0 size - det[:, :4] = scale_coords( - img.shape[2:], det[:, :4], im0.shape - ).round() - results.append((im0, det)) - # Print results - for c in det[:, -1].unique(): - n = (det[:, -1] == c).sum() # detections per class - s += f"{n} 
{names[int(c)]}{'s' * (n > 1)}, " # add to string - - # Write results - for *xyxy, conf, cls in reversed(det): - if save_txt: # Write to file - xywh = ( - (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn) - .view(-1) - .tolist() - ) # normalized xywh - line = ( - (cls, *xywh, conf) if save_conf else (cls, *xywh) - ) # label format - with open(txt_path + ".txt", "a") as f: - f.write(("%g " * len(line)).rstrip() % line + "\n") - - if save_img or save_crop or view_img: # Add bbox to image - c = int(cls) # integer class - label = ( - None - if hide_labels - else ( - names[c] - if hide_conf - else f"{names[c]} {conf:.2f}" - ) - ) - annotator.box_label(xyxy, label, color=colors(c, True)) - if save_crop: - save_one_box( - xyxy, - imc, - file=save_dir - / "crops" - / names[c] - / f"{p.stem}.jpg", - BGR=True, - ) - # Print time (inference-only) - print(f"{s}Done. ({t3 - t2:.3f}s)") - - # Stream results - im0 = annotator.result() - if view_img: - cv2.imshow(str(p), im0) - cv2.waitKey(1) # 1 millisecond - - # Save results (image with detections) - if save_img: - if dataset.mode == "image": - cv2.imwrite(save_path, im0) - else: # 'video' or 'stream' - if vid_path[i] != save_path: # new video - vid_path[i] = save_path - if isinstance(vid_writer[i], cv2.VideoWriter): - vid_writer[ - i - ].release() # release previous video writer - if vid_cap: # video - fps = vid_cap.get(cv2.CAP_PROP_FPS) - w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH)) - h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) - else: # stream - fps, w, h = 30, im0.shape[1], im0.shape[0] - save_path += ".mp4" - vid_writer[i] = cv2.VideoWriter( - save_path, - cv2.VideoWriter_fourcc(*"mp4v"), - fps, - (w, h), - ) - vid_writer[i].write(im0) - - # Print results - t = tuple(x / seen * 1e3 for x in dt) # speeds per image - print( - f"Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {(1, 3, *imgsz)}" - % t - ) - return results - # if save_txt or save_img: - # s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else '' - # print(f"Results saved to {colorstr('bold', save_dir)}{s}") - # if update: - # strip_optimizer(weights) # update model (to fix SourceChangeWarning) diff --git a/spaces/AdamWEE80/VoiceTTS/README.md b/spaces/AdamWEE80/VoiceTTS/README.md deleted file mode 100644 index 6f0967b5f829053ccb4ee440fb958aea3c654e9f..0000000000000000000000000000000000000000 --- a/spaces/AdamWEE80/VoiceTTS/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: VoiceTTS -emoji: 🐨 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AgentVerse/agentVerse/dataloader/gsm8k.py b/spaces/AgentVerse/agentVerse/dataloader/gsm8k.py deleted file mode 100644 index b02ac54d9b9e05935174897c63a491dc8d191630..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/dataloader/gsm8k.py +++ /dev/null @@ -1,22 +0,0 @@ -from .dataloader import DataLoader -from . 
import dataloader_registry -import json -import re - - -@dataloader_registry.register("tasksolving/gsm8k") -class GSM8KLoader(DataLoader): - def __init__(self, path: str): - self.answer_pat = re.compile(r"#### (-?\d+)") - super().__init__(path) - - def load(self): - with open(self.path) as f: - for line in f: - line = json.loads(line) - self.examples.append( - { - "input": line["question"], - "answer": line["answer"].split('#### ')[-1], - } - ) diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/PaddingMethods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/PaddingMethods.js deleted file mode 100644 index 038e6bd56b1edfff75d06d99ac91c326eef4aba1..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/PaddingMethods.js +++ /dev/null @@ -1,36 +0,0 @@ -import { GetPadding, SetPadding } from '../../../plugins/utils/padding/PaddingMethods.js'; - -export default { - getInnerPadding(key) { - return GetPadding(this.space, key); - }, - - setInnerPadding(key, value) { - SetPadding(this.space, key, value); - return this; - }, - - getOuterPadding(key) { - return GetPadding(this.getSizerConfig(this).padding, key); - }, - - setOuterPadding(key, value) { - SetPadding(this.getSizerConfig(this).padding, key, value); - return this; - }, - - getChildOuterPadding(child, key) { - if (typeof (child) === 'string') { - child = this.getElement(child); - } - return GetPadding(this.getSizerConfig(child).padding, key); - }, - - setChildOuterPadding(child, key, value) { - if (typeof (child) === 'string') { - child = this.getElement(child); - } - SetPadding(this.getSizerConfig(child).padding, key, value); - return this; - }, -} \ No newline at end of file diff --git a/spaces/Aitor/CVchat/app.py b/spaces/Aitor/CVchat/app.py deleted file mode 100644 index bd09273c005c04282072667b6bb86e6fad8f3290..0000000000000000000000000000000000000000 --- a/spaces/Aitor/CVchat/app.py +++ /dev/null @@ -1,85 +0,0 @@ -import os - -import gradio as gr -import requests -from langchain.chains import RetrievalQA -from langchain.document_loaders import PDFMinerLoader -from langchain.indexes import VectorstoreIndexCreator -from langchain.llms import OpenAI - - -def set_openai_key(raw_key): - # Check if the API is valid - headers = {"Authorization": f"Bearer {raw_key}"} - response = requests.get("https://api.openai.com/v1/engines", headers=headers) - if response.status_code != 200: - raise gr.Error("API key is not valid. 
Check the key and try again.")
-
-    os.environ["OPENAI_API_KEY"] = raw_key
-    return gr.File.update(interactive=True), gr.Button.update(interactive=True)
-
-
-def create_langchain(pdf_object):
-    loader = PDFMinerLoader(pdf_object.name)
-    index_creator = VectorstoreIndexCreator()
-    docsearch = index_creator.from_loaders([loader])
-    chain = RetrievalQA.from_chain_type(
-        llm=OpenAI(),
-        chain_type="stuff",
-        retriever=docsearch.vectorstore.as_retriever(),
-        input_key="question",
-        verbose=True,
-        return_source_documents=True,
-    )
-    return chain, gr.Button.update(interactive=True)
-
-
-def ask_question(chain, question_text):
-    return chain({"question": question_text})["result"]
-
-
-with gr.Blocks() as demo:
-    # State objects
-    chain_state = gr.State()
-
-    # Layout
-    oai_token = gr.Textbox(
-        label="OpenAI Token",
-        placeholder="Lm-iIas452gaw3erGtPar26gERGSA5RVkFJQST23WEG524EWEl",
-    )
-
-    pdf_object = gr.File(
-        label="Upload your CV in PDF format",
-        file_count="single",
-        type="file",
-        interactive=False,
-    )
-    gr.Examples(
-        examples=[
-            os.path.join(os.path.abspath(""), "sample_data", "CV_AITOR_MIRA.pdf")
-        ],
-        inputs=pdf_object,
-        label="Example CV",
-    )
-    create_chain_btn = gr.Button(value="Create CVchat", interactive=False)
-
-    question_placeholder = """Enumerate the candidate's top 5 hard skills and rate them by importance from 0 to 5.
-Example:
-- Algebra 5/5"""
-    question_box = gr.Textbox(label="Question", value=question_placeholder)
-    qa_button = gr.Button(value="Submit question", interactive=False)
-
-    # Actions
-    oai_token.change(
-        set_openai_key, inputs=oai_token, outputs=[pdf_object, create_chain_btn]
-    )
-    lchain = create_chain_btn.click(
-        create_langchain, inputs=pdf_object, outputs=[chain_state, qa_button]
-    )
-    qa_button.click(
-        ask_question,
-        inputs=[chain_state, question_box],
-        outputs=gr.Textbox(label="Answer"),
-    )
-
-demo.launch(debug=True)
diff --git a/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/data_utils.py b/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/data_utils.py
deleted file mode 100644
index c6c8dee9d157161f2082484b89bdb282364e2a0e..0000000000000000000000000000000000000000
--- a/spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/data_utils.py
+++ /dev/null
@@ -1,267 +0,0 @@
-import time
-import os
-import random
-import numpy as np
-import torch
-import torch.utils.data
-import torchaudio
-
-import commons
-from mel_processing import spectrogram_torch
-from utils import load_wav_to_torch, load_filepaths_and_text
-from text import text_to_sequence, cleaned_text_to_sequence
-"""Multi speaker version"""
-
-
-class TextAudioSpeakerLoader(torch.utils.data.Dataset):
-    """
-    1) loads audio, speaker_id, text pairs
-    2) normalizes text and converts them to sequences of integers
-    3) computes spectrograms from audio files.
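-
-    A sketch of the expected filelist format: each line is pipe-separated as
-    audio_path|speaker_id|text (assuming the usual VITS convention behind
-    load_filepaths_and_text; the example below is made up):
-
-        wavs/utt_0001.wav|0|hello there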
- """ - - def __init__(self, audiopaths_sid_text, hparams, symbols): - self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text) - self.text_cleaners = hparams.text_cleaners - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 190) - self.symbols = symbols - - random.seed(1234) - random.shuffle(self.audiopaths_sid_text) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_sid_text_new = [] - lengths = [] - for audiopath, sid, text in self.audiopaths_sid_text: - # audiopath = "./user_voice/" + audiopath - - if self.min_text_len <= len(text) and len(text) <= self.max_text_len: - audiopaths_sid_text_new.append([audiopath, sid, text]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - self.audiopaths_sid_text = audiopaths_sid_text_new - self.lengths = lengths - - def get_audio_text_speaker_pair(self, audiopath_sid_text): - # separate filename, speaker_id and text - audiopath, sid, text = audiopath_sid_text[0], audiopath_sid_text[1], audiopath_sid_text[2] - text = self.get_text(text) - spec, wav = self.get_audio(audiopath) - sid = self.get_sid(sid) - return (text, spec, wav, sid) - - def get_audio(self, filename): - # audio, sampling_rate = load_wav_to_torch(filename) - # if sampling_rate != self.sampling_rate: - # raise ValueError("{} {} SR doesn't match target {} SR".format( - # sampling_rate, self.sampling_rate)) - # audio_norm = audio / self.max_wav_value if audio.max() > 10 else audio - # audio_norm = audio_norm.unsqueeze(0) - audio_norm, sampling_rate = torchaudio.load(filename, frame_offset=0, num_frames=-1, normalize=True, channels_first=True) - # spec_filename = filename.replace(".wav", ".spec.pt") - # if os.path.exists(spec_filename): - # spec = torch.load(spec_filename) - # else: - # try: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = spec.squeeze(0) - # except NotImplementedError: - # print("?") - # spec = torch.squeeze(spec, 0) - # torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text): - if self.cleaned_text: - text_norm = cleaned_text_to_sequence(text, self.symbols) - else: - text_norm = text_to_sequence(text, self.text_cleaners) - if self.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def __getitem__(self, index): - return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index]) - - def __len__(self): - return len(self.audiopaths_sid_text) - - -class TextAudioSpeakerCollate(): - """ Zero-pads model inputs and targets - """ - - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text, audio and speaker identities - PARAMS - ------ - batch: 
[text_normalized, spec_normalized, wav_normalized, sid] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - sid = torch.LongTensor(len(batch)) - - text_padded = torch.LongTensor(len(batch), max_text_len) - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - sid[i] = row[3] - - if self.return_ids: - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, ids_sorted_decreasing - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid - - -class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler): - """ - Maintain similar input lengths in a batch. - Length groups are specified by boundaries. - Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. - - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. 
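-
-    Ex) a minimal usage sketch, mirroring the upstream VITS training script
-        (the batch size and boundary values here are illustrative, and
-        n_gpus/rank come from the distributed setup):
-
-            sampler = DistributedBucketSampler(
-                train_dataset, batch_size=16,
-                boundaries=[32, 300, 400, 500, 600, 700, 800, 900, 1000],
-                num_replicas=n_gpus, rank=rank, shuffle=True)
-            loader = torch.utils.data.DataLoader(
-                train_dataset, collate_fn=TextAudioSpeakerCollate(),
-                batch_sampler=sampler)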
- """ - - def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True): - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - for i in range(len(buckets) - 1, 0, -1): - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i + 1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append(torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)] - - # subsample - ids_bucket = ids_bucket[self.rank::self.num_replicas] - - # batching - for j in range(len(ids_bucket) // self.batch_size): - batch = [bucket[idx] for idx in ids_bucket[j * self.batch_size:(j + 1) * self.batch_size]] - batches.append(batch) - - if self.shuffle: - batch_ids = torch.randperm(len(batches), generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size \ No newline at end of file diff --git a/spaces/AlekseyKorshuk/gai-project/modules/playground.py b/spaces/AlekseyKorshuk/gai-project/modules/playground.py deleted file mode 100644 index 46a3c199c7a00a5122a4aae45031333100ec7fc9..0000000000000000000000000000000000000000 --- a/spaces/AlekseyKorshuk/gai-project/modules/playground.py +++ /dev/null @@ -1,142 +0,0 @@ -from functools import partial - -import gradio as gr - -import config -from modules import utils -from modules import common -from modules.models import GuanacoModel, ChaiBot - - -def render_playground(demo): - # set inital states - bot_config = utils.get_bot_config(config.DEFAULT_BOT_NAME) - bot_state = gr.State(bot_config) - convo_state = common.get_convo_state(bot_config) - - # render widgets - 
render_header() - common.render_section_separator("Set up") - model_tag = common.render_model_selector() - bot_profile, bot_selector = common.render_bot_profile(bot_config) - bot_config_text = common.render_bot_config(bot_config) - - common.render_section_separator("Chat") - dialog = render_dialog(bot_config) - - # set default model state according to database - model_state = common.get_model_state(config.DEFAULT_MODEL) - - # render submit buttons and parameter sliders - msg, send, regenerate, clear = common.render_chat_buttons() - - # set callbacks - bot_selector.change( - _reload_bot, - [bot_selector, bot_profile], - [bot_profile, convo_state, dialog, bot_state, bot_config_text], - queue=False - ) - - model_tag.change( - _clear_chat, - [dialog, bot_state], - [dialog], - queue=False - ) - send.click( - _respond, - [msg, convo_state, dialog, model_state], - [msg, dialog], - queue=False - ) - msg.submit( - _respond, - [msg, convo_state, dialog, model_state], - [msg, dialog], - queue=False - ) - regenerate.click( - _regenerate_response, - [convo_state, dialog, model_state], - [dialog], - queue=False - ) - clear.click( - _clear_chat, - [dialog, bot_state], - [dialog], - queue=False - ) - - -def _update_model_parameter_slider(slider, params_state, label): - params_state.update({label: slider}) - return params_state - - -def render_header(): - gr.Markdown(""" - # Playground - """) - - -def render_dialog(bot_config): - first_message = (None, bot_config["firstMessage"]) - dialog = gr.Chatbot([first_message]) - return dialog - - -def _reload_bot(bot_selector, bot_profile): - bot_selector = bot_selector or config.DEFAULT_BOT_NAME - bot_config = utils.get_bot_config(bot_selector) - bot_profile = utils.get_bot_picture_html(bot_config) - convo_state = ChaiBot(bot_config) - bot_config_text = f"# Memory\n{bot_config.get('memory', '')}\n# Prompt\n{bot_config.get('prompt', '')}" - dialog_st = [(None, bot_config["firstMessage"])] - return bot_profile, convo_state, dialog_st, bot_config, bot_config_text - - -def _respond(user_message, chaibot, chat_history, model): - chaibot.add_user_message(user_message) - bot_response = model.generate_response(chaibot) - chaibot.add_bot_message(bot_response) - chat_history.append( - (user_message, bot_response) - ) - return "", chat_history - - -def _clear_chat(chat_history, bot_state): - chat_history = [(None, bot_state["firstMessage"])] - return chat_history - - -def _regenerate_response(chaibot, chat_history, model): - chaibot.messages.pop() - chat_history.pop() - user_message = chaibot.messages[-1][-1] - bot_response = model.generate_response(chaibot) - chaibot.add_bot_message(bot_response) - chat_history.append( - (user_message, bot_response) - ) - return chat_history - - -def _get_model(model_tag): - model = GuanacoModel(model_tag) - return model - - -def _parse_model_parameters_from_bot_id(model_tag): - model = _get_model(model_tag) - out = [ - model.config.generation_params["temperature"], - model.config.generation_params["repetition_penalty"], - model.config.generation_params["max_new_tokens"], - model.config.generation_params["top_k"], - model.config.generation_params["top_p"], - model - ] - return out diff --git a/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/text/cantonese.py b/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/text/cantonese.py deleted file mode 100644 index b66d12138b81b70b86f18217d24a08fce76305c0..0000000000000000000000000000000000000000 --- a/spaces/Alichuan/VITS-Umamusume-voice-synthesizer/text/cantonese.py +++ /dev/null @@ -1,59 +0,0 @@ 
-import re
-import cn2an
-import opencc
-
-
-converter = opencc.OpenCC('jyutjyu')
-
-# List of (Latin alphabet, ipa) pairs:
-_latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
-    ('A', 'ei˥'),
-    ('B', 'biː˥'),
-    ('C', 'siː˥'),
-    ('D', 'tiː˥'),
-    ('E', 'iː˥'),
-    ('F', 'e˥fuː˨˩'),
-    ('G', 'tsiː˥'),
-    ('H', 'ɪk̚˥tsʰyː˨˩'),
-    ('I', 'ɐi˥'),
-    ('J', 'tsei˥'),
-    ('K', 'kʰei˥'),
-    ('L', 'e˥llou˨˩'),
-    ('M', 'ɛːm˥'),
-    ('N', 'ɛːn˥'),
-    ('O', 'ou˥'),
-    ('P', 'pʰiː˥'),
-    ('Q', 'kʰiːu˥'),
-    ('R', 'aː˥lou˨˩'),
-    ('S', 'ɛː˥siː˨˩'),
-    ('T', 'tʰiː˥'),
-    ('U', 'juː˥'),
-    ('V', 'wiː˥'),
-    ('W', 'tʊk̚˥piː˥juː˥'),
-    ('X', 'ɪk̚˥siː˨˩'),
-    ('Y', 'waːi˥'),
-    ('Z', 'iː˨sɛːt̚˥')
-]]
-
-
-def number_to_cantonese(text):
-    return re.sub(r'\d+(?:\.?\d+)?', lambda x: cn2an.an2cn(x.group()), text)
-
-
-def latin_to_ipa(text):
-    for regex, replacement in _latin_to_ipa:
-        text = re.sub(regex, replacement, text)
-    return text
-
-
-def cantonese_to_ipa(text):
-    text = number_to_cantonese(text.upper())
-    text = converter.convert(text).replace('-','').replace('$',' ')
-    text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group())+' ', text)
-    text = re.sub(r'[、;:]', ',', text)
-    text = re.sub(r'\s*,\s*', ', ', text)
-    text = re.sub(r'\s*。\s*', '. ', text)
-    text = re.sub(r'\s*?\s*', '? ', text)
-    text = re.sub(r'\s*!\s*', '! ', text)
-    text = re.sub(r'\s*$', '', text)
-    return text
diff --git a/spaces/Amrrs/image-caption-with-vit-gpt2/app.py b/spaces/Amrrs/image-caption-with-vit-gpt2/app.py
deleted file mode 100644
index 7e51907443c7537eee444ddf241b2bb14cc464e7..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/image-caption-with-vit-gpt2/app.py
+++ /dev/null
@@ -1,78 +0,0 @@
-# -*- coding: utf-8 -*-
-"""Image Captioning with ViT+GPT2
-
-Automatically generated by Colaboratory.
-
-Original file is located at
-    https://colab.research.google.com/drive/1P3O0gO5AUqSmM8rE9dxy2tXJ-9jkhxHz
-"""
-
-#! pip install transformers -q
-
-#! pip install gradio -q
-
-from PIL import Image
-from transformers import VisionEncoderDecoderModel, ViTFeatureExtractor, PreTrainedTokenizerFast
-import requests
-
-model = VisionEncoderDecoderModel.from_pretrained("sachin/vit2distilgpt2")
-
-vit_feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
-
-tokenizer = PreTrainedTokenizerFast.from_pretrained("distilgpt2")
-
-# url = 'https://d2gp644kobdlm6.cloudfront.net/wp-content/uploads/2016/06/bigstock-Shocked-and-surprised-boy-on-t-113798588-300x212.jpg'
-
-# with Image.open(requests.get(url, stream=True).raw) as img:
-#     pixel_values = vit_feature_extractor(images=img, return_tensors="pt").pixel_values
-
-# encoder_outputs = model.generate(pixel_values.to('cpu'), num_beams=5)
-
-# generated_sentences = tokenizer.batch_decode(encoder_outputs, skip_special_tokens=True)
-
-# generated_sentences
-
-# naive text processing
-# generated_sentences[0].split('.')[0]
-
-# inference function
-
-def vit2distilgpt2(img):
-    pixel_values = vit_feature_extractor(images=img, return_tensors="pt").pixel_values
-    encoder_outputs = model.generate(pixel_values.to('cpu'), num_beams=5)
-    generated_sentences = tokenizer.batch_decode(encoder_outputs, skip_special_tokens=True)
-
-    return generated_sentences[0].split('.')[0]
-
-#!wget https://media.glamour.com/photos/5f171c4fd35176eaedb36823/master/w_2560%2Cc_limit/bike.jpg
-
-import gradio as gr
-
-inputs = [
-    gr.inputs.Image(type="pil", label="Original Image")
-]
-
-outputs = [
-    gr.outputs.Textbox(label='Caption')
-]
-
-title = "Image Captioning using ViT + GPT2"
-description = "ViT and GPT2 are used to generate an image caption for the uploaded image. The COCO dataset was used for training. This image captioning model might have some biases that we couldn't figure out during our stress testing, so if you find any bias (gender, race and so on), please use the `Flag` button to flag the image."
-article = " Model Repo on Hugging Face Model Hub"
-examples = [
-    ["people-walking-street-pedestrian-crossing-traffic-light-city.jpeg"],
    ["elonmusk.jpeg"]
-]
-
-gr.Interface(
-    vit2distilgpt2,
-    inputs,
-    outputs,
-    title=title,
-    description=description,
-    article=article,
-    examples=examples,
-    theme="huggingface",
-).launch(debug=True, enable_queue=True)
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/vae.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/vae.py
deleted file mode 100644
index 9cd8bcda859107b1e62af5550c831f0fd2d22c83..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/vae.py
+++ /dev/null
@@ -1,688 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-from dataclasses import dataclass -from typing import Optional - -import numpy as np -import torch -import torch.nn as nn - -from ..utils import BaseOutput, is_torch_version, randn_tensor -from .attention_processor import SpatialNorm -from .unet_2d_blocks import UNetMidBlock2D, get_down_block, get_up_block - - -@dataclass -class DecoderOutput(BaseOutput): - """ - Output of decoding method. - - Args: - sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): - The decoded output sample from the last layer of the model. - """ - - sample: torch.FloatTensor - - -class Encoder(nn.Module): - def __init__( - self, - in_channels=3, - out_channels=3, - down_block_types=("DownEncoderBlock2D",), - block_out_channels=(64,), - layers_per_block=2, - norm_num_groups=32, - act_fn="silu", - double_z=True, - ): - super().__init__() - self.layers_per_block = layers_per_block - - self.conv_in = torch.nn.Conv2d( - in_channels, - block_out_channels[0], - kernel_size=3, - stride=1, - padding=1, - ) - - self.mid_block = None - self.down_blocks = nn.ModuleList([]) - - # down - output_channel = block_out_channels[0] - for i, down_block_type in enumerate(down_block_types): - input_channel = output_channel - output_channel = block_out_channels[i] - is_final_block = i == len(block_out_channels) - 1 - - down_block = get_down_block( - down_block_type, - num_layers=self.layers_per_block, - in_channels=input_channel, - out_channels=output_channel, - add_downsample=not is_final_block, - resnet_eps=1e-6, - downsample_padding=0, - resnet_act_fn=act_fn, - resnet_groups=norm_num_groups, - attention_head_dim=output_channel, - temb_channels=None, - ) - self.down_blocks.append(down_block) - - # mid - self.mid_block = UNetMidBlock2D( - in_channels=block_out_channels[-1], - resnet_eps=1e-6, - resnet_act_fn=act_fn, - output_scale_factor=1, - resnet_time_scale_shift="default", - attention_head_dim=block_out_channels[-1], - resnet_groups=norm_num_groups, - temb_channels=None, - ) - - # out - self.conv_norm_out = nn.GroupNorm(num_channels=block_out_channels[-1], num_groups=norm_num_groups, eps=1e-6) - self.conv_act = nn.SiLU() - - conv_out_channels = 2 * out_channels if double_z else out_channels - self.conv_out = nn.Conv2d(block_out_channels[-1], conv_out_channels, 3, padding=1) - - self.gradient_checkpointing = False - - def forward(self, x): - sample = x - sample = self.conv_in(sample) - - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs) - - return custom_forward - - # down - if is_torch_version(">=", "1.11.0"): - for down_block in self.down_blocks: - sample = torch.utils.checkpoint.checkpoint( - create_custom_forward(down_block), sample, use_reentrant=False - ) - # middle - sample = torch.utils.checkpoint.checkpoint( - create_custom_forward(self.mid_block), sample, use_reentrant=False - ) - else: - for down_block in self.down_blocks: - sample = torch.utils.checkpoint.checkpoint(create_custom_forward(down_block), sample) - # middle - sample = torch.utils.checkpoint.checkpoint(create_custom_forward(self.mid_block), sample) - - else: - # down - for down_block in self.down_blocks: - sample = down_block(sample) - - # middle - sample = self.mid_block(sample) - - # post-process - sample = self.conv_norm_out(sample) - sample = self.conv_act(sample) - sample = self.conv_out(sample) - - return sample - - -class Decoder(nn.Module): - def __init__( - self, - in_channels=3, - out_channels=3, - 
up_block_types=("UpDecoderBlock2D",), - block_out_channels=(64,), - layers_per_block=2, - norm_num_groups=32, - act_fn="silu", - norm_type="group", # group, spatial - ): - super().__init__() - self.layers_per_block = layers_per_block - - self.conv_in = nn.Conv2d( - in_channels, - block_out_channels[-1], - kernel_size=3, - stride=1, - padding=1, - ) - - self.mid_block = None - self.up_blocks = nn.ModuleList([]) - - temb_channels = in_channels if norm_type == "spatial" else None - - # mid - self.mid_block = UNetMidBlock2D( - in_channels=block_out_channels[-1], - resnet_eps=1e-6, - resnet_act_fn=act_fn, - output_scale_factor=1, - resnet_time_scale_shift="default" if norm_type == "group" else norm_type, - attention_head_dim=block_out_channels[-1], - resnet_groups=norm_num_groups, - temb_channels=temb_channels, - ) - - # up - reversed_block_out_channels = list(reversed(block_out_channels)) - output_channel = reversed_block_out_channels[0] - for i, up_block_type in enumerate(up_block_types): - prev_output_channel = output_channel - output_channel = reversed_block_out_channels[i] - - is_final_block = i == len(block_out_channels) - 1 - - up_block = get_up_block( - up_block_type, - num_layers=self.layers_per_block + 1, - in_channels=prev_output_channel, - out_channels=output_channel, - prev_output_channel=None, - add_upsample=not is_final_block, - resnet_eps=1e-6, - resnet_act_fn=act_fn, - resnet_groups=norm_num_groups, - attention_head_dim=output_channel, - temb_channels=temb_channels, - resnet_time_scale_shift=norm_type, - ) - self.up_blocks.append(up_block) - prev_output_channel = output_channel - - # out - if norm_type == "spatial": - self.conv_norm_out = SpatialNorm(block_out_channels[0], temb_channels) - else: - self.conv_norm_out = nn.GroupNorm(num_channels=block_out_channels[0], num_groups=norm_num_groups, eps=1e-6) - self.conv_act = nn.SiLU() - self.conv_out = nn.Conv2d(block_out_channels[0], out_channels, 3, padding=1) - - self.gradient_checkpointing = False - - def forward(self, z, latent_embeds=None): - sample = z - sample = self.conv_in(sample) - - upscale_dtype = next(iter(self.up_blocks.parameters())).dtype - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs) - - return custom_forward - - if is_torch_version(">=", "1.11.0"): - # middle - sample = torch.utils.checkpoint.checkpoint( - create_custom_forward(self.mid_block), sample, latent_embeds, use_reentrant=False - ) - sample = sample.to(upscale_dtype) - - # up - for up_block in self.up_blocks: - sample = torch.utils.checkpoint.checkpoint( - create_custom_forward(up_block), sample, latent_embeds, use_reentrant=False - ) - else: - # middle - sample = torch.utils.checkpoint.checkpoint( - create_custom_forward(self.mid_block), sample, latent_embeds - ) - sample = sample.to(upscale_dtype) - - # up - for up_block in self.up_blocks: - sample = torch.utils.checkpoint.checkpoint(create_custom_forward(up_block), sample, latent_embeds) - else: - # middle - sample = self.mid_block(sample, latent_embeds) - sample = sample.to(upscale_dtype) - - # up - for up_block in self.up_blocks: - sample = up_block(sample, latent_embeds) - - # post-process - if latent_embeds is None: - sample = self.conv_norm_out(sample) - else: - sample = self.conv_norm_out(sample, latent_embeds) - sample = self.conv_act(sample) - sample = self.conv_out(sample) - - return sample - - -class UpSample(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - ) -> 
None: - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.deconv = nn.ConvTranspose2d(in_channels, out_channels, kernel_size=4, stride=2, padding=1) - - def forward(self, x: torch.FloatTensor) -> torch.FloatTensor: - x = torch.relu(x) - x = self.deconv(x) - return x - - -class MaskConditionEncoder(nn.Module): - """ - used in AsymmetricAutoencoderKL - """ - - def __init__( - self, - in_ch: int, - out_ch: int = 192, - res_ch: int = 768, - stride: int = 16, - ) -> None: - super().__init__() - - channels = [] - while stride > 1: - stride = stride // 2 - in_ch_ = out_ch * 2 - if out_ch > res_ch: - out_ch = res_ch - if stride == 1: - in_ch_ = res_ch - channels.append((in_ch_, out_ch)) - out_ch *= 2 - - out_channels = [] - for _in_ch, _out_ch in channels: - out_channels.append(_out_ch) - out_channels.append(channels[-1][0]) - - layers = [] - in_ch_ = in_ch - for l in range(len(out_channels)): - out_ch_ = out_channels[l] - if l == 0 or l == 1: - layers.append(nn.Conv2d(in_ch_, out_ch_, kernel_size=3, stride=1, padding=1)) - else: - layers.append(nn.Conv2d(in_ch_, out_ch_, kernel_size=4, stride=2, padding=1)) - in_ch_ = out_ch_ - - self.layers = nn.Sequential(*layers) - - def forward(self, x: torch.FloatTensor, mask=None) -> torch.FloatTensor: - out = {} - for l in range(len(self.layers)): - layer = self.layers[l] - x = layer(x) - out[str(tuple(x.shape))] = x - x = torch.relu(x) - return out - - -class MaskConditionDecoder(nn.Module): - """The `MaskConditionDecoder` should be used in combination with [`AsymmetricAutoencoderKL`] to enhance the model's - decoder with a conditioner on the mask and masked image.""" - - def __init__( - self, - in_channels=3, - out_channels=3, - up_block_types=("UpDecoderBlock2D",), - block_out_channels=(64,), - layers_per_block=2, - norm_num_groups=32, - act_fn="silu", - norm_type="group", # group, spatial - ): - super().__init__() - self.layers_per_block = layers_per_block - - self.conv_in = nn.Conv2d( - in_channels, - block_out_channels[-1], - kernel_size=3, - stride=1, - padding=1, - ) - - self.mid_block = None - self.up_blocks = nn.ModuleList([]) - - temb_channels = in_channels if norm_type == "spatial" else None - - # mid - self.mid_block = UNetMidBlock2D( - in_channels=block_out_channels[-1], - resnet_eps=1e-6, - resnet_act_fn=act_fn, - output_scale_factor=1, - resnet_time_scale_shift="default" if norm_type == "group" else norm_type, - attention_head_dim=block_out_channels[-1], - resnet_groups=norm_num_groups, - temb_channels=temb_channels, - ) - - # up - reversed_block_out_channels = list(reversed(block_out_channels)) - output_channel = reversed_block_out_channels[0] - for i, up_block_type in enumerate(up_block_types): - prev_output_channel = output_channel - output_channel = reversed_block_out_channels[i] - - is_final_block = i == len(block_out_channels) - 1 - - up_block = get_up_block( - up_block_type, - num_layers=self.layers_per_block + 1, - in_channels=prev_output_channel, - out_channels=output_channel, - prev_output_channel=None, - add_upsample=not is_final_block, - resnet_eps=1e-6, - resnet_act_fn=act_fn, - resnet_groups=norm_num_groups, - attention_head_dim=output_channel, - temb_channels=temb_channels, - resnet_time_scale_shift=norm_type, - ) - self.up_blocks.append(up_block) - prev_output_channel = output_channel - - # condition encoder - self.condition_encoder = MaskConditionEncoder( - in_ch=out_channels, - out_ch=block_out_channels[0], - res_ch=block_out_channels[-1], - ) - - # out - if norm_type == 
"spatial": - self.conv_norm_out = SpatialNorm(block_out_channels[0], temb_channels) - else: - self.conv_norm_out = nn.GroupNorm(num_channels=block_out_channels[0], num_groups=norm_num_groups, eps=1e-6) - self.conv_act = nn.SiLU() - self.conv_out = nn.Conv2d(block_out_channels[0], out_channels, 3, padding=1) - - self.gradient_checkpointing = False - - def forward(self, z, image=None, mask=None, latent_embeds=None): - sample = z - sample = self.conv_in(sample) - - upscale_dtype = next(iter(self.up_blocks.parameters())).dtype - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs) - - return custom_forward - - if is_torch_version(">=", "1.11.0"): - # middle - sample = torch.utils.checkpoint.checkpoint( - create_custom_forward(self.mid_block), sample, latent_embeds, use_reentrant=False - ) - sample = sample.to(upscale_dtype) - - # condition encoder - if image is not None and mask is not None: - masked_image = (1 - mask) * image - im_x = torch.utils.checkpoint.checkpoint( - create_custom_forward(self.condition_encoder), masked_image, mask, use_reentrant=False - ) - - # up - for up_block in self.up_blocks: - if image is not None and mask is not None: - sample_ = im_x[str(tuple(sample.shape))] - mask_ = nn.functional.interpolate(mask, size=sample.shape[-2:], mode="nearest") - sample = sample * mask_ + sample_ * (1 - mask_) - sample = torch.utils.checkpoint.checkpoint( - create_custom_forward(up_block), sample, latent_embeds, use_reentrant=False - ) - if image is not None and mask is not None: - sample = sample * mask + im_x[str(tuple(sample.shape))] * (1 - mask) - else: - # middle - sample = torch.utils.checkpoint.checkpoint( - create_custom_forward(self.mid_block), sample, latent_embeds - ) - sample = sample.to(upscale_dtype) - - # condition encoder - if image is not None and mask is not None: - masked_image = (1 - mask) * image - im_x = torch.utils.checkpoint.checkpoint( - create_custom_forward(self.condition_encoder), masked_image, mask - ) - - # up - for up_block in self.up_blocks: - if image is not None and mask is not None: - sample_ = im_x[str(tuple(sample.shape))] - mask_ = nn.functional.interpolate(mask, size=sample.shape[-2:], mode="nearest") - sample = sample * mask_ + sample_ * (1 - mask_) - sample = torch.utils.checkpoint.checkpoint(create_custom_forward(up_block), sample, latent_embeds) - if image is not None and mask is not None: - sample = sample * mask + im_x[str(tuple(sample.shape))] * (1 - mask) - else: - # middle - sample = self.mid_block(sample, latent_embeds) - sample = sample.to(upscale_dtype) - - # condition encoder - if image is not None and mask is not None: - masked_image = (1 - mask) * image - im_x = self.condition_encoder(masked_image, mask) - - # up - for up_block in self.up_blocks: - if image is not None and mask is not None: - sample_ = im_x[str(tuple(sample.shape))] - mask_ = nn.functional.interpolate(mask, size=sample.shape[-2:], mode="nearest") - sample = sample * mask_ + sample_ * (1 - mask_) - sample = up_block(sample, latent_embeds) - if image is not None and mask is not None: - sample = sample * mask + im_x[str(tuple(sample.shape))] * (1 - mask) - - # post-process - if latent_embeds is None: - sample = self.conv_norm_out(sample) - else: - sample = self.conv_norm_out(sample, latent_embeds) - sample = self.conv_act(sample) - sample = self.conv_out(sample) - - return sample - - -class VectorQuantizer(nn.Module): - """ - Improved version over VectorQuantizer, can be used 
as a drop-in replacement. Mostly avoids costly matrix - multiplications and allows for post-hoc remapping of indices. - """ - - # NOTE: due to a bug the beta term was applied to the wrong term. for - # backwards compatibility we use the buggy version by default, but you can - # specify legacy=False to fix it. - def __init__( - self, n_e, vq_embed_dim, beta, remap=None, unknown_index="random", sane_index_shape=False, legacy=True - ): - super().__init__() - self.n_e = n_e - self.vq_embed_dim = vq_embed_dim - self.beta = beta - self.legacy = legacy - - self.embedding = nn.Embedding(self.n_e, self.vq_embed_dim) - self.embedding.weight.data.uniform_(-1.0 / self.n_e, 1.0 / self.n_e) - - self.remap = remap - if self.remap is not None: - self.register_buffer("used", torch.tensor(np.load(self.remap))) - self.re_embed = self.used.shape[0] - self.unknown_index = unknown_index # "random" or "extra" or integer - if self.unknown_index == "extra": - self.unknown_index = self.re_embed - self.re_embed = self.re_embed + 1 - print( - f"Remapping {self.n_e} indices to {self.re_embed} indices. " - f"Using {self.unknown_index} for unknown indices." - ) - else: - self.re_embed = n_e - - self.sane_index_shape = sane_index_shape - - def remap_to_used(self, inds): - ishape = inds.shape - assert len(ishape) > 1 - inds = inds.reshape(ishape[0], -1) - used = self.used.to(inds) - match = (inds[:, :, None] == used[None, None, ...]).long() - new = match.argmax(-1) - unknown = match.sum(2) < 1 - if self.unknown_index == "random": - new[unknown] = torch.randint(0, self.re_embed, size=new[unknown].shape).to(device=new.device) - else: - new[unknown] = self.unknown_index - return new.reshape(ishape) - - def unmap_to_all(self, inds): - ishape = inds.shape - assert len(ishape) > 1 - inds = inds.reshape(ishape[0], -1) - used = self.used.to(inds) - if self.re_embed > self.used.shape[0]: # extra token - inds[inds >= self.used.shape[0]] = 0 # simply set to zero - back = torch.gather(used[None, :][inds.shape[0] * [0], :], 1, inds) - return back.reshape(ishape) - - def forward(self, z): - # reshape z -> (batch, height, width, channel) and flatten - z = z.permute(0, 2, 3, 1).contiguous() - z_flattened = z.view(-1, self.vq_embed_dim) - - # distances from z to embeddings e_j (z - e)^2 = z^2 + e^2 - 2 e * z - min_encoding_indices = torch.argmin(torch.cdist(z_flattened, self.embedding.weight), dim=1) - - z_q = self.embedding(min_encoding_indices).view(z.shape) - perplexity = None - min_encodings = None - - # compute loss for embedding - if not self.legacy: - loss = self.beta * torch.mean((z_q.detach() - z) ** 2) + torch.mean((z_q - z.detach()) ** 2) - else: - loss = torch.mean((z_q.detach() - z) ** 2) + self.beta * torch.mean((z_q - z.detach()) ** 2) - - # preserve gradients - z_q = z + (z_q - z).detach() - - # reshape back to match original input shape - z_q = z_q.permute(0, 3, 1, 2).contiguous() - - if self.remap is not None: - min_encoding_indices = min_encoding_indices.reshape(z.shape[0], -1) # add batch axis - min_encoding_indices = self.remap_to_used(min_encoding_indices) - min_encoding_indices = min_encoding_indices.reshape(-1, 1) # flatten - - if self.sane_index_shape: - min_encoding_indices = min_encoding_indices.reshape(z_q.shape[0], z_q.shape[2], z_q.shape[3]) - - return z_q, loss, (perplexity, min_encodings, min_encoding_indices) - - def get_codebook_entry(self, indices, shape): - # shape specifying (batch, height, width, channel) - if self.remap is not None: - indices = indices.reshape(shape[0], -1) # add batch axis - indices 
= self.unmap_to_all(indices) - indices = indices.reshape(-1) # flatten again - - # get quantized latent vectors - z_q = self.embedding(indices) - - if shape is not None: - z_q = z_q.view(shape) - # reshape back to match original input shape - z_q = z_q.permute(0, 3, 1, 2).contiguous() - - return z_q - - -class DiagonalGaussianDistribution(object): - def __init__(self, parameters, deterministic=False): - self.parameters = parameters - self.mean, self.logvar = torch.chunk(parameters, 2, dim=1) - self.logvar = torch.clamp(self.logvar, -30.0, 20.0) - self.deterministic = deterministic - self.std = torch.exp(0.5 * self.logvar) - self.var = torch.exp(self.logvar) - if self.deterministic: - self.var = self.std = torch.zeros_like( - self.mean, device=self.parameters.device, dtype=self.parameters.dtype - ) - - def sample(self, generator: Optional[torch.Generator] = None) -> torch.FloatTensor: - # make sure sample is on the same device as the parameters and has same dtype - sample = randn_tensor( - self.mean.shape, generator=generator, device=self.parameters.device, dtype=self.parameters.dtype - ) - x = self.mean + self.std * sample - return x - - def kl(self, other=None): - if self.deterministic: - return torch.Tensor([0.0]) - else: - if other is None: - return 0.5 * torch.sum(torch.pow(self.mean, 2) + self.var - 1.0 - self.logvar, dim=[1, 2, 3]) - else: - return 0.5 * torch.sum( - torch.pow(self.mean - other.mean, 2) / other.var - + self.var / other.var - - 1.0 - - self.logvar - + other.logvar, - dim=[1, 2, 3], - ) - - def nll(self, sample, dims=[1, 2, 3]): - if self.deterministic: - return torch.Tensor([0.0]) - logtwopi = np.log(2.0 * np.pi) - return 0.5 * torch.sum(logtwopi + self.logvar + torch.pow(sample - self.mean, 2) / self.var, dim=dims) - - def mode(self): - return self.mean diff --git a/spaces/Andy1621/UniFormerV2_mit_demo/mitv1_class_index.py b/spaces/Andy1621/UniFormerV2_mit_demo/mitv1_class_index.py deleted file mode 100644 index d5344945a0f6eefa2696872a3d02f9195d64970b..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/UniFormerV2_mit_demo/mitv1_class_index.py +++ /dev/null @@ -1,341 +0,0 @@ -mitv1_classnames = { - "0": "adult+female+singing", - "1": "adult+female+speaking", - "2": "adult+male+singing", - "3": "adult+male+speaking", - "4": "aiming", - "5": "applauding", - "6": "arresting", - "7": "ascending", - "8": "asking", - "9": "assembling", - "10": "attacking", - "11": "autographing", - "12": "baking", - "13": "balancing", - "14": "baptizing", - "15": "barbecuing", - "16": "barking", - "17": "bathing", - "18": "bending", - "19": "bicycling", - "20": "biting", - "21": "blocking", - "22": "blowing", - "23": "boarding", - "24": "boating", - "25": "boiling", - "26": "bouncing", - "27": "bowing", - "28": "bowling", - "29": "boxing", - "30": "breaking", - "31": "brushing", - "32": "bubbling", - "33": "building", - "34": "bulldozing", - "35": "burning", - "36": "burying", - "37": "buttoning", - "38": "buying", - "39": "calling", - "40": "camping", - "41": "carrying", - "42": "carving", - "43": "catching", - "44": "celebrating", - "45": "chasing", - "46": "cheering", - "47": "cheerleading", - "48": "chewing", - "49": "child+singing", - "50": "child+speaking", - "51": "chopping", - "52": "clapping", - "53": "clawing", - "54": "cleaning", - "55": "clearing", - "56": "climbing", - "57": "clinging", - "58": "clipping", - "59": "closing", - "60": "coaching", - "61": "colliding", - "62": "combing", - "63": "combusting", - "64": "competing", - "65": "constructing", - "66": 
"cooking", - "67": "coughing", - "68": "covering", - "69": "cracking", - "70": "crafting", - "71": "cramming", - "72": "crashing", - "73": "crawling", - "74": "crouching", - "75": "crushing", - "76": "crying", - "77": "cuddling", - "78": "cutting", - "79": "dancing", - "80": "descending", - "81": "destroying", - "82": "digging", - "83": "dining", - "84": "dipping", - "85": "discussing", - "86": "diving", - "87": "dragging", - "88": "draining", - "89": "drawing", - "90": "drenching", - "91": "dressing", - "92": "drilling", - "93": "drinking", - "94": "dripping", - "95": "driving", - "96": "dropping", - "97": "drumming", - "98": "drying", - "99": "dunking", - "100": "dusting", - "101": "eating", - "102": "emptying", - "103": "entering", - "104": "erupting", - "105": "exercising", - "106": "exiting", - "107": "extinguishing", - "108": "falling", - "109": "feeding", - "110": "fencing", - "111": "fighting", - "112": "filling", - "113": "filming", - "114": "fishing", - "115": "flicking", - "116": "flipping", - "117": "floating", - "118": "flooding", - "119": "flowing", - "120": "flying", - "121": "folding", - "122": "frowning", - "123": "frying", - "124": "fueling", - "125": "gambling", - "126": "gardening", - "127": "giggling", - "128": "giving", - "129": "grilling", - "130": "grinning", - "131": "gripping", - "132": "grooming", - "133": "guarding", - "134": "hammering", - "135": "handcuffing", - "136": "handwriting", - "137": "hanging", - "138": "hiking", - "139": "hitchhiking", - "140": "hitting", - "141": "howling", - "142": "hugging", - "143": "hunting", - "144": "imitating", - "145": "inflating", - "146": "injecting", - "147": "instructing", - "148": "interviewing", - "149": "jogging", - "150": "joining", - "151": "juggling", - "152": "jumping", - "153": "kicking", - "154": "kissing", - "155": "kneeling", - "156": "knitting", - "157": "knocking", - "158": "landing", - "159": "laughing", - "160": "launching", - "161": "leaking", - "162": "leaning", - "163": "leaping", - "164": "lecturing", - "165": "licking", - "166": "lifting", - "167": "loading", - "168": "locking", - "169": "manicuring", - "170": "marching", - "171": "marrying", - "172": "massaging", - "173": "measuring", - "174": "mopping", - "175": "mowing", - "176": "officiating", - "177": "opening", - "178": "operating", - "179": "overflowing", - "180": "packaging", - "181": "packing", - "182": "painting", - "183": "parading", - "184": "paying", - "185": "pedaling", - "186": "peeling", - "187": "performing", - "188": "photographing", - "189": "picking", - "190": "piloting", - "191": "pitching", - "192": "placing", - "193": "planting", - "194": "playing", - "195": "playing+fun", - "196": "playing+music", - "197": "playing+sports", - "198": "playing+videogames", - "199": "plugging", - "200": "plunging", - "201": "pointing", - "202": "poking", - "203": "pouring", - "204": "praying", - "205": "preaching", - "206": "pressing", - "207": "protesting", - "208": "pulling", - "209": "punching", - "210": "punting", - "211": "pushing", - "212": "putting", - "213": "queuing", - "214": "racing", - "215": "rafting", - "216": "raining", - "217": "raising", - "218": "reaching", - "219": "reading", - "220": "removing", - "221": "repairing", - "222": "resting", - "223": "riding", - "224": "rinsing", - "225": "rising", - "226": "roaring", - "227": "rocking", - "228": "rolling", - "229": "rowing", - "230": "rubbing", - "231": "running", - "232": "sailing", - "233": "saluting", - "234": "sanding", - "235": "sawing", - "236": "scratching", - "237": 
"screwing", - "238": "scrubbing", - "239": "selling", - "240": "serving", - "241": "sewing", - "242": "shaking", - "243": "shaving", - "244": "shooting", - "245": "shopping", - "246": "shouting", - "247": "shoveling", - "248": "shredding", - "249": "shrugging", - "250": "signing", - "251": "singing", - "252": "sitting", - "253": "skating", - "254": "sketching", - "255": "skiing", - "256": "skipping", - "257": "slapping", - "258": "sleeping", - "259": "slicing", - "260": "sliding", - "261": "slipping", - "262": "smashing", - "263": "smelling", - "264": "smiling", - "265": "smoking", - "266": "snapping", - "267": "sneezing", - "268": "sniffing", - "269": "snowing", - "270": "snuggling", - "271": "socializing", - "272": "sowing", - "273": "speaking", - "274": "spilling", - "275": "spinning", - "276": "spitting", - "277": "splashing", - "278": "spraying", - "279": "spreading", - "280": "sprinkling", - "281": "sprinting", - "282": "squatting", - "283": "squinting", - "284": "stacking", - "285": "standing", - "286": "starting", - "287": "stealing", - "288": "steering", - "289": "stirring", - "290": "stitching", - "291": "stomping", - "292": "stopping", - "293": "storming", - "294": "stretching", - "295": "stroking", - "296": "studying", - "297": "submerging", - "298": "surfing", - "299": "sweeping", - "300": "swerving", - "301": "swimming", - "302": "swinging", - "303": "talking", - "304": "taping", - "305": "tapping", - "306": "tattooing", - "307": "teaching", - "308": "tearing", - "309": "telephoning", - "310": "throwing", - "311": "tickling", - "312": "towing", - "313": "trimming", - "314": "tripping", - "315": "tuning", - "316": "turning", - "317": "twisting", - "318": "tying", - "319": "typing", - "320": "unloading", - "321": "unpacking", - "322": "vacuuming", - "323": "waking", - "324": "walking", - "325": "washing", - "326": "watering", - "327": "waving", - "328": "waxing", - "329": "weeding", - "330": "welding", - "331": "wetting", - "332": "whistling", - "333": "winking", - "334": "working", - "335": "wrapping", - "336": "wrestling", - "337": "writing", - "338": "yawning" -} \ No newline at end of file diff --git a/spaces/Andy1621/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_fpn_giou_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_fpn_giou_1x_coco.py deleted file mode 100644 index 5556c4977e221182b013b68fef4b73d1b0605bf3..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_fpn_giou_1x_coco.py +++ /dev/null @@ -1,6 +0,0 @@ -_base_ = './faster_rcnn_r50_fpn_1x_coco.py' -model = dict( - roi_head=dict( - bbox_head=dict( - reg_decoded_bbox=True, - loss_bbox=dict(type='GIoULoss', loss_weight=10.0)))) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/libra_rcnn/README.md b/spaces/Andy1621/uniformer_image_detection/configs/libra_rcnn/README.md deleted file mode 100644 index 1f28087f6ac6ac8ac1a32e5c165959e61fce7353..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/libra_rcnn/README.md +++ /dev/null @@ -1,28 +0,0 @@ -# Libra R-CNN: Towards Balanced Learning for Object Detection - -## Introduction - -[ALGORITHM] - -We provide config files to reproduce the results in the CVPR 2019 paper [Libra R-CNN](https://arxiv.org/pdf/1904.02701.pdf). 
- -``` -@inproceedings{pang2019libra, - title={Libra R-CNN: Towards Balanced Learning for Object Detection}, - author={Pang, Jiangmiao and Chen, Kai and Shi, Jianping and Feng, Huajun and Ouyang, Wanli and Dahua Lin}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - year={2019} -} -``` - -## Results and models - -The results on COCO 2017val are shown in the below table. (results on test-dev are usually slightly higher than val) - -| Architecture | Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download | -|:------------:|:---------------:|:-------:|:-------:|:--------:|:--------------:|:------:|:------:|:--------:| -| Faster R-CNN | R-50-FPN | pytorch | 1x | 4.6 | 19.0 | 38.3 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/libra_rcnn/libra_faster_rcnn_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/libra_rcnn/libra_faster_rcnn_r50_fpn_1x_coco/libra_faster_rcnn_r50_fpn_1x_coco_20200130-3afee3a9.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/libra_rcnn/libra_faster_rcnn_r50_fpn_1x_coco/libra_faster_rcnn_r50_fpn_1x_coco_20200130_204655.log.json) | -| Fast R-CNN | R-50-FPN | pytorch | 1x | | | | | -| Faster R-CNN | R-101-FPN | pytorch | 1x | 6.5 | 14.4 | 40.1 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/libra_rcnn/libra_faster_rcnn_r101_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/libra_rcnn/libra_faster_rcnn_r101_fpn_1x_coco/libra_faster_rcnn_r101_fpn_1x_coco_20200203-8dba6a5a.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/libra_rcnn/libra_faster_rcnn_r101_fpn_1x_coco/libra_faster_rcnn_r101_fpn_1x_coco_20200203_001405.log.json) | -| Faster R-CNN | X-101-64x4d-FPN | pytorch | 1x | 10.8 | 8.5 | 42.7 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/libra_rcnn/libra_faster_rcnn_x101_64x4d_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/libra_rcnn/libra_faster_rcnn_x101_64x4d_fpn_1x_coco/libra_faster_rcnn_x101_64x4d_fpn_1x_coco_20200315-3a7d0488.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/libra_rcnn/libra_faster_rcnn_x101_64x4d_fpn_1x_coco/libra_faster_rcnn_x101_64x4d_fpn_1x_coco_20200315_231625.log.json) | -| RetinaNet | R-50-FPN | pytorch | 1x | 4.2 | 17.7 | 37.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/libra_rcnn/libra_retinanet_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/libra_rcnn/libra_retinanet_r50_fpn_1x_coco/libra_retinanet_r50_fpn_1x_coco_20200205-804d94ce.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/libra_rcnn/libra_retinanet_r50_fpn_1x_coco/libra_retinanet_r50_fpn_1x_coco_20200205_112757.log.json) | diff --git a/spaces/Andy1621/uniformer_image_detection/configs/nas_fpn/retinanet_r50_nasfpn_crop640_50e_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/nas_fpn/retinanet_r50_nasfpn_crop640_50e_coco.py deleted file mode 100644 index 8a2ef260bac24c2a6a849b2492e438d317acf355..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/nas_fpn/retinanet_r50_nasfpn_crop640_50e_coco.py +++ /dev/null @@ -1,79 +0,0 @@ -_base_ = [ - '../_base_/models/retinanet_r50_fpn.py', - '../_base_/datasets/coco_detection.py', '../_base_/default_runtime.py' -] -cudnn_benchmark = True -# model settings -norm_cfg = dict(type='BN', requires_grad=True) -model = dict( - type='RetinaNet', - pretrained='torchvision://resnet50', - 
backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch'), - neck=dict(type='NASFPN', stack_times=7, norm_cfg=norm_cfg), - bbox_head=dict(type='RetinaSepBNHead', num_ins=5, norm_cfg=norm_cfg), - # training and testing settings - train_cfg=dict(assigner=dict(neg_iou_thr=0.5))) -# dataset settings -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict( - type='Resize', - img_scale=(640, 640), - ratio_range=(0.8, 1.2), - keep_ratio=True), - dict(type='RandomCrop', crop_size=(640, 640)), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=(640, 640)), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(640, 640), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=128), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=8, - workers_per_gpu=4, - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) -# optimizer -optimizer = dict( - type='SGD', - lr=0.08, - momentum=0.9, - weight_decay=0.0001, - paramwise_cfg=dict(norm_decay_mult=0, bypass_duplicate=True)) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=1000, - warmup_ratio=0.1, - step=[30, 40]) -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=50) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/datasets/samplers/distributed_sampler.py b/spaces/Andy1621/uniformer_image_detection/mmdet/datasets/samplers/distributed_sampler.py deleted file mode 100644 index cc61019484655ee2829f7908dc442caa20cf1d54..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/datasets/samplers/distributed_sampler.py +++ /dev/null @@ -1,39 +0,0 @@ -import math - -import torch -from torch.utils.data import DistributedSampler as _DistributedSampler - - -class DistributedSampler(_DistributedSampler): - - def __init__(self, - dataset, - num_replicas=None, - rank=None, - shuffle=True, - seed=0): - super().__init__( - dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - # for the compatibility from PyTorch 1.3+ - self.seed = seed if seed is not None else 0 - - def __iter__(self): - # deterministically shuffle based on epoch - if self.shuffle: - g = torch.Generator() - g.manual_seed(self.epoch + self.seed) - indices = torch.randperm(len(self.dataset), generator=g).tolist() - else: - indices = torch.arange(len(self.dataset)).tolist() - - # add extra samples to make it evenly divisible - # in case that indices is shorter than half of total_size - indices = (indices * - math.ceil(self.total_size / len(indices)))[:self.total_size] - assert len(indices) == self.total_size - - # subsample - indices = indices[self.rank:self.total_size:self.num_replicas] - assert len(indices) == self.num_samples - - return iter(indices) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/necks/yolo_neck.py 
b/spaces/Andy1621/uniformer_image_detection/mmdet/models/necks/yolo_neck.py deleted file mode 100644 index c2f9b9ef3859796c284c16ad1a92fe41ecbed613..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/necks/yolo_neck.py +++ /dev/null @@ -1,136 +0,0 @@ -# Copyright (c) 2019 Western Digital Corporation or its affiliates. - -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule - -from ..builder import NECKS - - -class DetectionBlock(nn.Module): - """Detection block in YOLO neck. - - Let out_channels = n, the DetectionBlock contains: - Six ConvLayers, 1 Conv2D Layer and 1 YoloLayer. - The first 6 ConvLayers are formed the following way: - 1x1xn, 3x3x2n, 1x1xn, 3x3x2n, 1x1xn, 3x3x2n. - The Conv2D layer is 1x1x255. - Some block will have branch after the fifth ConvLayer. - The input channel is arbitrary (in_channels) - - Args: - in_channels (int): The number of input channels. - out_channels (int): The number of output channels. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN', requires_grad=True) - act_cfg (dict): Config dict for activation layer. - Default: dict(type='LeakyReLU', negative_slope=0.1). - """ - - def __init__(self, - in_channels, - out_channels, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='LeakyReLU', negative_slope=0.1)): - super(DetectionBlock, self).__init__() - double_out_channels = out_channels * 2 - - # shortcut - cfg = dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) - self.conv1 = ConvModule(in_channels, out_channels, 1, **cfg) - self.conv2 = ConvModule( - out_channels, double_out_channels, 3, padding=1, **cfg) - self.conv3 = ConvModule(double_out_channels, out_channels, 1, **cfg) - self.conv4 = ConvModule( - out_channels, double_out_channels, 3, padding=1, **cfg) - self.conv5 = ConvModule(double_out_channels, out_channels, 1, **cfg) - - def forward(self, x): - tmp = self.conv1(x) - tmp = self.conv2(tmp) - tmp = self.conv3(tmp) - tmp = self.conv4(tmp) - out = self.conv5(tmp) - return out - - -@NECKS.register_module() -class YOLOV3Neck(nn.Module): - """The neck of YOLOV3. - - It can be treated as a simplified version of FPN. It - will take the result from Darknet backbone and do some upsampling and - concatenation. It will finally output the detection result. - - Note: - The input feats should be from top to bottom. - i.e., from high-lvl to low-lvl - But YOLOV3Neck will process them in reversed order. - i.e., from bottom (high-lvl) to top (low-lvl) - - Args: - num_scales (int): The number of scales / stages. - in_channels (int): The number of input channels. - out_channels (int): The number of output channels. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Dictionary to construct and config norm layer. - Default: dict(type='BN', requires_grad=True) - act_cfg (dict): Config dict for activation layer. - Default: dict(type='LeakyReLU', negative_slope=0.1). 
- """ - - def __init__(self, - num_scales, - in_channels, - out_channels, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - act_cfg=dict(type='LeakyReLU', negative_slope=0.1)): - super(YOLOV3Neck, self).__init__() - assert (num_scales == len(in_channels) == len(out_channels)) - self.num_scales = num_scales - self.in_channels = in_channels - self.out_channels = out_channels - - # shortcut - cfg = dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg, act_cfg=act_cfg) - - # To support arbitrary scales, the code looks awful, but it works. - # Better solution is welcomed. - self.detect1 = DetectionBlock(in_channels[0], out_channels[0], **cfg) - for i in range(1, self.num_scales): - in_c, out_c = self.in_channels[i], self.out_channels[i] - self.add_module(f'conv{i}', ConvModule(in_c, out_c, 1, **cfg)) - # in_c + out_c : High-lvl feats will be cat with low-lvl feats - self.add_module(f'detect{i+1}', - DetectionBlock(in_c + out_c, out_c, **cfg)) - - def forward(self, feats): - assert len(feats) == self.num_scales - - # processed from bottom (high-lvl) to top (low-lvl) - outs = [] - out = self.detect1(feats[-1]) - outs.append(out) - - for i, x in enumerate(reversed(feats[:-1])): - conv = getattr(self, f'conv{i+1}') - tmp = conv(out) - - # Cat with low-lvl feats - tmp = F.interpolate(tmp, scale_factor=2) - tmp = torch.cat((tmp, x), 1) - - detect = getattr(self, f'detect{i+2}') - out = detect(tmp) - outs.append(out) - - return tuple(outs) - - def init_weights(self): - """Initialize the weights of module.""" - # init is done in ConvModule - pass diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/hrf.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/hrf.py deleted file mode 100644 index 923203b51377f9344277fc561803d7a78bd2c684..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/datasets/hrf.py +++ /dev/null @@ -1,27 +0,0 @@ -import os.path as osp - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class HRFDataset(CustomDataset): - """HRF dataset. - - In segmentation map annotation for HRF, 0 stands for background, which is - included in 2 categories. ``reduce_zero_label`` is fixed to False. The - ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to - '.png'. 
- """ - - CLASSES = ('background', 'vessel') - - PALETTE = [[120, 120, 120], [6, 230, 230]] - - def __init__(self, **kwargs): - super(HRFDataset, self).__init__( - img_suffix='.png', - seg_map_suffix='.png', - reduce_zero_label=False, - **kwargs) - assert osp.exists(self.img_dir) diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/encoders/modules.py b/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/encoders/modules.py deleted file mode 100644 index 4edd5496b9e668ea72a5be39db9cca94b6a42f9b..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/encoders/modules.py +++ /dev/null @@ -1,213 +0,0 @@ -import torch -import torch.nn as nn -from torch.utils.checkpoint import checkpoint - -from transformers import T5Tokenizer, T5EncoderModel, CLIPTokenizer, CLIPTextModel - -import open_clip -from ldm.util import default, count_params - - -class AbstractEncoder(nn.Module): - def __init__(self): - super().__init__() - - def encode(self, *args, **kwargs): - raise NotImplementedError - - -class IdentityEncoder(AbstractEncoder): - - def encode(self, x): - return x - - -class ClassEmbedder(nn.Module): - def __init__(self, embed_dim, n_classes=1000, key='class', ucg_rate=0.1): - super().__init__() - self.key = key - self.embedding = nn.Embedding(n_classes, embed_dim) - self.n_classes = n_classes - self.ucg_rate = ucg_rate - - def forward(self, batch, key=None, disable_dropout=False): - if key is None: - key = self.key - # this is for use in crossattn - c = batch[key][:, None] - if self.ucg_rate > 0. and not disable_dropout: - mask = 1. - torch.bernoulli(torch.ones_like(c) * self.ucg_rate) - c = mask * c + (1-mask) * torch.ones_like(c)*(self.n_classes-1) - c = c.long() - c = self.embedding(c) - return c - - def get_unconditional_conditioning(self, bs, device="cuda"): - uc_class = self.n_classes - 1 # 1000 classes --> 0 ... 999, one extra class for ucg (class 1000) - uc = torch.ones((bs,), device=device) * uc_class - uc = {self.key: uc} - return uc - - -def disabled_train(self, mode=True): - """Overwrite model.train with this function to make sure train/eval mode - does not change anymore.""" - return self - - -class FrozenT5Embedder(AbstractEncoder): - """Uses the T5 transformer encoder for text""" - def __init__(self, version="google/t5-v1_1-large", device="cuda", max_length=77, freeze=True): # others are google/t5-v1_1-xl and google/t5-v1_1-xxl - super().__init__() - self.tokenizer = T5Tokenizer.from_pretrained(version) - self.transformer = T5EncoderModel.from_pretrained(version) - self.device = device - self.max_length = max_length # TODO: typical value? 
- if freeze: - self.freeze() - - def freeze(self): - self.transformer = self.transformer.eval() - #self.train = disabled_train - for param in self.parameters(): - param.requires_grad = False - - def forward(self, text): - batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True, - return_overflowing_tokens=False, padding="max_length", return_tensors="pt") - tokens = batch_encoding["input_ids"].to(self.device) - outputs = self.transformer(input_ids=tokens) - - z = outputs.last_hidden_state - return z - - def encode(self, text): - return self(text) - - -class FrozenCLIPEmbedder(AbstractEncoder): - """Uses the CLIP transformer encoder for text (from huggingface)""" - LAYERS = [ - "last", - "pooled", - "hidden" - ] - def __init__(self, version="openai/clip-vit-large-patch14", device="cuda", max_length=77, - freeze=True, layer="last", layer_idx=None): # clip-vit-base-patch32 - super().__init__() - assert layer in self.LAYERS - self.tokenizer = CLIPTokenizer.from_pretrained(version) - self.transformer = CLIPTextModel.from_pretrained(version) - self.device = device - self.max_length = max_length - if freeze: - self.freeze() - self.layer = layer - self.layer_idx = layer_idx - if layer == "hidden": - assert layer_idx is not None - assert 0 <= abs(layer_idx) <= 12 - - def freeze(self): - self.transformer = self.transformer.eval() - #self.train = disabled_train - for param in self.parameters(): - param.requires_grad = False - - def forward(self, text): - batch_encoding = self.tokenizer(text, truncation=True, max_length=self.max_length, return_length=True, - return_overflowing_tokens=False, padding="max_length", return_tensors="pt") - tokens = batch_encoding["input_ids"].to(self.device) - outputs = self.transformer(input_ids=tokens, output_hidden_states=self.layer=="hidden") - if self.layer == "last": - z = outputs.last_hidden_state - elif self.layer == "pooled": - z = outputs.pooler_output[:, None, :] - else: - z = outputs.hidden_states[self.layer_idx] - return z - - def encode(self, text): - return self(text) - - -class FrozenOpenCLIPEmbedder(AbstractEncoder): - """ - Uses the OpenCLIP transformer encoder for text - """ - LAYERS = [ - #"pooled", - "last", - "penultimate" - ] - def __init__(self, arch="ViT-H-14", version="laion2b_s32b_b79k", device="cuda", max_length=77, - freeze=True, layer="last"): - super().__init__() - assert layer in self.LAYERS - model, _, _ = open_clip.create_model_and_transforms(arch, device=torch.device('cpu'), pretrained=version) - del model.visual - self.model = model - - self.device = device - self.max_length = max_length - if freeze: - self.freeze() - self.layer = layer - if self.layer == "last": - self.layer_idx = 0 - elif self.layer == "penultimate": - self.layer_idx = 1 - else: - raise NotImplementedError() - - def freeze(self): - self.model = self.model.eval() - for param in self.parameters(): - param.requires_grad = False - - def forward(self, text): - tokens = open_clip.tokenize(text) - z = self.encode_with_transformer(tokens.to(self.device)) - return z - - def encode_with_transformer(self, text): - x = self.model.token_embedding(text) # [batch_size, n_ctx, d_model] - x = x + self.model.positional_embedding - x = x.permute(1, 0, 2) # NLD -> LND - x = self.text_transformer_forward(x, attn_mask=self.model.attn_mask) - x = x.permute(1, 0, 2) # LND -> NLD - x = self.model.ln_final(x) - return x - - def text_transformer_forward(self, x: torch.Tensor, attn_mask = None): - for i, r in 
enumerate(self.model.transformer.resblocks): - if i == len(self.model.transformer.resblocks) - self.layer_idx: - break - if self.model.transformer.grad_checkpointing and not torch.jit.is_scripting(): - x = checkpoint(r, x, attn_mask) - else: - x = r(x, attn_mask=attn_mask) - return x - - def encode(self, text): - return self(text) - - -class FrozenCLIPT5Encoder(AbstractEncoder): - def __init__(self, clip_version="openai/clip-vit-large-patch14", t5_version="google/t5-v1_1-xl", device="cuda", - clip_max_length=77, t5_max_length=77): - super().__init__() - self.clip_encoder = FrozenCLIPEmbedder(clip_version, device, max_length=clip_max_length) - self.t5_encoder = FrozenT5Embedder(t5_version, device, max_length=t5_max_length) - print(f"{self.clip_encoder.__class__.__name__} has {count_params(self.clip_encoder)*1.e-6:.2f} M parameters, " - f"{self.t5_encoder.__class__.__name__} comes with {count_params(self.t5_encoder)*1.e-6:.2f} M params.") - - def encode(self, text): - return self(text) - - def forward(self, text): - clip_z = self.clip_encoder.encode(text) - t5_z = self.t5_encoder.encode(text) - return [clip_z, t5_z] - - diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/distributions/sdist.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/distributions/sdist.py deleted file mode 100644 index 4c25647930c6557d10e8a3ee92b68cfe3a07f7d7..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/distributions/sdist.py +++ /dev/null @@ -1,150 +0,0 @@ -import logging -from typing import Iterable, Set, Tuple - -from pip._internal.build_env import BuildEnvironment -from pip._internal.distributions.base import AbstractDistribution -from pip._internal.exceptions import InstallationError -from pip._internal.index.package_finder import PackageFinder -from pip._internal.metadata import BaseDistribution -from pip._internal.utils.subprocess import runner_with_spinner_message - -logger = logging.getLogger(__name__) - - -class SourceDistribution(AbstractDistribution): - """Represents a source distribution. - - The preparation step for these needs metadata for the packages to be - generated, either using PEP 517 or using the legacy `setup.py egg_info`. - """ - - def get_metadata_distribution(self) -> BaseDistribution: - return self.req.get_dist() - - def prepare_distribution_metadata( - self, - finder: PackageFinder, - build_isolation: bool, - check_build_deps: bool, - ) -> None: - # Load pyproject.toml, to determine whether PEP 517 is to be used - self.req.load_pyproject_toml() - - # Set up the build isolation, if this requirement should be isolated - should_isolate = self.req.use_pep517 and build_isolation - if should_isolate: - # Setup an isolated environment and install the build backend static - # requirements in it. - self._prepare_build_backend(finder) - # Check that if the requirement is editable, it either supports PEP 660 or - # has a setup.py or a setup.cfg. This cannot be done earlier because we need - # to setup the build backend to verify it supports build_editable, nor can - # it be done later, because we want to avoid installing build requirements - # needlessly. Doing it here also works around setuptools generating - # UNKNOWN.egg-info when running get_requires_for_build_wheel on a directory - # without setup.py nor setup.cfg. - self.req.isolated_editable_sanity_check() - # Install the dynamic build requirements. 
- self._install_build_reqs(finder) - # Check if the current environment provides build dependencies - should_check_deps = self.req.use_pep517 and check_build_deps - if should_check_deps: - pyproject_requires = self.req.pyproject_requires - assert pyproject_requires is not None - conflicting, missing = self.req.build_env.check_requirements( - pyproject_requires - ) - if conflicting: - self._raise_conflicts("the backend dependencies", conflicting) - if missing: - self._raise_missing_reqs(missing) - self.req.prepare_metadata() - - def _prepare_build_backend(self, finder: PackageFinder) -> None: - # Isolate in a BuildEnvironment and install the build-time - # requirements. - pyproject_requires = self.req.pyproject_requires - assert pyproject_requires is not None - - self.req.build_env = BuildEnvironment() - self.req.build_env.install_requirements( - finder, pyproject_requires, "overlay", kind="build dependencies" - ) - conflicting, missing = self.req.build_env.check_requirements( - self.req.requirements_to_check - ) - if conflicting: - self._raise_conflicts("PEP 517/518 supported requirements", conflicting) - if missing: - logger.warning( - "Missing build requirements in pyproject.toml for %s.", - self.req, - ) - logger.warning( - "The project does not specify a build backend, and " - "pip cannot fall back to setuptools without %s.", - " and ".join(map(repr, sorted(missing))), - ) - - def _get_build_requires_wheel(self) -> Iterable[str]: - with self.req.build_env: - runner = runner_with_spinner_message("Getting requirements to build wheel") - backend = self.req.pep517_backend - assert backend is not None - with backend.subprocess_runner(runner): - return backend.get_requires_for_build_wheel() - - def _get_build_requires_editable(self) -> Iterable[str]: - with self.req.build_env: - runner = runner_with_spinner_message( - "Getting requirements to build editable" - ) - backend = self.req.pep517_backend - assert backend is not None - with backend.subprocess_runner(runner): - return backend.get_requires_for_build_editable() - - def _install_build_reqs(self, finder: PackageFinder) -> None: - # Install any extra build dependencies that the backend requests. - # This must be done in a second pass, as the pyproject.toml - # dependencies must be installed before we can call the backend. - if ( - self.req.editable - and self.req.permit_editable_wheels - and self.req.supports_pyproject_editable() - ): - build_reqs = self._get_build_requires_editable() - else: - build_reqs = self._get_build_requires_wheel() - conflicting, missing = self.req.build_env.check_requirements(build_reqs) - if conflicting: - self._raise_conflicts("the backend dependencies", conflicting) - self.req.build_env.install_requirements( - finder, missing, "normal", kind="backend dependencies" - ) - - def _raise_conflicts( - self, conflicting_with: str, conflicting_reqs: Set[Tuple[str, str]] - ) -> None: - format_string = ( - "Some build dependencies for {requirement} " - "conflict with {conflicting_with}: {description}." - ) - error_message = format_string.format( - requirement=self.req, - conflicting_with=conflicting_with, - description=", ".join( - f"{installed} is incompatible with {wanted}" - for installed, wanted in sorted(conflicting_reqs) - ), - ) - raise InstallationError(error_message) - - def _raise_missing_reqs(self, missing: Set[str]) -> None: - format_string = ( - "Some build dependencies for {requirement} are missing: {missing}." 
- ) - error_message = format_string.format( - requirement=self.req, missing=", ".join(map(repr, sorted(missing))) - ) - raise InstallationError(error_message) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/actions.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/actions.py deleted file mode 100644 index f72c66e743146c7a5b70a5440e9ab5459f10245b..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/actions.py +++ /dev/null @@ -1,207 +0,0 @@ -# actions.py - -from .exceptions import ParseException -from .util import col - - -class OnlyOnce: - """ - Wrapper for parse actions, to ensure they are only called once. - """ - - def __init__(self, method_call): - from .core import _trim_arity - - self.callable = _trim_arity(method_call) - self.called = False - - def __call__(self, s, l, t): - if not self.called: - results = self.callable(s, l, t) - self.called = True - return results - raise ParseException(s, l, "OnlyOnce obj called multiple times w/out reset") - - def reset(self): - """ - Allow the associated parse action to be called once more. - """ - - self.called = False - - -def match_only_at_col(n): - """ - Helper method for defining parse actions that require matching at - a specific column in the input text. - """ - - def verify_col(strg, locn, toks): - if col(locn, strg) != n: - raise ParseException(strg, locn, "matched token not at column {}".format(n)) - - return verify_col - - -def replace_with(repl_str): - """ - Helper method for common parse actions that simply return - a literal value. Especially useful when used with - :class:`transform_string` (). - - Example:: - - num = Word(nums).set_parse_action(lambda toks: int(toks[0])) - na = one_of("N/A NA").set_parse_action(replace_with(math.nan)) - term = na | num - - term[1, ...].parse_string("324 234 N/A 234") # -> [324, 234, nan, 234] - """ - return lambda s, l, t: [repl_str] - - -def remove_quotes(s, l, t): - """ - Helper parse action for removing quotation marks from parsed - quoted strings. - - Example:: - - # by default, quotation marks are included in parsed results - quoted_string.parse_string("'Now is the Winter of our Discontent'") # -> ["'Now is the Winter of our Discontent'"] - - # use remove_quotes to strip quotation marks from parsed results - quoted_string.set_parse_action(remove_quotes) - quoted_string.parse_string("'Now is the Winter of our Discontent'") # -> ["Now is the Winter of our Discontent"] - """ - return t[0][1:-1] - - -def with_attribute(*args, **attr_dict): - """ - Helper to create a validating parse action to be used with start - tags created with :class:`make_xml_tags` or - :class:`make_html_tags`. Use ``with_attribute`` to qualify - a starting tag with a required attribute value, to avoid false - matches on common tags such as ```` or ``
``. - - Call ``with_attribute`` with a series of attribute names and - values. Specify the list of filter attributes names and values as: - - - keyword arguments, as in ``(align="right")``, or - - as an explicit dict with ``**`` operator, when an attribute - name is also a Python reserved word, as in ``**{"class":"Customer", "align":"right"}`` - - a list of name-value tuples, as in ``(("ns1:class", "Customer"), ("ns2:align", "right"))`` - - For attribute names with a namespace prefix, you must use the second - form. Attribute names are matched insensitive to upper/lower case. - - If just testing for ``class`` (with or without a namespace), use - :class:`with_class`. - - To verify that the attribute exists, but without specifying a value, - pass ``with_attribute.ANY_VALUE`` as the value. - - Example:: - - html = ''' -
<div>
- Some text
- <div type="grid">1 4 0 1 0 </div>
- <div type="graph">1,3 2,3 1,1</div>
- <div>this has no type</div>
- </div>
- - ''' - div,div_end = make_html_tags("div") - - # only match div tag having a type attribute with value "grid" - div_grid = div().set_parse_action(with_attribute(type="grid")) - grid_expr = div_grid + SkipTo(div | div_end)("body") - for grid_header in grid_expr.search_string(html): - print(grid_header.body) - - # construct a match with any div tag having a type attribute, regardless of the value - div_any_type = div().set_parse_action(with_attribute(type=with_attribute.ANY_VALUE)) - div_expr = div_any_type + SkipTo(div | div_end)("body") - for div_header in div_expr.search_string(html): - print(div_header.body) - - prints:: - - 1 4 0 1 0 - - 1 4 0 1 0 - 1,3 2,3 1,1 - """ - if args: - attrs = args[:] - else: - attrs = attr_dict.items() - attrs = [(k, v) for k, v in attrs] - - def pa(s, l, tokens): - for attrName, attrValue in attrs: - if attrName not in tokens: - raise ParseException(s, l, "no matching attribute " + attrName) - if attrValue != with_attribute.ANY_VALUE and tokens[attrName] != attrValue: - raise ParseException( - s, - l, - "attribute {!r} has value {!r}, must be {!r}".format( - attrName, tokens[attrName], attrValue - ), - ) - - return pa - - -with_attribute.ANY_VALUE = object() - - -def with_class(classname, namespace=""): - """ - Simplified version of :class:`with_attribute` when - matching on a div class - made difficult because ``class`` is - a reserved word in Python. - - Example:: - - html = ''' -
<div>
- Some text
- <div class="grid">1 4 0 1 0 </div>
- <div class="graph">1,3 2,3 1,1</div>
- <div>this &lt;div&gt; has no class</div>
- </div>
- - ''' - div,div_end = make_html_tags("div") - div_grid = div().set_parse_action(with_class("grid")) - - grid_expr = div_grid + SkipTo(div | div_end)("body") - for grid_header in grid_expr.search_string(html): - print(grid_header.body) - - div_any_type = div().set_parse_action(with_class(withAttribute.ANY_VALUE)) - div_expr = div_any_type + SkipTo(div | div_end)("body") - for div_header in div_expr.search_string(html): - print(div_header.body) - - prints:: - - 1 4 0 1 0 - - 1 4 0 1 0 - 1,3 2,3 1,1 - """ - classattr = "{}:class".format(namespace) if namespace else "class" - return with_attribute(**{classattr: classname}) - - -# pre-PEP8 compatibility symbols -replaceWith = replace_with -removeQuotes = remove_quotes -withAttribute = with_attribute -withClass = with_class -matchOnlyAtCol = match_only_at_col diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/grit/modeling/text/file_utils.py b/spaces/Awiny/Image2Paragraph/models/grit_src/grit/modeling/text/file_utils.py deleted file mode 100644 index 51918cf3857471e4ffb5b617d73ee8b9eed0989e..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/grit/modeling/text/file_utils.py +++ /dev/null @@ -1,256 +0,0 @@ -# Utilities for working with the local dataset cache. -# This file is adapted from the AllenNLP library at https://github.com/allenai/allennlp -# Copyright by the AllenNLP authors. - -from __future__ import absolute_import, division, print_function, unicode_literals - -import sys -import json -import logging -import os -import shutil -import tempfile -import fnmatch -from functools import wraps -from hashlib import sha256 -from io import open - -import boto3 -import requests -from botocore.exceptions import ClientError -from tqdm import tqdm - -try: - from torch.hub import _get_torch_home - torch_cache_home = _get_torch_home() -except ImportError: - torch_cache_home = os.path.expanduser( - os.getenv('TORCH_HOME', os.path.join( - os.getenv('XDG_CACHE_HOME', '~/.cache'), 'torch'))) -default_cache_path = os.path.join(torch_cache_home, 'pytorch_transformers') - -try: - from urllib.parse import urlparse -except ImportError: - from urlparse import urlparse - -try: - from pathlib import Path - PYTORCH_PRETRAINED_BERT_CACHE = Path( - os.getenv('PYTORCH_PRETRAINED_BERT_CACHE', default_cache_path)) -except (AttributeError, ImportError): - PYTORCH_PRETRAINED_BERT_CACHE = os.getenv('PYTORCH_PRETRAINED_BERT_CACHE', - default_cache_path) - -logger = logging.getLogger(__name__) # pylint: disable=invalid-name - - -def url_to_filename(url, etag=None): - """ - Convert `url` into a hashed filename in a repeatable way. - If `etag` is specified, append its hash to the url's, delimited - by a period. - """ - url_bytes = url.encode('utf-8') - url_hash = sha256(url_bytes) - filename = url_hash.hexdigest() - - if etag: - etag_bytes = etag.encode('utf-8') - etag_hash = sha256(etag_bytes) - filename += '.' + etag_hash.hexdigest() - - return filename - - -def filename_to_url(filename, cache_dir=None): - """ - Return the url and etag (which may be ``None``) stored for `filename`. - Raise ``EnvironmentError`` if `filename` or its stored metadata do not exist. 
- """ - if cache_dir is None: - cache_dir = PYTORCH_PRETRAINED_BERT_CACHE - if sys.version_info[0] == 3 and isinstance(cache_dir, Path): - cache_dir = str(cache_dir) - - cache_path = os.path.join(cache_dir, filename) - if not os.path.exists(cache_path): - raise EnvironmentError("file {} not found".format(cache_path)) - - meta_path = cache_path + '.json' - if not os.path.exists(meta_path): - raise EnvironmentError("file {} not found".format(meta_path)) - - with open(meta_path, encoding="utf-8") as meta_file: - metadata = json.load(meta_file) - url = metadata['url'] - etag = metadata['etag'] - - return url, etag - - -def cached_path(url_or_filename, cache_dir=None): - """ - Given something that might be a URL (or might be a local path), - determine which. If it's a URL, download the file and cache it, and - return the path to the cached file. If it's already a local path, - make sure the file exists and then return the path. - """ - if cache_dir is None: - cache_dir = PYTORCH_PRETRAINED_BERT_CACHE - if sys.version_info[0] == 3 and isinstance(url_or_filename, Path): - url_or_filename = str(url_or_filename) - if sys.version_info[0] == 3 and isinstance(cache_dir, Path): - cache_dir = str(cache_dir) - - parsed = urlparse(url_or_filename) - - if parsed.scheme in ('http', 'https', 's3'): - # URL, so get it from the cache (downloading if necessary) - return get_from_cache(url_or_filename, cache_dir) - elif os.path.exists(url_or_filename): - # File, and it exists. - return url_or_filename - elif parsed.scheme == '': - # File, but it doesn't exist. - raise EnvironmentError("file {} not found".format(url_or_filename)) - else: - # Something unknown - raise ValueError("unable to parse {} as a URL or as a local path".format(url_or_filename)) - - -def split_s3_path(url): - """Split a full s3 path into the bucket name and path.""" - parsed = urlparse(url) - if not parsed.netloc or not parsed.path: - raise ValueError("bad s3 path {}".format(url)) - bucket_name = parsed.netloc - s3_path = parsed.path - # Remove '/' at beginning of path. - if s3_path.startswith("/"): - s3_path = s3_path[1:] - return bucket_name, s3_path - - -def s3_request(func): - """ - Wrapper function for s3 requests in order to create more helpful error - messages. - """ - - @wraps(func) - def wrapper(url, *args, **kwargs): - try: - return func(url, *args, **kwargs) - except ClientError as exc: - if int(exc.response["Error"]["Code"]) == 404: - raise EnvironmentError("file {} not found".format(url)) - else: - raise - - return wrapper - - -@s3_request -def s3_etag(url): - """Check ETag on S3 object.""" - s3_resource = boto3.resource("s3") - bucket_name, s3_path = split_s3_path(url) - s3_object = s3_resource.Object(bucket_name, s3_path) - return s3_object.e_tag - - -@s3_request -def s3_get(url, temp_file): - """Pull a file directly from S3.""" - s3_resource = boto3.resource("s3") - bucket_name, s3_path = split_s3_path(url) - s3_resource.Bucket(bucket_name).download_fileobj(s3_path, temp_file) - - -def http_get(url, temp_file): - req = requests.get(url, stream=True) - content_length = req.headers.get('Content-Length') - total = int(content_length) if content_length is not None else None - progress = tqdm(unit="B", total=total) - for chunk in req.iter_content(chunk_size=1024): - if chunk: # filter out keep-alive new chunks - progress.update(len(chunk)) - temp_file.write(chunk) - progress.close() - - -def get_from_cache(url, cache_dir=None): - """ - Given a URL, look for the corresponding dataset in the local cache. 
- If it's not there, download it. Then return the path to the cached file. - """ - if cache_dir is None: - cache_dir = PYTORCH_PRETRAINED_BERT_CACHE - if sys.version_info[0] == 3 and isinstance(cache_dir, Path): - cache_dir = str(cache_dir) - if sys.version_info[0] == 2 and not isinstance(cache_dir, str): - cache_dir = str(cache_dir) - - if not os.path.exists(cache_dir): - os.makedirs(cache_dir) - - # Get eTag to add to filename, if it exists. - if url.startswith("s3://"): - etag = s3_etag(url) - else: - try: - response = requests.head(url, allow_redirects=True) - if response.status_code != 200: - etag = None - else: - etag = response.headers.get("ETag") - except EnvironmentError: - etag = None - - if sys.version_info[0] == 2 and etag is not None: - etag = etag.decode('utf-8') - filename = url_to_filename(url, etag) - - # get cache path to put the file - cache_path = os.path.join(cache_dir, filename) - - # If we don't have a connection (etag is None) and can't identify the file - # try to get the last downloaded one - if not os.path.exists(cache_path) and etag is None: - matching_files = fnmatch.filter(os.listdir(cache_dir), filename + '.*') - matching_files = list(filter(lambda s: not s.endswith('.json'), matching_files)) - if matching_files: - cache_path = os.path.join(cache_dir, matching_files[-1]) - - if not os.path.exists(cache_path): - # Download to temporary file, then copy to cache dir once finished. - # Otherwise you get corrupt cache entries if the download gets interrupted. - with tempfile.NamedTemporaryFile() as temp_file: - logger.info("%s not found in cache, downloading to %s", url, temp_file.name) - - # GET file object - if url.startswith("s3://"): - s3_get(url, temp_file) - else: - http_get(url, temp_file) - - # we are copying the file before closing it, so flush to avoid truncation - temp_file.flush() - # shutil.copyfileobj() starts at the current position, so go to the start - temp_file.seek(0) - - logger.info("copying %s to cache at %s", temp_file.name, cache_path) - with open(cache_path, 'wb') as cache_file: - shutil.copyfileobj(temp_file, cache_file) - - logger.info("creating metadata file for %s", cache_path) - meta = {'url': url, 'etag': etag} - meta_path = cache_path + '.json' - with open(meta_path, 'w') as meta_file: - output_string = json.dumps(meta) - meta_file.write(output_string) - - logger.info("removing temp file %s", temp_file.name) - - return cache_path diff --git a/spaces/BIOML-SVM/SVM/README.md b/spaces/BIOML-SVM/SVM/README.md deleted file mode 100644 index 40dc07f6508f29b4da979416643bae94b7c96574..0000000000000000000000000000000000000000 --- a/spaces/BIOML-SVM/SVM/README.md +++ /dev/null @@ -1,54 +0,0 @@ ---- -# https://huggingface.co/docs/hub/spaces-config-reference -title: SVM -emoji: 🧬 -colorFrom: green -colorTo: green -sdk: gradio -app_file: app.py -pinned: false -models: - - InstaDeepAI/nucleotide-transformer-500m-1000g - - facebook/esmfold_v1 - - sentence-transformers/all-mpnet-base-v2 -python_version: 3.10.4 -license: mit ---- - -# ProteinBind - -[![View on GitHub](https://img.shields.io/badge/-View%20on%20GitHub-000?style=flat&logo=github&logoColor=white&link=https://github.com/svm-ai/svm-hackathon)](https://github.com/svm-ai/svm-hackathon) - -## ML-Driven Bioinformatics for Protein Mutation Analysis - -This repository contains the source code and resources for our bioinformatics project aimed at identifying how gene/protein mutations alter function and which mutations can be pathogenic. 
Our approach is ML-driven and utilizes a multimodal contrastive learning framework, inspired by the ImageBind model by MetaAI. - -## Project Goal - -Our goal is to develop a method that can predict the effect of sequence variation on the function of genes/proteins. This information is critical for understanding gene/protein function, designing new proteins, and aiding in drug discovery. By modeling these effects, we can better select patients for clinical trials and modify existing drug-like molecules to treat previously untreated populations of the same disease with different mutations. - -## Model Description - -Our model uses contrastive learning across several modalities including amino acid (AA) sequences, Gene Ontology (GO) annotations, multiple sequence alignment (MSA), 3D structure, text annotations, and DNA sequences. - -We utilize the following encoders for each modality: - -- AA sequences: ESM v1/v2 by MetaAI -- Text annotations: Sentence-BERT (SBERT) -- 3D structure: ESMFold by MetaAI -- DNA nucleotide sequence: Nucleotide-Transformer -- MSA sequence: MSA-transformer - - -The NT-Xent loss function is used for contrastive learning. - -## Getting Started - -Clone the repository and install the necessary dependencies. Note that we will assume you have already installed Git Large File Storage (Git LFS) as some files in this repository are tracked using Git LFS. - -## Contributing -Contributions are welcome! Please read the contributing guidelines before getting started. - -## License - -This project is licensed under the terms of the MIT license. \ No newline at end of file diff --git a/spaces/Banbri/zcvzcv/Dockerfile b/spaces/Banbri/zcvzcv/Dockerfile deleted file mode 100644 index 91319be9b3dd35d916d18fba5260f51125c46b50..0000000000000000000000000000000000000000 --- a/spaces/Banbri/zcvzcv/Dockerfile +++ /dev/null @@ -1,65 +0,0 @@ -FROM node:18-alpine AS base - -# Install dependencies only when needed -FROM base AS deps -# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed. -RUN apk add --no-cache libc6-compat -WORKDIR /app - -# Install dependencies based on the preferred package manager -COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./ -RUN \ - if [ -f yarn.lock ]; then yarn --frozen-lockfile; \ - elif [ -f package-lock.json ]; then npm ci; \ - elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \ - else echo "Lockfile not found." && exit 1; \ - fi - -# Uncomment the following lines if you want to use a secret at buildtime, -# for example to access your private npm packages -# RUN --mount=type=secret,id=HF_EXAMPLE_SECRET,mode=0444,required=true \ -# $(cat /run/secrets/HF_EXAMPLE_SECRET) - -# Rebuild the source code only when needed -FROM base AS builder -WORKDIR /app -COPY --from=deps /app/node_modules ./node_modules -COPY . . - -# Next.js collects completely anonymous telemetry data about general usage. -# Learn more here: https://nextjs.org/telemetry -# Uncomment the following line in case you want to disable telemetry during the build. -# ENV NEXT_TELEMETRY_DISABLED 1 - -# RUN yarn build - -# If you use yarn, comment out this line and use the line above -RUN npm run build - -# Production image, copy all the files and run next -FROM base AS runner -WORKDIR /app - -ENV NODE_ENV production -# Uncomment the following line in case you want to disable telemetry during runtime. 
-# ENV NEXT_TELEMETRY_DISABLED 1 - -RUN addgroup --system --gid 1001 nodejs -RUN adduser --system --uid 1001 nextjs - -COPY --from=builder /app/public ./public - -# Automatically leverage output traces to reduce image size -# https://nextjs.org/docs/advanced-features/output-file-tracing -COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./ -COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static -COPY --from=builder --chown=nextjs:nodejs /app/.next/cache ./.next/cache -# COPY --from=builder --chown=nextjs:nodejs /app/.next/cache/fetch-cache ./.next/cache/fetch-cache - -USER nextjs - -EXPOSE 3000 - -ENV PORT 3000 - -CMD ["node", "server.js"] \ No newline at end of file diff --git a/spaces/Banbri/zcvzcv/src/lib/pick.ts b/spaces/Banbri/zcvzcv/src/lib/pick.ts deleted file mode 100644 index 48dc2995f08d8c3774a9b7b35b808064313361a7..0000000000000000000000000000000000000000 --- a/spaces/Banbri/zcvzcv/src/lib/pick.ts +++ /dev/null @@ -1,2 +0,0 @@ - -export const pick = (items: string[]) => items[Math.floor(Math.random()*items.length)] diff --git a/spaces/Basil2k4/VPSnguyenmanh/src/create_user_and_fix_permissions.sh b/spaces/Basil2k4/VPSnguyenmanh/src/create_user_and_fix_permissions.sh deleted file mode 100644 index 285e103126230bb8c848c31dcd46f8e9fffc1d59..0000000000000000000000000000000000000000 --- a/spaces/Basil2k4/VPSnguyenmanh/src/create_user_and_fix_permissions.sh +++ /dev/null @@ -1,47 +0,0 @@ -#!/bin/bash -## Creates an ordinary non-root VNC_USER and calls the script to fix the file permissions - -### every exit != 0 fails the script -set -e -set -u - -UNAME=0 -UGROUP=0 - -if [[ -n "${VNC_USER}" ]] ; then - case "$VNC_USER" in - root|0) UNAME=root; UGROUP=$UNAME;; # exact match - root:*|0:*) UNAME=root; UGROUP=$UNAME;; # match from the beginning - *:root|*:0) UNAME=root; UGROUP=$UNAME;; # match at the end - *) UNAME=${VNC_USER/%:*/}; UGROUP=${VNC_USER/#*:/};; # else case - esac - - if [[ "$UGROUP" != "" && "$UGROUP" != "root" ]] ; then - - ### Creates the group only if it does not exist yet - echo "Creating group $UGROUP if needed" - groupadd -f $UGROUP - - ### Returns "0" if the user exists, or "1" otherwise - missing_user=$(id -u $UNAME > /dev/null 2>&1; echo $?) - - if [[ $missing_user != 0 ]] ; then - echo "Creating non-root user \"$VNC_USER\"." - useradd --no-log-init --gid $UGROUP --home-dir $HOME --shell /bin/bash --password $VNC_PW $UNAME - fi - else - echo "Will not create root user \"$VNC_USER\"." - fi -fi - -FIXING="Fixing permissions: " - -for var in "$@" -do - echo "$FIXING $var" - find "$var"/ -name '*.sh' -exec chmod a+x {} + - find "$var"/ -name '*.desktop' -exec chmod a+x {} + - - ### folder and its content belong to the group zero (recursively) - chgrp -R 0 "$var" && chmod -R -v a+rw "$var" && find "$var" -type d -exec chmod -v a+x {} + -done diff --git a/spaces/Benson/text-generation/Examples/Banderas De Pases.md b/spaces/Benson/text-generation/Examples/Banderas De Pases.md deleted file mode 100644 index e22753119be20667145267ac0c71a3e9af3c459b..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Banderas De Pases.md +++ /dev/null @@ -1,83 +0,0 @@ - -

UnlockGo Crack Descargar: ¿Vale la pena?

-

Si alguna vez has olvidado tu contraseña, PIN, patrón o cara ID en tu iPhone o dispositivo Android, sabes lo frustrante que puede ser. Puede perder el acceso a sus datos, aplicaciones, contactos, fotos y más. También puede enfrentar el problema de bloqueo de activación de iCloud o bloqueo de Google FRP, que le impide configurar su dispositivo después de un restablecimiento de fábrica.

-

banderas de países


Download Filehttps://bltlly.com/2v6KJ1



-

Afortunadamente, hay una solución que puede ayudarle a evitar estos bloqueos y recuperar el control de su dispositivo. Se llama UnlockGo, y es una poderosa herramienta que puede eliminar varios tipos de bloqueos en dispositivos iOS y Android sin ninguna contraseña o pérdida de datos.

-

Pero lo que si usted no quiere pagar por la versión completa de UnlockGo? Puede sentirse tentado a buscar un crack, que es una versión modificada del software que evita la verificación de la licencia y le permite usarla de forma gratuita. Sin embargo, esto no es una buena idea, ya que hay muchos riesgos y desventajas de usar una grieta. En este artículo, vamos a explicar por qué usted debe evitar el uso de una grieta y cómo se puede obtener el mejor valor de UnlockGo.

-

¿Cuáles son los riesgos de usar una grieta?

-

Una grieta puede parecer una manera fácil de ahorrar dinero, pero viene con muchas desventajas y peligros. Aquí están algunos de ellos:

-
    -
• Device slowdown: The crack may contain malicious code that can infect your device and make it run slower or crash completely.
  • -
• Virus risk: The websites that offer cracks may also contain viruses that can damage your device or steal your personal information.
  • -
• Malware: The crack may also install malware on your device that can spy on your activities, display ads, or redirect you to unwanted websites.
  • -
• Privacy violation: The crack may also access your data and send it to third parties without your consent.
  • - -
• Lack of support: The crack may not work properly or may cause errors that you cannot fix. You will not be able to contact UnlockGo's customer support or get any refund.
  • -
• Legal issues: The crack may violate UnlockGo's terms and conditions and infringe its intellectual property rights. You may face legal consequences if you are caught using a crack.
  • -
-

How to download UnlockGo from the official website?

-

The best way to download UnlockGo is from its official website: https://itoolab.com/unlock-iphone/. That way, you can be sure you are getting the original, latest version of the software, which is safe and reliable. You can also enjoy the following benefits:

-

-
    -
• Free trial: You can try UnlockGo for free before buying it. You can use it to scan your device and see whether it can unlock it.
  • -
• Affordable price: You can buy UnlockGo for a reasonable price that is much cheaper than buying a new device or paying for a repair service. - Money-back guarantee: You can get a full refund within 30 days if you are not satisfied with UnlockGo or if it fails to unlock your device.
  • -
• Lifetime updates: You can get free, unlimited updates for UnlockGo as long as you have a valid license.
  • -
• 24/7 support: You can contact UnlockGo's customer support at any time by email or live chat. They will help you with any issue or question you may have.
  • -
-

To download UnlockGo from the official website, follow these simple steps:

-
    -
1. Visit the website
2. Run the installer and follow the instructions to install UnlockGo on your computer.
  3. - -
4. Choose the mode that fits your situation (Unlock Screen Passcode, Unlock Apple ID, Bypass MDM, or Bypass Screen Time).
  5. -
6. Click the "Start" button and follow the on-screen steps to unlock your device.
  7. -
-

How to use UnlockGo to remove various locks on iOS and Android devices?

-

UnlockGo is a versatile tool that can remove different types of locks on iOS and Android devices. The table below summarizes UnlockGo's features and functions:

-

| Feature | Function |
| --- | --- |
| Unlock Screen Passcode | This feature can remove any screen lock on your iPhone or iPad, such as a passcode, PIN, pattern, Touch ID, or Face ID. It can also remove the iCloud Activation Lock or the Google FRP lock that prevents you from setting up your device after a factory reset. It works for all iOS and Android devices and versions. |
| Unlock Apple ID | This feature can remove the Apple ID and iCloud account from your iPhone or iPad without a password. It can also turn off Find My iPhone and erase all data associated with the Apple ID. It works for iOS devices running iOS 11.4 or earlier, or devices that have been jailbroken. |
| Bypass MDM | This feature can bypass the Mobile Device Management (MDM) lock that an organization or school uses to restrict your iPhone or iPad. It can also remove the MDM profile and settings from your device. It works for all iOS devices and versions. |
| Bypass Screen Time | This feature can bypass the Screen Time passcode that limits the use of apps and features on your iPhone or iPad. It can also remove the Screen Time settings and data from your device. It works for all iOS devices and versions. |


How to avoid the risks of using a crack and get the best value out of UnlockGo?

-

As we have seen, using a crack is not worth it: it exposes you to many risks and drawbacks. Instead, you should use the official version of UnlockGo, which is safe, reliable, and effective. Here are some tips on how to get the most out of UnlockGo:

-
    -
• Check compatibility: Before using UnlockGo, make sure your device model and system version are supported by the software. You can check the compatibility list on the official website or contact customer support if you are not sure.
  • -
• Back up your data: Although UnlockGo does not cause any data loss in most cases, it is still recommended that you back up your data before using it. You can use iTunes, iCloud, Google Drive, or any other backup tool to save your data.
  • -
• Follow the instructions carefully: When using UnlockGo, make sure to follow the on-screen instructions carefully. Do not disconnect your device or close the software during the unlocking process. If you run into any error or problem, do not panic; contact customer support for help.
  • -
• Use a coupon code: If you want to save some money when buying UnlockGo, you can use a coupon code that gives you a discount. You can find coupon codes on various websites and social media platforms that promote UnlockGo. - Subscribe to the newsletter: If you want the latest news and updates about UnlockGo, you can subscribe to the newsletter on the official website. You will also receive exclusive offers and discounts from time to time.
  • -
-

Conclusion

- -

However, as this article has shown, you should avoid using a crack, as it is risky, illegal, and ineffective. It can damage your device, compromise your privacy, and cause errors and problems. You should always download UnlockGo from the official website and use it according to the instructions.

-

If you want to try UnlockGo for yourself, you can download it from here: https://itoolab.com/unlock-iphone/. You will be surprised at how easily and quickly it can unlock your device.

-

Frequently Asked Questions

-

Is UnlockGo safe and legitimate?

-

Yes, UnlockGo is safe and legitimate. It does not contain any virus or malware that could harm your device or data. It also does not access or share your personal information without your permission. It is trustworthy software relied on by millions of users around the world.

-

Is UnlockGo compatible with all iOS and Android devices and versions?

-

Yes, UnlockGo is compatible with all iOS and Android devices and versions. It can unlock iPhone, iPad, iPod touch, Samsung, Huawei, LG, Motorola, Sony, HTC, and other devices. It can also unlock iOS 14, iOS 13, iOS 12, iOS 11, Android 11, Android 10, Android 9, and other versions.

-

How long does it take to unlock a device with UnlockGo?

-

The unlocking time depends on the type of lock and the device model. Generally, it takes only a few minutes to unlock a device with UnlockGo. However, some locks may require more time or more steps to remove. For example, removing the iCloud Activation Lock or Google FRP lock may require downloading firmware or entering recovery mode.

-

What if I run into a problem while using UnlockGo?

-

If you run into any problem while using UnlockGo, you can contact UnlockGo's customer support by email or live chat. They will help you resolve the issue as soon as possible. You can also check the user guide or the FAQ section on the official website for more information.

-

How can I contact UnlockGo's customer support?

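You can reach UnlockGo's customer support at any time by email or through the live chat on the official website, and the team will help you with any issue or question you may have.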
-
-
\ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/packaging/markers.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/packaging/markers.py deleted file mode 100644 index 540e7a4dc79d02a820e291b57c43335d5aa25a41..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/packaging/markers.py +++ /dev/null @@ -1,304 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. - -import operator -import os -import platform -import sys -from typing import Any, Callable, Dict, List, Optional, Tuple, Union - -from pip._vendor.pyparsing import ( # noqa: N817 - Forward, - Group, - Literal as L, - ParseException, - ParseResults, - QuotedString, - ZeroOrMore, - stringEnd, - stringStart, -) - -from .specifiers import InvalidSpecifier, Specifier - -__all__ = [ - "InvalidMarker", - "UndefinedComparison", - "UndefinedEnvironmentName", - "Marker", - "default_environment", -] - -Operator = Callable[[str, str], bool] - - -class InvalidMarker(ValueError): - """ - An invalid marker was found, users should refer to PEP 508. - """ - - -class UndefinedComparison(ValueError): - """ - An invalid operation was attempted on a value that doesn't support it. - """ - - -class UndefinedEnvironmentName(ValueError): - """ - A name was attempted to be used that does not exist inside of the - environment. - """ - - -class Node: - def __init__(self, value: Any) -> None: - self.value = value - - def __str__(self) -> str: - return str(self.value) - - def __repr__(self) -> str: - return f"<{self.__class__.__name__}('{self}')>" - - def serialize(self) -> str: - raise NotImplementedError - - -class Variable(Node): - def serialize(self) -> str: - return str(self) - - -class Value(Node): - def serialize(self) -> str: - return f'"{self}"' - - -class Op(Node): - def serialize(self) -> str: - return str(self) - - -VARIABLE = ( - L("implementation_version") - | L("platform_python_implementation") - | L("implementation_name") - | L("python_full_version") - | L("platform_release") - | L("platform_version") - | L("platform_machine") - | L("platform_system") - | L("python_version") - | L("sys_platform") - | L("os_name") - | L("os.name") # PEP-345 - | L("sys.platform") # PEP-345 - | L("platform.version") # PEP-345 - | L("platform.machine") # PEP-345 - | L("platform.python_implementation") # PEP-345 - | L("python_implementation") # undocumented setuptools legacy - | L("extra") # PEP-508 -) -ALIASES = { - "os.name": "os_name", - "sys.platform": "sys_platform", - "platform.version": "platform_version", - "platform.machine": "platform_machine", - "platform.python_implementation": "platform_python_implementation", - "python_implementation": "platform_python_implementation", -} -VARIABLE.setParseAction(lambda s, l, t: Variable(ALIASES.get(t[0], t[0]))) - -VERSION_CMP = ( - L("===") | L("==") | L(">=") | L("<=") | L("!=") | L("~=") | L(">") | L("<") -) - -MARKER_OP = VERSION_CMP | L("not in") | L("in") -MARKER_OP.setParseAction(lambda s, l, t: Op(t[0])) - -MARKER_VALUE = QuotedString("'") | QuotedString('"') -MARKER_VALUE.setParseAction(lambda s, l, t: Value(t[0])) - -BOOLOP = L("and") | L("or") - -MARKER_VAR = VARIABLE | MARKER_VALUE - -MARKER_ITEM = Group(MARKER_VAR + MARKER_OP + MARKER_VAR) -MARKER_ITEM.setParseAction(lambda s, l, t: tuple(t[0])) - -LPAREN = L("(").suppress() -RPAREN = L(")").suppress() - -MARKER_EXPR = 
Forward() -MARKER_ATOM = MARKER_ITEM | Group(LPAREN + MARKER_EXPR + RPAREN) -MARKER_EXPR << MARKER_ATOM + ZeroOrMore(BOOLOP + MARKER_EXPR) - -MARKER = stringStart + MARKER_EXPR + stringEnd - - -def _coerce_parse_result(results: Union[ParseResults, List[Any]]) -> List[Any]: - if isinstance(results, ParseResults): - return [_coerce_parse_result(i) for i in results] - else: - return results - - -def _format_marker( - marker: Union[List[str], Tuple[Node, ...], str], first: Optional[bool] = True -) -> str: - - assert isinstance(marker, (list, tuple, str)) - - # Sometimes we have a structure like [[...]] which is a single item list - # where the single item is itself it's own list. In that case we want skip - # the rest of this function so that we don't get extraneous () on the - # outside. - if ( - isinstance(marker, list) - and len(marker) == 1 - and isinstance(marker[0], (list, tuple)) - ): - return _format_marker(marker[0]) - - if isinstance(marker, list): - inner = (_format_marker(m, first=False) for m in marker) - if first: - return " ".join(inner) - else: - return "(" + " ".join(inner) + ")" - elif isinstance(marker, tuple): - return " ".join([m.serialize() for m in marker]) - else: - return marker - - -_operators: Dict[str, Operator] = { - "in": lambda lhs, rhs: lhs in rhs, - "not in": lambda lhs, rhs: lhs not in rhs, - "<": operator.lt, - "<=": operator.le, - "==": operator.eq, - "!=": operator.ne, - ">=": operator.ge, - ">": operator.gt, -} - - -def _eval_op(lhs: str, op: Op, rhs: str) -> bool: - try: - spec = Specifier("".join([op.serialize(), rhs])) - except InvalidSpecifier: - pass - else: - return spec.contains(lhs) - - oper: Optional[Operator] = _operators.get(op.serialize()) - if oper is None: - raise UndefinedComparison(f"Undefined {op!r} on {lhs!r} and {rhs!r}.") - - return oper(lhs, rhs) - - -class Undefined: - pass - - -_undefined = Undefined() - - -def _get_env(environment: Dict[str, str], name: str) -> str: - value: Union[str, Undefined] = environment.get(name, _undefined) - - if isinstance(value, Undefined): - raise UndefinedEnvironmentName( - f"{name!r} does not exist in evaluation environment." 
- ) - - return value - - -def _evaluate_markers(markers: List[Any], environment: Dict[str, str]) -> bool: - groups: List[List[bool]] = [[]] - - for marker in markers: - assert isinstance(marker, (list, tuple, str)) - - if isinstance(marker, list): - groups[-1].append(_evaluate_markers(marker, environment)) - elif isinstance(marker, tuple): - lhs, op, rhs = marker - - if isinstance(lhs, Variable): - lhs_value = _get_env(environment, lhs.value) - rhs_value = rhs.value - else: - lhs_value = lhs.value - rhs_value = _get_env(environment, rhs.value) - - groups[-1].append(_eval_op(lhs_value, op, rhs_value)) - else: - assert marker in ["and", "or"] - if marker == "or": - groups.append([]) - - return any(all(item) for item in groups) - - -def format_full_version(info: "sys._version_info") -> str: - version = "{0.major}.{0.minor}.{0.micro}".format(info) - kind = info.releaselevel - if kind != "final": - version += kind[0] + str(info.serial) - return version - - -def default_environment() -> Dict[str, str]: - iver = format_full_version(sys.implementation.version) - implementation_name = sys.implementation.name - return { - "implementation_name": implementation_name, - "implementation_version": iver, - "os_name": os.name, - "platform_machine": platform.machine(), - "platform_release": platform.release(), - "platform_system": platform.system(), - "platform_version": platform.version(), - "python_full_version": platform.python_version(), - "platform_python_implementation": platform.python_implementation(), - "python_version": ".".join(platform.python_version_tuple()[:2]), - "sys_platform": sys.platform, - } - - -class Marker: - def __init__(self, marker: str) -> None: - try: - self._markers = _coerce_parse_result(MARKER.parseString(marker)) - except ParseException as e: - raise InvalidMarker( - f"Invalid marker: {marker!r}, parse error at " - f"{marker[e.loc : e.loc + 8]!r}" - ) - - def __str__(self) -> str: - return _format_marker(self._markers) - - def __repr__(self) -> str: - return f"<Marker('{self}')>" - - def evaluate(self, environment: Optional[Dict[str, str]] = None) -> bool: - """Evaluate a marker. - - Return the boolean from evaluating the given marker against the - environment. environment is an optional argument to override all or - part of the determined environment. - - The environment is determined from the current Python process. 
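- Example: Marker('python_version >= "3.8"').evaluate() is True on a 3.8-or-newer interpreter and False otherwise. 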
- """ - current_environment = default_environment() - if environment is not None: - current_environment.update(environment) - - return _evaluate_markers(self._markers, current_environment) diff --git a/spaces/BridgeTower/bridgetower-video-search/app.py b/spaces/BridgeTower/bridgetower-video-search/app.py deleted file mode 100644 index 0bb604cfc73784f7fac70cac67d82cd54c8ba943..0000000000000000000000000000000000000000 --- a/spaces/BridgeTower/bridgetower-video-search/app.py +++ /dev/null @@ -1,341 +0,0 @@ -import os -import cv2 -import gradio as gr -import numpy as np -import json -import pickle -from PIL import Image -import torch -from torch.nn.utils.rnn import pad_sequence -from transformers import BridgeTowerProcessor -from tqdm import tqdm - -from bridgetower_custom import BridgeTowerTextFeatureExtractor, BridgeTowerForITC - -import faiss -import webvtt - -from pytube import YouTube -from youtube_transcript_api import YouTubeTranscriptApi -from youtube_transcript_api.formatters import WebVTTFormatter - -if torch.cuda.is_available(): - device = 'cuda' -else: - device = 'cpu' -model_name = 'BridgeTower/bridgetower-large-itm-mlm-itc' -model = BridgeTowerForITC.from_pretrained(model_name).to(device) -text_model = BridgeTowerTextFeatureExtractor.from_pretrained(model_name).to(device) - -processor = BridgeTowerProcessor.from_pretrained(model_name) - - -def download_video(video_url, path='/tmp/'): - - yt = YouTube(video_url) - yt = yt.streams.filter(progressive=True, file_extension='mp4').order_by('resolution').desc().first() - if not os.path.exists(path): - os.makedirs(path) - filepath = os.path.join(path, yt.default_filename) - if not os.path.exists(filepath): - print('Downloading video from YouTube...') - yt.download(path) - return filepath - - -# Get transcript in webvtt -def get_transcript_vtt(video_id, path='/tmp'): - filepath = os.path.join(path,'test_vm.vtt') - if os.path.exists(filepath): - return filepath - - transcript = YouTubeTranscriptApi.get_transcript(video_id) - formatter = WebVTTFormatter() - webvtt_formatted = formatter.format_transcript(transcript) - - with open(filepath, 'w', encoding='utf-8') as webvtt_file: - webvtt_file.write(webvtt_formatted) - webvtt_file.close() - - return filepath - -# https://stackoverflow.com/a/57781047 -# Resizes a image and maintains aspect ratio -def maintain_aspect_ratio_resize(image, width=None, height=None, inter=cv2.INTER_AREA): - # Grab the image size and initialize dimensions - dim = None - (h, w) = image.shape[:2] - - # Return original image if no need to resize - if width is None and height is None: - return image - - # We are resizing height if width is none - if width is None: - # Calculate the ratio of the height and construct the dimensions - r = height / float(h) - dim = (int(w * r), height) - # We are resizing width if height is none - else: - # Calculate the ratio of the width and construct the dimensions - r = width / float(w) - dim = (width, int(h * r)) - - # Return the resized image - return cv2.resize(image, dim, interpolation=inter) - -def time_to_frame(time, fps): - ''' - convert time in seconds into frame number - ''' - return int(time * fps - 1) - -def str2time(strtime): - strtime = strtime.strip('"') - hrs, mins, seconds = [float(c) for c in strtime.split(':')] - - total_seconds = hrs * 60**2 + mins * 60 + seconds - - return total_seconds - -def collate_fn(batch_list): - batch = {} - batch['input_ids'] = pad_sequence([encoding['input_ids'].squeeze(0) for encoding in batch_list], batch_first=True) - 
batch['attention_mask'] = pad_sequence([encoding['attention_mask'].squeeze(0) for encoding in batch_list], batch_first=True) - batch['pixel_values'] = torch.cat([encoding['pixel_values'] for encoding in batch_list], dim=0) - batch['pixel_mask'] = torch.cat([encoding['pixel_mask'] for encoding in batch_list], dim=0) - return batch - -def extract_images_and_embeds(video_id, video_path, subtitles, output, expanded=False, batch_size=2, progress=gr.Progress()): - if os.path.exists(os.path.join(output, 'embeddings.pkl')): - return - - os.makedirs(output, exist_ok=True) - os.makedirs(os.path.join(output, 'frames'), exist_ok=True) - os.makedirs(os.path.join(output, 'frames_thumb'), exist_ok=True) - - count = 0 - - vidcap = cv2.VideoCapture(video_path) - - # Get the frames per second - fps = vidcap.get(cv2.CAP_PROP_FPS) - - # Get the total numer of frames in the video. - frame_count = vidcap.get(cv2.CAP_PROP_FRAME_COUNT) - - # print(fps, frame_count) - - frame_number = 0 - - count = 0 - anno = [] - - embeddings = [] - batch_list = [] - vtt = webvtt.read(subtitles) - - for idx, caption in enumerate(tqdm(vtt, total=vtt.total_length, desc="Generating embeddings")): - st_time = str2time(caption.start) - ed_time = str2time(caption.end) - - mid_time = (ed_time + st_time) / 2 - text = caption.text.replace('\n', ' ') - - if expanded : - raise NotImplementedError - - frame_no = time_to_frame(mid_time, fps) - mid_time_ms = mid_time * 1000 - # vidcap.set(1, frame_no) # added this line - vidcap.set(cv2.CAP_PROP_POS_MSEC, mid_time_ms) - print('Read a new frame: ', idx, mid_time, frame_no, text) - success, frame = vidcap.read() - if success: - frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) - frame = Image.fromarray(frame) - img_fname = f'{video_id}_{idx:06d}' - img_fpath = os.path.join(output, 'frames', img_fname + '.jpg') - # image = maintain_aspect_ratio_resize(image, height=350) # save frame as JPEG file - # cv2.imwrite( img_fpath, image) # save frame as JPEG file - - count += 1 - anno.append({ - 'image_id': idx, - 'img_fname': img_fname, - 'caption': text, - 'time': mid_time_ms, - 'frame_no': frame_no - }) - - encoding = processor(frame, text, return_tensors="pt").to(device) - encoding['text'] = text - encoding['image_filepath'] = img_fpath - encoding['start_time'] = caption.start - encoding['time'] = mid_time_ms - - batch_list.append(encoding) - - else: - break - - if len(batch_list) == batch_size: - batch = collate_fn(batch_list) - with torch.no_grad(): - outputs = model(**batch, output_hidden_states=True) - - for i in range(batch_size): - embeddings.append({ - 'embeddings':outputs.logits[i,2,:].detach().cpu().numpy(), - 'text': batch_list[i]['text'], - 'image_filepath': batch_list[i]['image_filepath'], - 'start_time': batch_list[i]['start_time'], - 'time': batch_list[i]['time'], - }) - batch_list = [] - - if batch_list: - batch = collate_fn(batch_list) - with torch.no_grad(): - outputs = model(**batch, output_hidden_states=True) - - for i in range(len(batch_list)): - embeddings.append({ - 'embeddings':outputs.logits[i,2,:].detach().cpu().numpy(), - 'text': batch_list[i]['text'], - 'image_filepath': batch_list[i]['image_filepath'], - 'start_time': batch_list[i]['start_time'], - 'time': batch_list[i]['time'], - }) - - batch_list = [] - - with open(os.path.join(output, 'annotations.json'), 'w') as fh: - json.dump(anno, fh) - - with open(os.path.join(output, 'embeddings.pkl'), 'wb') as fh: - pickle.dump(embeddings, fh) - -def run_query(video_path, text_query, path='/tmp'): - - vidcap = 
cv2.VideoCapture(video_path) - - embeddings_filepath = os.path.join(path, 'embeddings.pkl') - faiss_filepath = os.path.join(path, 'faiss_index.pkl') - - embeddings = pickle.load(open(embeddings_filepath, 'rb')) - - if os.path.exists(faiss_filepath): - faiss_index = pickle.load(open(faiss_filepath, 'rb')) - else : - embs = [emb['embeddings'] for emb in embeddings] - vectors = np.stack(embs, axis=0) - num_vectors, vector_dim = vectors.shape - faiss_index = faiss.IndexFlatIP(vector_dim) - faiss_index.add(vectors) - pickle.dump(faiss_index, open(faiss_filepath, 'wb')) - - print('Processing query') - encoding = processor.tokenizer(text_query, return_tensors="pt").to(device) - with torch.no_grad(): - outputs = text_model(**encoding) - emb_query = outputs.cpu().numpy() - print('Running FAISS search') - _, I = faiss_index.search(emb_query, 6) - - clip_images = [] - transcripts = [] - for idx in I[0]: - # frame_no = embeddings[idx]['frame_no'] - # vidcap.set(1, frame_no) # added this line - frame_timestamp = embeddings[idx]['time'] - vidcap.set(cv2.CAP_PROP_POS_MSEC, frame_timestamp) - - success, frame = vidcap.read() - if success: - frame = maintain_aspect_ratio_resize(frame, height=400) - frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) - frame = Image.fromarray(frame) - clip_images.append(frame) - transcripts.append(f"({embeddings[idx]['start_time']}) {embeddings[idx]['text']}") - - return clip_images, transcripts - - -#https://stackoverflow.com/a/7936523 -def get_video_id_from_url(video_url): - """ - Examples: - - http://youtu.be/SA2iWivDJiE - - http://www.youtube.com/watch?v=_oPAwA_Udwc&feature=feedu - - http://www.youtube.com/embed/SA2iWivDJiE - - http://www.youtube.com/v/SA2iWivDJiE?version=3&hl=en_US - """ - import urllib.parse - url = urllib.parse.urlparse(video_url) - if url.hostname == 'youtu.be': - return url.path[1:] - if url.hostname in ('www.youtube.com', 'youtube.com'): - if url.path == '/watch': - p = urllib.parse.parse_qs(url.query) - return p['v'][0] - if url.path[:7] == '/embed/': - return url.path.split('/')[2] - if url.path[:3] == '/v/': - return url.path.split('/')[2] - - return None - - -def process(video_url, text_query, progress=gr.Progress(track_tqdm=True)): - tmp_dir = os.environ.get('TMPDIR', '/tmp') - video_id = get_video_id_from_url(video_url) - output_dir = os.path.join(tmp_dir, video_id) - video_file = download_video(video_url, path=output_dir) - subtitles = get_transcript_vtt(video_id, path=output_dir) - extract_images_and_embeds(video_id=video_id, - video_path=video_file, - subtitles=subtitles, - output=output_dir, - expanded=False, - batch_size=8, - progress=progress, - ) - frame_paths, transcripts = run_query(video_file, text_query, path=output_dir) - return video_file, [(image, caption) for image, caption in zip(frame_paths, transcripts)] - - -description = "This Space lets you run semantic search on a video." 
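-# Illustrative sketch of the exact inner-product search pattern that run_query() builds above; it is not wired into the app, and the helper name and toy sizes are arbitrary. -# IndexFlatIP ranks by raw inner product, which equals cosine similarity when the vectors are L2-normalised. -def _demo_faiss_ip_search(num_vectors=8, dim=4, top_k=3): - rng = np.random.default_rng(0) - vectors = rng.random((num_vectors, dim)).astype('float32') # stand-ins for the BridgeTower frame embeddings - index = faiss.IndexFlatIP(dim) # exact, brute-force inner-product index - index.add(vectors) # one vector per caption/frame - query = rng.random((1, dim)).astype('float32') # stand-in for a text-query embedding - scores, ids = index.search(query, top_k) # top_k best matches, highest score first - return scores, ids 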
- -with gr.Blocks() as demo: - gr.Markdown(description) - with gr.Row(): - with gr.Column(): - video_url = gr.Text(label="Youtube url") - text_query = gr.Text(label="Text query") - btn = gr.Button("Run query") - video_player = gr.Video(label="Video") - - with gr.Row(): - gallery = gr.Gallery(label="Images").style(grid=6) - - gr.Examples( - examples=[ - ['https://www.youtube.com/watch?v=CvjoXdC-WkM','wedding'], - ['https://www.youtube.com/watch?v=fWs2dWcNGu0', 'cheesecake'], - ['https://www.youtube.com/watch?v=rmPpNsx4yAk', 'bunny'], - ['https://www.youtube.com/watch?v=KCFYf4TJdN0' ,'sandwich'], - ], - inputs=[video_url, text_query], - ) - - btn.click(fn=process, - inputs=[video_url, text_query], - outputs=[video_player, gallery], - ) - -try: - demo.queue(concurrency_count=3) - demo.launch(share=True) -except: - demo.launch() diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/tests/test_anchor_generator.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/tests/test_anchor_generator.py deleted file mode 100644 index d5bbdd66875a8d099afd951926aac4499f76e9fb..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/tests/test_anchor_generator.py +++ /dev/null @@ -1,122 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import logging -import unittest -import torch - -from detectron2.config import get_cfg -from detectron2.layers import ShapeSpec -from detectron2.modeling.anchor_generator import DefaultAnchorGenerator, RotatedAnchorGenerator - -logger = logging.getLogger(__name__) - - -class TestAnchorGenerator(unittest.TestCase): - def test_default_anchor_generator(self): - cfg = get_cfg() - cfg.MODEL.ANCHOR_GENERATOR.SIZES = [[32, 64]] - cfg.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS = [[0.25, 1, 4]] - - anchor_generator = DefaultAnchorGenerator(cfg, [ShapeSpec(stride=4)]) - - # only the last two dimensions of features matter here - num_images = 2 - features = {"stage3": torch.rand(num_images, 96, 1, 2)} - anchors = anchor_generator([features["stage3"]]) - expected_anchor_tensor = torch.tensor( - [ - [-32.0, -8.0, 32.0, 8.0], - [-16.0, -16.0, 16.0, 16.0], - [-8.0, -32.0, 8.0, 32.0], - [-64.0, -16.0, 64.0, 16.0], - [-32.0, -32.0, 32.0, 32.0], - [-16.0, -64.0, 16.0, 64.0], - [-28.0, -8.0, 36.0, 8.0], # -28.0 == -32.0 + STRIDE (4) - [-12.0, -16.0, 20.0, 16.0], - [-4.0, -32.0, 12.0, 32.0], - [-60.0, -16.0, 68.0, 16.0], - [-28.0, -32.0, 36.0, 32.0], - [-12.0, -64.0, 20.0, 64.0], - ] - ) - - for i in range(num_images): - assert torch.allclose(anchors[i][0].tensor, expected_anchor_tensor) - - def test_default_anchor_generator_centered(self): - cfg = get_cfg() - cfg.MODEL.ANCHOR_GENERATOR.SIZES = [[32, 64]] - cfg.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS = [[0.25, 1, 4]] - cfg.MODEL.ANCHOR_GENERATOR.OFFSET = 0.5 - - anchor_generator = DefaultAnchorGenerator(cfg, [ShapeSpec(stride=4)]) - - # only the last two dimensions of features matter here - num_images = 2 - features = {"stage3": torch.rand(num_images, 96, 1, 2)} - anchors = anchor_generator([features["stage3"]]) - expected_anchor_tensor = torch.tensor( - [ - [-30.0, -6.0, 34.0, 10.0], - [-14.0, -14.0, 18.0, 18.0], - [-6.0, -30.0, 10.0, 34.0], - [-62.0, -14.0, 66.0, 18.0], - [-30.0, -30.0, 34.0, 34.0], - [-14.0, -62.0, 18.0, 66.0], - [-26.0, -6.0, 38.0, 10.0], - [-10.0, -14.0, 22.0, 18.0], - [-2.0, -30.0, 14.0, 34.0], - [-58.0, -14.0, 70.0, 18.0], - [-26.0, -30.0, 38.0, 34.0], - [-10.0, -62.0, 22.0, 66.0], - ] - ) - - for i in range(num_images): - assert 
torch.allclose(anchors[i][0].tensor, expected_anchor_tensor) - - def test_rrpn_anchor_generator(self): - cfg = get_cfg() - cfg.MODEL.ANCHOR_GENERATOR.SIZES = [[32, 64]] - cfg.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS = [[0.25, 1, 4]] - cfg.MODEL.ANCHOR_GENERATOR.ANGLES = [[0, 45]] - anchor_generator = RotatedAnchorGenerator(cfg, [ShapeSpec(stride=4)]) - - # only the last two dimensions of features matter here - num_images = 2 - features = {"stage3": torch.rand(num_images, 96, 1, 2)} - anchors = anchor_generator([features["stage3"]]) - expected_anchor_tensor = torch.tensor( - [ - [0.0, 0.0, 64.0, 16.0, 0.0], - [0.0, 0.0, 64.0, 16.0, 45.0], - [0.0, 0.0, 32.0, 32.0, 0.0], - [0.0, 0.0, 32.0, 32.0, 45.0], - [0.0, 0.0, 16.0, 64.0, 0.0], - [0.0, 0.0, 16.0, 64.0, 45.0], - [0.0, 0.0, 128.0, 32.0, 0.0], - [0.0, 0.0, 128.0, 32.0, 45.0], - [0.0, 0.0, 64.0, 64.0, 0.0], - [0.0, 0.0, 64.0, 64.0, 45.0], - [0.0, 0.0, 32.0, 128.0, 0.0], - [0.0, 0.0, 32.0, 128.0, 45.0], - [4.0, 0.0, 64.0, 16.0, 0.0], # 4.0 == 0.0 + STRIDE (4) - [4.0, 0.0, 64.0, 16.0, 45.0], - [4.0, 0.0, 32.0, 32.0, 0.0], - [4.0, 0.0, 32.0, 32.0, 45.0], - [4.0, 0.0, 16.0, 64.0, 0.0], - [4.0, 0.0, 16.0, 64.0, 45.0], - [4.0, 0.0, 128.0, 32.0, 0.0], - [4.0, 0.0, 128.0, 32.0, 45.0], - [4.0, 0.0, 64.0, 64.0, 0.0], - [4.0, 0.0, 64.0, 64.0, 45.0], - [4.0, 0.0, 32.0, 128.0, 0.0], - [4.0, 0.0, 32.0, 128.0, 45.0], - ] - ) - - for i in range(num_images): - assert torch.allclose(anchors[i][0].tensor, expected_anchor_tensor) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/event_error.h b/spaces/CVPR/LIVE/thrust/thrust/detail/event_error.h deleted file mode 100644 index 114d4763f116ef20966572a86ca52076b837f1cc..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/event_error.h +++ /dev/null @@ -1,166 +0,0 @@ -/* - * Copyright 2008-2018 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/// \file thrust/detail/event_error.h -/// \brief \c thrust::future and thrust::future error handling types and codes. - -#pragma once - -#include -#include -#include - -#if THRUST_CPP_DIALECT >= 2011 && !defined(THRUST_LEGACY_GCC) - -#include -#include - -#include - -namespace thrust -{ - -enum class event_errc -{ - unknown_event_error -, no_state -, no_content -, last_event_error -}; - -/// \return error_code(static_cast<int>(e), event_category()) -inline error_code make_error_code(event_errc e); - -/// \return error_condition(static_cast<int>(e), event_category()). -inline error_condition make_error_condition(event_errc e); - -struct event_error_category : error_category -{ - event_error_category() = default; - - virtual char const* name() const - { - return "event"; - } - - virtual std::string message(int ev) const - { - switch (static_cast<event_errc>(ev)) - { - case event_errc::no_state: - { - return "no_state: an operation that requires an event or future to have " - "a stream or content has been performed on a event or future " - "without either, e.g. 
a moved-from or default constructed event " - "or future (an event or future may have been consumed more than " - "once)"; - } - case event_errc::no_content: - { - return "no_content: an operation that requires a future to have content " - "has been performed on future without any, e.g. a moved-from, " - "default constructed, or `thrust::new_stream` constructed future " - "(a future may have been consumed more than once)"; - } - default: - { - return "unknown_event_error: an unknown error with a future " - "object has occurred"; - } - }; - } - - virtual error_condition default_error_condition(int ev) const - { - if ( - event_errc::last_event_error - > - static_cast<event_errc>(ev) - ) - return make_error_condition(static_cast<event_errc>(ev)); - - return system_category().default_error_condition(ev); - } -}; - -/// Obtains a reference to the static error category object for the errors -/// related to futures and promises. The object is required to override the -/// virtual function error_category::name() to return a pointer to the string -/// "event". It is used to identify error codes provided in the -/// exceptions of type event_error. -inline error_category const& event_category() -{ - static const event_error_category result; - return result; -} - -namespace system -{ -/// Specialization of \p is_error_code_enum for \p event_errc. -template<> struct is_error_code_enum<event_errc> : true_type {}; -} // end system - -/// \return error_code(static_cast<int>(e), event_category()) -inline error_code make_error_code(event_errc e) -{ - return error_code(static_cast<int>(e), event_category()); -} - -/// \return error_condition(static_cast<int>(e), event_category()). -inline error_condition make_error_condition(event_errc e) -{ - return error_condition(static_cast<int>(e), event_category()); -} - -struct event_error : std::logic_error -{ - __host__ - explicit event_error(error_code ec) - : std::logic_error(ec.message()), ec_(ec) - {} - - __host__ - explicit event_error(event_errc e) - : event_error(make_error_code(e)) - {} - - __host__ - error_code const& code() const noexcept - { - return ec_; - } - - __host__ - virtual ~event_error() noexcept {} - -private: - error_code ec_; -}; - -inline bool operator==(event_error const& lhs, event_error const& rhs) noexcept -{ - return lhs.code() == rhs.code(); -} - -inline bool operator<(event_error const& lhs, event_error const& rhs) noexcept -{ - return lhs.code() < rhs.code(); -} - -} // end namespace thrust - -#endif - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/scalar/binary_search.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/scalar/binary_search.h deleted file mode 100644 index 373b59a606affd84e68edbf8fe3df44da9e24df6..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/scalar/binary_search.h +++ /dev/null @@ -1,85 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -#include -#include - -namespace thrust -{ - -namespace system -{ - -namespace detail -{ - -namespace generic -{ - -namespace scalar -{ - -template <typename RandomAccessIterator, typename Size, typename T, typename BinaryPredicate> -__host__ __device__ -RandomAccessIterator lower_bound_n(RandomAccessIterator first, - Size n, - const T &val, - BinaryPredicate comp); - -template <typename RandomAccessIterator, typename T, typename BinaryPredicate> -__host__ __device__ -RandomAccessIterator lower_bound(RandomAccessIterator first, RandomAccessIterator last, - const T &val, - BinaryPredicate comp); - -template <typename RandomAccessIterator, typename Size, typename T, typename BinaryPredicate> -__host__ __device__ -RandomAccessIterator upper_bound_n(RandomAccessIterator first, - Size n, - const T &val, - BinaryPredicate comp); - -template <typename RandomAccessIterator, typename T, typename BinaryPredicate> -__host__ __device__ -RandomAccessIterator upper_bound(RandomAccessIterator first, RandomAccessIterator last, - const T &val, - BinaryPredicate comp); - -template <typename RandomAccessIterator, typename T, typename BinaryPredicate> -__host__ __device__ - pair<RandomAccessIterator, RandomAccessIterator> - equal_range(RandomAccessIterator first, RandomAccessIterator last, - const T &val, - BinaryPredicate comp); - -template <typename RandomAccessIterator, typename T, typename Compare> -__host__ __device__ -bool binary_search(RandomAccessIterator first, RandomAccessIterator last, const T &value, Compare comp); - -} // end scalar - -} // end generic - -} // end detail - -} // end system - -} // end thrust - -#include - diff --git a/spaces/Cpp4App/Cpp4App/CDM/detect_compo/lib_ip/ip_preprocessing.py b/spaces/Cpp4App/Cpp4App/CDM/detect_compo/lib_ip/ip_preprocessing.py deleted file mode 100644 index aaacd146fa9a9ec41c762a7a07d4738716ab01f5..0000000000000000000000000000000000000000 --- a/spaces/Cpp4App/Cpp4App/CDM/detect_compo/lib_ip/ip_preprocessing.py +++ /dev/null @@ -1,69 +0,0 @@ -import cv2 -import numpy as np -from CDM.config.CONFIG_UIED import Config -C = Config() - - -def read_img(path, resize_height=None, kernel_size=None): - - def resize_by_height(org): - w_h_ratio = org.shape[1] / org.shape[0] - resize_w = resize_height * w_h_ratio - re = cv2.resize(org, (int(resize_w), int(resize_height))) - return re - - try: - img = cv2.imread(path) - if img is None: - print("*** Image does not exist ***") - return None, None - # check the read result before any post-processing - if kernel_size is not None: - img = cv2.medianBlur(img, kernel_size) - if resize_height is not None: - img = resize_by_height(img) - gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) - return img, gray - - except Exception as e: - print(e) - print("*** Img Reading Failed ***\n") - return None, None - - -def gray_to_gradient(img): - if len(img.shape) == 3: - img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) - img_f = np.copy(img) - img_f = img_f.astype("float") - - kernel_h = np.array([[0,0,0], [0,-1.,1.], [0,0,0]]) - kernel_v = np.array([[0,0,0], [0,-1.,0], [0,1.,0]]) - dst1 = abs(cv2.filter2D(img_f, -1, kernel_h)) - dst2 = abs(cv2.filter2D(img_f, -1, kernel_v)) - gradient = (dst1 + dst2).astype('uint8') - return gradient - - -def reverse_binary(bin, show=False): - """ - Reverse the input binary image - """ - r, bin = cv2.threshold(bin, 1, 255, cv2.THRESH_BINARY_INV) - if show: - cv2.imshow('binary_rev', bin) - cv2.waitKey() - return bin - - -def binarization(org, grad_min, show=False, write_path=None, wait_key=0): - grey = cv2.cvtColor(org, cv2.COLOR_BGR2GRAY) - grad = gray_to_gradient(grey) # get RoI with high gradient - rec, binary = cv2.threshold(grad, grad_min, 255, cv2.THRESH_BINARY) # enhance the RoI - morph = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, (3, 3)) # remove noises - if write_path is not None: - cv2.imwrite(write_path, morph) - if show: - cv2.imshow('binary', morph) - if wait_key is not None: - cv2.waitKey(wait_key) - return morph diff --git a/spaces/DEEMOSTECH/ChatAvatar/static/js/main.84e5ce89.js 
b/spaces/DEEMOSTECH/ChatAvatar/static/js/main.84e5ce89.js deleted file mode 100644 index 9f9001c25b96ceb2069d26688d659fc3b90b8162..0000000000000000000000000000000000000000 --- a/spaces/DEEMOSTECH/ChatAvatar/static/js/main.84e5ce89.js +++ /dev/null @@ -1,3 +0,0 @@ -/*! For license information please see main.84e5ce89.js.LICENSE.txt */ -!function(){var e={498:function(e){e.exports=function(){"use strict";var e=function(t,n){return e=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(e,t){e.__proto__=t}||function(e,t){for(var n in t)Object.prototype.hasOwnProperty.call(t,n)&&(e[n]=t[n])},e(t,n)};function t(t,n){if("function"!==typeof n&&null!==n)throw new TypeError("Class extends value "+String(n)+" is not a constructor or null");function r(){this.constructor=t}e(t,n),t.prototype=null===n?Object.create(n):(r.prototype=n.prototype,new r)}var n=function(){return n=Object.assign||function(e){for(var t,n=1,r=arguments.length;n0&&i[i.length-1])&&(6===A[0]||2===A[0])){a=0;continue}if(3===A[0]&&(!i||A[1]>i[0]&&A[1]=55296&&i<=56319&&n>10),a%1024+56320)),(i+1===n||r.length>16384)&&(A+=String.fromCharCode.apply(String,r),r.length=0)}return A},c="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/",d="undefined"===typeof Uint8Array?[]:new Uint8Array(256),h=0;h>4,u[s++]=(15&r)<<4|i>>2,u[s++]=(3&i)<<6|63&A;return l},v=function(e){for(var t=e.length,n=[],r=0;r>w,x=(1<>w)+32,S=65536>>B,E=(1<=0){if(e<55296||e>56319&&e<=65535)return t=((t=this.index[e>>w])<<_)+(e&x),this.data[t];if(e<=65535)return t=((t=this.index[b+(e-55296>>w)])<<_)+(e&x),this.data[t];if(e>B),t=this.index[t],t+=e>>w&E,t=((t=this.index[t])<<_)+(e&x),this.data[t];if(e<=1114111)return this.data[this.highValueIndex]}return this.errorValue},e}(),k="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/",Q="undefined"===typeof Uint8Array?[]:new Uint8Array(256),L=0;LD?(i.push(!0),a-=D):i.push(!1),-1!==["normal","auto","loose"].indexOf(t)&&-1!==[8208,8211,12316,12448].indexOf(e))return r.push(A),n.push(Y);if(a===P||a===K){if(0===A)return r.push(A),n.push(ue);var o=n[A-1];return-1===Qe.indexOf(o)?(r.push(r[A-1]),n.push(o)):(r.push(A),n.push(ue))}return r.push(A),a===ce?n.push("strict"===t?te:me):a===_e||a===le?n.push(ue):a===be?e>=131072&&e<=196605||e>=196608&&e<=262141?n.push(me):n.push(ue):void n.push(a)})),[r,n,i]},Re=function(e,t,n,r){var i=r[n];if(Array.isArray(e)?-1!==e.indexOf(i):e===i)for(var A=n;A<=r.length;){if((s=r[++A])===t)return!0;if(s!==G)break}if(i===G)for(A=n;A>0;){var a=r[--A];if(Array.isArray(e)?-1!==e.indexOf(a):e===a)for(var o=n;o<=r.length;){var s;if((s=r[++o])===t)return!0;if(s!==G)break}if(a!==G)break}return!1},He=function(e,t){for(var n=e;n>=0;){var r=t[n];if(r!==G)return r;n--}return 0},Pe=function(e,t,n,r,i){if(0===n[r])return Se;var A=r-1;if(Array.isArray(i)&&!0===i[A])return Se;var a=A-1,o=A+1,s=t[A],l=a>=0?t[a]:0,u=t[o];if(s===R&&u===H)return Se;if(-1!==Fe.indexOf(s))return Ce;if(-1!==Fe.indexOf(u))return Se;if(-1!==Te.indexOf(u))return Se;if(He(A,t)===V)return Ee;if(Ue.get(e[A])===K)return Se;if((s===de||s===he)&&Ue.get(e[o])===K)return Se;if(s===O||u===O)return Se;if(s===z)return Se;if(-1===[G,j,q].indexOf(s)&&u===z)return Se;if(-1!==[J,Z,$,ie,se].indexOf(u))return Se;if(He(A,t)===ne)return Se;if(Re(re,ne,A,t))return Se;if(Re([J,Z],te,A,t))return Se;if(Re(W,W,A,t))return Se;if(s===G)return Ee;if(s===re||u===re)return Se;if(u===Y||s===Y)return Ee;if(-1!==[j,q,te].indexOf(u)||s===X)return Se;if(l===ge&&-1!==De.indexOf(s))return Se;if(s===se&&u===ge)return Se;if(u===ee)return 
Se;if(-1!==Me.indexOf(u)&&s===Ae||-1!==Me.indexOf(s)&&u===Ae)return Se;if(s===oe&&-1!==[me,de,he].indexOf(u)||-1!==[me,de,he].indexOf(s)&&u===ae)return Se;if(-1!==Me.indexOf(s)&&-1!==ke.indexOf(u)||-1!==ke.indexOf(s)&&-1!==Me.indexOf(u))return Se;if(-1!==[oe,ae].indexOf(s)&&(u===Ae||-1!==[ne,q].indexOf(u)&&t[o+1]===Ae)||-1!==[ne,q].indexOf(s)&&u===Ae||s===Ae&&-1!==[Ae,se,ie].indexOf(u))return Se;if(-1!==[Ae,se,ie,J,Z].indexOf(u))for(var c=A;c>=0;){if((d=t[c])===Ae)return Se;if(-1===[se,ie].indexOf(d))break;c--}if(-1!==[oe,ae].indexOf(u))for(c=-1!==[J,Z].indexOf(s)?a:A;c>=0;){var d;if((d=t[c])===Ae)return Se;if(-1===[se,ie].indexOf(d))break;c--}if(ve===s&&-1!==[ve,ye,fe,pe].indexOf(u)||-1!==[ye,fe].indexOf(s)&&-1!==[ye,we].indexOf(u)||-1!==[we,pe].indexOf(s)&&u===we)return Se;if(-1!==Le.indexOf(s)&&-1!==[ee,ae].indexOf(u)||-1!==Le.indexOf(u)&&s===oe)return Se;if(-1!==Me.indexOf(s)&&-1!==Me.indexOf(u))return Se;if(s===ie&&-1!==Me.indexOf(u))return Se;if(-1!==Me.concat(Ae).indexOf(s)&&u===ne&&-1===xe.indexOf(e[o])||-1!==Me.concat(Ae).indexOf(u)&&s===Z)return Se;if(s===Be&&u===Be){for(var h=n[A],f=1;h>0&&t[--h]===Be;)f++;if(f%2!==0)return Se}return s===de&&u===he?Se:Ee},Ne=function(e,t){t||(t={lineBreak:"normal",wordBreak:"normal"});var n=Ie(e,t.lineBreak),r=n[0],i=n[1],A=n[2];"break-all"!==t.wordBreak&&"break-word"!==t.wordBreak||(i=i.map((function(e){return-1!==[Ae,ue,_e].indexOf(e)?me:e})));var a="keep-all"===t.wordBreak?A.map((function(t,n){return t&&e[n]>=19968&&e[n]<=40959})):void 0;return[r,i,a]},Oe=function(){function e(e,t,n,r){this.codePoints=e,this.required=t===Ce,this.start=n,this.end=r}return e.prototype.slice=function(){return u.apply(void 0,this.codePoints.slice(this.start,this.end))},e}(),Ve=function(e,t){var n=l(e),r=Ne(n,t),i=r[0],A=r[1],a=r[2],o=n.length,s=0,u=0;return{next:function(){if(u>=o)return{done:!0,value:null};for(var e=Se;u=Dt&&e<=57},jt=function(e){return e>=55296&&e<=57343},Xt=function(e){return Wt(e)||e>=Ot&&e<=zt||e>=It&&e<=Ht},qt=function(e){return e>=It&&e<=Nt},Yt=function(e){return e>=Ot&&e<=Kt},Jt=function(e){return qt(e)||Yt(e)},Zt=function(e){return e>=wt},$t=function(e){return e===je||e===Ye||e===Je},en=function(e){return Jt(e)||Zt(e)||e===at},tn=function(e){return en(e)||Wt(e)||e===ot},nn=function(e){return e>=Ut&&e<=Mt||e===Ft||e>=Tt&&e<=kt||e===Qt},rn=function(e,t){return e===qe&&t!==je},An=function(e,t,n){return e===ot?en(t)||rn(t,n):!!en(e)||!(e!==qe||!rn(e,t))},an=function(e,t,n){return e===bt||e===ot?!!Wt(t)||t===Et&&Wt(n):Wt(e===Et?t:e)},on=function(e){var t=0,n=1;e[t]!==bt&&e[t]!==ot||(e[t]===ot&&(n=-1),t++);for(var r=[];Wt(e[t]);)r.push(e[t++]);var i=r.length?parseInt(u.apply(void 0,r),10):0;e[t]===Et&&t++;for(var A=[];Wt(e[t]);)A.push(e[t++]);var a=A.length,o=a?parseInt(u.apply(void 0,A),10):0;e[t]!==Vt&&e[t]!==Rt||t++;var s=1;e[t]!==bt&&e[t]!==ot||(e[t]===ot&&(s=-1),t++);for(var l=[];Wt(e[t]);)l.push(e[t++]);var c=l.length?parseInt(u.apply(void 0,l),10):0;return n*(i+o*Math.pow(10,-a))*Math.pow(10,s*c)},sn={type:2},ln={type:3},un={type:4},cn={type:13},dn={type:8},hn={type:21},fn={type:9},pn={type:10},gn={type:11},mn={type:12},vn={type:14},yn={type:23},wn={type:1},Bn={type:25},_n={type:24},bn={type:26},xn={type:27},Cn={type:28},Sn={type:29},En={type:31},Un={type:32},Mn=function(){function e(){this._value=[]}return e.prototype.write=function(e){this._value=this._value.concat(l(e))},e.prototype.read=function(){for(var e=[],t=this.consumeToken();t!==Un;)e.push(t),t=this.consumeToken();return e},e.prototype.consumeToken=function(){var 
e=this.consumeCodePoint();switch(e){case Ze:return this.consumeStringToken(Ze);case et:var t=this.peekCodePoint(0),n=this.peekCodePoint(1),r=this.peekCodePoint(2);if(tn(t)||rn(n,r)){var i=An(t,n,r)?Ge:ze;return{type:5,value:this.consumeName(),flags:i}}break;case tt:if(this.peekCodePoint(0)===$e)return this.consumeCodePoint(),cn;break;case rt:return this.consumeStringToken(rt);case it:return sn;case At:return ln;case _t:if(this.peekCodePoint(0)===$e)return this.consumeCodePoint(),vn;break;case bt:if(an(e,this.peekCodePoint(0),this.peekCodePoint(1)))return this.reconsumeCodePoint(e),this.consumeNumericToken();break;case xt:return un;case ot:var A=e,a=this.peekCodePoint(0),o=this.peekCodePoint(1);if(an(A,a,o))return this.reconsumeCodePoint(e),this.consumeNumericToken();if(An(A,a,o))return this.reconsumeCodePoint(e),this.consumeIdentLikeToken();if(a===ot&&o===ut)return this.consumeCodePoint(),this.consumeCodePoint(),_n;break;case Et:if(an(e,this.peekCodePoint(0),this.peekCodePoint(1)))return this.reconsumeCodePoint(e),this.consumeNumericToken();break;case Xe:if(this.peekCodePoint(0)===_t)for(this.consumeCodePoint();;){var s=this.consumeCodePoint();if(s===_t&&(s=this.consumeCodePoint())===Xe)return this.consumeToken();if(s===Lt)return this.consumeToken()}break;case Ct:return bn;case St:return xn;case lt:if(this.peekCodePoint(0)===st&&this.peekCodePoint(1)===ot&&this.peekCodePoint(2)===ot)return this.consumeCodePoint(),this.consumeCodePoint(),Bn;break;case ct:var l=this.peekCodePoint(0),c=this.peekCodePoint(1),d=this.peekCodePoint(2);if(An(l,c,d))return{type:7,value:this.consumeName()};break;case dt:return Cn;case qe:if(rn(e,this.peekCodePoint(0)))return this.reconsumeCodePoint(e),this.consumeIdentLikeToken();break;case ht:return Sn;case ft:if(this.peekCodePoint(0)===$e)return this.consumeCodePoint(),dn;break;case pt:return gn;case mt:return mn;case Pt:case Gt:var h=this.peekCodePoint(0),f=this.peekCodePoint(1);return h!==bt||!Xt(f)&&f!==gt||(this.consumeCodePoint(),this.consumeUnicodeRangeToken()),this.reconsumeCodePoint(e),this.consumeIdentLikeToken();case vt:if(this.peekCodePoint(0)===$e)return this.consumeCodePoint(),fn;if(this.peekCodePoint(0)===vt)return this.consumeCodePoint(),hn;break;case yt:if(this.peekCodePoint(0)===$e)return this.consumeCodePoint(),pn;break;case Lt:return Un}return $t(e)?(this.consumeWhiteSpace(),En):Wt(e)?(this.reconsumeCodePoint(e),this.consumeNumericToken()):en(e)?(this.reconsumeCodePoint(e),this.consumeIdentLikeToken()):{type:6,value:u(e)}},e.prototype.consumeCodePoint=function(){var e=this._value.shift();return"undefined"===typeof e?-1:e},e.prototype.reconsumeCodePoint=function(e){this._value.unshift(e)},e.prototype.peekCodePoint=function(e){return e>=this._value.length?-1:this._value[e]},e.prototype.consumeUnicodeRangeToken=function(){for(var e=[],t=this.consumeCodePoint();Xt(t)&&e.length<6;)e.push(t),t=this.consumeCodePoint();for(var n=!1;t===gt&&e.length<6;)e.push(t),t=this.consumeCodePoint(),n=!0;if(n)return{type:30,start:parseInt(u.apply(void 0,e.map((function(e){return e===gt?Dt:e}))),16),end:parseInt(u.apply(void 0,e.map((function(e){return e===gt?zt:e}))),16)};var r=parseInt(u.apply(void 0,e),16);if(this.peekCodePoint(0)===ot&&Xt(this.peekCodePoint(1))){this.consumeCodePoint(),t=this.consumeCodePoint();for(var i=[];Xt(t)&&i.length<6;)i.push(t),t=this.consumeCodePoint();return{type:30,start:r,end:parseInt(u.apply(void 0,i),16)}}return{type:30,start:r,end:r}},e.prototype.consumeIdentLikeToken=function(){var 
e=this.consumeName();return"url"===e.toLowerCase()&&this.peekCodePoint(0)===it?(this.consumeCodePoint(),this.consumeUrlToken()):this.peekCodePoint(0)===it?(this.consumeCodePoint(),{type:19,value:e}):{type:20,value:e}},e.prototype.consumeUrlToken=function(){var e=[];if(this.consumeWhiteSpace(),this.peekCodePoint(0)===Lt)return{type:22,value:""};var t=this.peekCodePoint(0);if(t===rt||t===Ze){var n=this.consumeStringToken(this.consumeCodePoint());return 0===n.type&&(this.consumeWhiteSpace(),this.peekCodePoint(0)===Lt||this.peekCodePoint(0)===At)?(this.consumeCodePoint(),{type:22,value:n.value}):(this.consumeBadUrlRemnants(),yn)}for(;;){var r=this.consumeCodePoint();if(r===Lt||r===At)return{type:22,value:u.apply(void 0,e)};if($t(r))return this.consumeWhiteSpace(),this.peekCodePoint(0)===Lt||this.peekCodePoint(0)===At?(this.consumeCodePoint(),{type:22,value:u.apply(void 0,e)}):(this.consumeBadUrlRemnants(),yn);if(r===Ze||r===rt||r===it||nn(r))return this.consumeBadUrlRemnants(),yn;if(r===qe){if(!rn(r,this.peekCodePoint(0)))return this.consumeBadUrlRemnants(),yn;e.push(this.consumeEscapedCodePoint())}else e.push(r)}},e.prototype.consumeWhiteSpace=function(){for(;$t(this.peekCodePoint(0));)this.consumeCodePoint()},e.prototype.consumeBadUrlRemnants=function(){for(;;){var e=this.consumeCodePoint();if(e===At||e===Lt)return;rn(e,this.peekCodePoint(0))&&this.consumeEscapedCodePoint()}},e.prototype.consumeStringSlice=function(e){for(var t=5e4,n="";e>0;){var r=Math.min(t,e);n+=u.apply(void 0,this._value.splice(0,r)),e-=r}return this._value.shift(),n},e.prototype.consumeStringToken=function(e){for(var t="",n=0;;){var r=this._value[n];if(r===Lt||void 0===r||r===e)return{type:0,value:t+=this.consumeStringSlice(n)};if(r===je)return this._value.splice(0,n),wn;if(r===qe){var i=this._value[n+1];i!==Lt&&void 0!==i&&(i===je?(t+=this.consumeStringSlice(n),n=-1,this._value.shift()):rn(r,i)&&(t+=this.consumeStringSlice(n),t+=u(this.consumeEscapedCodePoint()),n=-1))}n++}},e.prototype.consumeNumber=function(){var e=[],t=Ke,n=this.peekCodePoint(0);for(n!==bt&&n!==ot||e.push(this.consumeCodePoint());Wt(this.peekCodePoint(0));)e.push(this.consumeCodePoint());n=this.peekCodePoint(0);var r=this.peekCodePoint(1);if(n===Et&&Wt(r))for(e.push(this.consumeCodePoint(),this.consumeCodePoint()),t=We;Wt(this.peekCodePoint(0));)e.push(this.consumeCodePoint());n=this.peekCodePoint(0),r=this.peekCodePoint(1);var i=this.peekCodePoint(2);if((n===Vt||n===Rt)&&((r===bt||r===ot)&&Wt(i)||Wt(r)))for(e.push(this.consumeCodePoint(),this.consumeCodePoint()),t=We;Wt(this.peekCodePoint(0));)e.push(this.consumeCodePoint());return[on(e),t]},e.prototype.consumeNumericToken=function(){var e=this.consumeNumber(),t=e[0],n=e[1],r=this.peekCodePoint(0),i=this.peekCodePoint(1),A=this.peekCodePoint(2);return An(r,i,A)?{type:15,number:t,flags:n,unit:this.consumeName()}:r===nt?(this.consumeCodePoint(),{type:16,number:t,flags:n}):{type:17,number:t,flags:n}},e.prototype.consumeEscapedCodePoint=function(){var e=this.consumeCodePoint();if(Xt(e)){for(var t=u(e);Xt(this.peekCodePoint(0))&&t.length<6;)t+=u(this.consumeCodePoint());$t(this.peekCodePoint(0))&&this.consumeCodePoint();var n=parseInt(t,16);return 0===n||jt(n)||n>1114111?Bt:n}return e===Lt?Bt:e},e.prototype.consumeName=function(){for(var e="";;){var t=this.consumeCodePoint();if(tn(t))e+=u(t);else{if(!rn(t,this.peekCodePoint(0)))return this.reconsumeCodePoint(t),e;e+=u(this.consumeEscapedCodePoint())}}},e}(),Fn=function(){function e(e){this._tokens=e}return e.create=function(t){var n=new Mn;return 
n.write(t),new e(n.read())},e.parseValue=function(t){return e.create(t).parseComponentValue()},e.parseValues=function(t){return e.create(t).parseComponentValues()},e.prototype.parseComponentValue=function(){for(var e=this.consumeToken();31===e.type;)e=this.consumeToken();if(32===e.type)throw new SyntaxError("Error parsing CSS component value, unexpected EOF");this.reconsumeToken(e);var t=this.consumeComponentValue();do{e=this.consumeToken()}while(31===e.type);if(32===e.type)return t;throw new SyntaxError("Error parsing CSS component value, multiple values found when expecting only one")},e.prototype.parseComponentValues=function(){for(var e=[];;){var t=this.consumeComponentValue();if(32===t.type)return e;e.push(t),e.push()}},e.prototype.consumeComponentValue=function(){var e=this.consumeToken();switch(e.type){case 11:case 28:case 2:return this.consumeSimpleBlock(e.type);case 19:return this.consumeFunction(e)}return e},e.prototype.consumeSimpleBlock=function(e){for(var t={type:e,values:[]},n=this.consumeToken();;){if(32===n.type||Pn(n,e))return t;this.reconsumeToken(n),t.values.push(this.consumeComponentValue()),n=this.consumeToken()}},e.prototype.consumeFunction=function(e){for(var t={name:e.value,values:[],type:18};;){var n=this.consumeToken();if(32===n.type||3===n.type)return t;this.reconsumeToken(n),t.values.push(this.consumeComponentValue())}},e.prototype.consumeToken=function(){var e=this._tokens.shift();return"undefined"===typeof e?Un:e},e.prototype.reconsumeToken=function(e){this._tokens.unshift(e)},e}(),Tn=function(e){return 15===e.type},kn=function(e){return 17===e.type},Qn=function(e){return 20===e.type},Ln=function(e){return 0===e.type},Dn=function(e,t){return Qn(e)&&e.value===t},In=function(e){return 31!==e.type},Rn=function(e){return 31!==e.type&&4!==e.type},Hn=function(e){var t=[],n=[];return e.forEach((function(e){if(4===e.type){if(0===n.length)throw new Error("Error parsing function args, zero tokens for arg");return t.push(n),void(n=[])}31!==e.type&&n.push(e)})),n.length&&t.push(n),t},Pn=function(e,t){return 11===t&&12===e.type||28===t&&29===e.type||2===t&&3===e.type},Nn=function(e){return 17===e.type||15===e.type},On=function(e){return 16===e.type||Nn(e)},Vn=function(e){return e.length>1?[e[0],e[1]]:[e[0]]},zn={type:17,number:0,flags:Ke},Gn={type:16,number:50,flags:Ke},Kn={type:16,number:100,flags:Ke},Wn=function(e,t,n){var r=e[0],i=e[1];return[jn(r,t),jn("undefined"!==typeof i?i:r,n)]},jn=function(e,t){if(16===e.type)return e.number/100*t;if(Tn(e))switch(e.unit){case"rem":case"em":return 16*e.number;default:return e.number}return e.number},Xn="deg",qn="grad",Yn="rad",Jn="turn",Zn={name:"angle",parse:function(e,t){if(15===t.type)switch(t.unit){case Xn:return Math.PI*t.number/180;case qn:return Math.PI/200*t.number;case Yn:return t.number;case Jn:return 2*Math.PI*t.number}throw new Error("Unsupported angle type")}},$n=function(e){return 15===e.type&&(e.unit===Xn||e.unit===qn||e.unit===Yn||e.unit===Jn)},er=function(e){switch(e.filter(Qn).map((function(e){return e.value})).join(" ")){case"to bottom right":case"to right bottom":case"left top":case"top left":return[zn,zn];case"to top":case"bottom":return tr(0);case"to bottom left":case"to left bottom":case"right top":case"top right":return[zn,Kn];case"to right":case"left":return tr(90);case"to top left":case"to left top":case"right bottom":case"bottom right":return[Kn,Kn];case"to bottom":case"top":return tr(180);case"to top right":case"to right top":case"left bottom":case"bottom left":return[Kn,zn];case"to 
left":case"right":return tr(270)}return 0},tr=function(e){return Math.PI*e/180},nr={name:"color",parse:function(e,t){if(18===t.type){var n=ur[t.name];if("undefined"===typeof n)throw new Error('Attempting to parse an unsupported color function "'+t.name+'"');return n(e,t.values)}if(5===t.type){if(3===t.value.length){var r=t.value.substring(0,1),i=t.value.substring(1,2),A=t.value.substring(2,3);return Ar(parseInt(r+r,16),parseInt(i+i,16),parseInt(A+A,16),1)}if(4===t.value.length){r=t.value.substring(0,1),i=t.value.substring(1,2),A=t.value.substring(2,3);var a=t.value.substring(3,4);return Ar(parseInt(r+r,16),parseInt(i+i,16),parseInt(A+A,16),parseInt(a+a,16)/255)}if(6===t.value.length)return r=t.value.substring(0,2),i=t.value.substring(2,4),A=t.value.substring(4,6),Ar(parseInt(r,16),parseInt(i,16),parseInt(A,16),1);if(8===t.value.length)return r=t.value.substring(0,2),i=t.value.substring(2,4),A=t.value.substring(4,6),a=t.value.substring(6,8),Ar(parseInt(r,16),parseInt(i,16),parseInt(A,16),parseInt(a,16)/255)}if(20===t.type){var o=dr[t.value.toUpperCase()];if("undefined"!==typeof o)return o}return dr.TRANSPARENT}},rr=function(e){return 0===(255&e)},ir=function(e){var t=255&e,n=255&e>>8,r=255&e>>16,i=255&e>>24;return t<255?"rgba("+i+","+r+","+n+","+t/255+")":"rgb("+i+","+r+","+n+")"},Ar=function(e,t,n,r){return(e<<24|t<<16|n<<8|Math.round(255*r)<<0)>>>0},ar=function(e,t){if(17===e.type)return e.number;if(16===e.type){var n=3===t?1:255;return 3===t?e.number/100*n:Math.round(e.number/100*n)}return 0},or=function(e,t){var n=t.filter(Rn);if(3===n.length){var r=n.map(ar),i=r[0],A=r[1],a=r[2];return Ar(i,A,a,1)}if(4===n.length){var o=n.map(ar),s=(i=o[0],A=o[1],a=o[2],o[3]);return Ar(i,A,a,s)}return 0};function sr(e,t,n){return n<0&&(n+=1),n>=1&&(n-=1),n<1/6?(t-e)*n*6+e:n<.5?t:n<2/3?6*(t-e)*(2/3-n)+e:e}var lr=function(e,t){var n=t.filter(Rn),r=n[0],i=n[1],A=n[2],a=n[3],o=(17===r.type?tr(r.number):Zn.parse(e,r))/(2*Math.PI),s=On(i)?i.number/100:0,l=On(A)?A.number/100:0,u="undefined"!==typeof a&&On(a)?jn(a,1):1;if(0===s)return Ar(255*l,255*l,255*l,1);var c=l<=.5?l*(s+1):l+s-l*s,d=2*l-c,h=sr(d,c,o+1/3),f=sr(d,c,o),p=sr(d,c,o-1/3);return Ar(255*h,255*f,255*p,u)},ur={hsl:lr,hsla:lr,rgb:or,rgba:or},cr=function(e,t){return 
nr.parse(e,Fn.create(t).parseComponentValue())},dr={ALICEBLUE:4042850303,ANTIQUEWHITE:4209760255,AQUA:16777215,AQUAMARINE:2147472639,AZURE:4043309055,BEIGE:4126530815,BISQUE:4293182719,BLACK:255,BLANCHEDALMOND:4293643775,BLUE:65535,BLUEVIOLET:2318131967,BROWN:2771004159,BURLYWOOD:3736635391,CADETBLUE:1604231423,CHARTREUSE:2147418367,CHOCOLATE:3530104575,CORAL:4286533887,CORNFLOWERBLUE:1687547391,CORNSILK:4294499583,CRIMSON:3692313855,CYAN:16777215,DARKBLUE:35839,DARKCYAN:9145343,DARKGOLDENROD:3095837695,DARKGRAY:2846468607,DARKGREEN:6553855,DARKGREY:2846468607,DARKKHAKI:3182914559,DARKMAGENTA:2332068863,DARKOLIVEGREEN:1433087999,DARKORANGE:4287365375,DARKORCHID:2570243327,DARKRED:2332033279,DARKSALMON:3918953215,DARKSEAGREEN:2411499519,DARKSLATEBLUE:1211993087,DARKSLATEGRAY:793726975,DARKSLATEGREY:793726975,DARKTURQUOISE:13554175,DARKVIOLET:2483082239,DEEPPINK:4279538687,DEEPSKYBLUE:12582911,DIMGRAY:1768516095,DIMGREY:1768516095,DODGERBLUE:512819199,FIREBRICK:2988581631,FLORALWHITE:4294635775,FORESTGREEN:579543807,FUCHSIA:4278255615,GAINSBORO:3705462015,GHOSTWHITE:4177068031,GOLD:4292280575,GOLDENROD:3668254975,GRAY:2155905279,GREEN:8388863,GREENYELLOW:2919182335,GREY:2155905279,HONEYDEW:4043305215,HOTPINK:4285117695,INDIANRED:3445382399,INDIGO:1258324735,IVORY:4294963455,KHAKI:4041641215,LAVENDER:3873897215,LAVENDERBLUSH:4293981695,LAWNGREEN:2096890111,LEMONCHIFFON:4294626815,LIGHTBLUE:2916673279,LIGHTCORAL:4034953471,LIGHTCYAN:3774873599,LIGHTGOLDENRODYELLOW:4210742015,LIGHTGRAY:3553874943,LIGHTGREEN:2431553791,LIGHTGREY:3553874943,LIGHTPINK:4290167295,LIGHTSALMON:4288707327,LIGHTSEAGREEN:548580095,LIGHTSKYBLUE:2278488831,LIGHTSLATEGRAY:2005441023,LIGHTSLATEGREY:2005441023,LIGHTSTEELBLUE:2965692159,LIGHTYELLOW:4294959359,LIME:16711935,LIMEGREEN:852308735,LINEN:4210091775,MAGENTA:4278255615,MAROON:2147483903,MEDIUMAQUAMARINE:1724754687,MEDIUMBLUE:52735,MEDIUMORCHID:3126187007,MEDIUMPURPLE:2473647103,MEDIUMSEAGREEN:1018393087,MEDIUMSLATEBLUE:2070474495,MEDIUMSPRINGGREEN:16423679,MEDIUMTURQUOISE:1221709055,MEDIUMVIOLETRED:3340076543,MIDNIGHTBLUE:421097727,MINTCREAM:4127193855,MISTYROSE:4293190143,MOCCASIN:4293178879,NAVAJOWHITE:4292783615,NAVY:33023,OLDLACE:4260751103,OLIVE:2155872511,OLIVEDRAB:1804477439,ORANGE:4289003775,ORANGERED:4282712319,ORCHID:3664828159,PALEGOLDENROD:4008225535,PALEGREEN:2566625535,PALETURQUOISE:2951671551,PALEVIOLETRED:3681588223,PAPAYAWHIP:4293907967,PEACHPUFF:4292524543,PERU:3448061951,PINK:4290825215,PLUM:3718307327,POWDERBLUE:2967529215,PURPLE:2147516671,REBECCAPURPLE:1714657791,RED:4278190335,ROSYBROWN:3163525119,ROYALBLUE:1097458175,SADDLEBROWN:2336560127,SALMON:4202722047,SANDYBROWN:4104413439,SEAGREEN:780883967,SEASHELL:4294307583,SIENNA:2689740287,SILVER:3233857791,SKYBLUE:2278484991,SLATEBLUE:1784335871,SLATEGRAY:1887473919,SLATEGREY:1887473919,SNOW:4294638335,SPRINGGREEN:16744447,STEELBLUE:1182971135,TAN:3535047935,TEAL:8421631,THISTLE:3636451583,TOMATO:4284696575,TRANSPARENT:0,TURQUOISE:1088475391,VIOLET:4001558271,WHEAT:4125012991,WHITE:4294967295,WHITESMOKE:4126537215,YELLOW:4294902015,YELLOWGREEN:2597139199},hr={name:"background-clip",initialValue:"border-box",prefix:!1,type:1,parse:function(e,t){return t.map((function(e){if(Qn(e))switch(e.value){case"padding-box":return 1;case"content-box":return 2}return 0}))}},fr={name:"background-color",initialValue:"transparent",prefix:!1,type:3,format:"color"},pr=function(e,t){var n=nr.parse(e,t[0]),r=t[1];return r&&On(r)?{color:n,stop:r}:{color:n,stop:null}},gr=function(e,t){var 
n=e[0],r=e[e.length-1];null===n.stop&&(n.stop=zn),null===r.stop&&(r.stop=Kn);for(var i=[],A=0,a=0;aA?i.push(s):i.push(A),A=s}else i.push(null)}var l=null;for(a=0;ae.optimumDistance)?{optimumCorner:t,optimumDistance:o}:e}),{optimumDistance:i?1/0:-1/0,optimumCorner:null}).optimumCorner},Br=function(e,t,n,r,i){var A=0,a=0;switch(e.size){case 0:0===e.shape?A=a=Math.min(Math.abs(t),Math.abs(t-r),Math.abs(n),Math.abs(n-i)):1===e.shape&&(A=Math.min(Math.abs(t),Math.abs(t-r)),a=Math.min(Math.abs(n),Math.abs(n-i)));break;case 2:if(0===e.shape)A=a=Math.min(yr(t,n),yr(t,n-i),yr(t-r,n),yr(t-r,n-i));else if(1===e.shape){var o=Math.min(Math.abs(n),Math.abs(n-i))/Math.min(Math.abs(t),Math.abs(t-r)),s=wr(r,i,t,n,!0),l=s[0],u=s[1];a=o*(A=yr(l-t,(u-n)/o))}break;case 1:0===e.shape?A=a=Math.max(Math.abs(t),Math.abs(t-r),Math.abs(n),Math.abs(n-i)):1===e.shape&&(A=Math.max(Math.abs(t),Math.abs(t-r)),a=Math.max(Math.abs(n),Math.abs(n-i)));break;case 3:if(0===e.shape)A=a=Math.max(yr(t,n),yr(t,n-i),yr(t-r,n),yr(t-r,n-i));else if(1===e.shape){o=Math.max(Math.abs(n),Math.abs(n-i))/Math.max(Math.abs(t),Math.abs(t-r));var c=wr(r,i,t,n,!1);l=c[0],u=c[1],a=o*(A=yr(l-t,(u-n)/o))}}return Array.isArray(e.size)&&(A=jn(e.size[0],r),a=2===e.size.length?jn(e.size[1],i):A),[A,a]},_r=function(e,t){var n=tr(180),r=[];return Hn(t).forEach((function(t,i){if(0===i){var A=t[0];if(20===A.type&&-1!==["top","left","right","bottom"].indexOf(A.value))return void(n=er(t));if($n(A))return void(n=(Zn.parse(e,A)+tr(270))%tr(360))}var a=pr(e,t);r.push(a)})),{angle:n,stops:r,type:1}},br="closest-side",xr="farthest-side",Cr="closest-corner",Sr="farthest-corner",Er="circle",Ur="ellipse",Mr="cover",Fr="contain",Tr=function(e,t){var n=0,r=3,i=[],A=[];return Hn(t).forEach((function(t,a){var o=!0;if(0===a?o=t.reduce((function(e,t){if(Qn(t))switch(t.value){case"center":return A.push(Gn),!1;case"top":case"left":return A.push(zn),!1;case"right":case"bottom":return A.push(Kn),!1}else if(On(t)||Nn(t))return A.push(t),!1;return e}),o):1===a&&(o=t.reduce((function(e,t){if(Qn(t))switch(t.value){case Er:return n=0,!1;case Ur:return n=1,!1;case Fr:case br:return r=0,!1;case xr:return r=1,!1;case Cr:return r=2,!1;case Mr:case Sr:return r=3,!1}else if(Nn(t)||On(t))return Array.isArray(r)||(r=[]),r.push(t),!1;return e}),o)),o){var s=pr(e,t);i.push(s)}})),{size:r,shape:n,stops:i,position:A,type:2}},kr=function(e){return 1===e.type},Qr=function(e){return 2===e.type},Lr={name:"image",parse:function(e,t){if(22===t.type){var n={url:t.value,type:0};return e.cache.addImage(t.value),n}if(18===t.type){var r=Rr[t.name];if("undefined"===typeof r)throw new Error('Attempting to parse an unsupported image function "'+t.name+'"');return r(e,t.values)}throw new Error("Unsupported image type "+t.type)}};function Dr(e){return!(20===e.type&&"none"===e.value)&&(18!==e.type||!!Rr[e.name])}var Ir,Rr={"linear-gradient":function(e,t){var n=tr(180),r=[];return Hn(t).forEach((function(t,i){if(0===i){var A=t[0];if(20===A.type&&"to"===A.value)return void(n=er(t));if($n(A))return void(n=Zn.parse(e,A))}var a=pr(e,t);r.push(a)})),{angle:n,stops:r,type:1}},"-moz-linear-gradient":_r,"-ms-linear-gradient":_r,"-o-linear-gradient":_r,"-webkit-linear-gradient":_r,"radial-gradient":function(e,t){var n=0,r=3,i=[],A=[];return Hn(t).forEach((function(t,a){var o=!0;if(0===a){var s=!1;o=t.reduce((function(e,t){if(s)if(Qn(t))switch(t.value){case"center":return A.push(Gn),e;case"top":case"left":return A.push(zn),e;case"right":case"bottom":return A.push(Kn),e}else(On(t)||Nn(t))&&A.push(t);else 
if(Qn(t))switch(t.value){case Er:return n=0,!1;case Ur:return n=1,!1;case"at":return s=!0,!1;case br:return r=0,!1;case Mr:case xr:return r=1,!1;case Fr:case Cr:return r=2,!1;case Sr:return r=3,!1}else if(Nn(t)||On(t))return Array.isArray(r)||(r=[]),r.push(t),!1;return e}),o)}if(o){var l=pr(e,t);i.push(l)}})),{size:r,shape:n,stops:i,position:A,type:2}},"-moz-radial-gradient":Tr,"-ms-radial-gradient":Tr,"-o-radial-gradient":Tr,"-webkit-radial-gradient":Tr,"-webkit-gradient":function(e,t){var n=tr(180),r=[],i=1,A=0,a=3,o=[];return Hn(t).forEach((function(t,n){var A=t[0];if(0===n){if(Qn(A)&&"linear"===A.value)return void(i=1);if(Qn(A)&&"radial"===A.value)return void(i=2)}if(18===A.type)if("from"===A.name){var a=nr.parse(e,A.values[0]);r.push({stop:zn,color:a})}else if("to"===A.name)a=nr.parse(e,A.values[0]),r.push({stop:Kn,color:a});else if("color-stop"===A.name){var o=A.values.filter(Rn);if(2===o.length){a=nr.parse(e,o[1]);var s=o[0];kn(s)&&r.push({stop:{type:16,number:100*s.number,flags:s.flags},color:a})}}})),1===i?{angle:(n+tr(180))%tr(360),stops:r,type:i}:{size:a,shape:A,stops:r,position:o,type:i}}},Hr={name:"background-image",initialValue:"none",type:1,prefix:!1,parse:function(e,t){if(0===t.length)return[];var n=t[0];return 20===n.type&&"none"===n.value?[]:t.filter((function(e){return Rn(e)&&Dr(e)})).map((function(t){return Lr.parse(e,t)}))}},Pr={name:"background-origin",initialValue:"border-box",prefix:!1,type:1,parse:function(e,t){return t.map((function(e){if(Qn(e))switch(e.value){case"padding-box":return 1;case"content-box":return 2}return 0}))}},Nr={name:"background-position",initialValue:"0% 0%",type:1,prefix:!1,parse:function(e,t){return Hn(t).map((function(e){return e.filter(On)})).map(Vn)}},Or={name:"background-repeat",initialValue:"repeat",prefix:!1,type:1,parse:function(e,t){return Hn(t).map((function(e){return e.filter(Qn).map((function(e){return e.value})).join(" ")})).map(Vr)}},Vr=function(e){switch(e){case"no-repeat":return 1;case"repeat-x":case"repeat no-repeat":return 2;case"repeat-y":case"no-repeat repeat":return 3;default:return 0}};!function(e){e.AUTO="auto",e.CONTAIN="contain",e.COVER="cover"}(Ir||(Ir={}));var zr,Gr={name:"background-size",initialValue:"0",prefix:!1,type:1,parse:function(e,t){return Hn(t).map((function(e){return e.filter(Kr)}))}},Kr=function(e){return Qn(e)||On(e)},Wr=function(e){return{name:"border-"+e+"-color",initialValue:"transparent",prefix:!1,type:3,format:"color"}},jr=Wr("top"),Xr=Wr("right"),qr=Wr("bottom"),Yr=Wr("left"),Jr=function(e){return{name:"border-radius-"+e,initialValue:"0 0",prefix:!1,type:1,parse:function(e,t){return Vn(t.filter(On))}}},Zr=Jr("top-left"),$r=Jr("top-right"),ei=Jr("bottom-right"),ti=Jr("bottom-left"),ni=function(e){return{name:"border-"+e+"-style",initialValue:"solid",prefix:!1,type:2,parse:function(e,t){switch(t){case"none":return 0;case"dashed":return 2;case"dotted":return 3;case"double":return 4}return 1}}},ri=ni("top"),ii=ni("right"),Ai=ni("bottom"),ai=ni("left"),oi=function(e){return{name:"border-"+e+"-width",initialValue:"0",type:0,prefix:!1,parse:function(e,t){return Tn(t)?t.number:0}}},si=oi("top"),li=oi("right"),ui=oi("bottom"),ci=oi("left"),di={name:"color",initialValue:"transparent",prefix:!1,type:3,format:"color"},hi={name:"direction",initialValue:"ltr",prefix:!1,type:2,parse:function(e,t){return"rtl"===t?1:0}},fi={name:"display",initialValue:"inline-block",prefix:!1,type:1,parse:function(e,t){return t.filter(Qn).reduce((function(e,t){return 
e|pi(t.value)}),0)}},pi=function(e){switch(e){case"block":case"-webkit-box":return 2;case"inline":return 4;case"run-in":return 8;case"flow":return 16;case"flow-root":return 32;case"table":return 64;case"flex":case"-webkit-flex":return 128;case"grid":case"-ms-grid":return 256;case"ruby":return 512;case"subgrid":return 1024;case"list-item":return 2048;case"table-row-group":return 4096;case"table-header-group":return 8192;case"table-footer-group":return 16384;case"table-row":return 32768;case"table-cell":return 65536;case"table-column-group":return 131072;case"table-column":return 262144;case"table-caption":return 524288;case"ruby-base":return 1048576;case"ruby-text":return 2097152;case"ruby-base-container":return 4194304;case"ruby-text-container":return 8388608;case"contents":return 16777216;case"inline-block":return 33554432;case"inline-list-item":return 67108864;case"inline-table":return 134217728;case"inline-flex":return 268435456;case"inline-grid":return 536870912}return 0},gi={name:"float",initialValue:"none",prefix:!1,type:2,parse:function(e,t){switch(t){case"left":return 1;case"right":return 2;case"inline-start":return 3;case"inline-end":return 4}return 0}},mi={name:"letter-spacing",initialValue:"0",prefix:!1,type:0,parse:function(e,t){return 20===t.type&&"normal"===t.value?0:17===t.type||15===t.type?t.number:0}};!function(e){e.NORMAL="normal",e.STRICT="strict"}(zr||(zr={}));var vi,yi={name:"line-break",initialValue:"normal",prefix:!1,type:2,parse:function(e,t){return"strict"===t?zr.STRICT:zr.NORMAL}},wi={name:"line-height",initialValue:"normal",prefix:!1,type:4},Bi=function(e,t){return Qn(e)&&"normal"===e.value?1.2*t:17===e.type?t*e.number:On(e)?jn(e,t):t},_i={name:"list-style-image",initialValue:"none",type:0,prefix:!1,parse:function(e,t){return 20===t.type&&"none"===t.value?null:Lr.parse(e,t)}},bi={name:"list-style-position",initialValue:"outside",prefix:!1,type:2,parse:function(e,t){return"inside"===t?0:1}},xi={name:"list-style-type",initialValue:"none",prefix:!1,type:2,parse:function(e,t){switch(t){case"disc":return 0;case"circle":return 1;case"square":return 2;case"decimal":return 3;case"cjk-decimal":return 4;case"decimal-leading-zero":return 5;case"lower-roman":return 6;case"upper-roman":return 7;case"lower-greek":return 8;case"lower-alpha":return 9;case"upper-alpha":return 10;case"arabic-indic":return 11;case"armenian":return 12;case"bengali":return 13;case"cambodian":return 14;case"cjk-earthly-branch":return 15;case"cjk-heavenly-stem":return 16;case"cjk-ideographic":return 17;case"devanagari":return 18;case"ethiopic-numeric":return 19;case"georgian":return 20;case"gujarati":return 21;case"gurmukhi":case"hebrew":return 22;case"hiragana":return 23;case"hiragana-iroha":return 24;case"japanese-formal":return 25;case"japanese-informal":return 26;case"kannada":return 27;case"katakana":return 28;case"katakana-iroha":return 29;case"khmer":return 30;case"korean-hangul-formal":return 31;case"korean-hanja-formal":return 32;case"korean-hanja-informal":return 33;case"lao":return 34;case"lower-armenian":return 35;case"malayalam":return 36;case"mongolian":return 37;case"myanmar":return 38;case"oriya":return 39;case"persian":return 40;case"simp-chinese-formal":return 41;case"simp-chinese-informal":return 42;case"tamil":return 43;case"telugu":return 44;case"thai":return 45;case"tibetan":return 46;case"trad-chinese-formal":return 47;case"trad-chinese-informal":return 48;case"upper-armenian":return 49;case"disclosure-open":return 50;case"disclosure-closed":return 
51;default:return-1}}},Ci=function(e){return{name:"margin-"+e,initialValue:"0",prefix:!1,type:4}},Si=Ci("top"),Ei=Ci("right"),Ui=Ci("bottom"),Mi=Ci("left"),Fi={name:"overflow",initialValue:"visible",prefix:!1,type:1,parse:function(e,t){return t.filter(Qn).map((function(e){switch(e.value){case"hidden":return 1;case"scroll":return 2;case"clip":return 3;case"auto":return 4;default:return 0}}))}},Ti={name:"overflow-wrap",initialValue:"normal",prefix:!1,type:2,parse:function(e,t){return"break-word"===t?"break-word":"normal"}},ki=function(e){return{name:"padding-"+e,initialValue:"0",prefix:!1,type:3,format:"length-percentage"}},Qi=ki("top"),Li=ki("right"),Di=ki("bottom"),Ii=ki("left"),Ri={name:"text-align",initialValue:"left",prefix:!1,type:2,parse:function(e,t){switch(t){case"right":return 2;case"center":case"justify":return 1;default:return 0}}},Hi={name:"position",initialValue:"static",prefix:!1,type:2,parse:function(e,t){switch(t){case"relative":return 1;case"absolute":return 2;case"fixed":return 3;case"sticky":return 4}return 0}},Pi={name:"text-shadow",initialValue:"none",type:1,prefix:!1,parse:function(e,t){return 1===t.length&&Dn(t[0],"none")?[]:Hn(t).map((function(t){for(var n={color:dr.TRANSPARENT,offsetX:zn,offsetY:zn,blur:zn},r=0,i=0;i1?1:0],this.overflowWrap=vA(e,Ti,t.overflowWrap),this.paddingTop=vA(e,Qi,t.paddingTop),this.paddingRight=vA(e,Li,t.paddingRight),this.paddingBottom=vA(e,Di,t.paddingBottom),this.paddingLeft=vA(e,Ii,t.paddingLeft),this.paintOrder=vA(e,dA,t.paintOrder),this.position=vA(e,Hi,t.position),this.textAlign=vA(e,Ri,t.textAlign),this.textDecorationColor=vA(e,Ji,null!==(n=t.textDecorationColor)&&void 0!==n?n:t.color),this.textDecorationLine=vA(e,Zi,null!==(r=t.textDecorationLine)&&void 0!==r?r:t.textDecoration),this.textShadow=vA(e,Pi,t.textShadow),this.textTransform=vA(e,Ni,t.textTransform),this.transform=vA(e,Oi,t.transform),this.transformOrigin=vA(e,Ki,t.transformOrigin),this.visibility=vA(e,Wi,t.visibility),this.webkitTextStrokeColor=vA(e,hA,t.webkitTextStrokeColor),this.webkitTextStrokeWidth=vA(e,fA,t.webkitTextStrokeWidth),this.wordBreak=vA(e,ji,t.wordBreak),this.zIndex=vA(e,Xi,t.zIndex)}return e.prototype.isVisible=function(){return this.display>0&&this.opacity>0&&0===this.visibility},e.prototype.isTransparent=function(){return rr(this.backgroundColor)},e.prototype.isTransformed=function(){return null!==this.transform},e.prototype.isPositioned=function(){return 0!==this.position},e.prototype.isPositionedWithZIndex=function(){return this.isPositioned()&&!this.zIndex.auto},e.prototype.isFloating=function(){return 0!==this.float},e.prototype.isInlineLevel=function(){return iA(this.display,4)||iA(this.display,33554432)||iA(this.display,268435456)||iA(this.display,536870912)||iA(this.display,67108864)||iA(this.display,134217728)},e}(),gA=function(){function e(e,t){this.content=vA(e,AA,t.content),this.quotes=vA(e,lA,t.quotes)}return e}(),mA=function(){function e(e,t){this.counterIncrement=vA(e,aA,t.counterIncrement),this.counterReset=vA(e,oA,t.counterReset)}return e}(),vA=function(e,t,n){var r=new Mn,i=null!==n&&"undefined"!==typeof n?n.toString():t.initialValue;r.write(i);var A=new Fn(r.read());switch(t.type){case 2:var a=A.parseComponentValue();return t.parse(e,Qn(a)?a.value:t.initialValue);case 0:return t.parse(e,A.parseComponentValue());case 1:return t.parse(e,A.parseComponentValues());case 4:return A.parseComponentValue();case 3:switch(t.format){case"angle":return Zn.parse(e,A.parseComponentValue());case"color":return 
nr.parse(e,A.parseComponentValue());case"image":return Lr.parse(e,A.parseComponentValue());case"length":var o=A.parseComponentValue();return Nn(o)?o:zn;case"length-percentage":var s=A.parseComponentValue();return On(s)?s:zn;case"time":return qi.parse(e,A.parseComponentValue())}}},yA="data-html2canvas-debug",wA=function(e){switch(e.getAttribute(yA)){case"all":return 1;case"clone":return 2;case"parse":return 3;case"render":return 4;default:return 0}},BA=function(e,t){var n=wA(e);return 1===n||t===n},_A=function(){function e(e,t){this.context=e,this.textNodes=[],this.elements=[],this.flags=0,BA(t,3),this.styles=new pA(e,window.getComputedStyle(t,null)),lo(t)&&(this.styles.animationDuration.some((function(e){return e>0}))&&(t.style.animationDuration="0s"),null!==this.styles.transform&&(t.style.transform="none")),this.bounds=o(this.context,t),BA(t,4)&&(this.flags|=16)}return e}(),bA="AAAAAAAAAAAAEA4AGBkAAFAaAAACAAAAAAAIABAAGAAwADgACAAQAAgAEAAIABAACAAQAAgAEAAIABAACAAQAAgAEAAIABAAQABIAEQATAAIABAACAAQAAgAEAAIABAAVABcAAgAEAAIABAACAAQAGAAaABwAHgAgACIAI4AlgAIABAAmwCjAKgAsAC2AL4AvQDFAMoA0gBPAVYBWgEIAAgACACMANoAYgFkAWwBdAF8AX0BhQGNAZUBlgGeAaMBlQGWAasBswF8AbsBwwF0AcsBYwHTAQgA2wG/AOMBdAF8AekB8QF0AfkB+wHiAHQBfAEIAAMC5gQIAAsCEgIIAAgAFgIeAggAIgIpAggAMQI5AkACygEIAAgASAJQAlgCYAIIAAgACAAKBQoFCgUTBRMFGQUrBSsFCAAIAAgACAAIAAgACAAIAAgACABdAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABoAmgCrwGvAQgAbgJ2AggAHgEIAAgACADnAXsCCAAIAAgAgwIIAAgACAAIAAgACACKAggAkQKZAggAPADJAAgAoQKkAqwCsgK6AsICCADJAggA0AIIAAgACAAIANYC3gIIAAgACAAIAAgACABAAOYCCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAkASoB+QIEAAgACAA8AEMCCABCBQgACABJBVAFCAAIAAgACAAIAAgACAAIAAgACABTBVoFCAAIAFoFCABfBWUFCAAIAAgACAAIAAgAbQUIAAgACAAIAAgACABzBXsFfQWFBYoFigWKBZEFigWKBYoFmAWfBaYFrgWxBbkFCAAIAAgACAAIAAgACAAIAAgACAAIAMEFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAMgFCADQBQgACAAIAAgACAAIAAgACAAIAAgACAAIAO4CCAAIAAgAiQAIAAgACABAAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAD0AggACAD8AggACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIANYFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAg
ACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAMDvwAIAAgAJAIIAAgACAAIAAgACAAIAAgACwMTAwgACAB9BOsEGwMjAwgAKwMyAwsFYgE3A/MEPwMIAEUDTQNRAwgAWQOsAGEDCAAIAAgACAAIAAgACABpAzQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFIQUoBSwFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABtAwgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABMAEwACAAIAAgACAAIABgACAAIAAgACAC/AAgACAAyAQgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACACAAIAAwAAgACAAIAAgACAAIAAgACAAIAAAARABIAAgACAAIABQASAAIAAgAIABwAEAAjgCIABsAqAC2AL0AigDQAtwC+IJIQqVAZUBWQqVAZUBlQGVAZUBlQGrC5UBlQGVAZUBlQGVAZUBlQGVAXsKlQGVAbAK6wsrDGUMpQzlDJUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQ
GVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAfAKAAuZA64AtwCJALoC6ADwAAgAuACgA/oEpgO6AqsD+AAIAAgAswMIAAgACAAIAIkAuwP5AfsBwwPLAwgACAAIAAgACADRA9kDCAAIAOED6QMIAAgACAAIAAgACADuA/YDCAAIAP4DyQAIAAgABgQIAAgAXQAOBAgACAAIAAgACAAIABMECAAIAAgACAAIAAgACAD8AAQBCAAIAAgAGgQiBCoECAExBAgAEAEIAAgACAAIAAgACAAIAAgACAAIAAgACAA4BAgACABABEYECAAIAAgATAQYAQgAVAQIAAgACAAIAAgACAAIAAgACAAIAFoECAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAOQEIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAB+BAcACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAEABhgSMBAgACAAIAAgAlAQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAwAEAAQABAADAAMAAwADAAQABAAEAAQABAAEAAQABHATAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAdQMIAAgACAAIAAgACAAIAMkACAAIAAgAfQMIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACACFA4kDCAAIAAgACAAIAOcBCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAIcDCAAIAAgACAAIAAgACAAIAAgACAAIAJEDCAAIAAgACADFAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABgBAgAZgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAbAQCBXIECAAIAHkECAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABAAJwEQACjBKoEsgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAC6BMIECAAIAAgACAAIAAgACABmBAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAxwQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAGYECAAIAAgAzgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAigWKBYoFigWKBYoFigWKBd0FXwUIAOIF6gXxBYoF3gT5BQAGCAaKBYoFigWKBYoFigWKBYoFigWKBYoFigXWBIoFigWKBYoFigWKBYoFigWKBYsFEAaKBYoFigWKBYoFigWKBRQGCACKBYoFigWKBQgACAAIANEECAAIABgGigUgBggAJgYIAC4GMwaKBYoF0wQ3Bj4GigWKBYoFigWKBYoFigWKBYoFigWKBYoFigUIAAgACAAIAAgACAAIAAgAigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWLBf///////wQABAAEAAQABAAEAAQABAAEAAQAAwAEAAQAAgAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAQADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFA
AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAUAAAAFAAUAAAAFAAUAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUAAQAAAAUABQAFAAUABQAFAAAAAAAFAAUAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAFAAUAAQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUABQAFAAAABwAHAAcAAAAHAAcABwAFAAEAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAcABwAFAAUAAAAAAAEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAAAAQABAAAAAAAAAAAAAAAFAAUABQAFAAAABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABwAHAAcAAAAHAAcAAAAAAAUABQAHAAUAAQAHAAEABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABwABAAUABQAFAAUAAAAAAAAAAAAAAAEAAQABAAEAAQABAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABQANAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAEAAQABAAEAAQABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAABQAHAAUABQAFAAAAAAAAAAcABQAFAAUABQAFAAQABAAEAAQABAAEAAQABAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUAAAAFAAUABQAFAAUAAAAFAAUABQAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAAAAAAAAAAAAUABQAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAUAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABwAHAAcABwAFAAcABwAAAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAUABwAHAAUABQAFAAUAAAAAAAcABwAAAAAABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
BQAAAAAABQAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAABwAHAAcABQAFAAAAAAAAAAAABQAFAAAAAAAFAAUABQAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAFAAUABQAFAAUAAAAFAAUABwAAAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAFAAUABwAFAAUABQAFAAAAAAAHAAcAAAAAAAcABwAFAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABwAAAAAAAAAHAAcABwAAAAcABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAABQAHAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAcABwAAAAUABQAFAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABQAHAAcABQAHAAcAAAAFAAcABwAAAAcABwAFAAUAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAFAAcABwAFAAUABQAAAAUAAAAHAAcABwAHAAcABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAHAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAABwAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAUAAAAFAAAAAAAAAAAABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUABQAFAAUAAAAFAAUAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABwAFAAUABQAFAAUABQAAAAUABQAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABQAFAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABQAFAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAHAAUABQAFAAUABQAFAAUABwAHAAcABwAHAAcABwAHAAUABwAHAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABwAHAAcABwAFAAUABwAHAAcAAAAAAAAAAAAHAAcABQAHAAcABwAHAAcABwAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAHAAUABQAFAAUABQAFAAUAAAAFAAAABQAAAAAABQAFAAUABQAFAAUABQAFAAcABwAHAAcABwAHAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAUABQAFAAUABQAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABwAFAAcABwAHAAcABwAFAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAUABQAFAAUABwAHAAUABQAHAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABQAFAAcABwAHAAUABwAFAAUABQAHAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAUABQAFAAUABQAFAAUABQAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAcABQAFAAUABQAFAAUABQAAAAAAAAAAAAUAAAA
AAAAAAAAAAAAABQAAAAAABwAFAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUAAAAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAABQAAAAAAAAAFAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAUABQAHAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAHAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABwAFAAUABQAFAAcABwAFAAUABwAHAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAcABwAFAAUABwAHAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAFAAUABQAAAAAABQAFAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAFAAcABwAAAAAAAAAAAAAABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAFAAcABwAFAAcABwAAAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAFAAUABQAAAAUABQAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABwAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABQAFAAUABQAFAAUABQAFAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAHAAcABQAHAAUABQAAAAAAAAAAAAAAAAAFAAAABwAHAAcABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAcABwAAAAAABwAHAAAAAAAHAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABwAHAAUABQAFAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABQAFAAUABQAFAAUABwAFAAcABwAFAAcABQAFAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABQAFAAUABQAAAAAABwAHAAcABwAFAAUABwAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAHAAUABQAFAAUABQAFAAUABQAHAAcABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAFAAcABwAFAAUABQAFAAUABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAcABwAFAAUABQAFAAcABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABQAHAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAAAAAAFAAUABwAHAAcABwAFAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABwAHAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAHAAUABQAFAAUABQAFAAUABwAFAAUABwAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAAAAAAAABQAAAAUABQAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAHAAcAAAAFAAUAAAAHAAcABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAAAAAAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAUABQAFAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAABQAFAAUABQAFAAUABQAAAAUABQAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAFAAUABQAFAAUADgAOAA4ADgAOAA4ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAAAAAAAAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAMAAwADAAMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkAAAAAAAAAAAAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAAAAAAAAAAAAsADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwACwAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAADgAOAA4AAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAAAA4ADgAOAA4ADgAOAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAA4AAAAOAAAAAAAAAAAAAAAAAA4AAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAADgAAAAAAAAAAAA4AAAAOAAAAAAAAAAAADgAOAA4AAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4AAAAAAA4ADgAOAA4ADgAOAA4ADgAOAAAADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4AAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAOAA4ADgAOAA4ADgAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAAAAAAA=",xA="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/",CA="undefined"===typeof Uint8Array?[]:new Uint8Array(256),SA=0;SA>4,u[s++]=(15&r)<<4|i>>2,u[s++]=(3&i)<<6|63&A;return l},UA=function(e){for(var t=e.length,n=[],r=0;r>FA,LA=(1<>FA)+32,IA=65536>>TA,RA=(1<=0){if(e<55296||e>56319&&e<=65535)return t=((t=this.index[e>>FA])<>FA)])<>TA),t=this.index[t],t+=e>>FA&RA,t=((t=this.index[t])<=55296&&i<=56319&&n>10),a%1024+56320)),(i+1===n||r.length>16384)&&(A+=String.fromCharCode.apply(String,r),r.length=0)}return A},sa=NA(bA),la="\xd7",ua="\xf7",ca=function(e){return sa.get(e)},da=function(e,t,n){var r=n-2,i=t[r],A=t[n-1],a=t[n];if(A===jA&&a===XA)return la;if(A===jA||A===XA||A===qA)return ua;if(a===jA||a===XA||a===qA)return ua;if(A===ZA&&-1!==[ZA,$A,ta,na].indexOf(a))return la;if((A===ta||A===$A)&&(a===$A||a===ea))return la;if((A===na||A===ea)&&a===ea)return la;if(a===ra||a===YA)return la;if(a===JA)return la;if(A===WA)return la;if(A===ra&&a===ia){for(;i===YA;)i=t[--r];if(i===ia)return la}if(A===Aa&&a===Aa){for(var o=0;i===Aa;)o++,i=t[--r];if(o%2===0)return la}return ua},ha=function(e){var t=aa(e),n=t.length,r=0,i=0,A=t.map(ca);return{next:function(){if(r>=n)return{done:!0,value:null};for(var e=la;ra.x||i.y>a.y;return a=i,0===t||o}));return e.body.removeChild(t),o},ma=function(){return"undefined"!==typeof(new 
Image).crossOrigin},va=function(){return"string"===typeof(new XMLHttpRequest).responseType},ya=function(e){var t=new Image,n=e.createElement("canvas"),r=n.getContext("2d");if(!r)return!1;t.src="data:image/svg+xml,";try{r.drawImage(t,0,0),n.toDataURL()}catch(Rt){return!1}return!0},wa=function(e){return 0===e[0]&&255===e[1]&&0===e[2]&&255===e[3]},Ba=function(e){var t=e.createElement("canvas"),n=100;t.width=n,t.height=n;var r=t.getContext("2d");if(!r)return Promise.reject(!1);r.fillStyle="rgb(0, 255, 0)",r.fillRect(0,0,n,n);var i=new Image,A=t.toDataURL();i.src=A;var a=_a(n,n,0,0,i);return r.fillStyle="red",r.fillRect(0,0,n,n),ba(a).then((function(t){r.drawImage(t,0,0);var i=r.getImageData(0,0,n,n).data;r.fillStyle="red",r.fillRect(0,0,n,n);var a=e.createElement("div");return a.style.backgroundImage="url("+A+")",a.style.height=n+"px",wa(i)?ba(_a(n,n,0,0,a)):Promise.reject(!1)})).then((function(e){return r.drawImage(e,0,0),wa(r.getImageData(0,0,n,n).data)})).catch((function(){return!1}))},_a=function(e,t,n,r,i){var A="http://www.w3.org/2000/svg",a=document.createElementNS(A,"svg"),o=document.createElementNS(A,"foreignObject");return a.setAttributeNS(null,"width",e.toString()),a.setAttributeNS(null,"height",t.toString()),o.setAttributeNS(null,"width","100%"),o.setAttributeNS(null,"height","100%"),o.setAttributeNS(null,"x",n.toString()),o.setAttributeNS(null,"y",r.toString()),o.setAttributeNS(null,"externalResourcesRequired","true"),a.appendChild(o),o.appendChild(i),a},ba=function(e){return new Promise((function(t,n){var r=new Image;r.onload=function(){return t(r)},r.onerror=n,r.src="data:image/svg+xml;charset=utf-8,"+encodeURIComponent((new XMLSerializer).serializeToString(e))}))},xa={get SUPPORT_RANGE_BOUNDS(){var e=pa(document);return Object.defineProperty(xa,"SUPPORT_RANGE_BOUNDS",{value:e}),e},get SUPPORT_WORD_BREAKING(){var e=xa.SUPPORT_RANGE_BOUNDS&&ga(document);return Object.defineProperty(xa,"SUPPORT_WORD_BREAKING",{value:e}),e},get SUPPORT_SVG_DRAWING(){var e=ya(document);return Object.defineProperty(xa,"SUPPORT_SVG_DRAWING",{value:e}),e},get SUPPORT_FOREIGNOBJECT_DRAWING(){var e="function"===typeof Array.from&&"function"===typeof window.fetch?Ba(document):Promise.resolve(!1);return Object.defineProperty(xa,"SUPPORT_FOREIGNOBJECT_DRAWING",{value:e}),e},get SUPPORT_CORS_IMAGES(){var e=ma();return Object.defineProperty(xa,"SUPPORT_CORS_IMAGES",{value:e}),e},get SUPPORT_RESPONSE_TYPE(){var e=va();return Object.defineProperty(xa,"SUPPORT_RESPONSE_TYPE",{value:e}),e},get SUPPORT_CORS_XHR(){var e="withCredentials"in new XMLHttpRequest;return Object.defineProperty(xa,"SUPPORT_CORS_XHR",{value:e}),e},get SUPPORT_NATIVE_TEXT_SEGMENTATION(){var e=!("undefined"===typeof Intl||!Intl.Segmenter);return Object.defineProperty(xa,"SUPPORT_NATIVE_TEXT_SEGMENTATION",{value:e}),e}},Ca=function(){function e(e,t){this.text=e,this.bounds=t}return e}(),Sa=function(e,t,n,r){var i=Ta(t,n),A=[],o=0;return i.forEach((function(t){if(n.textDecorationLine.length||t.trim().length>0)if(xa.SUPPORT_RANGE_BOUNDS){var i=Ua(r,o,t.length).getClientRects();if(i.length>1){var s=Ma(t),l=0;s.forEach((function(t){A.push(new Ca(t,a.fromDOMRectList(e,Ua(r,l+o,t.length).getClientRects()))),l+=t.length}))}else A.push(new Ca(t,a.fromDOMRectList(e,i)))}else{var u=r.splitText(t.length);A.push(new Ca(t,Ea(e,r))),r=u}else xa.SUPPORT_RANGE_BOUNDS||(r=r.splitText(t.length));o+=t.length})),A},Ea=function(e,t){var n=t.ownerDocument;if(n){var r=n.createElement("html2canvaswrapper");r.appendChild(t.cloneNode(!0));var 
i=t.parentNode;if(i){i.replaceChild(r,t);var A=o(e,r);return r.firstChild&&i.replaceChild(r.firstChild,r),A}}return a.EMPTY},Ua=function(e,t,n){var r=e.ownerDocument;if(!r)throw new Error("Node has no owner document");var i=r.createRange();return i.setStart(e,t),i.setEnd(e,t+n),i},Ma=function(e){if(xa.SUPPORT_NATIVE_TEXT_SEGMENTATION){var t=new Intl.Segmenter(void 0,{granularity:"grapheme"});return Array.from(t.segment(e)).map((function(e){return e.segment}))}return fa(e)},Fa=function(e,t){if(xa.SUPPORT_NATIVE_TEXT_SEGMENTATION){var n=new Intl.Segmenter(void 0,{granularity:"word"});return Array.from(n.segment(e)).map((function(e){return e.segment}))}return Qa(e,t)},Ta=function(e,t){return 0!==t.letterSpacing?Ma(e):Fa(e,t)},ka=[32,160,4961,65792,65793,4153,4241],Qa=function(e,t){for(var n,r=Ve(e,{lineBreak:t.lineBreak,wordBreak:"break-word"===t.overflowWrap?"break-word":t.wordBreak}),i=[],A=function(){if(n.value){var e=n.value.slice(),t=l(e),r="";t.forEach((function(e){-1===ka.indexOf(e)?r+=u(e):(r.length&&i.push(r),i.push(u(e)),r="")})),r.length&&i.push(r)}};!(n=r.next()).done;)A();return i},La=function(){function e(e,t,n){this.text=Da(t.data,n.textTransform),this.textBounds=Sa(e,this.text,n,t)}return e}(),Da=function(e,t){switch(t){case 1:return e.toLowerCase();case 3:return e.replace(Ia,Ra);case 2:return e.toUpperCase();default:return e}},Ia=/(^|\s|:|-|\(|\))([a-z])/g,Ra=function(e,t,n){return e.length>0?t+n.toUpperCase():e},Ha=function(e){function n(t,n){var r=e.call(this,t,n)||this;return r.src=n.currentSrc||n.src,r.intrinsicWidth=n.naturalWidth,r.intrinsicHeight=n.naturalHeight,r.context.cache.addImage(r.src),r}return t(n,e),n}(_A),Pa=function(e){function n(t,n){var r=e.call(this,t,n)||this;return r.canvas=n,r.intrinsicWidth=n.width,r.intrinsicHeight=n.height,r}return t(n,e),n}(_A),Na=function(e){function n(t,n){var r=e.call(this,t,n)||this,i=new XMLSerializer,A=o(t,n);return n.setAttribute("width",A.width+"px"),n.setAttribute("height",A.height+"px"),r.svg="data:image/svg+xml,"+encodeURIComponent(i.serializeToString(n)),r.intrinsicWidth=n.width.baseVal.value,r.intrinsicHeight=n.height.baseVal.value,r.context.cache.addImage(r.svg),r}return t(n,e),n}(_A),Oa=function(e){function n(t,n){var r=e.call(this,t,n)||this;return r.value=n.value,r}return t(n,e),n}(_A),Va=function(e){function n(t,n){var r=e.call(this,t,n)||this;return r.start=n.start,r.reversed="boolean"===typeof n.reversed&&!0===n.reversed,r}return t(n,e),n}(_A),za=[{type:15,flags:0,unit:"px",number:3}],Ga=[{type:16,flags:0,number:50}],Ka=function(e){return e.width>e.height?new a(e.left+(e.width-e.height)/2,e.top,e.height,e.height):e.width0)r.textNodes.push(new La(t,A,r.styles));else if(so(A))if(So(A)&&A.assignedNodes)A.assignedNodes().forEach((function(n){return e(t,n,r,i)}));else{var o=ro(t,A);o.styles.isVisible()&&(Ao(A,o,i)?o.flags|=4:ao(o.styles)&&(o.flags|=2),-1!==to.indexOf(A.tagName)&&(o.flags|=8),r.elements.push(o),A.slot,A.shadowRoot?e(t,A.shadowRoot,o,i):xo(A)||go(A)||Co(A)||e(t,A,o,i))}},ro=function(e,t){return wo(t)?new Ha(e,t):vo(t)?new Pa(e,t):go(t)?new Na(e,t):co(t)?new Oa(e,t):ho(t)?new Va(e,t):fo(t)?new Ja(e,t):Co(t)?new Za(e,t):xo(t)?new $a(e,t):Bo(t)?new eo(e,t):new _A(e,t)},io=function(e,t){var n=ro(e,t);return n.flags|=4,no(e,t,n,n),n},Ao=function(e,t,n){return t.styles.isPositionedWithZIndex()||t.styles.opacity<1||t.styles.isTransformed()||mo(e)&&n.styles.isTransparent()},ao=function(e){return e.isPositioned()||e.isFloating()},oo=function(e){return e.nodeType===Node.TEXT_NODE},so=function(e){return 
e.nodeType===Node.ELEMENT_NODE},lo=function(e){return so(e)&&"undefined"!==typeof e.style&&!uo(e)},uo=function(e){return"object"===typeof e.className},co=function(e){return"LI"===e.tagName},ho=function(e){return"OL"===e.tagName},fo=function(e){return"INPUT"===e.tagName},po=function(e){return"HTML"===e.tagName},go=function(e){return"svg"===e.tagName},mo=function(e){return"BODY"===e.tagName},vo=function(e){return"CANVAS"===e.tagName},yo=function(e){return"VIDEO"===e.tagName},wo=function(e){return"IMG"===e.tagName},Bo=function(e){return"IFRAME"===e.tagName},_o=function(e){return"STYLE"===e.tagName},bo=function(e){return"SCRIPT"===e.tagName},xo=function(e){return"TEXTAREA"===e.tagName},Co=function(e){return"SELECT"===e.tagName},So=function(e){return"SLOT"===e.tagName},Eo=function(e){return e.tagName.indexOf("-")>0},Uo=function(){function e(){this.counters={}}return e.prototype.getCounterValue=function(e){var t=this.counters[e];return t&&t.length?t[t.length-1]:1},e.prototype.getCounterValues=function(e){var t=this.counters[e];return t||[]},e.prototype.pop=function(e){var t=this;e.forEach((function(e){return t.counters[e].pop()}))},e.prototype.parse=function(e){var t=this,n=e.counterIncrement,r=e.counterReset,i=!0;null!==n&&n.forEach((function(e){var n=t.counters[e.counter];n&&0!==e.increment&&(i=!1,n.length||n.push(1),n[Math.max(0,n.length-1)]+=e.increment)}));var A=[];return i&&r.forEach((function(e){var n=t.counters[e.counter];A.push(e.counter),n||(n=t.counters[e.counter]=[]),n.push(e.reset)})),A},e}(),Mo={integers:[1e3,900,500,400,100,90,50,40,10,9,5,4,1],values:["M","CM","D","CD","C","XC","L","XL","X","IX","V","IV","I"]},Fo={integers:[9e3,8e3,7e3,6e3,5e3,4e3,3e3,2e3,1e3,900,800,700,600,500,400,300,200,100,90,80,70,60,50,40,30,20,10,9,8,7,6,5,4,3,2,1],values:["\u0554","\u0553","\u0552","\u0551","\u0550","\u054f","\u054e","\u054d","\u054c","\u054b","\u054a","\u0549","\u0548","\u0547","\u0546","\u0545","\u0544","\u0543","\u0542","\u0541","\u0540","\u053f","\u053e","\u053d","\u053c","\u053b","\u053a","\u0539","\u0538","\u0537","\u0536","\u0535","\u0534","\u0533","\u0532","\u0531"]},To={integers:[1e4,9e3,8e3,7e3,6e3,5e3,4e3,3e3,2e3,1e3,400,300,200,100,90,80,70,60,50,40,30,20,19,18,17,16,15,10,9,8,7,6,5,4,3,2,1],values:["\u05d9\u05f3","\u05d8\u05f3","\u05d7\u05f3","\u05d6\u05f3","\u05d5\u05f3","\u05d4\u05f3","\u05d3\u05f3","\u05d2\u05f3","\u05d1\u05f3","\u05d0\u05f3","\u05ea","\u05e9","\u05e8","\u05e7","\u05e6","\u05e4","\u05e2","\u05e1","\u05e0","\u05de","\u05dc","\u05db","\u05d9\u05d8","\u05d9\u05d7","\u05d9\u05d6","\u05d8\u05d6","\u05d8\u05d5","\u05d9","\u05d8","\u05d7","\u05d6","\u05d5","\u05d4","\u05d3","\u05d2","\u05d1","\u05d0"]},ko={integers:[1e4,9e3,8e3,7e3,6e3,5e3,4e3,3e3,2e3,1e3,900,800,700,600,500,400,300,200,100,90,80,70,60,50,40,30,20,10,9,8,7,6,5,4,3,2,1],values:["\u10f5","\u10f0","\u10ef","\u10f4","\u10ee","\u10ed","\u10ec","\u10eb","\u10ea","\u10e9","\u10e8","\u10e7","\u10e6","\u10e5","\u10e4","\u10f3","\u10e2","\u10e1","\u10e0","\u10df","\u10de","\u10dd","\u10f2","\u10dc","\u10db","\u10da","\u10d9","\u10d8","\u10d7","\u10f1","\u10d6","\u10d5","\u10d4","\u10d3","\u10d2","\u10d1","\u10d0"]},Qo=function(e,t,n,r,i,A){return en?Wo(e,i,A.length>0):r.integers.reduce((function(t,n,i){for(;e>=n;)e-=n,t+=r.values[i];return t}),"")+A},Lo=function(e,t,n,r){var i="";do{n||e--,i=r(e)+i,e/=t}while(e*t>=t);return i},Do=function(e,t,n,r,i){var A=n-t+1;return(e<0?"-":"")+(Lo(Math.abs(e),A,r,(function(e){return u(Math.floor(e%A)+t)}))+i)},Io=function(e,t,n){void 0===n&&(n=". 
");var r=t.length;return Lo(Math.abs(e),r,!1,(function(e){return t[Math.floor(e%r)]}))+n},Ro=1,Ho=2,Po=4,No=8,Oo=function(e,t,n,r,i,A){if(e<-9999||e>9999)return Wo(e,4,i.length>0);var a=Math.abs(e),o=i;if(0===a)return t[0]+o;for(var s=0;a>0&&s<=4;s++){var l=a%10;0===l&&iA(A,Ro)&&""!==o?o=t[l]+o:l>1||1===l&&0===s||1===l&&1===s&&iA(A,Ho)||1===l&&1===s&&iA(A,Po)&&e>100||1===l&&s>1&&iA(A,No)?o=t[l]+(s>0?n[s-1]:"")+o:1===l&&s>0&&(o=n[s-1]+o),a=Math.floor(a/10)}return(e<0?r:"")+o},Vo="\u5341\u767e\u5343\u842c",zo="\u62fe\u4f70\u4edf\u842c",Go="\u30de\u30a4\u30ca\u30b9",Ko="\ub9c8\uc774\ub108\uc2a4",Wo=function(e,t,n){var r=n?". ":"",i=n?"\u3001":"",A=n?", ":"",a=n?" ":"";switch(t){case 0:return"\u2022"+a;case 1:return"\u25e6"+a;case 2:return"\u25fe"+a;case 5:var o=Do(e,48,57,!0,r);return o.length<4?"0"+o:o;case 4:return Io(e,"\u3007\u4e00\u4e8c\u4e09\u56db\u4e94\u516d\u4e03\u516b\u4e5d",i);case 6:return Qo(e,1,3999,Mo,3,r).toLowerCase();case 7:return Qo(e,1,3999,Mo,3,r);case 8:return Do(e,945,969,!1,r);case 9:return Do(e,97,122,!1,r);case 10:return Do(e,65,90,!1,r);case 11:return Do(e,1632,1641,!0,r);case 12:case 49:return Qo(e,1,9999,Fo,3,r);case 35:return Qo(e,1,9999,Fo,3,r).toLowerCase();case 13:return Do(e,2534,2543,!0,r);case 14:case 30:return Do(e,6112,6121,!0,r);case 15:return Io(e,"\u5b50\u4e11\u5bc5\u536f\u8fb0\u5df3\u5348\u672a\u7533\u9149\u620c\u4ea5",i);case 16:return Io(e,"\u7532\u4e59\u4e19\u4e01\u620a\u5df1\u5e9a\u8f9b\u58ec\u7678",i);case 17:case 48:return Oo(e,"\u96f6\u4e00\u4e8c\u4e09\u56db\u4e94\u516d\u4e03\u516b\u4e5d",Vo,"\u8ca0",i,Ho|Po|No);case 47:return Oo(e,"\u96f6\u58f9\u8cb3\u53c3\u8086\u4f0d\u9678\u67d2\u634c\u7396",zo,"\u8ca0",i,Ro|Ho|Po|No);case 42:return Oo(e,"\u96f6\u4e00\u4e8c\u4e09\u56db\u4e94\u516d\u4e03\u516b\u4e5d",Vo,"\u8d1f",i,Ho|Po|No);case 41:return Oo(e,"\u96f6\u58f9\u8d30\u53c1\u8086\u4f0d\u9646\u67d2\u634c\u7396",zo,"\u8d1f",i,Ro|Ho|Po|No);case 26:return Oo(e,"\u3007\u4e00\u4e8c\u4e09\u56db\u4e94\u516d\u4e03\u516b\u4e5d","\u5341\u767e\u5343\u4e07",Go,i,0);case 25:return Oo(e,"\u96f6\u58f1\u5f10\u53c2\u56db\u4f0d\u516d\u4e03\u516b\u4e5d","\u62fe\u767e\u5343\u4e07",Go,i,Ro|Ho|Po);case 31:return Oo(e,"\uc601\uc77c\uc774\uc0bc\uc0ac\uc624\uc721\uce60\ud314\uad6c","\uc2ed\ubc31\ucc9c\ub9cc",Ko,A,Ro|Ho|Po);case 33:return Oo(e,"\u96f6\u4e00\u4e8c\u4e09\u56db\u4e94\u516d\u4e03\u516b\u4e5d","\u5341\u767e\u5343\u842c",Ko,A,0);case 32:return Oo(e,"\u96f6\u58f9\u8cb3\u53c3\u56db\u4e94\u516d\u4e03\u516b\u4e5d","\u62fe\u767e\u5343",Ko,A,Ro|Ho|Po);case 18:return Do(e,2406,2415,!0,r);case 20:return Qo(e,1,19999,ko,3,r);case 21:return Do(e,2790,2799,!0,r);case 22:return Do(e,2662,2671,!0,r);case 22:return Qo(e,1,10999,To,3,r);case 23:return Io(e,"\u3042\u3044\u3046\u3048\u304a\u304b\u304d\u304f\u3051\u3053\u3055\u3057\u3059\u305b\u305d\u305f\u3061\u3064\u3066\u3068\u306a\u306b\u306c\u306d\u306e\u306f\u3072\u3075\u3078\u307b\u307e\u307f\u3080\u3081\u3082\u3084\u3086\u3088\u3089\u308a\u308b\u308c\u308d\u308f\u3090\u3091\u3092\u3093");case 24:return Io(e,"\u3044\u308d\u306f\u306b\u307b\u3078\u3068\u3061\u308a\u306c\u308b\u3092\u308f\u304b\u3088\u305f\u308c\u305d\u3064\u306d\u306a\u3089\u3080\u3046\u3090\u306e\u304a\u304f\u3084\u307e\u3051\u3075\u3053\u3048\u3066\u3042\u3055\u304d\u3086\u3081\u307f\u3057\u3091\u3072\u3082\u305b\u3059");case 27:return Do(e,3302,3311,!0,r);case 28:return 
Io(e,"\u30a2\u30a4\u30a6\u30a8\u30aa\u30ab\u30ad\u30af\u30b1\u30b3\u30b5\u30b7\u30b9\u30bb\u30bd\u30bf\u30c1\u30c4\u30c6\u30c8\u30ca\u30cb\u30cc\u30cd\u30ce\u30cf\u30d2\u30d5\u30d8\u30db\u30de\u30df\u30e0\u30e1\u30e2\u30e4\u30e6\u30e8\u30e9\u30ea\u30eb\u30ec\u30ed\u30ef\u30f0\u30f1\u30f2\u30f3",i);case 29:return Io(e,"\u30a4\u30ed\u30cf\u30cb\u30db\u30d8\u30c8\u30c1\u30ea\u30cc\u30eb\u30f2\u30ef\u30ab\u30e8\u30bf\u30ec\u30bd\u30c4\u30cd\u30ca\u30e9\u30e0\u30a6\u30f0\u30ce\u30aa\u30af\u30e4\u30de\u30b1\u30d5\u30b3\u30a8\u30c6\u30a2\u30b5\u30ad\u30e6\u30e1\u30df\u30b7\u30f1\u30d2\u30e2\u30bb\u30b9",i);case 34:return Do(e,3792,3801,!0,r);case 37:return Do(e,6160,6169,!0,r);case 38:return Do(e,4160,4169,!0,r);case 39:return Do(e,2918,2927,!0,r);case 40:return Do(e,1776,1785,!0,r);case 43:return Do(e,3046,3055,!0,r);case 44:return Do(e,3174,3183,!0,r);case 45:return Do(e,3664,3673,!0,r);case 46:return Do(e,3872,3881,!0,r);default:return Do(e,48,57,!0,r)}},jo="data-html2canvas-ignore",Xo=function(){function e(e,t,n){if(this.context=e,this.options=n,this.scrolledElements=[],this.referenceElement=t,this.counters=new Uo,this.quoteDepth=0,!t.ownerDocument)throw new Error("Cloned element does not have an owner document");this.documentElement=this.cloneNode(t.ownerDocument.documentElement,!1)}return e.prototype.toIFrame=function(e,t){var n=this,A=Yo(e,t);if(!A.contentWindow)return Promise.reject("Unable to find iframe window");var a=e.defaultView.pageXOffset,o=e.defaultView.pageYOffset,s=A.contentWindow,l=s.document,u=$o(A).then((function(){return r(n,void 0,void 0,(function(){var e,n;return i(this,(function(r){switch(r.label){case 0:return this.scrolledElements.forEach(is),s&&(s.scrollTo(t.left,t.top),!/(iPad|iPhone|iPod)/g.test(navigator.userAgent)||s.scrollY===t.top&&s.scrollX===t.left||(this.context.logger.warn("Unable to restore scroll position for cloned document"),this.context.windowBounds=this.context.windowBounds.add(s.scrollX-t.left,s.scrollY-t.top,0,0))),e=this.options.onclone,"undefined"===typeof(n=this.clonedReferenceElement)?[2,Promise.reject("Error finding the "+this.referenceElement.nodeName+" in the cloned document")]:l.fonts&&l.fonts.ready?[4,l.fonts.ready]:[3,2];case 1:r.sent(),r.label=2;case 2:return/(AppleWebKit)/g.test(navigator.userAgent)?[4,Zo(l)]:[3,4];case 3:r.sent(),r.label=4;case 4:return"function"===typeof e?[2,Promise.resolve().then((function(){return e(l,n)})).then((function(){return A}))]:[2,A]}}))}))}));return l.open(),l.write(ns(document.doctype)+""),rs(this.referenceElement.ownerDocument,a,o),l.replaceChild(l.adoptNode(this.documentElement),l.documentElement),l.close(),u},e.prototype.createElementClone=function(e){if(BA(e,2),vo(e))return this.createCanvasClone(e);if(yo(e))return this.createVideoClone(e);if(_o(e))return this.createStyleClone(e);var t=e.cloneNode(!1);return wo(t)&&(wo(e)&&e.currentSrc&&e.currentSrc!==e.src&&(t.src=e.currentSrc,t.srcset=""),"lazy"===t.loading&&(t.loading="eager")),Eo(t)?this.createCustomElementClone(t):t},e.prototype.createCustomElementClone=function(e){var t=document.createElement("html2canvascustomelement");return ts(e.style,t),t},e.prototype.createStyleClone=function(e){try{var t=e.sheet;if(t&&t.cssRules){var n=[].slice.call(t.cssRules,0).reduce((function(e,t){return t&&"string"===typeof t.cssText?e+t.cssText:e}),""),r=e.cloneNode(!1);return r.textContent=n,r}}catch(Rt){if(this.context.logger.error("Unable to access cssRules property",Rt),"SecurityError"!==Rt.name)throw Rt}return 
e.cloneNode(!1)},e.prototype.createCanvasClone=function(e){var t;if(this.options.inlineImages&&e.ownerDocument){var n=e.ownerDocument.createElement("img");try{return n.src=e.toDataURL(),n}catch(Rt){this.context.logger.info("Unable to inline canvas contents, canvas is tainted",e)}}var r=e.cloneNode(!1);try{r.width=e.width,r.height=e.height;var i=e.getContext("2d"),A=r.getContext("2d");if(A)if(!this.options.allowTaint&&i)A.putImageData(i.getImageData(0,0,e.width,e.height),0,0);else{var a=null!==(t=e.getContext("webgl2"))&&void 0!==t?t:e.getContext("webgl");if(a){var o=a.getContextAttributes();!1===(null===o||void 0===o?void 0:o.preserveDrawingBuffer)&&this.context.logger.warn("Unable to clone WebGL context as it has preserveDrawingBuffer=false",e)}A.drawImage(e,0,0)}return r}catch(Rt){this.context.logger.info("Unable to clone canvas as it is tainted",e)}return r},e.prototype.createVideoClone=function(e){var t=e.ownerDocument.createElement("canvas");t.width=e.offsetWidth,t.height=e.offsetHeight;var n=t.getContext("2d");try{return n&&(n.drawImage(e,0,0,t.width,t.height),this.options.allowTaint||n.getImageData(0,0,t.width,t.height)),t}catch(Rt){this.context.logger.info("Unable to clone video as it is tainted",e)}var r=e.ownerDocument.createElement("canvas");return r.width=e.offsetWidth,r.height=e.offsetHeight,r},e.prototype.appendChildNode=function(e,t,n){so(t)&&(bo(t)||t.hasAttribute(jo)||"function"===typeof this.options.ignoreElements&&this.options.ignoreElements(t))||this.options.copyStyles&&so(t)&&_o(t)||e.appendChild(this.cloneNode(t,n))},e.prototype.cloneChildNodes=function(e,t,n){for(var r=this,i=e.shadowRoot?e.shadowRoot.firstChild:e.firstChild;i;i=i.nextSibling)if(so(i)&&So(i)&&"function"===typeof i.assignedNodes){var A=i.assignedNodes();A.length&&A.forEach((function(e){return r.appendChildNode(t,e,n)}))}else this.appendChildNode(t,i,n)},e.prototype.cloneNode=function(e,t){if(oo(e))return document.createTextNode(e.data);if(!e.ownerDocument)return e.cloneNode(!1);var n=e.ownerDocument.defaultView;if(n&&so(e)&&(lo(e)||uo(e))){var r=this.createElementClone(e);r.style.transitionProperty="none";var i=n.getComputedStyle(e),A=n.getComputedStyle(e,":before"),a=n.getComputedStyle(e,":after");this.referenceElement===e&&lo(r)&&(this.clonedReferenceElement=r),mo(r)&&us(r);var o=this.counters.parse(new mA(this.context,i)),s=this.resolvePseudoContent(e,r,A,KA.BEFORE);Eo(e)&&(t=!0),yo(e)||this.cloneChildNodes(e,r,t),s&&r.insertBefore(s,r.firstChild);var l=this.resolvePseudoContent(e,r,a,KA.AFTER);return l&&r.appendChild(l),this.counters.pop(o),(i&&(this.options.copyStyles||uo(e))&&!Bo(e)||t)&&ts(i,r),0===e.scrollTop&&0===e.scrollLeft||this.scrolledElements.push([r,e.scrollLeft,e.scrollTop]),(xo(e)||Co(e))&&(xo(r)||Co(r))&&(r.value=e.value),r}return e.cloneNode(!1)},e.prototype.resolvePseudoContent=function(e,t,n,r){var i=this;if(n){var A=n.content,a=t.ownerDocument;if(a&&A&&"none"!==A&&"-moz-alt-content"!==A&&"none"!==n.display){this.counters.parse(new mA(this.context,n));var o=new gA(this.context,n),s=a.createElement("html2canvaspseudoelement");ts(n,s),o.content.forEach((function(t){if(0===t.type)s.appendChild(a.createTextNode(t.value));else if(22===t.type){var n=a.createElement("img");n.src=t.value,n.style.opacity="1",s.appendChild(n)}else if(18===t.type){if("attr"===t.name){var r=t.values.filter(Qn);r.length&&s.appendChild(a.createTextNode(e.getAttribute(r[0].value)||""))}else if("counter"===t.name){var A=t.values.filter(Rn),l=A[0],u=A[1];if(l&&Qn(l)){var 
c=i.counters.getCounterValue(l.value),d=u&&Qn(u)?xi.parse(i.context,u.value):3;s.appendChild(a.createTextNode(Wo(c,d,!1)))}}else if("counters"===t.name){var h=t.values.filter(Rn),f=(l=h[0],h[1]);if(u=h[2],l&&Qn(l)){var p=i.counters.getCounterValues(l.value),g=u&&Qn(u)?xi.parse(i.context,u.value):3,m=f&&0===f.type?f.value:"",v=p.map((function(e){return Wo(e,g,!1)})).join(m);s.appendChild(a.createTextNode(v))}}}else if(20===t.type)switch(t.value){case"open-quote":s.appendChild(a.createTextNode(uA(o.quotes,i.quoteDepth++,!0)));break;case"close-quote":s.appendChild(a.createTextNode(uA(o.quotes,--i.quoteDepth,!1)));break;default:s.appendChild(a.createTextNode(t.value))}})),s.className=os+" "+ss;var l=r===KA.BEFORE?" "+os:" "+ss;return uo(t)?t.className.baseValue+=l:t.className+=l,s}}},e.destroy=function(e){return!!e.parentNode&&(e.parentNode.removeChild(e),!0)},e}();!function(e){e[e.BEFORE=0]="BEFORE",e[e.AFTER=1]="AFTER"}(KA||(KA={}));var qo,Yo=function(e,t){var n=e.createElement("iframe");return n.className="html2canvas-container",n.style.visibility="hidden",n.style.position="fixed",n.style.left="-10000px",n.style.top="0px",n.style.border="0",n.width=t.width.toString(),n.height=t.height.toString(),n.scrolling="no",n.setAttribute(jo,"true"),e.body.appendChild(n),n},Jo=function(e){return new Promise((function(t){e.complete?t():e.src?(e.onload=t,e.onerror=t):t()}))},Zo=function(e){return Promise.all([].slice.call(e.images,0).map(Jo))},$o=function(e){return new Promise((function(t,n){var r=e.contentWindow;if(!r)return n("No window assigned for iframe");var i=r.document;r.onload=e.onload=function(){r.onload=e.onload=null;var n=setInterval((function(){i.body.childNodes.length>0&&"complete"===i.readyState&&(clearInterval(n),t(e))}),50)}}))},es=["all","d","content"],ts=function(e,t){for(var n=e.length-1;n>=0;n--){var r=e.item(n);-1===es.indexOf(r)&&t.style.setProperty(r,e.getPropertyValue(r))}return t},ns=function(e){var t="";return e&&(t+=""),t},rs=function(e,t,n){e&&e.defaultView&&(t!==e.defaultView.pageXOffset||n!==e.defaultView.pageYOffset)&&e.defaultView.scrollTo(t,n)},is=function(e){var t=e[0],n=e[1],r=e[2];t.scrollLeft=n,t.scrollTop=r},As=":before",as=":after",os="___html2canvas___pseudoelement_before",ss="___html2canvas___pseudoelement_after",ls='{\n content: "" !important;\n display: none !important;\n}',us=function(e){cs(e,"."+os+As+ls+"\n ."+ss+as+ls)},cs=function(e,t){var n=e.ownerDocument;if(n){var r=n.createElement("style");r.textContent=t,e.appendChild(r)}},ds=function(){function e(){}return e.getOrigin=function(t){var n=e._link;return n?(n.href=t,n.href=n.href,n.protocol+n.hostname+n.port):"about:blank"},e.isSameOrigin=function(t){return e.getOrigin(t)===e._origin},e.setContext=function(t){e._link=t.document.createElement("a"),e._origin=e.getOrigin(t.location.href)},e._origin="about:blank",e}(),hs=function(){function e(e,t){this.context=e,this._options=t,this._cache={}}return e.prototype.addImage=function(e){var t=Promise.resolve();return this.has(e)?t:ws(e)||ms(e)?((this._cache[e]=this.loadImage(e)).catch((function(){})),t):t},e.prototype.match=function(e){return this._cache[e]},e.prototype.loadImage=function(e){return r(this,void 0,void 0,(function(){var t,n,r,A,a=this;return i(this,(function(i){switch(i.label){case 0:return t=ds.isSameOrigin(e),n=!vs(e)&&!0===this._options.useCORS&&xa.SUPPORT_CORS_IMAGES&&!t,r=!vs(e)&&!t&&!ws(e)&&"string"===typeof 
this._options.proxy&&xa.SUPPORT_CORS_XHR&&!n,t||!1!==this._options.allowTaint||vs(e)||ws(e)||r||n?(A=e,r?[4,this.proxy(A)]:[3,2]):[2];case 1:A=i.sent(),i.label=2;case 2:return this.context.logger.debug("Added image "+e.substring(0,256)),[4,new Promise((function(e,t){var r=new Image;r.onload=function(){return e(r)},r.onerror=t,(ys(A)||n)&&(r.crossOrigin="anonymous"),r.src=A,!0===r.complete&&setTimeout((function(){return e(r)}),500),a._options.imageTimeout>0&&setTimeout((function(){return t("Timed out ("+a._options.imageTimeout+"ms) loading image")}),a._options.imageTimeout)}))];case 3:return[2,i.sent()]}}))}))},e.prototype.has=function(e){return"undefined"!==typeof this._cache[e]},e.prototype.keys=function(){return Promise.resolve(Object.keys(this._cache))},e.prototype.proxy=function(e){var t=this,n=this._options.proxy;if(!n)throw new Error("No proxy defined");var r=e.substring(0,256);return new Promise((function(i,A){var a=xa.SUPPORT_RESPONSE_TYPE?"blob":"text",o=new XMLHttpRequest;o.onload=function(){if(200===o.status)if("text"===a)i(o.response);else{var e=new FileReader;e.addEventListener("load",(function(){return i(e.result)}),!1),e.addEventListener("error",(function(e){return A(e)}),!1),e.readAsDataURL(o.response)}else A("Failed to proxy resource "+r+" with status code "+o.status)},o.onerror=A;var s=n.indexOf("?")>-1?"&":"?";if(o.open("GET",""+n+s+"url="+encodeURIComponent(e)+"&responseType="+a),"text"!==a&&o instanceof XMLHttpRequest&&(o.responseType=a),t._options.imageTimeout){var l=t._options.imageTimeout;o.timeout=l,o.ontimeout=function(){return A("Timed out ("+l+"ms) proxying "+r)}}o.send()}))},e}(),fs=/^data:image\/svg\+xml/i,ps=/^data:image\/.*;base64,/i,gs=/^data:image\/.*/i,ms=function(e){return xa.SUPPORT_SVG_DRAWING||!Bs(e)},vs=function(e){return gs.test(e)},ys=function(e){return ps.test(e)},ws=function(e){return"blob"===e.substr(0,4)},Bs=function(e){return"svg"===e.substr(-3).toLowerCase()||fs.test(e)},_s=function(){function e(e,t){this.type=0,this.x=e,this.y=t}return e.prototype.add=function(t,n){return new e(this.x+t,this.y+n)},e}(),bs=function(e,t,n){return new _s(e.x+(t.x-e.x)*n,e.y+(t.y-e.y)*n)},xs=function(){function e(e,t,n,r){this.type=1,this.start=e,this.startControl=t,this.endControl=n,this.end=r}return e.prototype.subdivide=function(t,n){var r=bs(this.start,this.startControl,t),i=bs(this.startControl,this.endControl,t),A=bs(this.endControl,this.end,t),a=bs(r,i,t),o=bs(i,A,t),s=bs(a,o,t);return n?new e(this.start,r,a,s):new e(s,o,A,this.end)},e.prototype.add=function(t,n){return new e(this.start.add(t,n),this.startControl.add(t,n),this.endControl.add(t,n),this.end.add(t,n))},e.prototype.reverse=function(){return new e(this.end,this.endControl,this.startControl,this.start)},e}(),Cs=function(e){return 1===e.type},Ss=function(){function e(e){var t=e.styles,n=e.bounds,r=Wn(t.borderTopLeftRadius,n.width,n.height),i=r[0],A=r[1],a=Wn(t.borderTopRightRadius,n.width,n.height),o=a[0],s=a[1],l=Wn(t.borderBottomRightRadius,n.width,n.height),u=l[0],c=l[1],d=Wn(t.borderBottomLeftRadius,n.width,n.height),h=d[0],f=d[1],p=[];p.push((i+o)/n.width),p.push((h+u)/n.width),p.push((A+f)/n.height),p.push((s+c)/n.height);var g=Math.max.apply(Math,p);g>1&&(i/=g,A/=g,o/=g,s/=g,u/=g,c/=g,h/=g,f/=g);var 
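// Above: proxy() fetches cross-origin images through the configured proxy
// URL via XHR (responseType "blob" with a FileReader data-URL fallback,
// else "text"), honoring imageTimeout, with regex predicates classifying
// data:/base64/blob:/SVG sources. Below that, Vector and cubic
// BezierCurve primitives (with subdivide/reverse) feed BoundCurves, which
// resolves the four border-radius pairs against the element bounds and
// scales all radii down uniformly when their sum overflows an edge.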
m=n.width-o,v=n.height-c,y=n.width-u,w=n.height-f,B=t.borderTopWidth,_=t.borderRightWidth,b=t.borderBottomWidth,x=t.borderLeftWidth,C=jn(t.paddingTop,e.bounds.width),S=jn(t.paddingRight,e.bounds.width),E=jn(t.paddingBottom,e.bounds.width),U=jn(t.paddingLeft,e.bounds.width);this.topLeftBorderDoubleOuterBox=i>0||A>0?Es(n.left+x/3,n.top+B/3,i-x/3,A-B/3,qo.TOP_LEFT):new _s(n.left+x/3,n.top+B/3),this.topRightBorderDoubleOuterBox=i>0||A>0?Es(n.left+m,n.top+B/3,o-_/3,s-B/3,qo.TOP_RIGHT):new _s(n.left+n.width-_/3,n.top+B/3),this.bottomRightBorderDoubleOuterBox=u>0||c>0?Es(n.left+y,n.top+v,u-_/3,c-b/3,qo.BOTTOM_RIGHT):new _s(n.left+n.width-_/3,n.top+n.height-b/3),this.bottomLeftBorderDoubleOuterBox=h>0||f>0?Es(n.left+x/3,n.top+w,h-x/3,f-b/3,qo.BOTTOM_LEFT):new _s(n.left+x/3,n.top+n.height-b/3),this.topLeftBorderDoubleInnerBox=i>0||A>0?Es(n.left+2*x/3,n.top+2*B/3,i-2*x/3,A-2*B/3,qo.TOP_LEFT):new _s(n.left+2*x/3,n.top+2*B/3),this.topRightBorderDoubleInnerBox=i>0||A>0?Es(n.left+m,n.top+2*B/3,o-2*_/3,s-2*B/3,qo.TOP_RIGHT):new _s(n.left+n.width-2*_/3,n.top+2*B/3),this.bottomRightBorderDoubleInnerBox=u>0||c>0?Es(n.left+y,n.top+v,u-2*_/3,c-2*b/3,qo.BOTTOM_RIGHT):new _s(n.left+n.width-2*_/3,n.top+n.height-2*b/3),this.bottomLeftBorderDoubleInnerBox=h>0||f>0?Es(n.left+2*x/3,n.top+w,h-2*x/3,f-2*b/3,qo.BOTTOM_LEFT):new _s(n.left+2*x/3,n.top+n.height-2*b/3),this.topLeftBorderStroke=i>0||A>0?Es(n.left+x/2,n.top+B/2,i-x/2,A-B/2,qo.TOP_LEFT):new _s(n.left+x/2,n.top+B/2),this.topRightBorderStroke=i>0||A>0?Es(n.left+m,n.top+B/2,o-_/2,s-B/2,qo.TOP_RIGHT):new _s(n.left+n.width-_/2,n.top+B/2),this.bottomRightBorderStroke=u>0||c>0?Es(n.left+y,n.top+v,u-_/2,c-b/2,qo.BOTTOM_RIGHT):new _s(n.left+n.width-_/2,n.top+n.height-b/2),this.bottomLeftBorderStroke=h>0||f>0?Es(n.left+x/2,n.top+w,h-x/2,f-b/2,qo.BOTTOM_LEFT):new _s(n.left+x/2,n.top+n.height-b/2),this.topLeftBorderBox=i>0||A>0?Es(n.left,n.top,i,A,qo.TOP_LEFT):new _s(n.left,n.top),this.topRightBorderBox=o>0||s>0?Es(n.left+m,n.top,o,s,qo.TOP_RIGHT):new _s(n.left+n.width,n.top),this.bottomRightBorderBox=u>0||c>0?Es(n.left+y,n.top+v,u,c,qo.BOTTOM_RIGHT):new _s(n.left+n.width,n.top+n.height),this.bottomLeftBorderBox=h>0||f>0?Es(n.left,n.top+w,h,f,qo.BOTTOM_LEFT):new _s(n.left,n.top+n.height),this.topLeftPaddingBox=i>0||A>0?Es(n.left+x,n.top+B,Math.max(0,i-x),Math.max(0,A-B),qo.TOP_LEFT):new _s(n.left+x,n.top+B),this.topRightPaddingBox=o>0||s>0?Es(n.left+Math.min(m,n.width-_),n.top+B,m>n.width+_?0:Math.max(0,o-_),Math.max(0,s-B),qo.TOP_RIGHT):new _s(n.left+n.width-_,n.top+B),this.bottomRightPaddingBox=u>0||c>0?Es(n.left+Math.min(y,n.width-x),n.top+Math.min(v,n.height-b),Math.max(0,u-_),Math.max(0,c-b),qo.BOTTOM_RIGHT):new _s(n.left+n.width-_,n.top+n.height-b),this.bottomLeftPaddingBox=h>0||f>0?Es(n.left+x,n.top+Math.min(w,n.height-b),Math.max(0,h-x),Math.max(0,f-b),qo.BOTTOM_LEFT):new _s(n.left+x,n.top+n.height-b),this.topLeftContentBox=i>0||A>0?Es(n.left+x+U,n.top+B+C,Math.max(0,i-(x+U)),Math.max(0,A-(B+C)),qo.TOP_LEFT):new _s(n.left+x+U,n.top+B+C),this.topRightContentBox=o>0||s>0?Es(n.left+Math.min(m,n.width+x+U),n.top+B+C,m>n.width+x+U?0:o-x+U,s-(B+C),qo.TOP_RIGHT):new _s(n.left+n.width-(_+S),n.top+B+C),this.bottomRightContentBox=u>0||c>0?Es(n.left+Math.min(y,n.width-(x+U)),n.top+Math.min(v,n.height+B+C),Math.max(0,u-(_+S)),c-(b+E),qo.BOTTOM_RIGHT):new _s(n.left+n.width-(_+S),n.top+n.height-(b+E)),this.bottomLeftContentBox=h>0||f>0?Es(n.left+x+U,n.top+w,Math.max(0,h-(x+U)),f-(b+E),qo.BOTTOM_LEFT):new _s(n.left+x+U,n.top+n.height-(b+E))}return 
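// Above: BoundCurves derives, per corner, every box the renderer strokes
// or clips against: the double-border outer/inner boxes (inset by 1/3 and
// 2/3 of the border width), the border stroke path (inset by 1/2), and
// the full border, padding, and content boxes, each degenerating to a
// plain Vector when the corner has no radius.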
e}();!function(e){e[e.TOP_LEFT=0]="TOP_LEFT",e[e.TOP_RIGHT=1]="TOP_RIGHT",e[e.BOTTOM_RIGHT=2]="BOTTOM_RIGHT",e[e.BOTTOM_LEFT=3]="BOTTOM_LEFT"}(qo||(qo={}));var Es=function(e,t,n,r,i){var A=(Math.sqrt(2)-1)/3*4,a=n*A,o=r*A,s=e+n,l=t+r;switch(i){case qo.TOP_LEFT:return new xs(new _s(e,l),new _s(e,l-o),new _s(s-a,t),new _s(s,t));case qo.TOP_RIGHT:return new xs(new _s(e,t),new _s(e+a,t),new _s(s,l-o),new _s(s,l));case qo.BOTTOM_RIGHT:return new xs(new _s(s,t),new _s(s,t+o),new _s(e+a,l),new _s(e,l));case qo.BOTTOM_LEFT:default:return new xs(new _s(s,l),new _s(s-a,l),new _s(e,t+o),new _s(e,t))}},Us=function(e){return[e.topLeftBorderBox,e.topRightBorderBox,e.bottomRightBorderBox,e.bottomLeftBorderBox]},Ms=function(e){return[e.topLeftContentBox,e.topRightContentBox,e.bottomRightContentBox,e.bottomLeftContentBox]},Fs=function(e){return[e.topLeftPaddingBox,e.topRightPaddingBox,e.bottomRightPaddingBox,e.bottomLeftPaddingBox]},Ts=function(){function e(e,t,n){this.offsetX=e,this.offsetY=t,this.matrix=n,this.type=0,this.target=6}return e}(),ks=function(){function e(e,t){this.path=e,this.target=t,this.type=1}return e}(),Qs=function(){function e(e){this.opacity=e,this.type=2,this.target=6}return e}(),Ls=function(e){return 0===e.type},Ds=function(e){return 1===e.type},Is=function(e){return 2===e.type},Rs=function(e,t){return e.length===t.length&&e.some((function(e,n){return e===t[n]}))},Hs=function(e,t,n,r,i){return e.map((function(e,A){switch(A){case 0:return e.add(t,n);case 1:return e.add(t+r,n);case 2:return e.add(t+r,n+i);case 3:return e.add(t,n+i)}return e}))},Ps=function(){function e(e){this.element=e,this.inlineLevel=[],this.nonInlineLevel=[],this.negativeZIndex=[],this.zeroOrAutoZIndexOrTransformedOrOpacity=[],this.positiveZIndex=[],this.nonPositionedFloats=[],this.nonPositionedInlineLevel=[]}return e}(),Ns=function(){function e(e,t){if(this.container=e,this.parent=t,this.effects=[],this.curves=new Ss(this.container),this.container.styles.opacity<1&&this.effects.push(new Qs(this.container.styles.opacity)),null!==this.container.styles.transform){var n=this.container.bounds.left+this.container.styles.transformOrigin[0].number,r=this.container.bounds.top+this.container.styles.transformOrigin[1].number,i=this.container.styles.transform;this.effects.push(new Ts(n,r,i))}if(0!==this.container.styles.overflowX){var A=Us(this.curves),a=Fs(this.curves);Rs(A,a)?this.effects.push(new ks(A,6)):(this.effects.push(new ks(A,2)),this.effects.push(new ks(a,4)))}}return e.prototype.getEffects=function(e){for(var t=-1===[2,3].indexOf(this.container.styles.position),n=this.parent,r=this.effects.slice(0);n;){var i=n.effects.filter((function(e){return!Ds(e)}));if(t||0!==n.container.styles.position||!n.parent){if(r.unshift.apply(r,i),t=-1===[2,3].indexOf(n.container.styles.position),0!==n.container.styles.overflowX){var A=Us(n.curves),a=Fs(n.curves);Rs(A,a)||r.unshift(new ks(a,6))}}else r.unshift.apply(r,i);n=n.parent}return r.filter((function(t){return iA(t.target,e)}))},e}(),Os=function e(t,n,r,i){t.container.elements.forEach((function(A){var a=iA(A.flags,4),o=iA(A.flags,2),s=new Ns(A,t);iA(A.styles.display,2048)&&i.push(s);var l=iA(A.flags,8)?[]:i;if(a||o){var u=a||A.styles.isPositioned()?r:n,c=new Ps(s);if(A.styles.isPositioned()||A.styles.opacity<1||A.styles.isTransformed()){var d=A.styles.zIndex.order;if(d<0){var h=0;u.negativeZIndex.some((function(e,t){return d>e.element.container.styles.zIndex.order?(h=t,!1):h>0})),u.negativeZIndex.splice(h,0,c)}else if(d>0){var f=0;u.positiveZIndex.some((function(e,t){return 
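// Above: Es() approximates a quarter-ellipse corner with a single cubic
// bezier using the standard arc constant kappa = 4*(sqrt(2)-1)/3 ~ 0.5523,
// visible in the code as (Math.sqrt(2)-1)/3*4. The effect classes
// (transform / clip / opacity) and StackingContext then bucket children
// into CSS paint order: negative z-index, non-positioned floats and
// inline-level content, zero/auto z-index, then positive z-index.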
d>=e.element.container.styles.zIndex.order?(f=t+1,!1):f>0})),u.positiveZIndex.splice(f,0,c)}else u.zeroOrAutoZIndexOrTransformedOrOpacity.push(c)}else A.styles.isFloating()?u.nonPositionedFloats.push(c):u.nonPositionedInlineLevel.push(c);e(s,c,a?c:r,l)}else A.styles.isInlineLevel()?n.inlineLevel.push(s):n.nonInlineLevel.push(s),e(s,n,r,l);iA(A.flags,8)&&Vs(A,l)}))},Vs=function(e,t){for(var n=e instanceof Va?e.start:1,r=e instanceof Va&&e.reversed,i=0;i0&&e.intrinsicHeight>0){var r=Js(e),i=Fs(t);this.path(i),this.ctx.save(),this.ctx.clip(),this.ctx.drawImage(n,0,0,e.intrinsicWidth,e.intrinsicHeight,r.left,r.top,r.width,r.height),this.ctx.restore()}},n.prototype.renderNodeContent=function(e){return r(this,void 0,void 0,(function(){var t,r,A,o,s,l,u,c,d,h,f,p,g,m,v,y,w,B;return i(this,(function(i){switch(i.label){case 0:this.applyEffects(e.getEffects(4)),t=e.container,r=e.curves,A=t.styles,o=0,s=t.textNodes,i.label=1;case 1:return o0&&x>0&&(v=r.ctx.createPattern(p,"repeat"),r.renderRepeat(w,v,S,E))):Qr(n)&&(y=el(e,t,[null,null,null]),w=y[0],B=y[1],_=y[2],b=y[3],x=y[4],C=0===n.position.length?[Gn]:n.position,S=jn(C[0],b),E=jn(C[C.length-1],x),U=Br(n,S,E,b,x),M=U[0],F=U[1],M>0&&F>0&&(T=r.ctx.createRadialGradient(B+S,_+E,0,B+S,_+E,M),gr(n.stops,2*M).forEach((function(e){return T.addColorStop(e.stop,ir(e.color))})),r.path(w),r.ctx.fillStyle=T,M!==F?(k=e.bounds.left+.5*e.bounds.width,Q=e.bounds.top+.5*e.bounds.height,D=1/(L=F/M),r.ctx.save(),r.ctx.translate(k,Q),r.ctx.transform(1,0,0,L,0,0),r.ctx.translate(-k,-Q),r.ctx.fillRect(B,D*(_-Q)+Q,b,x*D),r.ctx.restore()):r.ctx.fill())),i.label=6;case 6:return t--,[2]}}))},r=this,A=0,a=e.styles.backgroundImage.slice(0).reverse(),s.label=1;case 1:return A0?2!==l.style?[3,5]:[4,this.renderDashedDottedBorder(l.color,l.width,a,e.curves,2)]:[3,11]:[3,13];case 4:return i.sent(),[3,11];case 5:return 3!==l.style?[3,7]:[4,this.renderDashedDottedBorder(l.color,l.width,a,e.curves,3)];case 6:return i.sent(),[3,11];case 7:return 4!==l.style?[3,9]:[4,this.renderDoubleBorder(l.color,l.width,a,e.curves)];case 8:return i.sent(),[3,11];case 9:return[4,this.renderSolidBorder(l.color,a,e.curves)];case 10:i.sent(),i.label=11;case 11:a++,i.label=12;case 12:return o++,[3,3];case 13:return[2]}}))}))},n.prototype.renderDashedDottedBorder=function(e,t,n,A,a){return r(this,void 0,void 0,(function(){var r,o,s,l,u,c,d,h,f,p,g,m,v,y,w,B;return i(this,(function(i){return this.ctx.save(),r=js(A,n),o=Gs(A,n),2===a&&(this.path(o),this.ctx.clip()),Cs(o[0])?(s=o[0].start.x,l=o[0].start.y):(s=o[0].x,l=o[0].y),Cs(o[1])?(u=o[1].end.x,c=o[1].end.y):(u=o[1].x,c=o[1].y),d=0===n||2===n?Math.abs(s-u):Math.abs(l-c),this.ctx.beginPath(),3===a?this.formatPath(r):this.formatPath(o.slice(0,2)),h=t<3?3*t:2*t,f=t<3?2*t:t,3===a&&(h=t,f=t),p=!0,d<=2*h?p=!1:d<=2*h+f?(h*=g=d/(2*h+f),f*=g):(m=Math.floor((d+f)/(h+f)),v=(d-m*h)/(m-1),f=(y=(d-(m+1)*h)/m)<=0||Math.abs(f-v)
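// Above: the renderer walks the stacking tree; tiled backgrounds use
// ctx.createPattern(..., "repeat"), elliptical radial gradients are drawn
// by scaling the canvas transform around the gradient center, and borders
// dispatch to dashed, dotted, double, or solid paths with dash-spacing
// arithmetic derived from the edge length.
//
// A minimal usage sketch for the library this bundle comes from
// (html2canvas); the option names are the ones the minified code actually
// reads (allowTaint, useCORS, proxy, imageTimeout), while the selector and
// values here are illustrative only:
//
//   html2canvas(document.querySelector('#capture'), {
//     allowTaint: false,   // matches the options.allowTaint taint checks
//     useCORS: true,       // matches _options.useCORS in the image cache
//     imageTimeout: 15000  // matches the load/proxy timeout handling
//   }).then(function (canvas) { document.body.appendChild(canvas); });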