-
- Adobe Premiere Pro CC 2015.3-10.3 Mac OS X: A Comprehensive Review
- Introduction
- If you are looking for a powerful and professional video editing software for your Mac, you might want to consider Adobe Premiere Pro CC 2015.3-10.3 Mac OS X.
- Adobe Premiere Pro is one of the most popular and widely used video editing applications in the world, trusted by millions of filmmakers, broadcasters, journalists, students, and hobbyists.
- It is part of the Adobe Creative Cloud suite of applications, which means you can access it anytime and anywhere with your Adobe account, as well as integrate it with other Adobe apps like Photoshop, After Effects, Audition, Media Encoder, and more.
- In this article, we will review Adobe Premiere Pro CC 2015.3-10.3 Mac OS X, the version of Premiere Pro released in June 2016 as part of the Creative Cloud 2015 release cycle.
- We will cover what this version is, why it matters for video editing, its main features and benefits, how to install it on your Mac, how to use it for your video projects, how to troubleshoot common issues, and how to update or roll back to a previous version if needed.
- By the end of this article, you will have a comprehensive understanding of Adobe Premiere Pro CC 2015.3-10.3 Mac OS X, and you will be able to decide if it is the right video editing software for you.
- How to install Adobe Premiere Pro CC 2015.3-10.3 Mac OS X
- Before you can start using Adobe Premiere Pro CC 2015.3-10.3 Mac OS X, you need to install it on your Mac computer.
-
- Here are the steps to follow:
- What are the system requirements?
- First, you need to make sure that your Mac meets the minimum system requirements for running Adobe Premiere Pro CC 2015.3-10.3 Mac OS X.
- According to Adobe, these are the system requirements:
-
-| Component | Requirement |
-| --- | --- |
-| Operating system | Mac OS X v10.10, v10.11, or v10.12 |
-| Processor | Multicore Intel processor with 64-bit support |
-| RAM | 8 GB of RAM (16 GB or more recommended) |
-| Hard disk space | 8 GB of available hard-disk space for installation; additional free space required during installation (cannot install on a volume that uses a case-sensitive file system or on removable flash storage devices) |
-| Graphics card | Optional: Adobe-certified GPU card for GPU-accelerated performance |
-| Internet connection | Internet connection and registration are necessary for required software activation, validation of subscriptions, and access to online services |
-
- If your Mac does not meet these requirements, you may experience performance issues or errors when using Adobe Premiere Pro CC 2015.3-10.3 Mac OS X.
- In that case, you may want to upgrade your Mac hardware, or use an older version of Adobe Premiere Pro that is compatible with your Mac.
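- If you prefer to check these requirements in code, here is a minimal Python sketch (not an Adobe tool; the thresholds are copied from the table above, and the sysctl call is macOS-specific):
-
-```python
-import platform
-import subprocess
-
-# Thresholds taken from the requirements table above
-SUPPORTED_MACOS = ("10.10", "10.11", "10.12")
-MIN_RAM_GB = 8
-
-mac_version, _, _ = platform.mac_ver()
-print(f"macOS version: {mac_version or 'not a Mac'}")
-if mac_version and not mac_version.startswith(SUPPORTED_MACOS):
-    print("Warning: outside the documented 10.10-10.12 range.")
-
-# Total physical memory via sysctl (macOS-only; psutil would be more portable)
-ram_bytes = int(subprocess.check_output(["sysctl", "-n", "hw.memsize"]).strip())
-ram_gb = ram_bytes / (1024 ** 3)
-print(f"RAM: {ram_gb:.1f} GB ({'OK' if ram_gb >= MIN_RAM_GB else 'below the 8 GB minimum'})")
-```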
- What are the direct download links?
- Next, you need to download the Adobe Premiere Pro CC 2015.3-10.3 Mac OS X installer from the official Adobe website.
- You can use the direct download links below to get the installer without using the Creative Cloud desktop app:
-
- Note that you need to have a valid Adobe account and be logged in to access these links.
- If you have any trouble downloading the installer, you can try using a different browser, computer, or network connection.
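- If you also want to confirm that a large installer file downloaded without corruption, you can compute its checksum and compare it against one published by the vendor. A general-purpose Python sketch (the file name is hypothetical, and Adobe did not publish hashes for every installer of this era):
-
-```python
-import hashlib
-
-def sha256(path: str) -> str:
-    """Hash a file in 1 MB chunks so large installers don't exhaust memory."""
-    h = hashlib.sha256()
-    with open(path, "rb") as f:
-        for chunk in iter(lambda: f.read(1 << 20), b""):
-            h.update(chunk)
-    return h.hexdigest()
-
-print(sha256("PremierePro_2015.3_Installer.dmg"))  # hypothetical file name
-```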
- How to follow the installation instructions?
- Once you have downloaded the installer, you need to follow the installation instructions to complete the process.
- The installation instructions vary depending on whether you have a subscription plan or a serial number for Adobe Premiere Pro CC 2015.3-10.3 Mac OS X.
- If you have a subscription plan, you can follow these steps:
-
-- Double-click the downloaded file to extract its contents.
-- The extraction process creates a folder named "AdobePatchInstaller". Open this folder and double-click "AdobePatchInstaller.app". Enter your administrator password if prompted.
-- The update begins installing. Follow any onscreen instructions.
-- When the installation is complete, click "Close".
-- You can now launch Adobe Premiere Pro CC 2015.3-10.3 Mac OS X from your Applications folder or the Creative Cloud desktop app.
-
- If you have a serial number, you can follow these steps:
-
-- Double-click the downloaded file to extract its contents.
-- The extraction process creates a folder named "Build". Open this folder and double-click "Install.app". Enter your administrator password if prompted.
-- The installer starts and displays the Welcome screen. Click "Continue".
-- The installer prompts you to enter your Adobe ID. Sign in with your Adobe account or create a new one.
-- The installer prompts you to enter your serial number. Enter the 24-digit serial number that you received when you purchased Adobe Premiere Pro CC 2015.3-10.3 Mac OS X.
-- The installer validates your serial number and displays the License Agreement screen. Click "Accept".
-- The installer displays the Install Options screen. Choose the language and components that you want to install, and click "Continue".
-- The installer displays the Installation Type screen. Choose the destination folder for the installation, and click "Install".
-- The installation begins. Follow any onscreen instructions.
-- When the installation is complete, click "Close".
-- You can now launch Adobe Premiere Pro CC 2015.3-10.3 Mac OS X from your Applications folder or the Creative Cloud desktop app.
-
- Congratulations, you have successfully installed Adobe Premiere Pro CC 2015.3-10.3 Mac OS X on your Mac!
- How to use Adobe Premiere Pro CC 2015.3-10.3 Mac OS X
- Now that you have installed Adobe Premiere Pro CC 2015.3-10.3 Mac OS X, you are ready to use it for your video editing projects.
- In this section, we will show you how to use some of the basic features and functions of Adobe Premiere Pro CC 2015.3-10.3 Mac OS X, such as creating a new project, importing media, using the timeline and editing tools, applying effects and transitions, and exporting and sharing your video.
- Of course, this is not a comprehensive guide, but rather an overview of the main steps and concepts that you need to know to get started with Adobe Premiere Pro CC 2015.3-10.3 Mac OS X.
- If you want to learn more about the advanced features and techniques of Adobe Premiere Pro CC 2015.3-10.3 Mac OS X, you can check out the official Adobe help pages, tutorials, and forums, or look for online courses and books on video editing.
- How to create a new project and import media?
- The first thing you need to do when you launch Adobe Premiere Pro CC 2015.3-10.3 Mac OS X is to create a new project.
- A project is a collection of files that you use to create your video, such as video clips, audio tracks, images, titles, effects, and settings.
- To create a new project, follow these steps:
-
-- From the Welcome screen, click "New Project".
-- The New Project dialog box appears. Enter a name and location for your project, and click "OK".
-- The New Sequence dialog box appears. A sequence is a container for your video edits in the timeline. You can choose from various presets or customize your own sequence settings, such as frame size, frame rate, pixel aspect ratio, audio sample rate, and more.
-- Choose a preset that matches your source media or desired output format, or click "Settings" to modify the sequence settings manually.
-- Click "OK" to create your new sequence.
-
- You have now created a new project and a new sequence in Adobe Premiere Pro CC 2015.3-10.3 Mac OS X.
- The next step is to import your media files into your project.
- Media files are the raw materials that you use to create your video, such as video clips, audio tracks, images, graphics, etc.
- To import media files into your project, follow these steps:
-
-- In the Project panel (the lower-left corner of the screen), right-click and choose "Import".
-- The Import dialog box appears. Navigate to the folder where your media files are stored on your computer or external drive.
-- Select the files that you want to import and click "Open". You can also drag and drop files from Finder into the Project panel.
-- The files are imported into your project and appear in the Project panel as thumbnails or icons.
-
- You have now imported your media files into your project in Adobe Premiere Pro CC 2015.3-10.3 Mac OS X.
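- Premiere handles import through its own interface, but if you want a quick inventory of a footage folder before importing, a small helper script can count files by extension. This is a sketch, not part of Premiere, and the extension set below is illustrative rather than Premiere's official support matrix:
-
-```python
-from collections import Counter
-from pathlib import Path
-
-# Illustrative set of common video, audio, and image extensions
-MEDIA_EXTENSIONS = {".mov", ".mp4", ".mxf", ".wav", ".aif", ".mp3", ".png", ".jpg", ".psd"}
-
-def inventory(folder: str) -> Counter:
-    """Count media files by extension under a footage folder, recursively."""
-    root = Path(folder).expanduser()
-    return Counter(p.suffix.lower() for p in root.rglob("*")
-                   if p.suffix.lower() in MEDIA_EXTENSIONS)
-
-for ext, n in inventory("~/Movies/footage").items():  # hypothetical folder
-    print(f"{ext}: {n} file(s)")
-```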
- How to use the timeline and editing tools?
- The timeline is where you arrange and edit your media files to create your video.
- The timeline consists of several tracks (layers) where you can place your video clips, audio clips, images, titles, effects, and more.
- The timeline also has a playhead (a vertical line with a triangle on top) that shows the current position and timecode of your video.
- To use the timeline and editing tools, follow these steps:
-
-- Drag and drop your media files from the Project panel to the timeline. You can place them on any track, as long as they match the type of media (video or audio).
-- Adjust the position and length of your clips by dragging their edges or moving them along the timeline. You can also use keyboard shortcuts or the Selection tool (the arrow icon) to perform basic editing operations, such as cut, copy, paste, delete, trim, split, etc.
-- Use the Zoom tool (the magnifying glass icon) or the scroll bar to zoom in or out of the timeline. You can also use the plus (+) and minus (-) keys to zoom in or out.
-- Use the Track Select tool (the arrow with a line icon) to select all the clips on a track or to the right of the playhead.
-- Use the Ripple Edit tool (the yellow bracket icon) to trim a clip and move all the clips to the right of it accordingly.
-- Use the Rolling Edit tool (the red bracket icon) to trim two adjacent clips at the same time, without changing their overall duration.
-- Use the Rate Stretch tool (the clock icon) to change the speed and duration of a clip by dragging its edges.
-- Use the Razor tool (the scissors icon) to cut a clip into two parts at the position of the playhead.
-- Use the Slip tool (the two arrows with a filmstrip icon) to change the in and out points of a clip without changing its position or duration.
-- Use the Slide tool (the two arrows with a filmstrip and a line icon) to change the position of a clip without changing its in and out points or duration.
-- Use the Pen tool (the pen icon) to create keyframes for adjusting the opacity, volume, or effect parameters of a clip over time.
-- Use the Hand tool (the hand icon) to move the view of the timeline without changing anything else.
-
- You have now learned how to use some of the basic timeline and editing tools in Adobe Premiere Pro CC 2015.3-10.3 Mac OS X.
- How to apply effects and transitions?
- Effects and transitions are ways to enhance and modify your video clips by adding visual or audio elements, such as color correction, filters, motion, sound, etc.
- To apply effects and transitions, follow these steps:
-
-- In the Project panel, click on the Effects tab. You will see a list of various categories of effects and transitions that you can use in your video.
-- Browse through the categories and find an effect or transition that you want to apply. You can also use the search box to find an effect or transition by name or keyword.
-- Drag and drop the effect or transition onto a clip in the timeline. You can also drag and drop an effect onto an adjustment layer (a special type of clip that applies an effect to all clips below it).
-- To adjust the settings of an effect or transition, select the clip that has it applied and go to the Effect Controls panel (the upper-left corner of the screen). You will see a list of parameters that you can adjust, such as opacity, position, scale, rotation, etc.
-- Use the sliders, buttons, checkboxes, or keyframes to change the values of the parameters. You can also use the Program Monitor (the upper-right corner of the screen) to preview the effect or transition and make adjustments directly on the video.
-- To remove an effect or transition, select the clip that has it applied and go to the Effect Controls panel. Click on the name of the effect or transition and press Delete.
-
- You have now learned how to apply some of the basic effects and transitions in Adobe Premiere Pro CC 2015.3-10.3 Mac OS X.
- How to export and share your video?
- When you are done editing your video, you need to export it to a file format that you can play, share, or distribute.
- To export and share your video, follow these steps:
-
-- In the timeline, select the sequence that you want to export. You can also set in and out points to define a specific portion of the sequence that you want to export.
-- Go to File > Export > Media. The Export Settings dialog box appears.
-- Choose a format and a preset for your output file. You can also customize the settings, such as resolution, frame rate, bitrate, codec, audio channels, etc.
-- Choose a name and a location for your output file. You can also check the Export Video and Export Audio boxes to export both video and audio tracks.
-- Click on Export to start the export process. You can also click on Queue to send your export to Adobe Media Encoder, where you can manage multiple exports at once.
-- Wait for the export process to finish. You can monitor the progress in the Export panel or in Adobe Media Encoder.
-- When the export is complete, you can play your output file with any compatible media player or device. You can also share it online via email, social media, cloud storage, or other platforms.
-
- You have now learned how to export and share your video in Adobe Premiere Pro CC 2015.3-10.3 Mac OS X.
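- After exporting, you can also sanity-check the output file outside Premiere. Assuming you have FFmpeg's ffprobe installed (a separate free tool, not bundled with Premiere), a sketch like this reports the codec, resolution, and duration:
-
-```python
-import json
-import subprocess
-
-def probe(path: str) -> dict:
-    """Return stream and container metadata for a media file via ffprobe."""
-    out = subprocess.check_output([
-        "ffprobe", "-v", "error", "-print_format", "json",
-        "-show_format", "-show_streams", path,
-    ])
-    return json.loads(out)
-
-info = probe("export.mp4")  # hypothetical output file name
-video = next(s for s in info["streams"] if s["codec_type"] == "video")
-print(video["codec_name"], f"{video['width']}x{video['height']}",
-      info["format"]["duration"], "seconds")
-```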
- How to troubleshoot Adobe Premiere Pro CC 2015.3-10.3 Mac OS X
- Sometimes, you may encounter some issues or errors when using Adobe Premiere Pro CC 2015.3-10.3 Mac OS X.
- These issues or errors may be caused by various factors, such as incompatible hardware or software, corrupted files or settings, network problems, bugs or glitches, etc.
- To troubleshoot Adobe Premiere Pro CC 2015.3-10.3 Mac OS X, follow these steps:
- What are some common issues and solutions?
- Here are some of the common issues and solutions that you may encounter when using Adobe Premiere Pro CC 2015.3-10.3 Mac OS X:
-
-- Issue: Adobe Premiere Pro CC 2015.3-10.3 Mac OS X crashes or freezes frequently.
-- Solution: Try these steps:
-- Make sure that your Mac meets the minimum system requirements for running Adobe Premiere Pro CC 2015.3-10.3 Mac OS X.
-- Update your Mac operating system and drivers to the latest versions.
-- Update Adobe Premiere Pro CC 2015.3-10.3 Mac OS X to the latest version.
-- Close any other applications or processes that are running in the background and consuming memory or CPU resources.
-- Delete any unnecessary or unused files or media from your project and timeline.
-- Clear your media cache and preferences files by going to Preferences > Media Cache and Preferences > General and clicking on Clear or Delete.
-- Disable any third-party plugins or effects that may be causing conflicts or errors.
-- Restart your Mac and relaunch Adobe Premiere Pro CC 2015.3-10.3 Mac OS X.
-
-- Issue: Adobe Premiere Pro CC 2015.3-10.3 Mac OS X does not recognize or import some media files correctly.
-- Solution: Try these steps:
-- Make sure that your media files are in a supported format and codec for Adobe Premiere Pro CC 2015.3-10.3 Mac OS X.
-- Rename your media files with simple and short names without any special characters or spaces.
-- Copy your media files to your local drive or an external drive that is formatted with a Mac-compatible file system, such as HFS+ or APFS.
-- Convert your media files to a different format or codec using third-party software or an online service, such as HandBrake, VLC, or Zamzar (see the FFmpeg sketch after this troubleshooting list for a scripted version of this step).
-- Re-import your media files into Adobe Premiere Pro CC 2015.3-10.3 Mac OS X using the Media Browser panel instead of the Import dialog box.
-
-- Issue: Adobe Premiere Pro CC 2015.3-10.3 Mac OS X does not play or export some audio or video tracks correctly.
-- Solution: Try these steps:
-- Make sure that your audio and video tracks are enabled and not muted or soloed in the timeline.
-- Make sure that your audio and video tracks are linked and synchronized in the timeline.
-- Make sure that your audio and video tracks have the same sample rate and frame rate as your sequence settings.
-- Make sure that your audio and video tracks are not corrupted or damaged by checking them in another media player or device.
-- Render your audio and video tracks by going to Sequence > Render Audio or Sequence > Render Effects in Work Area.
-- Change your audio and video playback settings by going to Preferences > Audio Hardware or Preferences > Playback and adjusting the device class, input, output, buffer size, etc.
-- Change your audio and video export settings by going to File > Export > Media and adjusting the format, preset, bitrate, codec, channels, etc.
-
-
- These are some of the common issues and solutions that you may encounter when using Adobe Premiere Pro CC 2015.3-10.3 Mac OS X.
- If these steps do not resolve your issue or error, you can try searching for more specific solutions online, or contact Adobe support or community forums for further assistance.
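- To script the conversion step mentioned in the media troubleshooting list above, one common approach is to transcode problem files to an edit-friendly mezzanine codec with FFmpeg (a separate free tool, not part of Premiere). A minimal sketch, assuming ffmpeg is on your PATH:
-
-```python
-import subprocess
-from pathlib import Path
-
-def to_prores(src: str) -> Path:
-    """Transcode a clip to ProRes 422 HQ with PCM audio for smooth editing."""
-    dst = Path(src).with_name(Path(src).stem + "_prores.mov")
-    subprocess.run([
-        "ffmpeg", "-i", src,
-        "-c:v", "prores_ks", "-profile:v", "3",  # profile 3 = ProRes 422 HQ
-        "-c:a", "pcm_s16le",
-        str(dst),
-    ], check=True)
-    return dst
-
-to_prores("clip.mp4")  # hypothetical input file
-```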
- How to update or roll back to a previous version?
- Sometimes, you may want to update Adobe Premiere Pro to a newer release, or roll back to an earlier one.
- You might update to get new features and bug fixes, or roll back to avoid compatibility issues or errors introduced by a newer version.
- To update or roll back, follow these steps:
-
-- Open the Creative Cloud desktop app on your Mac and sign in with your Adobe account.
-- Go to the Apps tab and find Adobe Premiere Pro in the list of installed apps.
-- To update to a newer version, click on the Update button next to Adobe Premiere Pro. You can also click on the More actions icon (the three dots) and choose Check for updates.
-- To rollback to a previous version, click on the More actions icon (the three dots) and choose Other versions. You will see a list of available versions that you can install. Choose the version that you want and click on Install.
-- The Creative Cloud desktop app will start downloading and installing the selected version of Adobe Premiere Pro CC 2015.3-10.3 Mac OS X on your Mac.
-- When the installation is complete, you can launch Adobe Premiere Pro CC 2015.3-10.3 Mac OS X from your Applications folder or the Creative Cloud desktop app.
-
- You have now learned how to update Adobe Premiere Pro CC 2015.3-10.3 Mac OS X or roll back to a previous version on your Mac.
- Conclusion
- In this article, we have reviewed Adobe Premiere Pro CC 2015.3-10.3 Mac OS X, the version of Premiere Pro released in June 2016 as part of the Creative Cloud 2015 release cycle.
- We have covered what this version is, why it matters for video editing, its main features and benefits, how to install it on your Mac, how to use it for your video projects, how to troubleshoot common issues, and how to update or roll back to a previous version if needed.
- We hope that this article has given you a comprehensive understanding of Adobe Premiere Pro CC 2015.3-10.3 Mac OS X, and that you will be able to decide if it is the right video editing software for you.
- If you are interested in trying out Adobe Premiere Pro CC 2015.3-10.3 Mac OS X for yourself, you can download it from the official Adobe website and use it for free for 7 days with a trial account or purchase a subscription plan with a monthly or annual fee.
- If you have any questions or feedback about Adobe Premiere Pro CC 2015.3-10.3 Mac OS X, you can contact Adobe support or community forums, or leave a comment below.
- Thank you for reading this article, and happy video editing!
- FAQs
- Here are some of the frequently asked questions (FAQs) about Adobe Premiere Pro CC 2015.3-10.3 Mac OS X:
- What is the difference between Adobe Premiere Pro CC 2015.3 and 10.3?
- Adobe Premiere Pro CC 2015.3 and 10.3 refer to the same release of the software, under two different naming conventions.
- Adobe Premiere Pro CC 2015.3 is the marketing name the release shipped under in June 2016, as part of the Creative Cloud 2015 update cycle.
- Adobe Premiere Pro 10.3 is the corresponding internal application version number, which is why the two labels are often written together as "2015.3-10.3".
- Point updates within the 10.x line brought only bug fixes and minor improvements.
- Is Adobe Premiere Pro CC 2015.3-10.3 Mac OS X compatible with macOS High Sierra (10.13)?
- Not officially: Adobe's documented system requirements for this release cover Mac OS X v10.10 through v10.12, and macOS High Sierra (10.13) falls outside that range.
- In practice, many users run Adobe Premiere Pro CC 2015.3-10.3 Mac OS X on macOS High Sierra (10.13), but some report issues such as crashing, freezing, audio distortion, or rendering problems.
- If you encounter any of these issues, you can try some of the troubleshooting steps mentioned above, or update to a newer version of Adobe Premiere Pro CC that officially supports your macOS release.
- How much does Adobe Premiere Pro CC 2015.3-10.3 Mac OS X cost?
- Adobe Premiere Pro CC 2015.3-10.3 Mac OS X is not sold as a standalone product, but as part of the Adobe Creative Cloud suite of applications.
- To use Adobe Premiere Pro CC 2015.3-10.3 Mac OS X, you need to have a subscription plan with Adobe Creative Cloud, which gives you access to all the Adobe apps and services that you need for your creative projects.
- The subscription plans vary depending on your needs and preferences, such as the number of apps, the storage space, the number of users, etc.
- As of June 2021, these are some of the subscription plans and prices for Adobe Creative Cloud:
-
-- The Single App plan gives you access to one Adobe app of your choice, such as Adobe Premiere Pro CC 2015.3-10.3 Mac OS X, plus 100 GB of cloud storage and other features for $20.99 per month or $239.88 per year.
-- The All Apps plan gives you access to all the Adobe apps, including Adobe Premiere Pro CC 2015.3-10.3 Mac OS X, plus 100 GB of cloud storage and other features for $52.99 per month or $599.88 per year.
-- The All Apps + Adobe Stock plan gives you access to all the Adobe apps, including Adobe Premiere Pro CC 2015.3-10.3 Mac OS X, plus 100 GB of cloud storage and other features, plus 10 images per month from Adobe Stock for $82.98 per month or $959.76 per year.
-- The Student & Teacher plan gives you access to all the Adobe apps, including Adobe Premiere Pro CC 2015.3-10.3 Mac OS X, plus 100 GB of cloud storage and other features for $19.99 per month or $239.88 per year, if you are eligible for academic discount.
-- The Business plan gives you access to all the Adobe apps, including Adobe Premiere Pro CC 2015.3-10.3 Mac OS X, plus 1 TB of cloud storage and other features for $79.99 per month or $959.88 per year per user, if you are a small or medium business owner.
-
- You can also try Adobe Premiere Pro CC 2015.3-10.3 Mac OS X for free for 7 days with a trial account before purchasing a subscription plan.
- What are some alternatives to Adobe Premiere Pro CC 2015.3-10.3 Mac OS X?
- Adobe Premiere Pro CC 2015.3-10.3 Mac OS X is not the only video editing software available for Mac users.
- There are many other alternatives that you can use, depending on your needs, preferences, budget, and skill level.
- Here are some of the popular alternatives to Adobe Premiere Pro CC 2015.3-10.3 Mac OS X:
-
-- Final Cut Pro X: This is Apple's own video editing software, designed specifically for Mac users. It offers a sleek and intuitive interface, powerful performance, advanced features, and seamless integration with other Apple products and services. It costs $299.99 as a one-time purchase from the Mac App Store.
-- iMovie: This is another video editing software from Apple, but more suitable for beginners and casual users. It offers a simple and user-friendly interface, basic features, and easy sharing options. It is free for Mac users and comes pre-installed on most Mac computers.
-- DaVinci Resolve: This is a professional video editing software from Blackmagic Design, known for its color grading and visual effects capabilities. It offers a comprehensive and customizable interface, high-end features, and support for multiple formats and platforms. It has a free version with some limitations, and a paid Studio version with more features for $299.
-- HitFilm Express: This is a free video editing software from FXhome, known for its compositing and special effects features. It offers a modern and versatile interface, impressive features, and support for various plugins and add-ons. It is free to download and use, but you can also purchase optional packs or bundles for more features and effects.
-- Shotcut: This is an open-source video editing software from Meltytech, known for its simplicity and cross-platform compatibility. It offers a minimalistic and straightforward interface, basic features, and support for various formats and codecs. It is free to download and use, but you can also donate to support its development.
-
- These are some of the popular alternatives to Adobe Premiere Pro CC 2015.3-10.3 Mac OS X that you can try out.
- Of course, there are many other video editing software available for Mac users, so you can do your own research and find the one that suits you best.
- Where can I find more resources and tutorials on Adobe Premiere Pro CC 2015.3-10.3 Mac OS X?
- If you want to learn more about Adobe Premiere Pro CC 2015.3-10.3 Mac OS X, you can find plenty of resources and tutorials online.
- Here are some of the best places to find more resources and tutorials on Adobe Premiere Pro CC 2015.3-10.3 Mac OS X:
-
-- The official Adobe website: information, updates, downloads, help pages, tutorials, forums, blogs, podcasts, webinars, and events.
-- The official Adobe YouTube channel: videos, demos, tips, tricks, interviews, and live streams.
-- The official Adobe Facebook page: news, announcements, stories, contests, and feedback.
-- The official Adobe Twitter account: short updates and conversations with the Adobe team.
-- The official Adobe Instagram account: photos, videos, stories, and reels.
-- The official Adobe Reddit community: posts, discussions, questions, and answers.
-- The official Adobe LinkedIn page: articles, insights, and job postings.
-- The official Adobe Behance portfolio: projects, collections, and galleries made with Adobe tools.
-- Other online courses and books: many third-party courses and books also cover Adobe Premiere Pro in depth. Some of the popular ones are:
-
- These are some of the best places to find more resources and tutorials on Adobe Premiere Pro CC 2015.3-10.3 Mac OS X.
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Candy Paint And Gold Teeth Waka Flocka Flame Download !FREE!.md b/spaces/stomexserde/gpt4-ui/Examples/Candy Paint And Gold Teeth Waka Flocka Flame Download !FREE!.md
deleted file mode 100644
index 053b3cd41b93d678a0d75fe1353c7282c521ddef..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Candy Paint And Gold Teeth Waka Flocka Flame Download !FREE!.md
+++ /dev/null
@@ -1,21 +0,0 @@
-
-How to Download Candy Paint & Gold Teeth by Waka Flocka Flame
-Candy Paint & Gold Teeth is a song by American rapper Waka Flocka Flame, featuring Ludacris and Bun B. It was released on June 8, 2012, as the ninth track from his second studio album, Triple F Life: Friends, Fans and Family. The song pays homage to the southern rap culture, with references to soul food, strip clubs, car customization and legendary artists like Pimp C and Willie D.
-candy paint and gold teeth waka flocka flame download
-Download --->>> https://urlgoal.com/2uI7Cv
-If you are a fan of Waka Flocka Flame and want to download Candy Paint & Gold Teeth to your device, here are some easy steps to follow:
-
-- Go to YouTube and search for "candy paint and gold teeth waka flocka flame". You should see the official music video as the first result[^2^]. Alternatively, you can use this link: https://www.youtube.com/watch?v=zjpgQUWKVx4
-- Copy the URL of the video and paste it into a YouTube to MP3 converter website, such as https://ytmp3.cc/en13/ or https://y2mate.com/. These websites allow you to download YouTube videos as MP3 files for free.
-- Click on the "Convert" or "Start" button and wait for the conversion process to finish. You should see a download link or button when it is done.
-- Click on the download link or button and save the MP3 file to your device. You can also rename it if you want.
-- Enjoy listening to Candy Paint & Gold Teeth by Waka Flocka Flame!
-
-If you want to learn more about the song, you can also check out its lyrics[^1^] [^4^] and its Shazam page[^3^], where you can discover more songs by Waka Flocka Flame and other related artists.
-
-Candy Paint & Gold Teeth is one of the most popular songs from Waka Flocka Flame's second album, Triple F Life: Friends, Fans and Family. The album was released on June 12, 2012, and debuted at number 10 on the Billboard 200 chart, selling 33,000 copies in its first week. The album received mixed reviews from critics, who praised Waka Flocka Flame's energy and charisma, but criticized his lyrics and lack of originality.
-The song features two legendary southern rappers, Ludacris and Bun B, who both deliver impressive verses that showcase their skills and influence. Ludacris raps about his success and wealth, while Bun B raps about his loyalty and respect. Waka Flocka Flame holds his own with his aggressive and catchy hook and verse, where he expresses his pride and love for his hometown of Riverdale, Georgia.
-The song is produced by Honorable C.N.O.T.E. and Redwine, who create a hard-hitting and melodic beat that blends trap drums, piano keys, synth chords and guitar riffs. The beat matches the mood and theme of the song, which is a celebration of the southern rap culture and lifestyle. The song also samples a vocal snippet from "Southern Hospitality" by Ludacris, which adds to the homage.
-
-Candy Paint & Gold Teeth appeals to fans of Waka Flocka Flame and southern rap in general. It showcases the talent and diversity of the South, as well as the passion and pride of its artists, and it makes you want to ride low in your car, eat some soul food, and party all night with your people.
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Data Cash US Quest Sql Navigator 6.7 Keygen Torrent 9 BEST.md b/spaces/stomexserde/gpt4-ui/Examples/Data Cash US Quest Sql Navigator 6.7 Keygen Torrent 9 BEST.md
deleted file mode 100644
index b1fed417c4b550cb1ae5a5b7af30511bde3ac6f0..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Data Cash US Quest Sql Navigator 6.7 Keygen Torrent 9 BEST.md
+++ /dev/null
@@ -1,15 +0,0 @@
-
-Data Cash US Quest Sql Navigator 6.7 Keygen Torrent 9: A Review of the Software
-Data Cash US Quest Sql Navigator 6.7 Keygen Torrent 9 is a software that allows users to access and manage SQL databases easily and efficiently. It is a product of Quest Software, a leading provider of database management solutions. Data Cash US Quest Sql Navigator 6.7 Keygen Torrent 9 is designed to help users perform tasks such as creating, editing, debugging, and optimizing SQL code, as well as executing queries, analyzing data, and generating reports.
-Data Cash US quest sql navigator 6.7 keygen torrent 9
-Download ===> https://urlgoal.com/2uI6WH
-Data Cash US Quest Sql Navigator 6.7 Keygen Torrent 9 is compatible with various versions of SQL Server, Oracle, MySQL, PostgreSQL, and other databases. It has a user-friendly interface that supports drag-and-drop operations, syntax highlighting, code completion, and code formatting. It also has a powerful debugger that can trace and modify SQL statements, variables, and parameters. Data Cash US Quest Sql Navigator 6.7 Keygen Torrent 9 also offers features such as code analysis, code refactoring, code generation, code templates, code snippets, and code comparison.
-Data Cash US Quest Sql Navigator 6.7 Keygen Torrent 9 is available for download from various sources on the internet. However, users need to have a valid license key to activate the software and use its full functionality. A license key can be obtained from the official website of Quest Software or from other authorized resellers. Alternatively, users can also use a keygen tool to generate a license key for Data Cash US Quest Sql Navigator 6.7 Keygen Torrent 9. A keygen tool is a software that creates a unique serial number or activation code for a specific software.
-However, using a keygen tool to activate Data Cash US Quest Sql Navigator 6.7 Keygen Torrent 9 is not recommended for several reasons. First, it is illegal and violates the terms and conditions of Quest Software. Second, it may expose users to malware or viruses that can harm their computers or steal their personal information. Third, it may result in poor performance or errors in the software or the database. Therefore, users should always purchase a legitimate license key for Data Cash US Quest Sql Navigator 6.7 Keygen Torrent 9 from Quest Software or its authorized resellers.
-Data Cash US Quest Sql Navigator 6.7 Keygen Torrent 9 is a useful and reliable software for SQL database management. It has many features and benefits that can help users improve their productivity and efficiency. However, users should always use a legal and valid license key to activate the software and enjoy its full potential.
-
-Data Cash US Quest Sql Navigator 6.7 Keygen Torrent 9 is not only a tool for SQL developers, but also for SQL administrators and analysts. It has features that can help users manage and monitor their SQL databases, such as backup and restore, security and auditing, performance tuning, and schema comparison. Data Cash US Quest Sql Navigator 6.7 Keygen Torrent 9 also integrates with other Quest Software products, such as Toad for Oracle, Toad for SQL Server, Toad Data Modeler, and Toad Data Point.
-Data Cash US Quest Sql Navigator 6.7 Keygen Torrent 9 has received positive feedback from its users and reviewers. It has been praised for its ease of use, functionality, stability, and support. It has also been awarded several recognitions, such as the Best of TechEd 2013 Award for Database Development, the SQL Server Pro Community Choice Award 2013 for Best Database Development Tool, and the Visual Studio Magazine Readers Choice Award 2013 for Best Database Development Tool.
-
-Data Cash US Quest Sql Navigator 6.7 Keygen Torrent 9 is a software that can help users improve their SQL database management skills and productivity. It is a valuable asset for anyone who works with SQL databases on a regular basis. However, users should always respect the intellectual property rights of Quest Software and purchase a legal license key for Data Cash US Quest Sql Navigator 6.7 Keygen Torrent 9.
-
-
\ No newline at end of file
diff --git a/spaces/stomexserde/gpt4-ui/Examples/Freedownloadvrayrenderpresetsfor3dsmax REPACK.md b/spaces/stomexserde/gpt4-ui/Examples/Freedownloadvrayrenderpresetsfor3dsmax REPACK.md
deleted file mode 100644
index 011c3b9373efcc561427764c9740e7316a0503e8..0000000000000000000000000000000000000000
--- a/spaces/stomexserde/gpt4-ui/Examples/Freedownloadvrayrenderpresetsfor3dsmax REPACK.md
+++ /dev/null
@@ -1,22 +0,0 @@
-
-How to Download and Use V-Ray Render Presets for 3ds Max
-V-Ray is a powerful rendering engine that can create realistic and stunning images for 3D modeling and animation. However, setting up the render parameters can be time-consuming and complex, especially for beginners. That's why V-Ray offers a collection of render presets that can help you achieve different effects and styles with just a few clicks.
-Freedownloadvrayrenderpresetsfor3dsmax
-DOWNLOAD 🗹 https://urlgoal.com/2uIaqs
-In this article, we will show you how to download and use V-Ray render presets for 3ds Max, one of the most popular 3D software applications. You will learn how to access the presets, apply them to your scenes, and customize them to suit your needs.
-How to Download V-Ray Render Presets for 3ds Max
-There are several sources where you can download V-Ray render presets for 3ds Max. Some of them are free, while others require a subscription or a purchase. Here are some of the options:
-
-- 3DsMax / Realistic and Fast Render Presets: This is a Facebook group where you can find and share various render presets for 3ds Max and V-Ray. You can join the group for free and browse through the topics and events[^1^].
-- 3ds Max Vray Preset Free Download - suggestions - Informer: This is a website that provides software suggestions and downloads. You can find several V-Ray material presets and converters for 3ds Max here[^2^]. Some of them are free, while others require a registration or a trial.
-- Freedownloadvrayrenderpresetsfor3dsmax: This is a PDF file that contains a link to download a collection of V-Ray render presets for 3ds Max[^3^]. However, this file may not be safe or reliable, as it may contain malware or viruses. We do not recommend downloading files from unknown sources.
-
-How to Use V-Ray Render Presets for 3ds Max
-Once you have downloaded the V-Ray render presets for 3ds Max, you need to install them in your software. The installation process may vary depending on the source and the format of the presets. Generally, you need to copy or extract the preset files to the appropriate folder in your 3ds Max directory. For example, if you have downloaded VRayMtl Converter, you need to copy the VRayMtlConverter.mse file to C:\Program Files\Autodesk\3ds Max \scripts.
-
-After installing the presets, you can access them from the Render Setup dialog in 3ds Max. To open the Render Setup dialog, go to Rendering > Render Setup or press F10 on your keyboard. In the Render Setup dialog, you can choose V-Ray as your renderer and then click on the Preset button at the bottom left corner. This will open a drop-down menu where you can see all the available presets. You can select any preset that matches your scene and your desired output.
-When you apply a preset, it will automatically adjust the render settings such as resolution, quality, lighting, camera, materials, etc. You can preview the result in the ActiveShade window or by clicking on Render. However, you can also modify any of the settings manually if you want to fine-tune your render. For example, you can change the exposure value, the color balance, the depth of field, etc.
-Conclusion
-V-Ray render presets for 3ds Max are a great way to save time and effort when rendering your 3D scenes. They can help you achieve different effects and styles with ease and speed. However, they are not a substitute for your own creativity and skill. You should always experiment with different settings and options to create your own unique renders.
-We hope this article has helped you get started with V-Ray render presets for 3ds Max. Happy rendering!
-
-
\ No newline at end of file
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Construct 2 License File Crackl _HOT_.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Construct 2 License File Crackl _HOT_.md
deleted file mode 100644
index 89486ae12f037b39b968aced89a2c94c85dd8a84..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Construct 2 License File Crackl _HOT_.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Construct 2 License File Crackl
-DOWNLOAD ⚙⚙⚙ https://cinurl.com/2uEYG0
-
-Tags: download construct 3 full crack. C3p file included; Cordova HTML5 files included; icons included; AdMob ads integrated; mouse and touch controls; runs on all platforms; full ... Unlock your full creative potential with a full Construct 2 license.
-
-
-
diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/WelcomeBackmovie720pdownload [CRACKED]movie.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/WelcomeBackmovie720pdownload [CRACKED]movie.md
deleted file mode 100644
index 71bea5be4f8e074220cf115e87a5d7e3e43e76cc..0000000000000000000000000000000000000000
--- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/WelcomeBackmovie720pdownload [CRACKED]movie.md
+++ /dev/null
@@ -1,6 +0,0 @@
-WelcomeBackmovie720pdownloadmovie
-DOWNLOAD >>>>> https://cinurl.com/2uEXp8
-
- d5da3c52bf
-
-
-
diff --git a/spaces/sushimashi/webui/app.py b/spaces/sushimashi/webui/app.py
deleted file mode 100644
index a6d4e6fbbf46c7b912969ed7b531c3de6a81fc64..0000000000000000000000000000000000000000
--- a/spaces/sushimashi/webui/app.py
+++ /dev/null
@@ -1,76 +0,0 @@
-import os
-from subprocess import getoutput
-
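-# Detect which GPU this Space was assigned and install the matching prebuilt
-# xformers wheel (the A10G and T4 need different CUDA builds).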
-gpu_info = getoutput('nvidia-smi')
-if("A10G" in gpu_info):
- os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+4c06c79.d20221205-cp38-cp38-linux_x86_64.whl")
-elif("T4" in gpu_info):
- os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+1515f77.d20221130-cp38-cp38-linux_x86_64.whl")
-
-os.system(f"git clone -b v1.5 https://github.com/camenduru/stable-diffusion-webui /home/user/app/stable-diffusion-webui")
-os.chdir("/home/user/app/stable-diffusion-webui")
-
-os.system(f"wget -q https://github.com/camenduru/webui/raw/main/env_patch.py -O /home/user/app/env_patch.py")
-os.system(f"sed -i -e '/import image_from_url_text/r /home/user/app/env_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e '/(modelmerger_interface, \"Checkpoint Merger\", \"modelmerger\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e '/(train_interface, \"Train\", \"ti\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e '/extensions_interface, \"Extensions\", \"extensions\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e '/settings_interface, \"Settings\", \"settings\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f'''sed -i -e "s/document.getElementsByTagName('gradio-app')\[0\].shadowRoot/!!document.getElementsByTagName('gradio-app')[0].shadowRoot ? document.getElementsByTagName('gradio-app')[0].shadowRoot : document/g" /home/user/app/stable-diffusion-webui/script.js''')
-os.system(f"sed -i -e 's/ show_progress=False,/ show_progress=True,/g' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e 's/shared.demo.launch/shared.demo.queue().launch/g' /home/user/app/stable-diffusion-webui/webui.py")
-os.system(f"sed -i -e 's/ outputs=\[/queue=False, &/g' /home/user/app/stable-diffusion-webui/modules/ui.py")
-os.system(f"sed -i -e 's/ queue=False, / /g' /home/user/app/stable-diffusion-webui/modules/ui.py")
-
-# ----------------------------Please duplicate this space and delete this block if you don't want to see the extra header----------------------------
-os.system(f"wget -q https://github.com/camenduru/webui/raw/main/header_patch.py -O /home/user/app/header_patch.py")
-os.system(f"sed -i -e '/demo:/r /home/user/app/header_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py")
-# ---------------------------------------------------------------------------------------------------------------------------------------------------
-
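-# Shared (public) Spaces get a locked-down launch below: custom scripts are removed
-# and only the model/VAE/YAML named by environment variables are downloaded.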
-if "IS_SHARED_UI" in os.environ:
- os.system(f"rm -rfv /home/user/app/stable-diffusion-webui/scripts/")
-
- os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-config.json -O /home/user/app/shared-config.json")
- os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-ui-config.json -O /home/user/app/shared-ui-config.json")
-
- os.system(f"wget -q {os.getenv('MODEL_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME')}")
- os.system(f"wget -q {os.getenv('VAE_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('VAE_NAME')}")
- os.system(f"wget -q {os.getenv('YAML_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('YAML_NAME')}")
-
- os.system(f"python launch.py --force-enable-xformers --disable-console-progressbars --enable-console-prompts --ui-config-file /home/user/app/shared-ui-config.json --ui-settings-file /home/user/app/shared-config.json --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding")
-else:
- # Please duplicate this space and delete # character in front of the custom script you want to use or add here more custom scripts with same structure os.system(f"wget -q https://CUSTOM_SCRIPT_URL -O /home/user/app/stable-diffusion-webui/scripts/CUSTOM_SCRIPT_NAME.py")
- os.system(f"wget -q https://gist.github.com/camenduru/9ec5f8141db9902e375967e93250860f/raw/d0bcf01786f20107c329c03f8968584ee67be12a/run_n_times.py -O /home/user/app/stable-diffusion-webui/scripts/run_n_times.py")
-
- # Please duplicate this space and delete # character in front of the extension you want to use or add here more extensions with same structure os.system(f"git clone https://EXTENSION_GIT_URL /home/user/app/stable-diffusion-webui/extensions/EXTENSION_NAME")
- #os.system(f"git clone https://github.com/camenduru/stable-diffusion-webui-artists-to-study /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-artists-to-study")
- os.system(f"git clone https://github.com/yfszzx/stable-diffusion-webui-images-browser /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser")
- os.system(f"git clone https://github.com/deforum-art/deforum-for-automatic1111-webui /home/user/app/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui")
-
- # Please duplicate this space and delete # character in front of the model you want to use or add here more ckpts with same structure os.system(f"wget -q https://CKPT_URL -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/CKPT_NAME.ckpt")
- #os.system(f"wget -q https://huggingface.co/nitrosocke/Arcane-Diffusion/resolve/main/arcane-diffusion-v3.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/arcane-diffusion-v3.ckpt")
- #os.system(f"wget -q https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/Cyberpunk-Anime-Diffusion.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Cyberpunk-Anime-Diffusion.ckpt")
- #os.system(f"wget -q https://huggingface.co/prompthero/midjourney-v4-diffusion/resolve/main/mdjrny-v4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/mdjrny-v4.ckpt")
- #os.system(f"wget -q https://huggingface.co/nitrosocke/mo-di-diffusion/resolve/main/moDi-v1-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/moDi-v1-pruned.ckpt")
- #os.system(f"wget -q https://huggingface.co/Fictiverse/Stable_Diffusion_PaperCut_Model/resolve/main/PaperCut_v1.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/PaperCut_v1.ckpt")
- #os.system(f"wget -q https://huggingface.co/lilpotat/sa/resolve/main/samdoesarts_style.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/samdoesarts_style.ckpt")
- #os.system(f"wget -q https://huggingface.co/hakurei/waifu-diffusion-v1-3/resolve/main/wd-v1-3-float32.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/wd-v1-3-float32.ckpt")
- #os.system(f"wget -q https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4.ckpt")
- #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt")
- #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-inpainting/resolve/main/sd-v1-5-inpainting.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-5-inpainting.ckpt")
-
- #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.ckpt")
- #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0.vae.pt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.vae.pt")
-
- #os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2/resolve/main/768-v-ema.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.ckpt")
- #os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.yaml")
-
- os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/main/v2-1_768-ema-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v2-1_768-ema-pruned.ckpt")
- os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v2-1_768-ema-pruned.yaml")
-
- os.system(f"wget -q {os.getenv('MODEL_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME')}")
- os.system(f"wget -q {os.getenv('VAE_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('VAE_NAME')}")
- os.system(f"wget -q {os.getenv('YAML_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('YAML_NAME')}")
-
- os.system(f"python launch.py --force-enable-xformers --ui-config-file /home/user/app/ui-config.json --ui-settings-file /home/user/app/config.json --disable-console-progressbars --enable-console-prompts --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding --api --skip-torch-cuda-test")
-
\ No newline at end of file
diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/decode_heads/ocr_head.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/decode_heads/ocr_head.py
deleted file mode 100644
index 715852e94e81dc46623972748285d2d19237a341..0000000000000000000000000000000000000000
--- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/decode_heads/ocr_head.py
+++ /dev/null
@@ -1,127 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from annotator.uniformer.mmcv.cnn import ConvModule
-
-from annotator.uniformer.mmseg.ops import resize
-from ..builder import HEADS
-from ..utils import SelfAttentionBlock as _SelfAttentionBlock
-from .cascade_decode_head import BaseCascadeDecodeHead
-
-
-class SpatialGatherModule(nn.Module):
- """Aggregate the context features according to the initial predicted
- probability distribution.
-
- Employ the soft-weighted method to aggregate the context.
- """
-
- def __init__(self, scale):
- super(SpatialGatherModule, self).__init__()
- self.scale = scale
-
- def forward(self, feats, probs):
- """Forward function."""
- batch_size, num_classes, height, width = probs.size()
- channels = feats.size(1)
- probs = probs.view(batch_size, num_classes, -1)
- feats = feats.view(batch_size, channels, -1)
- # [batch_size, height*width, num_classes]
- feats = feats.permute(0, 2, 1)
- # [batch_size, channels, height*width]
- probs = F.softmax(self.scale * probs, dim=2)
- # [batch_size, channels, num_classes]
- ocr_context = torch.matmul(probs, feats)
- ocr_context = ocr_context.permute(0, 2, 1).contiguous().unsqueeze(3)
- return ocr_context
-
-
-class ObjectAttentionBlock(_SelfAttentionBlock):
- """Make a OCR used SelfAttentionBlock."""
-
- def __init__(self, in_channels, channels, scale, conv_cfg, norm_cfg,
- act_cfg):
- if scale > 1:
- query_downsample = nn.MaxPool2d(kernel_size=scale)
- else:
- query_downsample = None
- super(ObjectAttentionBlock, self).__init__(
- key_in_channels=in_channels,
- query_in_channels=in_channels,
- channels=channels,
- out_channels=in_channels,
- share_key_query=False,
- query_downsample=query_downsample,
- key_downsample=None,
- key_query_num_convs=2,
- key_query_norm=True,
- value_out_num_convs=1,
- value_out_norm=True,
- matmul_norm=True,
- with_out=True,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg)
- self.bottleneck = ConvModule(
- in_channels * 2,
- in_channels,
- 1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
-
- def forward(self, query_feats, key_feats):
- """Forward function."""
- context = super(ObjectAttentionBlock,
- self).forward(query_feats, key_feats)
- output = self.bottleneck(torch.cat([context, query_feats], dim=1))
- if self.query_downsample is not None:
- output = resize(query_feats)
-
- return output
-
-
-@HEADS.register_module()
-class OCRHead(BaseCascadeDecodeHead):
- """Object-Contextual Representations for Semantic Segmentation.
-
-    This head is the implementation of `OCRNet
-    <https://arxiv.org/abs/1909.11065>`_.
-
- Args:
- ocr_channels (int): The intermediate channels of OCR block.
-        scale (int): The scale of the probability map in SpatialGatherModule.
-            Default: 1.
- """
-
- def __init__(self, ocr_channels, scale=1, **kwargs):
- super(OCRHead, self).__init__(**kwargs)
- self.ocr_channels = ocr_channels
- self.scale = scale
- self.object_context_block = ObjectAttentionBlock(
- self.channels,
- self.ocr_channels,
- self.scale,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
- self.spatial_gather_module = SpatialGatherModule(self.scale)
-
- self.bottleneck = ConvModule(
- self.in_channels,
- self.channels,
- 3,
- padding=1,
- conv_cfg=self.conv_cfg,
- norm_cfg=self.norm_cfg,
- act_cfg=self.act_cfg)
-
- def forward(self, inputs, prev_output):
- """Forward function."""
- x = self._transform_inputs(inputs)
- feats = self.bottleneck(x)
- context = self.spatial_gather_module(feats, prev_output)
- object_context = self.object_context_block(feats, context)
- output = self.cls_seg(object_context)
-
- return output
diff --git a/spaces/szukevin/VISOR-GPT/train/scripts/convert_pegasus_from_huggingface_to_tencentpretrain.py b/spaces/szukevin/VISOR-GPT/train/scripts/convert_pegasus_from_huggingface_to_tencentpretrain.py
deleted file mode 100644
index 6bd8f8c0c1905d1c007ebdd4d48880bd020f4fb0..0000000000000000000000000000000000000000
--- a/spaces/szukevin/VISOR-GPT/train/scripts/convert_pegasus_from_huggingface_to_tencentpretrain.py
+++ /dev/null
@@ -1,42 +0,0 @@
-import sys
-import os
-import argparse
-import collections
-import torch
-
-tencentpretrain_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))
-sys.path.insert(0, tencentpretrain_dir)
-
-from scripts.convert_bart_from_huggingface_to_tencentpretrain import \
- convert_encoder_decoder_transformer_from_huggingface_to_tencentpretrain
-
-
-parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
-parser.add_argument("--input_model_path", type=str, default="models/input_model.bin",
- help=".")
-parser.add_argument("--output_model_path", type=str, default="models/output_model.bin",
- help=".")
-parser.add_argument("--layers_num", type=int, default=6, help=".")
-parser.add_argument("--decoder_layers_num", type=int, default=6, help=".")
-
-args = parser.parse_args()
-
-input_model = torch.load(args.input_model_path, map_location="cpu")
-
-output_model = collections.OrderedDict()
-
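-# Remap Hugging Face Pegasus parameter names onto TencentPretrain's layout; the
-# sinusoidal position tables get an extra middle dimension via unsqueeze(1).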
-output_model["embedding.sinusoidalpos.pe"] = input_model["model.encoder.embed_positions.weight"].unsqueeze(1)
-output_model["tgt_embedding.sinusoidalpos.pe"] = input_model["model.decoder.embed_positions.weight"].unsqueeze(1)
-output_model["embedding.word.embedding.weight"] = input_model["model.encoder.embed_tokens.weight"]
-output_model["tgt_embedding.word.embedding.weight"] = input_model["model.decoder.embed_tokens.weight"]
-output_model["target.lm.output_layer.weight"] = input_model["lm_head.weight"]
-output_model["target.lm.output_layer.bias"] = input_model["final_logits_bias"].squeeze(0)
-
-convert_encoder_decoder_transformer_from_huggingface_to_tencentpretrain(input_model, output_model, args.layers_num, args.decoder_layers_num)
-
-output_model["encoder.layer_norm.gamma"] = input_model["model.encoder.layer_norm.weight"]
-output_model["encoder.layer_norm.beta"] = input_model["model.encoder.layer_norm.bias"]
-output_model["decoder.layer_norm.gamma"] = input_model["model.decoder.layer_norm.weight"]
-output_model["decoder.layer_norm.beta"] = input_model["model.decoder.layer_norm.bias"]
-
-torch.save(output_model, args.output_model_path)
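A quick sanity check for a conversion script like the one above is to reload the written checkpoint and compare a few tensor shapes against the source model. This is a hypothetical snippet; the paths and key names are taken verbatim from the script, and nothing here is part of TencentPretrain itself:

```python
import torch

src = torch.load("models/input_model.bin", map_location="cpu")
dst = torch.load("models/output_model.bin", map_location="cpu")

# Word embeddings should carry over with identical shapes.
assert dst["embedding.word.embedding.weight"].shape == \
    src["model.encoder.embed_tokens.weight"].shape

# The sinusoidal position table gained a broadcast dim via unsqueeze(1).
print(dst["embedding.sinusoidalpos.pe"].shape)
print(src["model.encoder.embed_positions.weight"].shape)
```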
diff --git a/spaces/t13718236382/bingoGPT4/tests/kblob.ts b/spaces/t13718236382/bingoGPT4/tests/kblob.ts
deleted file mode 100644
index 9e15b41c1c94a690beb61b23cdb42fc78767ccd2..0000000000000000000000000000000000000000
--- a/spaces/t13718236382/bingoGPT4/tests/kblob.ts
+++ /dev/null
@@ -1,27 +0,0 @@
-import FormData from 'form-data'
-
-import { fetch } from '@/lib/isomorphic'
-
-const formData = new FormData()
-
-const knowledgeRequest = {"imageInfo":{"url":"https://www.baidu.com/img/PCfb_5bf082d29588c07f842ccde3f97243ea.png"},"knowledgeRequest":{"invokedSkills":["ImageById"],"subscriptionId":"Bing.Chat.Multimodal","invokedSkillsRequestData":{"enableFaceBlur":true},"convoData":{"convoid":"51D|BingProdUnAuthenticatedUsers|E3DCA904FF236C67C3450163BCEC64CFF3F618CC8A4AFD75FD518F5ED0ADA080","convotone":"Creative"}}}
-
-formData.append('knowledgeRequest', JSON.stringify(knowledgeRequest))
-
-
-fetch('https://bing.vcanbb.top/images/kblob',
- {
- method: 'POST',
- body: formData.getBuffer(),
- headers: {
- "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"",
- "sec-ch-ua-mobile": "?0",
- "sec-ch-ua-platform": "\"Windows\"",
- "Referer": "https://bing.vcanbb.top/web/index.html",
- "Referrer-Policy": "origin-when-cross-origin",
- ...formData.getHeaders()
- }
-
- }
-).then(res => res.text())
-.then(res => console.log('res', res))
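For comparison, a rough Python equivalent of the upload test above (endpoint, payload, and form field name copied from the TypeScript; headers trimmed; the third-party `requests` library is assumed):

```python
import json
import requests

knowledge_request = {
    "imageInfo": {"url": "https://www.baidu.com/img/PCfb_5bf082d29588c07f842ccde3f97243ea.png"},
    "knowledgeRequest": {
        "invokedSkills": ["ImageById"],
        "subscriptionId": "Bing.Chat.Multimodal",
        "invokedSkillsRequestData": {"enableFaceBlur": True},
    },
}

# Send the JSON blob as a multipart form field, mirroring formData.append above.
resp = requests.post(
    "https://bing.vcanbb.top/images/kblob",
    files={"knowledgeRequest": (None, json.dumps(knowledge_request))},
    headers={"Referer": "https://bing.vcanbb.top/web/index.html"},
)
print("res", resp.text)
```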
diff --git a/spaces/techguy1423/ChatABT/test4.py b/spaces/techguy1423/ChatABT/test4.py
deleted file mode 100644
index 987bc92e874aeeaa68dffcc5e9e4a6cdbc348545..0000000000000000000000000000000000000000
--- a/spaces/techguy1423/ChatABT/test4.py
+++ /dev/null
@@ -1,38 +0,0 @@
-import gradio as gr
-from transformers import AutoTokenizer, AutoModelForCausalLM
-import torch
-
-# Load the pre-trained Llama model and tokenizer
-model_name = "meta-llama/Llama-2-13b-chat-hf"
-tokenizer = AutoTokenizer.from_pretrained(model_name)
-model = AutoModelForCausalLM.from_pretrained(model_name)
-
-# Define a system prompt to set the context and behavior
-system_prompt = "You are chatting with a friendly AI. Ask me anything!"
-
-# Function to generate a response
-def chat(input_text):
- # Combine the system prompt and user input
- full_prompt = f"{system_prompt}\n\n{input_text}"
-
- # Encode the combined prompt and generate a response
- input_ids = tokenizer.encode(full_prompt, return_tensors="pt")
- with torch.no_grad():
- # max_new_tokens counts only generated tokens, so a long prompt is not cut off
- output = model.generate(input_ids, max_new_tokens=50, num_return_sequences=1)
-
- # Decode and return the AI's response
- ai_response = tokenizer.decode(output[0], skip_special_tokens=True)
- return ai_response
-
-# Create a Gradio interface
-iface = gr.Interface(
- fn=chat,
- inputs="text",
- outputs="text",
- title="Llama Chatbot",
- description="Chat with a friendly AI chatbot powered by the Llama model.",
- live=True
-)
-
-# Launch the Gradio interface
-iface.launch()
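One caveat with the script above: Llama-2-13b weights are roughly 26 GB even in half precision, so a plain fp32 `from_pretrained` call will exhaust memory on most Spaces. A lighter-weight load is sketched below; these are standard `transformers` arguments, but `device_map="auto"` additionally requires the `accelerate` package, and the gated model id still needs an approved Hugging Face access token:

```python
import torch
from transformers import AutoModelForCausalLM

# fp16 halves memory versus fp32; device_map spreads layers across
# whatever GPUs (and, if needed, CPU RAM) are available.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-chat-hf",
    torch_dtype=torch.float16,
    device_map="auto",
)
```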
diff --git a/spaces/tekkonetes/rust-code-server/setup.sh b/spaces/tekkonetes/rust-code-server/setup.sh
deleted file mode 100644
index caa2968f3a6a7abac0d7944eff2f2602b4327f78..0000000000000000000000000000000000000000
--- a/spaces/tekkonetes/rust-code-server/setup.sh
+++ /dev/null
@@ -1,12 +0,0 @@
-code-server --install-extension rust-lang.rust-analyzer
-code-server --install-extension ms-vscode.atom-keymap
-code-server --install-extension emroussel.atomize
-code-server --install-extension pkief.material-icon-theme
-code-server --install-extension tamasfe.even-better-toml
-
-curl -L https://sh.rustup.rs | sh
-curl -L https://bun.sh/install | bash
-curl -L https://get.wasmer.io | bash
-
-echo ". ~/.profile" > /home/user/.bashrc
-echo "PS1='\W\\$ '" > /home/user/.bashrc
\ No newline at end of file
diff --git a/spaces/terfces0erbo/CollegeProjectV2/Adobe Postscript Printer Driver Free Download Windows 7 BEST.md b/spaces/terfces0erbo/CollegeProjectV2/Adobe Postscript Printer Driver Free Download Windows 7 BEST.md
deleted file mode 100644
index 0b3a85558cb8c5bfce5d7dbbc6b375f09d89f206..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/Adobe Postscript Printer Driver Free Download Windows 7 BEST.md
+++ /dev/null
@@ -1,6 +0,0 @@
-adobe postscript printer driver free download windows 7
Download Zip ✸ https://bytlly.com/2uGlc0
-
-Click the Choose a printer, then click Add a printer, and then click OK.
-Select the Microsoft Windows Installer printer (recommended) • Click Next.
-Select Use an existing port • Click Next.
-Choose the printer port • Click Next.
-Click Add a printer • Click Finish.
-[2] If the printer is not installed and is required for the new system, you can install the printer drivers by selecting Uninstall a printer (recommended) • Click Next.
-In the Select printer to uninstall page, scroll to the bottom of the list, and then click Uninstall printer from the list of printers.
-Select the printer drivers • Click Next.
-In the Where are the drivers located? page, click All files, and then click Finish.
-Confirm the installation • Click Finish.
-To add the driver to the list of drivers that can be installed by Windows Update, in the Select printer to install page, click Add and then click Skip or Cancel.
-In the Select printer to install page, click Select printer to install.
-In the Print Port page, select the printer port • Click Next.
-Click Yes to the warning page that you are about to change settings for your computer, and then click OK.
-To uninstall the driver, in the Select printer to install page, click Select printer to uninstall.
-Select the printer driver • Click Next.
-Click Yes to the warning page that you are about to change settings for your computer, and then click OK.
-Confirm the uninstallation • Click Next.
-In the Select printer driver to reinstall page, select the printer driver to reinstall • Click Next.
-In the Select printer driver to install page, click Select printer to install.
-Click Yes to the warning page that you are about to change settings for your computer, and then click OK.
-To add the driver to the list of drivers that can be installed by Windows Update, in the Select printer to install page, click Add and then click Skip or Cancel.
-Select the printer driver • Click Next.
-Click Yes to the warning page that you are about to change settings for your computer, and then click OK.
-Note: You must restart your computer when you have installed an HP printer driver.
-If a driver is successfully installed, the printer appears in the Printers and Faxes page.
-
-
-
diff --git a/spaces/terfces0erbo/CollegeProjectV2/Cadmas 11 Torrent Full Version Added By Users !EXCLUSIVE!.md b/spaces/terfces0erbo/CollegeProjectV2/Cadmas 11 Torrent Full Version Added By Users !EXCLUSIVE!.md
deleted file mode 100644
index 58726871d01a169e70b76f0cf33393e632ec84b2..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/Cadmas 11 Torrent Full Version Added By Users !EXCLUSIVE!.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Cadmas 11 Torrent Full Version Added By Users
Download File ✔ https://bytlly.com/2uGk1B
-
-Cadmas 11 Torrent Full Version Added By Users DOWNLOAD: .torrent
-
-
-
diff --git a/spaces/terfces0erbo/CollegeProjectV2/Free Free Mcboot 1.8b Hun.md b/spaces/terfces0erbo/CollegeProjectV2/Free Free Mcboot 1.8b Hun.md
deleted file mode 100644
index 99e9d1ca9e91939947ff7a949eb3fff9221032cb..0000000000000000000000000000000000000000
--- a/spaces/terfces0erbo/CollegeProjectV2/Free Free Mcboot 1.8b Hun.md
+++ /dev/null
@@ -1,48 +0,0 @@
-Free Mcboot 1.8b Hun
Download 🔗 https://bytlly.com/2uGlYQ
-
-haptojonek
-
- desu
-
- OKAY Gotta go
-
- bye
-
- tzm: pl yafaimu ze. konponituru mozuku. desu ne.
-
- tomodachiku haiken iru
-
- okutemu tamen inori zaidan da mada sukoshi deteshimu asoko.
-
- I want a one-on-one fight.
-
- What does a one-on-one feel like?
-
- hito_jp: izakiya watashi wa te-ma-nin ikimashita.
-
- hito_jp: nezumi ni kudasai.
-
- hito_jp: ikimashita kotoba de jakatta asoko.
-
- hito_jp: iru desu kotoba mo arimasu.
-
- No, no, no, no, no.
-
- hito_jp: iru desu.
-
- So delicious.
-
- Yeahhhhhhhhhh!
-
- Yeahhhhhhhhhh!
-
- hito_jp: hito_jp no, tai-sama wa nihon-go tomodachi.
-
- Yeah.
-
- No, no, no, no, no
-
-
-
-
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Can You Download Fixed The Redragon Software Mac.md b/spaces/tialenAdioni/chat-gpt-api/logs/Can You Download Fixed The Redragon Software Mac.md
deleted file mode 100644
index 47602aa3738fb9eca8b0c59da53780b1c3ff59a1..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Can You Download Fixed The Redragon Software Mac.md
+++ /dev/null
@@ -1,40 +0,0 @@
-
-
-Can You Download The Redragon Software Mac?
-If you have a Redragon gaming mouse, keyboard, headset or other device, you might be wondering if you can download the Redragon software Mac to customize your settings and macros. The answer is yes, but with some limitations.
-The Redragon software Mac is a third-party application that is not officially supported by Redragon. It is developed by a user named AJ Ferrari and can be downloaded from his GitHub page: https://github.com/aj-ferrari/Redragon-Mouse.
-Can You Download The Redragon Software Mac
Download ✓ https://urlcod.com/2uK5C5
-The Redragon software Mac allows you to adjust the DPI, polling rate, lighting effects and button assignments of your Redragon mouse. However, it does not support all models of Redragon mice, and some features may not work properly. For example, some users have reported issues with the side buttons or the scroll wheel.
-Also, the Redragon software Mac does not work with other Redragon devices, such as keyboards or headsets. If you want to customize those devices, you will need to use a Windows PC and the official Redragon software from their website: https://www.redragonzone.com/pages/download.
-In conclusion, you can download the Redragon software Mac if you have a compatible Redragon mouse and want to tweak some settings. However, it is not a fully functional or reliable solution, and you may encounter some bugs or errors. Use it at your own risk and discretion. Here are a few more paragraphs for the article:
-
-How to Install the Redragon Software Mac
-To install the Redragon software Mac, you will need to follow these steps:
-
-- Download the latest release of the Redragon software Mac from https://github.com/aj-ferrari/Redragon-Mouse/releases.
-- Unzip the downloaded file and open the folder.
-- Double-click on the Redragon Mouse.app file to launch the application.
-- Connect your Redragon mouse to your Mac via USB.
-- The application should detect your mouse model and display the settings menu.
-- You can now adjust the DPI, polling rate, lighting effects and button assignments of your mouse.
-- Click on the Apply button to save your changes.
-
-Note: You may need to grant permission for the application to access your mouse. You can do this by going to System Preferences > Security & Privacy > Privacy > Input Monitoring and checking the box next to Redragon Mouse.app.
-
-Pros and Cons of the Redragon Software Mac
-
-The Redragon software Mac has some advantages and disadvantages compared to the official Redragon software for Windows. Here are some of them:
-
-- Pros:
-- It allows you to use your Redragon mouse on a Mac without losing some of its features.
-- It is free and open-source, so you can inspect the code or modify it if you want.
-- It has a simple and user-friendly interface that is easy to navigate.
-- Cons:
-- It is not officially endorsed or supported by Redragon, so it may not be compatible with future updates or models of Redragon mice.
-- It may have some bugs or errors that affect the performance or functionality of your mouse.
-- It does not support other Redragon devices, such as keyboards or headsets.
-
-Therefore, you should weigh the pros and cons before deciding whether to use the Redragon software Mac or not. If you encounter any problems or have any feedback, you can contact the developer through his GitHub page or email him at ajferrari@protonmail.com.
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Firmware tablet wolder manhattan A step-by-step guide to update your tablet[3].md b/spaces/tialenAdioni/chat-gpt-api/logs/Firmware tablet wolder manhattan A step-by-step guide to update your tablet[3].md
deleted file mode 100644
index f194edb6d65c6f0ec2c6a31529b3f7d2bc33e1f6..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Firmware tablet wolder manhattan A step-by-step guide to update your tablet[3].md
+++ /dev/null
@@ -1,80 +0,0 @@
-
-How to Update Firmware on Tablet Wolder Manhattan
- Firmware is the software that controls the hardware and functionality of your device. Updating firmware can improve performance, stability, security, compatibility, and user experience. It can also fix bugs, errors, and glitches that may affect your device.
- If you own a Tablet Wolder Manhattan, you might want to update its firmware to enjoy the latest features and security patches. Tablet Wolder Manhattan is a 10.1-inch Android tablet that offers a quad-core processor, 1 GB of RAM, 16 GB of internal storage, dual cameras, Wi-Fi, Bluetooth, HDMI, USB OTG, and a 6000 mAh battery.
-Firmware tablet wolder manhattan
Download File ››› https://urlcod.com/2uK56d
- Before you update your firmware, you need to check the current firmware version and the latest available version for your device. To do this, go to Settings > About tablet > Firmware version. You can also visit https://androidmtk.com/download-wolder-stock-rom to find the latest firmware files for your device model.
- Steps to update firmware on Tablet Wolder Manhattan
- Once you have downloaded the latest firmware file for your device, you can follow these steps to update your firmware:
-
-- Copy the firmware file to a microSD card. Make sure the file name is
update.zip and it is placed in the root directory of the card.
-- Insert the microSD card into the tablet and turn it off.
-- Press and hold the power and volume up buttons simultaneously until the recovery mode appears. You will see a green Android logo with a red exclamation mark.
-- Select "apply update from external storage" using the volume buttons and confirm with the power button.
-- Choose the
update.zip file from the microSD card and confirm with the power button.
-- Wait for the installation process to complete. It may take several minutes. Do not interrupt or turn off your device during this process.
-- Reboot your device when prompted. Your device will restart with the new firmware installed.
-
- Conclusion
- Updating the firmware on your Tablet Wolder Manhattan can improve the performance, stability, security, compatibility, and user experience of your device. You can also access the latest features and security patches that are available for your device.
- However, updating firmware is not without risks. You should always backup your data before updating firmware, as you may lose some or all of your data during the process. You should also make sure that your device has enough battery power and is connected to a stable internet source during the update. If your device gets stuck or fails during the update, you may need to restore it to factory settings or contact the customer service for assistance.
- We hope this article was helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you and help you with any issues you may have. Thank you for choosing Tablet Wolder Manhattan!
- FAQs
- Here are some frequently asked questions and answers about updating firmware on Tablet Wolder Manhattan:
-
-- Q1: What are the advantages of updating firmware on Tablet Wolder Manhattan?
-- A1: Updating firmware can improve performance, stability, security, compatibility, and user experience. It can also fix bugs, errors, and glitches that may affect your device.
-- Q2: How can I backup my data before updating firmware on Tablet Wolder Manhattan?
-- A2: You can backup your data using a cloud service, a computer, or another external storage device. Make sure you backup your contacts, messages, photos, videos, apps, and other important files before updating firmware.
-- Q3: What should I do if my tablet gets stuck or fails during the update process?
-- A3: If your tablet gets stuck or fails during the update process, you can try to reboot it by pressing and holding the power button for 10 seconds. If that does not work, you can try to restore your tablet to factory settings by using the recovery mode. However, this will erase all your data, so make sure you have a backup before doing this.
-- Q4: How can I verify that my firmware update was successful?
-- A4: You can verify that your firmware update was successful by checking the firmware version in Settings > About tablet > Firmware version. You should see the new version number that matches the one you downloaded from the official website.
-- Q5: Where can I find more information and support for Tablet Wolder Manhattan?
-- A5: You can find more information and support for Tablet Wolder Manhattan by visiting https://firmwareoficial.com/wolder/. You can also contact the customer service by phone or email if you have any questions or issues.
-
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/FortressCraft Evolved Adventures Pack-PLAZA License Key A Must-Have for Fans of the Genre.md b/spaces/tialenAdioni/chat-gpt-api/logs/FortressCraft Evolved Adventures Pack-PLAZA License Key A Must-Have for Fans of the Genre.md
deleted file mode 100644
index 6d4c30c3d99b13fc9f9e19c2775e7b73aaa4a3c6..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/FortressCraft Evolved Adventures Pack-PLAZA License Key A Must-Have for Fans of the Genre.md
+++ /dev/null
@@ -1,51 +0,0 @@
-
-```html
-FortressCraft Evolved Adventures Pack-PLAZA License Key: A Review
-FortressCraft Evolved is a sandbox game that combines elements of Minecraft, Factorio, and Tower Defense. You can explore, build, craft, and defend your base from waves of enemies in a procedurally generated world. The game also features a story mode, a creative mode, and a multiplayer mode.
-The Adventures Pack-PLAZA is an expansion pack that adds new content and features to the game. It includes:
-FortressCraft Evolved Adventures Pack-PLAZA License Key
DOWNLOAD ✑ ✑ ✑ https://urlcod.com/2uK9SV
-
-- A new biome: The Frozen Factory, where you can find new resources, enemies, and challenges.
-- A new game mode: The Adventures Mode, where you can play through randomly generated missions with different objectives and rewards.
-- A new feature: The Adventure Constructor, where you can create your own missions and share them with other players.
-- A new feature: The Adventure Browser, where you can browse and play missions created by other players.
-- A new feature: The Adventure Leaderboards, where you can compete with other players for the best scores and times.
-
-To play the Adventures Pack-PLAZA, you need to have the base game FortressCraft Evolved installed on your PC. You also need to have a valid license key to activate the expansion pack. You can buy the license key from the official website or from other online platforms. The license key will be sent to your email address after the purchase.
-The Adventures Pack-PLAZA is a great addition to the FortressCraft Evolved game. It offers more variety, replayability, and fun to the sandbox experience. If you are a fan of FortressCraft Evolved or sandbox games in general, you should definitely check out this expansion pack.
-```
-
-```html
-One of the highlights of the Adventures Pack-PLAZA is the new biome: The Frozen Factory. This biome is located in the depths of the world, below the surface and the caverns. It is a harsh and cold environment, where you will encounter new dangers and opportunities. You will need to use new technologies and strategies to survive and thrive in this biome.
-The Frozen Factory is home to new resources, such as ice, snow, and frozen ore. You can use these resources to craft new items and machines, such as heaters, coolers, and cryogenic chambers. You can also use them to create new structures and decorations, such as ice sculptures, snowmen, and igloos.
-The Frozen Factory also hosts new enemies, such as frost spiders, ice worms, and snow golems. These enemies are more powerful and resilient than the ones you have faced before. They can freeze you, slow you down, or damage your base. You will need to upgrade your weapons and defenses to deal with them effectively.
-The Frozen Factory also offers new challenges, such as blizzards, avalanches, and ice storms. These events can affect your visibility, mobility, and stability. You will need to adapt to the changing weather conditions and plan ahead to avoid disasters.
-The Frozen Factory is a biome that will test your skills and creativity as a sandbox player. It will reward you with new discoveries and experiences that you will not find anywhere else in the game.
-```
-
-
\ No newline at end of file
diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Hollow Knight V1.2.2.1 DLC - GOG Get the Latest Version for Free.md b/spaces/tialenAdioni/chat-gpt-api/logs/Hollow Knight V1.2.2.1 DLC - GOG Get the Latest Version for Free.md
deleted file mode 100644
index 35e28b351a07da0d7d6fa1ce63ce7ac7312c1ddc..0000000000000000000000000000000000000000
--- a/spaces/tialenAdioni/chat-gpt-api/logs/Hollow Knight V1.2.2.1 DLC - GOG Get the Latest Version for Free.md
+++ /dev/null
@@ -1,215 +0,0 @@
-
-Hollow Knight V1.2.2.1 DLC - GOG Free Download: How to Download and Play the Ultimate 2D Action Adventure Game
-Hollow Knight is a 2D action adventure game that was released in 2017 by Team Cherry, an independent game studio based in Australia. The game is set in a dark and mysterious world called Hallownest, where the player controls a nameless knight who explores a vast interconnected map of caverns, ruins, and forests. The game features challenging combat, beautiful hand-drawn graphics, atmospheric music, and a rich lore that unfolds through exploration and discovery.
-Hollow Knight V1.2.2.1 DLC - GOG Free Download
DOWNLOAD ✏ https://urlcod.com/2uK7Hz
-Hollow Knight has received several updates and expansions since its release, adding new areas, enemies, bosses, abilities, items, quests, and secrets to the game. The latest update is Hollow Knight V1.2.2.1 DLC - GOG, which includes all the previous updates and expansions, such as Hidden Dreams, The Grimm Troupe, Lifeblood, Godmaster, and Silksong (the upcoming sequel to Hollow Knight). This update also fixes some bugs and improves the performance and stability of the game.
-If you are a fan of Hollow Knight or want to try this amazing game for yourself, you might be wondering how to get Hollow Knight V1.2.2.1 DLC - GOG free download. In this article, we will explain why you should avoid using any illegal or unsafe websites that offer Hollow Knight V1.2.2.1 DLC - GOG free download, and how you can get Hollow Knight V1.2.2.1 DLC - GOG free download legally and safely.
-Why You Should Avoid Using Any Illegal or Unsafe Websites that Offer Hollow Knight V1.2.2.1 DLC - GOG Free Download
-There are many websites that claim to offer Hollow Knight V1.2.2.1 DLC - GOG free download, but they are actually not trustworthy or legitimate. Here are some of the reasons why you should avoid using these websites:
-
-- Malware or viruses: These websites may contain malicious code that can infect your device or steal your data. You may not notice it at first, but it can cause serious damage to your system or compromise your privacy and security.
-- Legal issues: These websites may violate the copyright laws and terms and conditions of the game developers and publishers. By using them, you are breaking the law and exposing yourself to potential lawsuits or fines.
-- Ethical issues: These websites are unethical and disrespectful to the game developers and publishers of Hollow Knight, who worked hard to provide a high-quality and entertaining game for the players. By using them, you are undermining their efforts and depriving them of recognition and revenue.
-- Lack of quality: These websites may not provide the actual Hollow Knight V1.2.2.1 DLC - GOG, but a low-quality or corrupted version of the game instead.
-- Lack of safety: These websites may not have any security measures or guarantees to protect your device or data from any harm or loss. You may also not be able to access any customer support or refund policy in case of any issues or complaints.
-
-How to Get Hollow Knight V1.2.2.1 DLC - GOG Free Download Legally and Safely
-The best way to get Hollow Knight V1.2.2.1 DLC - GOG free download is to use a legal and safe source that respects the rights and interests of the game developers and publishers of Hollow Knight. Here are some of the options you can choose from:
-
-- Buy or download the official version of Hollow Knight from GOG.com: You can buy or download the official version of Hollow Knight from GOG.com, which is a digital distribution platform that sells DRM-free games for Windows, Mac OS X, Linux, etc. By buying or downloading Hollow Knight from GOG.com, you can enjoy the full game in high quality with all the updates and expansions included (including Hollow Knight V1.2.2.1 DLC - GOG). You can also support the game developers and publishers by paying for their work.
-- Use a free trial or a discount coupon from GOG.com: You can use a free trial or a discount coupon from GOG.com to get Hollow Knight V1.2.2.1 DLC - GOG free download for a limited time or at a lower price than usual (depending on the availability and terms of these offers), without breaking any laws or compromising any quality.
-- Use a gift card or a voucher from GOG.com: You can use a gift card or a voucher from GOG.com to get Hollow Knight V1.2.2.1 DLC - GOG free download without spending any money (depending on the value and validity of these cards or vouchers), again without breaking any laws or compromising any quality.
-
-
-
-Conclusion
-
-Hollow Knight V1.2.2.1 DLC - GOG Free Download is a great way to enjoy the full version of one of the best 2D action adventure games ever made. The game features challenging combat, beautiful hand-drawn graphics, atmospheric music, and a rich lore that unfolds through exploration and discovery. The game also includes all the previous updates and expansions, such as Hidden Dreams, The Grimm Troupe, Lifeblood, Godmaster, and Silksong (the upcoming sequel to Hollow Knight).
-
-If you want to get Hollow Knight V1.2.2.1 DLC - GOG Free Download, you should avoid using any illegal or unsafe websites that offer it, as they may contain malware or viruses, violate the copyright laws and terms and conditions of the game developers and publishers, be unethical and disrespectful to them, provide low-quality or corrupted versions of the game, and have no security measures or guarantees to protect your device or data from any harm or loss.
-
-Instead, you should use a legal and safe source that respects the rights and interests of the game developers and publishers. You can buy or download the official version of Hollow Knight from GOG.com, a digital distribution platform that sells DRM-free games for Windows, Mac OS X, Linux, etc. You can also use a free trial or a discount coupon from GOG.com, a gift card or a voucher from GOG.com, or any other legal and safe method that allows you to get Hollow Knight V1.2.2.1 DLC - GOG Free Download without breaking any laws or compromising any quality.
-By using these options,
-you can enjoy Hollow Knight V1.
-DLC - GOG Free Download in
-the best possible way.
-You can also support
-the game developers
-and publishers
-by paying for their work.
-What is Hollow Knight V1.2.2.1 DLC - GOG?
-Hollow Knight V1.2.2.1 DLC - GOG is the latest update for Hollow Knight, which includes all the previous updates and expansions for the game. This means that you can enjoy the full version of Hollow Knight with all the additional content that has been added since its release.
-Some of the features that Hollow Knight V1. DLC - GOG offers are:
-
-- New areas: You can explore new regions of Hallownest, such as The Hive, The White Palace, The Royal Waterways, The Ancient Basin, and The Godhome.
-- New enemies: You can encounter new foes and challenges, such as The Collector, The Traitor Lord, The Nightmare King Grimm, The Radiance, and The Pantheon of Hallownest.
-- New bosses: You can face new epic battles, such as Zote the Mighty, Grey Prince Zote, The Sisters of Battle, The Pure Vessel, and Absolute Radiance.
-- New abilities: You can unlock new skills and upgrades, such as Dream Nail, Dream Gate, Grimmchild, Void Heart, and Lifeblood Core.
-- New items: You can collect new charms, relics, masks, vessels, and grubs that will enhance your abilities and gameplay.
-- New quests: You can complete new side quests and stories, such as The Grimm Troupe, The Delicate Flower, The Godseeker, and The Path of Pain.
-- New secrets: You can discover new hidden areas, lore, easter eggs, and endings that will reveal more about the world and history of Hollow Knight.
-
-How to Install Hollow Knight V1.2.2.1 DLC - GOG?
-If you want to install Hollow Knight V1. DLC - GOG, you need to follow these steps:
-
-- Download Hollow Knight V1.
DLC - GOG from a legal and safe source: You can download Hollow Knight V1. DLC - GOG from GOG.com, which is a digital distribution platform that sells DRM-free games for Windows, Mac OS X, Linux etc.. You can also use a free trial or a discount coupon from GOG.com, a gift card or a voucher from GOG.com, or any other legal and safe method that allows you to get Hollow Knight V1.
-
-- Extract the downloaded file: You need to extract the downloaded file using a program like 7-Zip or WinRAR. You will get a folder named "Hollow Knight v1.5.78.11833" that contains the setup file and the goodies files.
-- Run the setup file: You need to run the setup file named "setup_hollow_knight_1.5.78.11833_ (64bit)_ (50884).exe" and follow the instructions to install Hollow Knight on your device.
-- Enjoy the game: You can launch the game through the desktop shortcut or the start menu and enjoy Hollow Knight V1.
DLC - GOG.
-
-What are the Benefits of Hollow Knight V1.2.2.1 DLC - GOG Free Download?
-Hollow Knight V1.2.2.1 DLC - GOG free download has many benefits for the players who want to enjoy the ultimate 2D action adventure game. Some of the benefits are:
-
-- More content: You can access more content than the original version of Hollow Knight, such as new areas, enemies, bosses, abilities, items, quests, and secrets. You can also play Silksong, the upcoming sequel to Hollow Knight, when it is released.
-- More fun: You can have more fun and challenge with Hollow Knight V1.2.2.1 DLC - GOG free download, as you can explore more of Hallownest, fight more foes and bosses, unlock more skills and upgrades, collect more charms and relics, complete more side quests and stories, and discover more hidden areas and lore.
-- More value: You can get more value for your money with Hollow Knight V1.2.2.1 DLC - GOG free download, as you can get the full version of Hollow Knight with all the updates and expansions for a reasonable price or even for free (depending on the method you use to get it).
-- More quality: You can enjoy more quality with Hollow Knight V1.2.2.1 DLC - GOG free download, as you can play the game in high quality with beautiful hand-drawn graphics, atmospheric music, and smooth performance and stability.
-- More support: You can get more support with Hollow Knight V1.2.2.1 DLC - GOG free download, as you can access customer support and refund policy from GOG.com in case of any issues or complaints. You can also support the game developers and publishers by buying or downloading the game from GOG.com.
-
-How to Play Hollow Knight V1.2.2.1 DLC - GOG Free Download?
-If you want to play Hollow Knight V1.2.2.1 DLC - GOG free download, you need to follow these steps:
-
-- Install Hollow Knight V1.2.2.1 DLC - GOG free download: You need to install Hollow Knight V1.2.2.1 DLC - GOG free download as described in the installation section above.
-
-- Launch the game: You need to launch the game through the desktop shortcut or the start menu and choose your preferred language and settings.
-- Create a new game or load an existing game: You need to create a new game or load an existing game from the main menu and choose your preferred difficulty and mode.
-- Explore Hallownest: You need to explore Hallownest by moving, jumping, dashing, wall-jumping, and using other abilities. You need to fight enemies and bosses by using your nail, spells, and charms. You need to collect items and currency by breaking objects, opening chests, and finding secrets. You need to upgrade your abilities and equipment by visiting shops, benches, stations, and other locations. You need to complete quests and stories by talking to NPCs, reading lore tablets, and finding clues.
-- Enjoy the game: You need to enjoy the game by experiencing its gameplay, graphics, music, and story.
-
-Conclusion
-Hollow Knight V1.2.2.1 DLC - GOG Free Download is a great opportunity to download and play the full version of one of the best 2D action adventure games ever made. The game features challenging combat, beautiful hand-drawn graphics, atmospheric music, and a rich lore that unfolds through exploration and discovery. The game also includes all the previous updates and expansions, such as Hidden Dreams, The Grimm Troupe, Lifeblood, Godmaster, and Silksong (the upcoming sequel to Hollow Knight).
-
-If you want to get Hollow Knight V1.2.2.1 DLC - GOG Free Download, you should avoid using any illegal or unsafe websites that offer it, as they may contain malware or viruses, violate the copyright laws and terms and conditions of the game developers and publishers, be unethical and disrespectful to them, provide low-quality or corrupted versions of the game, and have no security measures or guarantees to protect your device or data from any harm or loss.
-
-Instead, you should use a legal and safe source that respects the rights and interests of the game developers and publishers. You can buy or download the official version of Hollow Knight from GOG.com, a digital distribution platform that sells DRM-free games for Windows, Mac OS X, Linux, etc. You can also use a free trial or a discount coupon from GOG.com, a gift card or a voucher from GOG.com, or any other legal and safe method that allows you to get Hollow Knight V1.2.2.1 DLC - GOG Free Download without breaking any laws or compromising any quality.
-
-By using these options, you can enjoy Hollow Knight V1.2.2.1 DLC - GOG Free Download in the best possible way. You can also support the game developers and publishers by paying for their work.
-
-
\ No newline at end of file
diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Business WhatsApp How to Set Up Your Profile and Start Selling.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Business WhatsApp How to Set Up Your Profile and Start Selling.md
deleted file mode 100644
index d725fcfdae4df11cfb4ffe04bbe8a9a11b125a00..0000000000000000000000000000000000000000
--- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Business WhatsApp How to Set Up Your Profile and Start Selling.md
+++ /dev/null
@@ -1,95 +0,0 @@
-
-Business WhatsApp Download Karo: A Guide for Small Businesses
-If you are a small business owner who wants to communicate with your customers more efficiently and effectively, you might want to consider using Business WhatsApp. Business WhatsApp is a free-to-download app that is built on top of WhatsApp Messenger and includes all the features that you rely on, such as multimedia, free calls, and group chat. But it also has some additional features that are designed for businesses, such as a verified and more complete business profile, messaging tools, labels, and a catalog. In this article, we will explain what Business WhatsApp is, how it is different from WhatsApp, how to download and set up Business WhatsApp, and how to use it to communicate with your customers.
-business whatsapp download karo
Download Zip ✫ https://bltlly.com/2uOgsb
- What is Business WhatsApp and How is it Different from WhatsApp?
-WhatsApp is a messaging app meant for personal communication, while Business WhatsApp is designed for businesses to communicate with their customers. Business WhatsApp provides a verified and more complete business profile so customers can trust who they’re chatting with. Business WhatsApp can add multiple users to manage the account across multiple devices, whereas individuals use WhatsApp on their personal devices. Business WhatsApp also has some special messaging features that help businesses be more responsive, organized, and professional.
- Business WhatsApp Features and Benefits
-Some of the main features and benefits of Business WhatsApp are:
-
-- Business profile: You can create a profile for your business that contains valuable information for customers, such as your website, location, contact information, business hours, business categories, and products.
-- Messaging tools: You can use greeting messages to send your customers an introductory message when they first message you, away messages to indicate when you're away or busy, and quick replies to save and reuse frequently sent messages.
-- Labels: You can assign different labels for each chat, such as new customer, pending payment, order complete, etc. This will help you track your orders, generate leads, and keep your account neat.
-- Catalog: You can showcase your products and services in a catalog that customers can browse within the app. You can add images, prices, descriptions, links, and codes for each item.
-- Multimedia and interactive content: You can send files, images, videos, audio messages, documents, contacts, locations, stickers, emojis, and GIFs to your customers. You can also send interactive buttons that allow customers to reply with a simple tap.
-
- How to Download and Set Up Business WhatsApp
-To download and set up Business WhatsApp on your Android or iPhone device, follow these steps:
-
-- Download the Business WhatsApp app from the Google Play Store or the App Store.
-- Verify your business phone number. You can use a landline or fixed number by selecting the “Call me” option during verification.
-- If you have an existing WhatsApp Messenger account linked to the same number, you can migrate your account data, including chat history and media, to your new Business WhatsApp account. Note that you cannot move it back to WhatsApp Messenger later.
-- Set your business name. Choose carefully as you can only change it once.
-- Build your profile by adding your business information and logo.
-
- How to Use Business WhatsApp to Communicate with Customers
-Once you have set up your Business WhatsApp account, you can start communicating with your customers in various ways. Here are some tips on how to use Business WhatsApp effectively:
- Create a Business Profile and Catalog
-Your business profile and catalog are the first things that customers will see when they open a chat with you. Therefore, you should make them as attractive and informative as possible. To create or edit your business profile or catalog, go to More options > Settings > your business name. You can add or change your logo or image, website URL, address, business hours, business categories, and products. You can also add a short description of your business and what you offer. To create or edit your catalog, go to More options > Settings > your business name > Catalog. You can add or delete items, edit their details, and rearrange their order.
- Use Messaging Tools and Labels
-Messaging tools and labels can help you save time and stay organized when communicating with your customers. To access your messaging tools, go to More options > Settings > Business tools > Messaging tools. You can create and edit your greeting message, away message, and quick replies. You can also enable or disable them as needed. To use a quick reply, type “/” in the chat and select the one you want to send. To access your labels, go to More options > Labels. You can create and edit labels, assign them to chats or contacts, filter chats by labels, and archive chats with labels.
- Send Multimedia and Interactive Content
-Multimedia and interactive content can make your communication more engaging and interactive. You can send files, images, videos, audio messages, documents, contacts, locations, stickers, emojis, and GIFs to your customers by tapping the attachment icon in the chat. You can also send interactive buttons that allow customers to reply with a simple tap. For example, you can send a button that says “Yes” or “No” to a question, or a button that says “View catalog” or “Visit website” to direct customers to your products or services. To send an interactive button, go to More options > Settings > Business tools > Short link. You can create and edit your short link, which is a unique URL that opens a chat with you. You can also enable or disable the default message that customers will see when they tap the link.
- Conclusion and FAQs
-Business WhatsApp is a powerful tool for small businesses to communicate with their customers more efficiently and effectively. It has many features and benefits that can help you build trust, increase sales, and grow your business. To download Business WhatsApp, visit the Google Play Store or the App Store and follow the instructions. To learn more about Business WhatsApp, visit the official website or the help center.
-Here are some frequently asked questions about Business WhatsApp:
-
-- Q: Can I use both WhatsApp Messenger and Business WhatsApp on the same phone?
-- A: Yes, you can use both apps on the same phone as long as they are linked to different phone numbers.
-- Q: Can I use Business WhatsApp on my computer?
-- A: Yes, you can use Business WhatsApp on your computer by using WhatsApp Web or downloading the desktop app.
-- Q: How can I get verified on Business WhatsApp?
-- A: Verification is currently limited to a small number of businesses that have been chosen by WhatsApp. If you see a green badge next to your business name, it means that WhatsApp has verified that this is the authentic account for your business.
-- Q: How can I backup or restore my Business WhatsApp data?
-- A: You can backup or restore your Business WhatsApp data using Google Drive on Android devices or iCloud on iPhone devices.
-- Q: How can I delete my Business WhatsApp account?
-- A: You can delete your Business WhatsApp account by going to More options > Settings > Account > Delete my account.
-
-
\ No newline at end of file
diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Can 39t Download Pokemon Go.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Can 39t Download Pokemon Go.md
deleted file mode 100644
index e886313d585be0b47582b0f0d5c052a5d203ef98..0000000000000000000000000000000000000000
--- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Can 39t Download Pokemon Go.md
+++ /dev/null
@@ -1,122 +0,0 @@
-
-Can't Download Pokemon Go? Here's How to Fix It
-Pokemon Go is a popular augmented reality game that lets you catch, train, and battle virtual creatures called Pokemon in the real world. The game is free to download and play, but you need a compatible device, a stable internet connection, and enough storage space to enjoy it.
-However, some people may encounter problems when trying to download Pokemon Go from their app store. If you are one of them, don't worry. In this article, we will show you some possible solutions to fix this issue and get you ready to catch 'em all.
-can't download pokemon go
Download ✏ ✏ ✏ https://bltlly.com/2uOsf1
- Check Your Device Compatibility
-Before you download Pokemon Go, make sure your device meets the minimum requirements for the game. Here are the specifications for Android and iOS devices:
- Android Requirements
-
-- Android 6 or above
-- At least 2 GB of RAM
-- Access to Google Play services
-- GPS and location services enabled
-- Gyroscope and camera (optional, but recommended)
-
- iOS Requirements
-
-- iOS 12 or above
-- iPhone 6s or above
-- iPad 5th generation or above
-- iPad mini 4 or above
-- iPad Air 2 or above
-- iPad Pro or above
-- iPod touch 7th generation or above
-- GPS and location services enabled
-- Gyroscope and camera (optional, but recommended)
-
- Check Your Internet Connection
-Pokemon Go requires a reliable internet connection to download and play. You can use either Wi-Fi or mobile data, but make sure you have a strong, stable signal. If your connection is slow or intermittent, you may see errors or delays when downloading the game.
- Wi-Fi vs. Mobile Data
-If you are using Wi-Fi, stay close to the router and keep obstructions and interference to a minimum. You can also try restarting your router or switching to another Wi-Fi network if possible.
-If you are using mobile data, make sure you have enough data allowance and coverage. You can also try toggling your mobile data off and back on, or switching to another carrier if possible.
- Improve Location Accuracy
-Pokemon Go uses GPS and location services to determine your position and show you nearby Pokemon. If your location accuracy is low, you may have trouble downloading the game or playing it properly.
-To improve your location accuracy, you can enable the following settings on your device:
-
-- For Android devices, go to Settings > Location > Google Location Accuracy and turn on Improve Location Accuracy.
-- For iOS devices, go to Settings > Privacy > Location Services > System Services and turn on Networking & Wireless (called Wi-Fi Networking on older iOS versions).
-
- Check Your App Store Settings
-Another reason why you may not be able to download Pokemon Go is that your app store settings are preventing you from doing so. Depending on your device and region, you may need to adjust some settings to allow the installation of the game.
-
- Google Play Store
-If you are using an Android device, you need to download Pokemon Go from the Google Play Store. However, some factors may prevent you from accessing the game on the store, such as:
-
-- Your Google account is not signed in or synced.
-- Your Google Play Store app is outdated or corrupted.
-- Your device is set to a different region or country than the one where Pokemon Go is available.
-- Your device has parental controls or restrictions enabled.
-
-To fix these issues, you can try the following solutions:
-
-- Make sure you are signed in to your Google account and sync it with your device.
-- Update your Google Play Store app to the latest version or clear its cache and data.
-- Change your Google Play country to one where Pokemon Go is available. You can do this in the Play Store app by going to Settings > General > Account and device preferences > Country and profiles.
-- Disable any parental controls or restrictions on your device or Google account that may block the installation of Pokemon Go. You can do this in the Play Store app by going to Settings > Family > Parental controls and turning them off, or by adjusting the app approval settings in Google Family Link if the device is supervised.
-
- Apple App Store
-If you are using an iOS device, you need to download Pokemon Go from the Apple App Store. However, some factors may prevent you from accessing the game on the store, such as:
-
-- Your Apple ID is not signed in or verified.
-- Your Apple App Store app is outdated or corrupted.
-- Your device is set to a different region or country than the one where Pokemon Go is available.
-- Your device has parental controls or restrictions enabled.
-
-To fix these issues, you can try the following solutions:
-
-- Make sure you are signed in to your Apple ID and verify it with your email or phone number.
-- Update your device to the latest iOS version, since the App Store app is updated together with iOS. You can do this by going to Settings > General > Software Update.
-- Change your Apple ID country or region to one where Pokemon Go is available. You can do this by going to Settings > [your name] > Media & Purchases > View Account > Country/Region.
-- Disable any parental controls or restrictions on your device or Apple ID that may block the installation of Pokemon Go. You can do this by going to Settings > Screen Time > Content & Privacy Restrictions and turning them off, or by allowing app installs under iTunes & App Store Purchases > Installing Apps.
-
- Check Your Device Storage
-Pokemon Go is a large app that requires a lot of storage space on your device. If you don't have enough space, you may not be able to download or update the game. To check your device storage, you can do the following:
-
-- For Android devices, go to Settings > Storage and see how much space is available and used.
-- For iOS devices, go to Settings > General > iPhone Storage and see how much space is available and used.
-
- If you have less than 1 GB of free space, you may need to free up some space by deleting or moving some files or apps. Here are some tips on how to do that:
- How to Free Up Space
-
-- Delete any unwanted photos, videos, music, or documents from your device or upload them to a cloud service like Google Photos, iCloud, or Dropbox.
-- Delete any unused or unnecessary apps from your device or disable them if they are pre-installed.
-- Clear the cache and data of some apps that may take up a lot of space, such as social media, streaming, or gaming apps.
-- Use a file manager app or a cleaning app to scan and remove any junk files, duplicate files, or large files from your device.
-
- How to Move Apps to SD Card
-If you have an Android device with an SD card slot, you can move some apps to the SD card to save some internal storage space. However, not all apps can be moved to the SD card, and some may not work properly if they are moved. To move apps to the SD card, you can do the following:
-
-- Insert an SD card into your device and format it as internal storage or portable storage, depending on your preference.
-- Go to Settings > Apps and select the app you want to move.
-- Tap on Storage and then on Change. You will see the option to move the app to the SD card if it is available.
-- Tap on Move and wait for the process to complete.
-
- Check for App Updates and Known Issues
-Sometimes, you may not be able to download Pokemon Go because the app is undergoing maintenance or has some bugs or issues that need to be fixed. To check for app updates and known issues, you can do the following:
- How to Update Pokemon Go
-
-- Go to your app store and search for Pokemon Go. If there is an update available, you will see an Update button next to the app. Tap on it and wait for the update to download and install.
-- You can also enable automatic updates for Pokemon Go by going to your app store settings and turning on Auto-update apps for Android devices or App Updates for iOS devices.
-
- How to Report a Bug or Issue
-
-- If you encounter a bug or issue while downloading or playing Pokemon Go, you can report it to the developers by going to Settings > Get Support > Report a Bug in the app.
-- You can also visit the official Pokemon Go website or social media pages and check for any announcements or updates regarding the game status or issues.
-- You can also contact the Pokemon Go support team by emailing them at pokemon-go-support@nianticlabs.com or filling out a form on their website.
-
- Conclusion
-Pokemon Go is a fun and immersive game that lets you explore the world of Pokemon in real life. However, sometimes you may face some challenges when trying to download the game on your device. In this article, we have covered some possible solutions to help you fix this issue and enjoy the game without any hassle. We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Happy hunting!
- FAQs
- Why can't I download Pokemon Go in my country?
-Pokemon Go is not available in every country due to various reasons, such as licensing agreements, legal restrictions, technical limitations, or cultural differences. You can check the list of supported countries on the official Pokemon Go website. If your country is not on the list, you may have to wait until the game is released in your region or use a VPN service to access it from another country.
- Why can't I download Pokemon Go on my tablet?
-Pokemon Go is designed for smartphones and may not work well on tablets. Some tablets may not meet the minimum requirements for the game, such as having a GPS sensor, a gyroscope, or a camera. Some tablets may also have compatibility issues with the app store or the game itself. If you want to play Pokemon Go on a tablet, you may need to check the device specifications, the app store settings, and the game updates before downloading it.
- Why can't I download Pokemon Go on my rooted or jailbroken device?
-Pokemon Go does not support rooted or jailbroken devices for security and performance reasons. Rooting or jailbreaking your device may compromise the game's functionality and integrity, as well as expose your device to malware or hacking. If you want to play Pokemon Go on a rooted or jailbroken device, you may need to unroot or unjailbreak your device or use a third-party app that can hide your device status from the game.
- Why can't I download Pokemon Go on my Huawei device?
-Pokemon Go requires Google Play services to run properly on Android devices. However, some Huawei devices do not have Google Play services installed due to the US trade ban. If you have a Huawei device that does not have Google Play services, you may not be able to download or play Pokemon Go. You may need to use an alternative app store, such as Huawei AppGallery, to download the game or use a workaround method to install Google Play services on your device.
- Why can't I download Pokemon Go on my Chromebook?
-Pokemon Go is not compatible with Chromebooks, even if they have access to the Google Play Store. Chromebooks are not designed for gaming and may not have the necessary hardware or software features to run Pokemon Go. For example, most Chromebooks do not have a GPS sensor, a gyroscope, or a camera. Even if you manage to download Pokemon Go on your Chromebook, you may encounter errors or glitches when playing it.
-
-
\ No newline at end of file
diff --git a/spaces/tiiuae/falcon-180b-demo/app.py b/spaces/tiiuae/falcon-180b-demo/app.py
deleted file mode 100644
index c6b34ebcf0615c57d48281f23cf201f550e5d186..0000000000000000000000000000000000000000
--- a/spaces/tiiuae/falcon-180b-demo/app.py
+++ /dev/null
@@ -1,148 +0,0 @@
-import os
-
-import gradio as gr
-from huggingface_hub import InferenceClient
-
-HF_TOKEN = os.environ.get("HF_TOKEN", None)
-API_URL = "https://api-inference.huggingface.co/models/tiiuae/falcon-180B-chat"
-BOT_NAME = "Falcon"
-
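-# Generation halts when the model emits any of these markers; they are also
-# trimmed from the streamed output in generate() below.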
-STOP_SEQUENCES = ["\nUser:", "<|endoftext|>", " User:", "###"]
-
-EXAMPLES = [
- ["Hey Falcon! Any recommendations for my holidays in Abu Dhabi?"],
- ["What's the Everett interpretation of quantum mechanics?"],
- ["Give me a list of the top 10 dive sites you would recommend around the world."],
- ["Can you tell me more about deep-water soloing?"],
- ["Can you write a short tweet about the release of our latest AI model, Falcon LLM?"]
- ]
-
-client = InferenceClient(
- API_URL,
- headers={"Authorization": f"Bearer {HF_TOKEN}"},
-)
-
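-# Flatten the chat into a plain-text transcript: an optional "System:" line,
-# alternating "User:"/"Falcon:" turns from history, then the new message with a
-# trailing "Falcon:" cue for the model to complete.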
-def format_prompt(message, history, system_prompt):
- prompt = ""
- if system_prompt:
- prompt += f"System: {system_prompt}\n"
- for user_prompt, bot_response in history:
- prompt += f"User: {user_prompt}\n"
-        prompt += f"Falcon: {bot_response}\n"
- prompt += f"""User: {message}
-Falcon:"""
- return prompt
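-# For example, format_prompt("Hi", [("Hello", "Hey there!")], "Be brief") returns
-# "System: Be brief\nUser: Hello\nFalcon: Hey there!\nUser: Hi\nFalcon:".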
-
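-# Module-level seed, incremented once per request so repeated identical prompts
-# do not replay the exact same sample.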
-seed = 42
-
-def generate(
- prompt, history, system_prompt="", temperature=0.9, max_new_tokens=256, top_p=0.95, repetition_penalty=1.0,
-):
- temperature = float(temperature)
- if temperature < 1e-2:
- temperature = 1e-2
- top_p = float(top_p)
- global seed
- generate_kwargs = dict(
- temperature=temperature,
- max_new_tokens=max_new_tokens,
- top_p=top_p,
- repetition_penalty=repetition_penalty,
- stop_sequences=STOP_SEQUENCES,
- do_sample=True,
- seed=seed,
- )
- seed = seed + 1
- formatted_prompt = format_prompt(prompt, history, system_prompt)
-
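-    # Stream tokens as they arrive; before each yield, trim any stop sequence
-    # that has appeared at the tail so partial "\nUser:" fragments never reach
-    # the UI.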
- try:
- stream = client.text_generation(formatted_prompt, **generate_kwargs, stream=True, details=True, return_full_text=False)
- output = ""
-
- for response in stream:
- output += response.token.text
-
- for stop_str in STOP_SEQUENCES:
- if output.endswith(stop_str):
- output = output[:-len(stop_str)]
- output = output.rstrip()
- yield output
- yield output
- except Exception as e:
- raise gr.Error(f"Error while generating: {e}")
- return output
-
-
-additional_inputs=[
- gr.Textbox("", label="Optional system prompt"),
- gr.Slider(
- label="Temperature",
- value=0.9,
- minimum=0.0,
- maximum=1.0,
- step=0.05,
- interactive=True,
- info="Higher values produce more diverse outputs",
- ),
- gr.Slider(
- label="Max new tokens",
- value=256,
- minimum=0,
- maximum=3000,
- step=64,
- interactive=True,
- info="The maximum numbers of new tokens",
- ),
- gr.Slider(
- label="Top-p (nucleus sampling)",
- value=0.90,
- minimum=0.01,
- maximum=0.99,
- step=0.05,
- interactive=True,
- info="Higher values sample more low-probability tokens",
- ),
- gr.Slider(
- label="Repetition penalty",
- value=1.2,
- minimum=1.0,
- maximum=2.0,
- step=0.05,
- interactive=True,
- info="Penalize repeated tokens",
- )
-]
-
-
-with gr.Blocks() as demo:
- with gr.Row():
- with gr.Column(scale=0.4):
- gr.Image("better_banner.jpeg", elem_id="banner-image", show_label=False)
- with gr.Column():
- gr.Markdown(
- """# Falcon-180B Demo
-
- **Chat with [Falcon-180B-Chat](https://huggingface.co/tiiuae/falcon-180b-chat), brainstorm ideas, discuss your holiday plans, and more!**
-
- ✨ This demo is powered by [Falcon-180B](https://huggingface.co/tiiuae/falcon-180B) and finetuned on a mixture of [Ultrachat](https://huggingface.co/datasets/stingning/ultrachat), [Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) and [Airoboros](https://huggingface.co/datasets/jondurbin/airoboros-2.1). [Falcon-180B](https://huggingface.co/tiiuae/falcon-180b) is a state-of-the-art large language model built by the [Technology Innovation Institute](https://www.tii.ae) in Abu Dhabi. It is trained on 3.5 trillion tokens (including [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)) and available under the [Falcon-180B TII License](https://huggingface.co/spaces/tiiuae/falcon-180b-license/blob/main/LICENSE.txt). It currently holds the 🥇 1st place on the [🤗 Open LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for a pretrained model.
-
- 🧪 This is only a **first experimental preview**: we intend to provide increasingly capable versions of Falcon in the future, based on improved datasets and RLHF/RLAIF.
-
- 👀 **Learn more about Falcon LLM:** [falconllm.tii.ae](https://falconllm.tii.ae/)
-
- ➡️️ **Intended Use**: this demo is intended to showcase an early finetuning of [Falcon-180B](https://huggingface.co/tiiuae/falcon-180b), to illustrate the impact (and limitations) of finetuning on a dataset of conversations and instructions. We encourage the community to further build upon the base model, and to create even better instruct/chat versions!
-
- ⚠️ **Limitations**: the model can and will produce factually incorrect information, hallucinating facts and actions. As it has not undergone any advanced tuning/alignment, it can produce problematic outputs, especially if prompted to do so. Finally, this demo is limited to a session length of about 1,000 words.
- """
- )
-
- gr.ChatInterface(
- generate,
- examples=EXAMPLES,
- additional_inputs=additional_inputs,
- )
-
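-# A high concurrency_count lets many visitors stream at once; the API endpoints
-# are kept closed so the Space is only reachable through the web UI.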
-demo.queue(concurrency_count=100, api_open=False).launch(show_api=False)
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Autodesk MotionBuilder 2020 Crack License Key Free Download LINK.md b/spaces/tioseFevbu/cartoon-converter/scripts/Autodesk MotionBuilder 2020 Crack License Key Free Download LINK.md
deleted file mode 100644
index d98b257c2b5f8f8ab4208efc2fd453aaa72f01db..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Autodesk MotionBuilder 2020 Crack License Key Free Download LINK.md
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
-How to Get Autodesk MotionBuilder 2020 for Free with Crack and License Key
-If you are looking for a powerful and easy-to-use 3D character animation software, you might be interested in Autodesk MotionBuilder 2020. This software allows you to capture, edit, and play back complex animations, work faster and more efficiently in an interactive environment, and seamlessly exchange data between 3D content creation tools like Maya and 3ds Max. But how can you get MotionBuilder 2020 for free with crack and license key?
-In this article, we will show you how to download, install, and activate MotionBuilder 2020 for free with crack and license key. However, we do not recommend or endorse using cracked software, as it may contain viruses, malware, or spyware that can harm your computer or compromise your privacy. Moreover, using cracked software is illegal and unethical, and may result in legal consequences or penalties. Therefore, we strongly advise you to purchase a legitimate subscription from the official Autodesk website or a reseller.
-Autodesk MotionBuilder 2020 Crack License Key Free Download
Download File ★★★★★ https://urlcod.com/2uHwLJ
-Step 1: Download MotionBuilder 2020
-The first step is to download MotionBuilder 2020 from a reliable source. You can either use the official Autodesk website or a third-party website that offers cracked software. However, be careful when choosing a third-party website, as some of them may contain malicious links or fake downloads that can infect your computer. To avoid this, you should always scan the downloaded file with an antivirus program before opening it.
-If you choose to use the official Autodesk website, you can download MotionBuilder 2020 for free as a trial version. You will need to create an Autodesk account or sign in with an existing one, and then select the version, platform, and language of your choice. You can then choose a download method, such as browser download or install now.
-Step 2: Install MotionBuilder 2020
-The next step is to install MotionBuilder 2020 on your computer. You will need to follow the instructions on the screen and accept the terms and conditions of the software license agreement. You will also need to enter your serial number and product key, which you can find on the Autodesk website or in the email confirmation if you purchased a subscription. If you downloaded a cracked version of MotionBuilder 2020 from a third-party website, you will need to use a crack file or a keygen to generate a valid serial number and product key.
-Step 3: Activate MotionBuilder 2020
-The final step is to activate MotionBuilder 2020 on your computer. You will need to launch the software and sign in with your Autodesk account or create one if you don't have one. You will then need to enter your serial number and product key again, and select an activation method. You can either activate online or offline.
-If you choose to activate online, you will need to have an internet connection and follow the instructions on the screen. If you choose to activate offline, you will need to generate a request code from the software and enter it on the Autodesk website or a third-party website that offers activation codes. You will then receive an activation code that you will need to enter in the software.
-Congratulations! You have successfully installed and activated MotionBuilder 2020 for free with crack and license key. However, remember that using cracked software is risky and illegal, and may cause problems with your computer or your work. Therefore, we recommend that you buy a genuine subscription from Autodesk or a reseller.
-
-
\ No newline at end of file
diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Marcello Teodoro - CD Topa Tudo - 2012.rar 1.md b/spaces/tioseFevbu/cartoon-converter/scripts/Marcello Teodoro - CD Topa Tudo - 2012.rar 1.md
deleted file mode 100644
index ed77e10363ecea15e711e4eb0cfb8ee3e0f3799f..0000000000000000000000000000000000000000
--- a/spaces/tioseFevbu/cartoon-converter/scripts/Marcello Teodoro - CD Topa Tudo - 2012.rar 1.md
+++ /dev/null
@@ -1,30 +0,0 @@
-
-Topa Tudo: A Sertanejo Album by Marcello Teodoro
-Topa Tudo is a sertanejo album by Brazilian singer Marcello Teodoro, released in 2012 by MD Music. The album features 15 songs, including collaborations with Hugo & Tiago, Teodoro, Sampaio and others. The album showcases Marcello Teodoro's versatile voice and style, ranging from romantic ballads to upbeat party songs.
-Some of the highlights of the album are:
-Marcello Teodoro - CD Topa Tudo - 2012.rar 1
DOWNLOAD ····· https://urlcod.com/2uHwKu
-
-- No Boteco Esqueço Tudo (feat. Hugo & Tiago): A catchy song about forgetting everything at the bar with a friend.
-- Fora de Mim (feat. Teodoro): A duet with his father Teodoro, a famous sertanejo singer, about being out of control in love.
-- Casa dos Meus Sonhos (feat. Teodoro & Sampaio): A nostalgic song about the house of his dreams, where he grew up with his family.
-- Tontura da Paixão: A lively song about the dizziness of passion.
-- Fantasia: A sensual song about a fantasy with a woman.
-
-Topa Tudo is a great album for fans of sertanejo music and Marcello Teodoro's talent. You can listen to it on Apple Music or Qobuz.
-Marcello Teodoro is a Brazilian singer and songwriter who was born in São Paulo in 1986. He is the son of Teodoro, one half of the famous sertanejo duo Teodoro & Sampaio. He started his musical career at a young age, singing with his father and uncle at shows and festivals. He released his first solo album in 2007, titled Judieira, and has since released six more albums, including Topa Tudo.
-
-Sertanejo is a genre of Brazilian music that originated in the countryside of Brazil in the 1920s. It is influenced by various regional styles, such as caipira, moda de viola, guarania and chamamé. It is characterized by the use of acoustic guitars, violas, accordions and harmonicas. Sertanejo is one of the most popular genres in Brazil, especially among rural and urban audiences. Some of the most famous sertanejo singers are Chitãozinho & Xororó, Zezé Di Camargo & Luciano, Leonardo, Daniel and Gusttavo Lima.
-If you want to download the album Topa Tudo by Marcello Teodoro, you can do so from various online platforms, such as iTunes, Amazon Music or Spotify. You can also buy the CD from physical stores or online retailers. However, please be aware that downloading or sharing pirated files is illegal and may harm the artist and the music industry. Please support Marcello Teodoro by purchasing his music legally.
-If you want to contact Marcello Teodoro, you can follow him on his social media accounts, such as Facebook, Instagram and Twitter. You can also send him an email at marcelloteodoro@mdmusic.com.br or call his manager at +55 11 99999-9999. He loves to hear from his fans and may reply to your messages or calls.
-Some other albums by Marcello Teodoro are:
-
-- Aqui no Buteco (2017): An album that celebrates the sertanejo culture of drinking and having fun at the bar.
-- Marcello Teodoro e Convidados (Ao Vivo) (2018): A live album that features guest appearances by other sertanejo singers, such as Eduardo Costa, César Menotti & Fabiano and Bruno & Marrone.
-- João de Barro (2021): A recent album that showcases Marcello Teodoro's romantic side, with songs about love and heartbreak.
-
-Some other genres of Brazilian music are:
-
-- Samba: A rhythmic and festive genre that originated in the Afro-Brazilian communities of Rio de Janeiro. It is often associated with the Carnival celebrations and features instruments such as tambourines, cuicas and surdos.
-- Bossa Nova: A smooth and sophisticated genre that emerged in the late 1950s and blended samba with jazz influences. It is known for its gentle melodies and lyrics about love and nature. Some of the most famous bossa nova singers are João Gilberto, Tom Jobim and Astrud Gilberto.
-- MPB: An acronym for Música Popular Brasileira, which means Brazilian Popular Music. It is a broad term that encompasses various styles of music that emerged in the 1960s and 1970s, influenced by folk, rock, pop and other genres. Some of the most famous MPB singers are Caetano Veloso, Gilberto Gil and Elis Regina.
-
-
-
\ No newline at end of file
diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/chardet/langthaimodel.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/chardet/langthaimodel.py
deleted file mode 100644
index 489cad930e0029fc2f8e5111df1bad38151a07a9..0000000000000000000000000000000000000000
--- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/chardet/langthaimodel.py
+++ /dev/null
@@ -1,4380 +0,0 @@
-from pip._vendor.chardet.sbcharsetprober import SingleByteCharSetModel
-
-# 3: Positive
-# 2: Likely
-# 1: Unlikely
-# 0: Negative
-
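-# Each outer key is a character order ID from the accompanying char-to-order
-# map; the inner dict scores, on the 0-3 scale above, how likely the inner
-# character is to follow the outer one. The single-byte prober sums these
-# scores to judge whether a byte stream reads like natural Thai.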
-THAI_LANG_MODEL = {
- 5: { # 'ก'
- 5: 2, # 'ก'
- 30: 2, # 'ข'
- 24: 2, # 'ค'
- 8: 2, # 'ง'
- 26: 2, # 'จ'
- 52: 0, # 'ฉ'
- 34: 1, # 'ช'
- 51: 1, # 'ซ'
- 47: 0, # 'ญ'
- 58: 3, # 'ฎ'
- 57: 2, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 2, # 'ณ'
- 20: 2, # 'ด'
- 19: 3, # 'ต'
- 44: 0, # 'ถ'
- 14: 2, # 'ท'
- 48: 0, # 'ธ'
- 3: 2, # 'น'
- 17: 1, # 'บ'
- 25: 2, # 'ป'
- 39: 1, # 'ผ'
- 62: 1, # 'ฝ'
- 31: 1, # 'พ'
- 54: 0, # 'ฟ'
- 45: 1, # 'ภ'
- 9: 2, # 'ม'
- 16: 1, # 'ย'
- 2: 3, # 'ร'
- 61: 2, # 'ฤ'
- 15: 3, # 'ล'
- 12: 3, # 'ว'
- 42: 2, # 'ศ'
- 46: 3, # 'ษ'
- 18: 2, # 'ส'
- 21: 2, # 'ห'
- 4: 3, # 'อ'
- 63: 1, # 'ฯ'
- 22: 2, # 'ะ'
- 10: 3, # 'ั'
- 1: 3, # 'า'
- 36: 3, # 'ำ'
- 23: 3, # 'ิ'
- 13: 3, # 'ี'
- 40: 0, # 'ึ'
- 27: 2, # 'ื'
- 32: 2, # 'ุ'
- 35: 1, # 'ู'
- 11: 2, # 'เ'
- 28: 2, # 'แ'
- 41: 1, # 'โ'
- 29: 1, # 'ใ'
- 33: 2, # 'ไ'
- 50: 1, # 'ๆ'
- 37: 3, # '็'
- 6: 3, # '่'
- 7: 3, # '้'
- 38: 2, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 30: { # 'ข'
- 5: 1, # 'ก'
- 30: 0, # 'ข'
- 24: 1, # 'ค'
- 8: 1, # 'ง'
- 26: 1, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 2, # 'ณ'
- 20: 0, # 'ด'
- 19: 2, # 'ต'
- 44: 0, # 'ถ'
- 14: 1, # 'ท'
- 48: 0, # 'ธ'
- 3: 2, # 'น'
- 17: 1, # 'บ'
- 25: 1, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 0, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 0, # 'ม'
- 16: 2, # 'ย'
- 2: 1, # 'ร'
- 61: 0, # 'ฤ'
- 15: 0, # 'ล'
- 12: 2, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 1, # 'ส'
- 21: 1, # 'ห'
- 4: 3, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 3, # 'ั'
- 1: 3, # 'า'
- 36: 0, # 'ำ'
- 23: 0, # 'ิ'
- 13: 2, # 'ี'
- 40: 3, # 'ึ'
- 27: 1, # 'ื'
- 32: 1, # 'ุ'
- 35: 0, # 'ู'
- 11: 0, # 'เ'
- 28: 0, # 'แ'
- 41: 0, # 'โ'
- 29: 1, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 1, # '็'
- 6: 2, # '่'
- 7: 3, # '้'
- 38: 1, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 24: { # 'ค'
- 5: 0, # 'ก'
- 30: 0, # 'ข'
- 24: 2, # 'ค'
- 8: 2, # 'ง'
- 26: 0, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 2, # 'ณ'
- 20: 2, # 'ด'
- 19: 2, # 'ต'
- 44: 0, # 'ถ'
- 14: 1, # 'ท'
- 48: 0, # 'ธ'
- 3: 3, # 'น'
- 17: 0, # 'บ'
- 25: 1, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 0, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 2, # 'ม'
- 16: 2, # 'ย'
- 2: 3, # 'ร'
- 61: 0, # 'ฤ'
- 15: 3, # 'ล'
- 12: 3, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 1, # 'ส'
- 21: 0, # 'ห'
- 4: 2, # 'อ'
- 63: 0, # 'ฯ'
- 22: 2, # 'ะ'
- 10: 3, # 'ั'
- 1: 2, # 'า'
- 36: 3, # 'ำ'
- 23: 3, # 'ิ'
- 13: 2, # 'ี'
- 40: 0, # 'ึ'
- 27: 3, # 'ื'
- 32: 3, # 'ุ'
- 35: 2, # 'ู'
- 11: 1, # 'เ'
- 28: 0, # 'แ'
- 41: 3, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 1, # '็'
- 6: 3, # '่'
- 7: 3, # '้'
- 38: 3, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 8: { # 'ง'
- 5: 3, # 'ก'
- 30: 2, # 'ข'
- 24: 3, # 'ค'
- 8: 2, # 'ง'
- 26: 2, # 'จ'
- 52: 1, # 'ฉ'
- 34: 2, # 'ช'
- 51: 1, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 2, # 'ด'
- 19: 2, # 'ต'
- 44: 1, # 'ถ'
- 14: 3, # 'ท'
- 48: 1, # 'ธ'
- 3: 3, # 'น'
- 17: 2, # 'บ'
- 25: 2, # 'ป'
- 39: 2, # 'ผ'
- 62: 1, # 'ฝ'
- 31: 2, # 'พ'
- 54: 0, # 'ฟ'
- 45: 1, # 'ภ'
- 9: 2, # 'ม'
- 16: 1, # 'ย'
- 2: 2, # 'ร'
- 61: 0, # 'ฤ'
- 15: 2, # 'ล'
- 12: 2, # 'ว'
- 42: 2, # 'ศ'
- 46: 1, # 'ษ'
- 18: 3, # 'ส'
- 21: 3, # 'ห'
- 4: 2, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 1, # 'ั'
- 1: 3, # 'า'
- 36: 0, # 'ำ'
- 23: 2, # 'ิ'
- 13: 1, # 'ี'
- 40: 0, # 'ึ'
- 27: 1, # 'ื'
- 32: 1, # 'ุ'
- 35: 0, # 'ู'
- 11: 3, # 'เ'
- 28: 2, # 'แ'
- 41: 1, # 'โ'
- 29: 2, # 'ใ'
- 33: 2, # 'ไ'
- 50: 3, # 'ๆ'
- 37: 0, # '็'
- 6: 2, # '่'
- 7: 0, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 26: { # 'จ'
- 5: 2, # 'ก'
- 30: 1, # 'ข'
- 24: 0, # 'ค'
- 8: 2, # 'ง'
- 26: 3, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 2, # 'ด'
- 19: 1, # 'ต'
- 44: 1, # 'ถ'
- 14: 2, # 'ท'
- 48: 0, # 'ธ'
- 3: 3, # 'น'
- 17: 1, # 'บ'
- 25: 0, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 1, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 1, # 'ม'
- 16: 1, # 'ย'
- 2: 3, # 'ร'
- 61: 0, # 'ฤ'
- 15: 0, # 'ล'
- 12: 1, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 2, # 'ส'
- 21: 1, # 'ห'
- 4: 2, # 'อ'
- 63: 0, # 'ฯ'
- 22: 3, # 'ะ'
- 10: 3, # 'ั'
- 1: 3, # 'า'
- 36: 3, # 'ำ'
- 23: 2, # 'ิ'
- 13: 1, # 'ี'
- 40: 3, # 'ึ'
- 27: 1, # 'ื'
- 32: 3, # 'ุ'
- 35: 2, # 'ู'
- 11: 1, # 'เ'
- 28: 1, # 'แ'
- 41: 0, # 'โ'
- 29: 1, # 'ใ'
- 33: 1, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 2, # '่'
- 7: 2, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 52: { # 'ฉ'
- 5: 0, # 'ก'
- 30: 0, # 'ข'
- 24: 0, # 'ค'
- 8: 0, # 'ง'
- 26: 0, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 0, # 'ด'
- 19: 0, # 'ต'
- 44: 0, # 'ถ'
- 14: 0, # 'ท'
- 48: 0, # 'ธ'
- 3: 0, # 'น'
- 17: 3, # 'บ'
- 25: 0, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 3, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 1, # 'ม'
- 16: 1, # 'ย'
- 2: 0, # 'ร'
- 61: 0, # 'ฤ'
- 15: 2, # 'ล'
- 12: 1, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 0, # 'ส'
- 21: 0, # 'ห'
- 4: 0, # 'อ'
- 63: 0, # 'ฯ'
- 22: 1, # 'ะ'
- 10: 1, # 'ั'
- 1: 1, # 'า'
- 36: 0, # 'ำ'
- 23: 1, # 'ิ'
- 13: 1, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 1, # 'ุ'
- 35: 0, # 'ู'
- 11: 0, # 'เ'
- 28: 0, # 'แ'
- 41: 0, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 0, # '่'
- 7: 0, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 34: { # 'ช'
- 5: 1, # 'ก'
- 30: 0, # 'ข'
- 24: 0, # 'ค'
- 8: 1, # 'ง'
- 26: 0, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 1, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 0, # 'ด'
- 19: 0, # 'ต'
- 44: 0, # 'ถ'
- 14: 1, # 'ท'
- 48: 0, # 'ธ'
- 3: 3, # 'น'
- 17: 2, # 'บ'
- 25: 0, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 0, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 2, # 'ม'
- 16: 1, # 'ย'
- 2: 1, # 'ร'
- 61: 0, # 'ฤ'
- 15: 0, # 'ล'
- 12: 1, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 0, # 'ส'
- 21: 0, # 'ห'
- 4: 2, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 2, # 'ั'
- 1: 3, # 'า'
- 36: 1, # 'ำ'
- 23: 3, # 'ิ'
- 13: 2, # 'ี'
- 40: 0, # 'ึ'
- 27: 3, # 'ื'
- 32: 3, # 'ุ'
- 35: 1, # 'ู'
- 11: 0, # 'เ'
- 28: 0, # 'แ'
- 41: 0, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 1, # '็'
- 6: 3, # '่'
- 7: 3, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 51: { # 'ซ'
- 5: 0, # 'ก'
- 30: 0, # 'ข'
- 24: 0, # 'ค'
- 8: 0, # 'ง'
- 26: 0, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 0, # 'ด'
- 19: 0, # 'ต'
- 44: 0, # 'ถ'
- 14: 0, # 'ท'
- 48: 0, # 'ธ'
- 3: 1, # 'น'
- 17: 0, # 'บ'
- 25: 0, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 0, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 0, # 'ม'
- 16: 0, # 'ย'
- 2: 0, # 'ร'
- 61: 0, # 'ฤ'
- 15: 1, # 'ล'
- 12: 0, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 1, # 'ส'
- 21: 0, # 'ห'
- 4: 2, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 1, # 'ั'
- 1: 1, # 'า'
- 36: 0, # 'ำ'
- 23: 1, # 'ิ'
- 13: 2, # 'ี'
- 40: 3, # 'ึ'
- 27: 2, # 'ื'
- 32: 1, # 'ุ'
- 35: 1, # 'ู'
- 11: 1, # 'เ'
- 28: 0, # 'แ'
- 41: 0, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 1, # '็'
- 6: 1, # '่'
- 7: 2, # '้'
- 38: 1, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 47: { # 'ญ'
- 5: 1, # 'ก'
- 30: 1, # 'ข'
- 24: 0, # 'ค'
- 8: 0, # 'ง'
- 26: 0, # 'จ'
- 52: 0, # 'ฉ'
- 34: 1, # 'ช'
- 51: 0, # 'ซ'
- 47: 3, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 0, # 'ด'
- 19: 0, # 'ต'
- 44: 0, # 'ถ'
- 14: 1, # 'ท'
- 48: 0, # 'ธ'
- 3: 0, # 'น'
- 17: 1, # 'บ'
- 25: 1, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 0, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 1, # 'ม'
- 16: 0, # 'ย'
- 2: 0, # 'ร'
- 61: 0, # 'ฤ'
- 15: 1, # 'ล'
- 12: 0, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 1, # 'ส'
- 21: 2, # 'ห'
- 4: 1, # 'อ'
- 63: 0, # 'ฯ'
- 22: 1, # 'ะ'
- 10: 2, # 'ั'
- 1: 3, # 'า'
- 36: 0, # 'ำ'
- 23: 1, # 'ิ'
- 13: 1, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 1, # 'เ'
- 28: 1, # 'แ'
- 41: 0, # 'โ'
- 29: 1, # 'ใ'
- 33: 0, # 'ไ'
- 50: 1, # 'ๆ'
- 37: 0, # '็'
- 6: 2, # '่'
- 7: 0, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 58: { # 'ฎ'
- 5: 2, # 'ก'
- 30: 0, # 'ข'
- 24: 0, # 'ค'
- 8: 0, # 'ง'
- 26: 0, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 0, # 'ด'
- 19: 0, # 'ต'
- 44: 0, # 'ถ'
- 14: 0, # 'ท'
- 48: 0, # 'ธ'
- 3: 0, # 'น'
- 17: 0, # 'บ'
- 25: 0, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 0, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 0, # 'ม'
- 16: 0, # 'ย'
- 2: 0, # 'ร'
- 61: 0, # 'ฤ'
- 15: 0, # 'ล'
- 12: 0, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 0, # 'ส'
- 21: 1, # 'ห'
- 4: 0, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 0, # 'ั'
- 1: 0, # 'า'
- 36: 0, # 'ำ'
- 23: 1, # 'ิ'
- 13: 2, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 0, # 'เ'
- 28: 0, # 'แ'
- 41: 0, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 0, # '่'
- 7: 0, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 57: { # 'ฏ'
- 5: 0, # 'ก'
- 30: 0, # 'ข'
- 24: 0, # 'ค'
- 8: 0, # 'ง'
- 26: 0, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 0, # 'ด'
- 19: 0, # 'ต'
- 44: 0, # 'ถ'
- 14: 0, # 'ท'
- 48: 0, # 'ธ'
- 3: 0, # 'น'
- 17: 0, # 'บ'
- 25: 0, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 0, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 0, # 'ม'
- 16: 0, # 'ย'
- 2: 0, # 'ร'
- 61: 0, # 'ฤ'
- 15: 0, # 'ล'
- 12: 0, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 0, # 'ส'
- 21: 0, # 'ห'
- 4: 0, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 0, # 'ั'
- 1: 0, # 'า'
- 36: 0, # 'ำ'
- 23: 3, # 'ิ'
- 13: 1, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 0, # 'เ'
- 28: 0, # 'แ'
- 41: 0, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 0, # '่'
- 7: 0, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 49: { # 'ฐ'
- 5: 1, # 'ก'
- 30: 0, # 'ข'
- 24: 0, # 'ค'
- 8: 0, # 'ง'
- 26: 0, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 0, # 'ด'
- 19: 0, # 'ต'
- 44: 0, # 'ถ'
- 14: 0, # 'ท'
- 48: 0, # 'ธ'
- 3: 0, # 'น'
- 17: 2, # 'บ'
- 25: 0, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 0, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 2, # 'ม'
- 16: 0, # 'ย'
- 2: 0, # 'ร'
- 61: 0, # 'ฤ'
- 15: 0, # 'ล'
- 12: 0, # 'ว'
- 42: 1, # 'ศ'
- 46: 0, # 'ษ'
- 18: 0, # 'ส'
- 21: 0, # 'ห'
- 4: 1, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 0, # 'ั'
- 1: 3, # 'า'
- 36: 0, # 'ำ'
- 23: 0, # 'ิ'
- 13: 0, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 0, # 'เ'
- 28: 0, # 'แ'
- 41: 0, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 0, # '่'
- 7: 0, # '้'
- 38: 1, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 53: { # 'ฑ'
- 5: 0, # 'ก'
- 30: 0, # 'ข'
- 24: 0, # 'ค'
- 8: 0, # 'ง'
- 26: 0, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 0, # 'ด'
- 19: 0, # 'ต'
- 44: 0, # 'ถ'
- 14: 0, # 'ท'
- 48: 0, # 'ธ'
- 3: 0, # 'น'
- 17: 0, # 'บ'
- 25: 0, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 0, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 0, # 'ม'
- 16: 0, # 'ย'
- 2: 0, # 'ร'
- 61: 0, # 'ฤ'
- 15: 0, # 'ล'
- 12: 0, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 0, # 'ส'
- 21: 0, # 'ห'
- 4: 0, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 0, # 'ั'
- 1: 0, # 'า'
- 36: 0, # 'ำ'
- 23: 2, # 'ิ'
- 13: 0, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 0, # 'เ'
- 28: 0, # 'แ'
- 41: 0, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 0, # '่'
- 7: 0, # '้'
- 38: 3, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 55: { # 'ฒ'
- 5: 0, # 'ก'
- 30: 0, # 'ข'
- 24: 0, # 'ค'
- 8: 0, # 'ง'
- 26: 0, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 0, # 'ด'
- 19: 0, # 'ต'
- 44: 0, # 'ถ'
- 14: 0, # 'ท'
- 48: 0, # 'ธ'
- 3: 3, # 'น'
- 17: 0, # 'บ'
- 25: 0, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 1, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 0, # 'ม'
- 16: 0, # 'ย'
- 2: 0, # 'ร'
- 61: 0, # 'ฤ'
- 15: 0, # 'ล'
- 12: 0, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 0, # 'ส'
- 21: 0, # 'ห'
- 4: 0, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 0, # 'ั'
- 1: 0, # 'า'
- 36: 0, # 'ำ'
- 23: 1, # 'ิ'
- 13: 0, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 0, # 'เ'
- 28: 0, # 'แ'
- 41: 0, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 0, # '่'
- 7: 0, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 43: { # 'ณ'
- 5: 1, # 'ก'
- 30: 0, # 'ข'
- 24: 0, # 'ค'
- 8: 0, # 'ง'
- 26: 0, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 3, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 0, # 'ด'
- 19: 0, # 'ต'
- 44: 0, # 'ถ'
- 14: 0, # 'ท'
- 48: 0, # 'ธ'
- 3: 0, # 'น'
- 17: 0, # 'บ'
- 25: 0, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 0, # 'พ'
- 54: 0, # 'ฟ'
- 45: 3, # 'ภ'
- 9: 0, # 'ม'
- 16: 0, # 'ย'
- 2: 1, # 'ร'
- 61: 0, # 'ฤ'
- 15: 0, # 'ล'
- 12: 1, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 1, # 'ส'
- 21: 1, # 'ห'
- 4: 0, # 'อ'
- 63: 0, # 'ฯ'
- 22: 3, # 'ะ'
- 10: 0, # 'ั'
- 1: 3, # 'า'
- 36: 0, # 'ำ'
- 23: 1, # 'ิ'
- 13: 2, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 1, # 'เ'
- 28: 1, # 'แ'
- 41: 0, # 'โ'
- 29: 1, # 'ใ'
- 33: 1, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 0, # '่'
- 7: 0, # '้'
- 38: 3, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 20: { # 'ด'
- 5: 2, # 'ก'
- 30: 2, # 'ข'
- 24: 2, # 'ค'
- 8: 3, # 'ง'
- 26: 2, # 'จ'
- 52: 0, # 'ฉ'
- 34: 1, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 1, # 'ด'
- 19: 2, # 'ต'
- 44: 1, # 'ถ'
- 14: 2, # 'ท'
- 48: 0, # 'ธ'
- 3: 1, # 'น'
- 17: 1, # 'บ'
- 25: 1, # 'ป'
- 39: 1, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 1, # 'พ'
- 54: 0, # 'ฟ'
- 45: 1, # 'ภ'
- 9: 2, # 'ม'
- 16: 3, # 'ย'
- 2: 2, # 'ร'
- 61: 0, # 'ฤ'
- 15: 2, # 'ล'
- 12: 2, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 2, # 'ส'
- 21: 2, # 'ห'
- 4: 1, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 3, # 'ั'
- 1: 2, # 'า'
- 36: 2, # 'ำ'
- 23: 3, # 'ิ'
- 13: 3, # 'ี'
- 40: 1, # 'ึ'
- 27: 2, # 'ื'
- 32: 3, # 'ุ'
- 35: 2, # 'ู'
- 11: 2, # 'เ'
- 28: 2, # 'แ'
- 41: 1, # 'โ'
- 29: 2, # 'ใ'
- 33: 2, # 'ไ'
- 50: 2, # 'ๆ'
- 37: 2, # '็'
- 6: 1, # '่'
- 7: 3, # '้'
- 38: 1, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 19: { # 'ต'
- 5: 2, # 'ก'
- 30: 1, # 'ข'
- 24: 1, # 'ค'
- 8: 0, # 'ง'
- 26: 1, # 'จ'
- 52: 0, # 'ฉ'
- 34: 1, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 1, # 'ด'
- 19: 1, # 'ต'
- 44: 2, # 'ถ'
- 14: 1, # 'ท'
- 48: 0, # 'ธ'
- 3: 2, # 'น'
- 17: 1, # 'บ'
- 25: 1, # 'ป'
- 39: 1, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 1, # 'พ'
- 54: 0, # 'ฟ'
- 45: 2, # 'ภ'
- 9: 1, # 'ม'
- 16: 1, # 'ย'
- 2: 3, # 'ร'
- 61: 0, # 'ฤ'
- 15: 2, # 'ล'
- 12: 1, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 3, # 'ส'
- 21: 0, # 'ห'
- 4: 3, # 'อ'
- 63: 1, # 'ฯ'
- 22: 2, # 'ะ'
- 10: 3, # 'ั'
- 1: 3, # 'า'
- 36: 2, # 'ำ'
- 23: 3, # 'ิ'
- 13: 2, # 'ี'
- 40: 1, # 'ึ'
- 27: 1, # 'ื'
- 32: 3, # 'ุ'
- 35: 2, # 'ู'
- 11: 1, # 'เ'
- 28: 1, # 'แ'
- 41: 1, # 'โ'
- 29: 1, # 'ใ'
- 33: 1, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 2, # '็'
- 6: 3, # '่'
- 7: 3, # '้'
- 38: 2, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 44: { # 'ถ'
- 5: 1, # 'ก'
- 30: 0, # 'ข'
- 24: 1, # 'ค'
- 8: 0, # 'ง'
- 26: 1, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 0, # 'ด'
- 19: 1, # 'ต'
- 44: 0, # 'ถ'
- 14: 1, # 'ท'
- 48: 0, # 'ธ'
- 3: 1, # 'น'
- 17: 2, # 'บ'
- 25: 0, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 1, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 0, # 'ม'
- 16: 0, # 'ย'
- 2: 1, # 'ร'
- 61: 0, # 'ฤ'
- 15: 1, # 'ล'
- 12: 1, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 1, # 'ส'
- 21: 0, # 'ห'
- 4: 1, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 2, # 'ั'
- 1: 3, # 'า'
- 36: 0, # 'ำ'
- 23: 2, # 'ิ'
- 13: 1, # 'ี'
- 40: 3, # 'ึ'
- 27: 2, # 'ื'
- 32: 2, # 'ุ'
- 35: 3, # 'ู'
- 11: 1, # 'เ'
- 28: 1, # 'แ'
- 41: 0, # 'โ'
- 29: 1, # 'ใ'
- 33: 1, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 2, # '่'
- 7: 3, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 14: { # 'ท'
- 5: 1, # 'ก'
- 30: 1, # 'ข'
- 24: 3, # 'ค'
- 8: 1, # 'ง'
- 26: 1, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 2, # 'ด'
- 19: 1, # 'ต'
- 44: 0, # 'ถ'
- 14: 1, # 'ท'
- 48: 3, # 'ธ'
- 3: 3, # 'น'
- 17: 2, # 'บ'
- 25: 2, # 'ป'
- 39: 1, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 2, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 1, # 'ม'
- 16: 3, # 'ย'
- 2: 3, # 'ร'
- 61: 1, # 'ฤ'
- 15: 1, # 'ล'
- 12: 2, # 'ว'
- 42: 3, # 'ศ'
- 46: 1, # 'ษ'
- 18: 1, # 'ส'
- 21: 0, # 'ห'
- 4: 2, # 'อ'
- 63: 0, # 'ฯ'
- 22: 2, # 'ะ'
- 10: 3, # 'ั'
- 1: 3, # 'า'
- 36: 3, # 'ำ'
- 23: 2, # 'ิ'
- 13: 3, # 'ี'
- 40: 2, # 'ึ'
- 27: 1, # 'ื'
- 32: 3, # 'ุ'
- 35: 1, # 'ู'
- 11: 0, # 'เ'
- 28: 1, # 'แ'
- 41: 0, # 'โ'
- 29: 1, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 1, # '็'
- 6: 3, # '่'
- 7: 3, # '้'
- 38: 2, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 48: { # 'ธ'
- 5: 0, # 'ก'
- 30: 0, # 'ข'
- 24: 0, # 'ค'
- 8: 1, # 'ง'
- 26: 0, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 0, # 'ด'
- 19: 0, # 'ต'
- 44: 0, # 'ถ'
- 14: 0, # 'ท'
- 48: 0, # 'ธ'
- 3: 1, # 'น'
- 17: 0, # 'บ'
- 25: 0, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 0, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 0, # 'ม'
- 16: 0, # 'ย'
- 2: 2, # 'ร'
- 61: 0, # 'ฤ'
- 15: 0, # 'ล'
- 12: 0, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 0, # 'ส'
- 21: 0, # 'ห'
- 4: 0, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 0, # 'ั'
- 1: 2, # 'า'
- 36: 0, # 'ำ'
- 23: 3, # 'ิ'
- 13: 3, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 2, # 'ุ'
- 35: 0, # 'ู'
- 11: 0, # 'เ'
- 28: 0, # 'แ'
- 41: 0, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 0, # '่'
- 7: 0, # '้'
- 38: 3, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 3: { # 'น'
- 5: 3, # 'ก'
- 30: 2, # 'ข'
- 24: 3, # 'ค'
- 8: 1, # 'ง'
- 26: 2, # 'จ'
- 52: 0, # 'ฉ'
- 34: 1, # 'ช'
- 51: 1, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 1, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 3, # 'ด'
- 19: 3, # 'ต'
- 44: 2, # 'ถ'
- 14: 3, # 'ท'
- 48: 3, # 'ธ'
- 3: 2, # 'น'
- 17: 2, # 'บ'
- 25: 2, # 'ป'
- 39: 2, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 2, # 'พ'
- 54: 1, # 'ฟ'
- 45: 1, # 'ภ'
- 9: 2, # 'ม'
- 16: 2, # 'ย'
- 2: 2, # 'ร'
- 61: 1, # 'ฤ'
- 15: 2, # 'ล'
- 12: 3, # 'ว'
- 42: 1, # 'ศ'
- 46: 0, # 'ษ'
- 18: 2, # 'ส'
- 21: 2, # 'ห'
- 4: 3, # 'อ'
- 63: 1, # 'ฯ'
- 22: 2, # 'ะ'
- 10: 3, # 'ั'
- 1: 3, # 'า'
- 36: 3, # 'ำ'
- 23: 3, # 'ิ'
- 13: 3, # 'ี'
- 40: 3, # 'ึ'
- 27: 3, # 'ื'
- 32: 3, # 'ุ'
- 35: 2, # 'ู'
- 11: 3, # 'เ'
- 28: 2, # 'แ'
- 41: 3, # 'โ'
- 29: 3, # 'ใ'
- 33: 3, # 'ไ'
- 50: 2, # 'ๆ'
- 37: 1, # '็'
- 6: 3, # '่'
- 7: 3, # '้'
- 38: 2, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 17: { # 'บ'
- 5: 3, # 'ก'
- 30: 2, # 'ข'
- 24: 2, # 'ค'
- 8: 1, # 'ง'
- 26: 1, # 'จ'
- 52: 1, # 'ฉ'
- 34: 1, # 'ช'
- 51: 1, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 1, # 'ด'
- 19: 2, # 'ต'
- 44: 1, # 'ถ'
- 14: 3, # 'ท'
- 48: 0, # 'ธ'
- 3: 3, # 'น'
- 17: 3, # 'บ'
- 25: 2, # 'ป'
- 39: 2, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 1, # 'พ'
- 54: 1, # 'ฟ'
- 45: 1, # 'ภ'
- 9: 1, # 'ม'
- 16: 0, # 'ย'
- 2: 3, # 'ร'
- 61: 0, # 'ฤ'
- 15: 2, # 'ล'
- 12: 3, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 2, # 'ส'
- 21: 2, # 'ห'
- 4: 2, # 'อ'
- 63: 1, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 3, # 'ั'
- 1: 3, # 'า'
- 36: 2, # 'ำ'
- 23: 2, # 'ิ'
- 13: 2, # 'ี'
- 40: 0, # 'ึ'
- 27: 2, # 'ื'
- 32: 3, # 'ุ'
- 35: 2, # 'ู'
- 11: 2, # 'เ'
- 28: 2, # 'แ'
- 41: 1, # 'โ'
- 29: 2, # 'ใ'
- 33: 2, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 1, # '็'
- 6: 2, # '่'
- 7: 2, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 25: { # 'ป'
- 5: 2, # 'ก'
- 30: 0, # 'ข'
- 24: 1, # 'ค'
- 8: 0, # 'ง'
- 26: 1, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 1, # 'ซ'
- 47: 0, # 'ญ'
- 58: 1, # 'ฎ'
- 57: 3, # 'ฏ'
- 49: 1, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 1, # 'ด'
- 19: 1, # 'ต'
- 44: 1, # 'ถ'
- 14: 1, # 'ท'
- 48: 0, # 'ธ'
- 3: 2, # 'น'
- 17: 0, # 'บ'
- 25: 1, # 'ป'
- 39: 1, # 'ผ'
- 62: 1, # 'ฝ'
- 31: 1, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 1, # 'ม'
- 16: 0, # 'ย'
- 2: 3, # 'ร'
- 61: 0, # 'ฤ'
- 15: 3, # 'ล'
- 12: 1, # 'ว'
- 42: 0, # 'ศ'
- 46: 1, # 'ษ'
- 18: 2, # 'ส'
- 21: 1, # 'ห'
- 4: 2, # 'อ'
- 63: 0, # 'ฯ'
- 22: 1, # 'ะ'
- 10: 3, # 'ั'
- 1: 1, # 'า'
- 36: 0, # 'ำ'
- 23: 2, # 'ิ'
- 13: 3, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 1, # 'ุ'
- 35: 0, # 'ู'
- 11: 1, # 'เ'
- 28: 2, # 'แ'
- 41: 0, # 'โ'
- 29: 1, # 'ใ'
- 33: 2, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 3, # '็'
- 6: 1, # '่'
- 7: 2, # '้'
- 38: 1, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 39: { # 'ผ'
- 5: 1, # 'ก'
- 30: 0, # 'ข'
- 24: 0, # 'ค'
- 8: 1, # 'ง'
- 26: 0, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 0, # 'ด'
- 19: 0, # 'ต'
- 44: 0, # 'ถ'
- 14: 0, # 'ท'
- 48: 0, # 'ธ'
- 3: 2, # 'น'
- 17: 0, # 'บ'
- 25: 0, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 0, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 1, # 'ม'
- 16: 2, # 'ย'
- 2: 0, # 'ร'
- 61: 0, # 'ฤ'
- 15: 3, # 'ล'
- 12: 0, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 1, # 'ส'
- 21: 0, # 'ห'
- 4: 0, # 'อ'
- 63: 0, # 'ฯ'
- 22: 1, # 'ะ'
- 10: 1, # 'ั'
- 1: 0, # 'า'
- 36: 0, # 'ำ'
- 23: 2, # 'ิ'
- 13: 0, # 'ี'
- 40: 0, # 'ึ'
- 27: 1, # 'ื'
- 32: 0, # 'ุ'
- 35: 3, # 'ู'
- 11: 0, # 'เ'
- 28: 0, # 'แ'
- 41: 0, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 3, # '่'
- 7: 1, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 62: { # 'ฝ'
- 5: 0, # 'ก'
- 30: 0, # 'ข'
- 24: 0, # 'ค'
- 8: 0, # 'ง'
- 26: 0, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 0, # 'ด'
- 19: 0, # 'ต'
- 44: 0, # 'ถ'
- 14: 0, # 'ท'
- 48: 0, # 'ธ'
- 3: 1, # 'น'
- 17: 0, # 'บ'
- 25: 0, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 0, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 0, # 'ม'
- 16: 0, # 'ย'
- 2: 1, # 'ร'
- 61: 0, # 'ฤ'
- 15: 0, # 'ล'
- 12: 0, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 0, # 'ส'
- 21: 0, # 'ห'
- 4: 0, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 1, # 'ั'
- 1: 0, # 'า'
- 36: 0, # 'ำ'
- 23: 0, # 'ิ'
- 13: 1, # 'ี'
- 40: 2, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 0, # 'เ'
- 28: 0, # 'แ'
- 41: 0, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 2, # '่'
- 7: 1, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 31: { # 'พ'
- 5: 1, # 'ก'
- 30: 1, # 'ข'
- 24: 1, # 'ค'
- 8: 1, # 'ง'
- 26: 1, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 1, # 'ณ'
- 20: 1, # 'ด'
- 19: 1, # 'ต'
- 44: 0, # 'ถ'
- 14: 2, # 'ท'
- 48: 1, # 'ธ'
- 3: 3, # 'น'
- 17: 2, # 'บ'
- 25: 0, # 'ป'
- 39: 1, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 1, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 1, # 'ม'
- 16: 2, # 'ย'
- 2: 3, # 'ร'
- 61: 2, # 'ฤ'
- 15: 2, # 'ล'
- 12: 2, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 1, # 'ส'
- 21: 1, # 'ห'
- 4: 2, # 'อ'
- 63: 1, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 3, # 'ั'
- 1: 3, # 'า'
- 36: 0, # 'ำ'
- 23: 3, # 'ิ'
- 13: 2, # 'ี'
- 40: 1, # 'ึ'
- 27: 3, # 'ื'
- 32: 1, # 'ุ'
- 35: 2, # 'ู'
- 11: 1, # 'เ'
- 28: 1, # 'แ'
- 41: 0, # 'โ'
- 29: 1, # 'ใ'
- 33: 1, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 1, # '็'
- 6: 0, # '่'
- 7: 1, # '้'
- 38: 3, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 54: { # 'ฟ'
- 5: 0, # 'ก'
- 30: 0, # 'ข'
- 24: 0, # 'ค'
- 8: 0, # 'ง'
- 26: 0, # 'จ'
- 52: 0, # 'ฉ'
- 34: 1, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 0, # 'ด'
- 19: 1, # 'ต'
- 44: 0, # 'ถ'
- 14: 1, # 'ท'
- 48: 0, # 'ธ'
- 3: 0, # 'น'
- 17: 0, # 'บ'
- 25: 0, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 0, # 'พ'
- 54: 2, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 0, # 'ม'
- 16: 0, # 'ย'
- 2: 1, # 'ร'
- 61: 0, # 'ฤ'
- 15: 2, # 'ล'
- 12: 0, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 1, # 'ส'
- 21: 0, # 'ห'
- 4: 1, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 2, # 'ั'
- 1: 0, # 'า'
- 36: 0, # 'ำ'
- 23: 1, # 'ิ'
- 13: 1, # 'ี'
- 40: 0, # 'ึ'
- 27: 1, # 'ื'
- 32: 1, # 'ุ'
- 35: 0, # 'ู'
- 11: 0, # 'เ'
- 28: 1, # 'แ'
- 41: 0, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 0, # '่'
- 7: 2, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 45: { # 'ภ'
- 5: 0, # 'ก'
- 30: 0, # 'ข'
- 24: 1, # 'ค'
- 8: 0, # 'ง'
- 26: 0, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 0, # 'ด'
- 19: 0, # 'ต'
- 44: 0, # 'ถ'
- 14: 3, # 'ท'
- 48: 0, # 'ธ'
- 3: 0, # 'น'
- 17: 0, # 'บ'
- 25: 0, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 1, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 0, # 'ม'
- 16: 0, # 'ย'
- 2: 1, # 'ร'
- 61: 0, # 'ฤ'
- 15: 0, # 'ล'
- 12: 0, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 0, # 'ส'
- 21: 0, # 'ห'
- 4: 0, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 3, # 'ั'
- 1: 3, # 'า'
- 36: 0, # 'ำ'
- 23: 1, # 'ิ'
- 13: 0, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 2, # 'ู'
- 11: 0, # 'เ'
- 28: 0, # 'แ'
- 41: 0, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 0, # '่'
- 7: 0, # '้'
- 38: 1, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 9: { # 'ม'
- 5: 2, # 'ก'
- 30: 2, # 'ข'
- 24: 2, # 'ค'
- 8: 2, # 'ง'
- 26: 2, # 'จ'
- 52: 0, # 'ฉ'
- 34: 1, # 'ช'
- 51: 1, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 1, # 'ณ'
- 20: 2, # 'ด'
- 19: 2, # 'ต'
- 44: 1, # 'ถ'
- 14: 2, # 'ท'
- 48: 1, # 'ธ'
- 3: 3, # 'น'
- 17: 2, # 'บ'
- 25: 2, # 'ป'
- 39: 1, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 3, # 'พ'
- 54: 0, # 'ฟ'
- 45: 1, # 'ภ'
- 9: 2, # 'ม'
- 16: 1, # 'ย'
- 2: 2, # 'ร'
- 61: 2, # 'ฤ'
- 15: 2, # 'ล'
- 12: 2, # 'ว'
- 42: 1, # 'ศ'
- 46: 1, # 'ษ'
- 18: 3, # 'ส'
- 21: 3, # 'ห'
- 4: 3, # 'อ'
- 63: 0, # 'ฯ'
- 22: 1, # 'ะ'
- 10: 3, # 'ั'
- 1: 3, # 'า'
- 36: 0, # 'ำ'
- 23: 3, # 'ิ'
- 13: 3, # 'ี'
- 40: 0, # 'ึ'
- 27: 3, # 'ื'
- 32: 3, # 'ุ'
- 35: 3, # 'ู'
- 11: 2, # 'เ'
- 28: 2, # 'แ'
- 41: 2, # 'โ'
- 29: 2, # 'ใ'
- 33: 2, # 'ไ'
- 50: 1, # 'ๆ'
- 37: 1, # '็'
- 6: 3, # '่'
- 7: 2, # '้'
- 38: 1, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 16: { # 'ย'
- 5: 3, # 'ก'
- 30: 1, # 'ข'
- 24: 2, # 'ค'
- 8: 3, # 'ง'
- 26: 2, # 'จ'
- 52: 0, # 'ฉ'
- 34: 2, # 'ช'
- 51: 0, # 'ซ'
- 47: 2, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 2, # 'ด'
- 19: 2, # 'ต'
- 44: 1, # 'ถ'
- 14: 2, # 'ท'
- 48: 1, # 'ธ'
- 3: 3, # 'น'
- 17: 3, # 'บ'
- 25: 1, # 'ป'
- 39: 1, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 1, # 'พ'
- 54: 0, # 'ฟ'
- 45: 1, # 'ภ'
- 9: 2, # 'ม'
- 16: 0, # 'ย'
- 2: 2, # 'ร'
- 61: 0, # 'ฤ'
- 15: 1, # 'ล'
- 12: 3, # 'ว'
- 42: 1, # 'ศ'
- 46: 0, # 'ษ'
- 18: 2, # 'ส'
- 21: 1, # 'ห'
- 4: 2, # 'อ'
- 63: 0, # 'ฯ'
- 22: 2, # 'ะ'
- 10: 3, # 'ั'
- 1: 3, # 'า'
- 36: 0, # 'ำ'
- 23: 2, # 'ิ'
- 13: 3, # 'ี'
- 40: 1, # 'ึ'
- 27: 2, # 'ื'
- 32: 2, # 'ุ'
- 35: 3, # 'ู'
- 11: 2, # 'เ'
- 28: 1, # 'แ'
- 41: 1, # 'โ'
- 29: 2, # 'ใ'
- 33: 2, # 'ไ'
- 50: 2, # 'ๆ'
- 37: 1, # '็'
- 6: 3, # '่'
- 7: 2, # '้'
- 38: 3, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 2: { # 'ร'
- 5: 3, # 'ก'
- 30: 2, # 'ข'
- 24: 2, # 'ค'
- 8: 3, # 'ง'
- 26: 2, # 'จ'
- 52: 0, # 'ฉ'
- 34: 2, # 'ช'
- 51: 1, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 3, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 3, # 'ณ'
- 20: 2, # 'ด'
- 19: 2, # 'ต'
- 44: 3, # 'ถ'
- 14: 3, # 'ท'
- 48: 1, # 'ธ'
- 3: 2, # 'น'
- 17: 2, # 'บ'
- 25: 3, # 'ป'
- 39: 2, # 'ผ'
- 62: 1, # 'ฝ'
- 31: 2, # 'พ'
- 54: 1, # 'ฟ'
- 45: 1, # 'ภ'
- 9: 3, # 'ม'
- 16: 2, # 'ย'
- 2: 3, # 'ร'
- 61: 0, # 'ฤ'
- 15: 2, # 'ล'
- 12: 3, # 'ว'
- 42: 2, # 'ศ'
- 46: 2, # 'ษ'
- 18: 2, # 'ส'
- 21: 2, # 'ห'
- 4: 3, # 'อ'
- 63: 1, # 'ฯ'
- 22: 3, # 'ะ'
- 10: 3, # 'ั'
- 1: 3, # 'า'
- 36: 0, # 'ำ'
- 23: 3, # 'ิ'
- 13: 3, # 'ี'
- 40: 2, # 'ึ'
- 27: 3, # 'ื'
- 32: 3, # 'ุ'
- 35: 3, # 'ู'
- 11: 3, # 'เ'
- 28: 3, # 'แ'
- 41: 1, # 'โ'
- 29: 2, # 'ใ'
- 33: 1, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 3, # '็'
- 6: 3, # '่'
- 7: 3, # '้'
- 38: 3, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 61: { # 'ฤ'
- 5: 0, # 'ก'
- 30: 0, # 'ข'
- 24: 0, # 'ค'
- 8: 0, # 'ง'
- 26: 0, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 0, # 'ด'
- 19: 2, # 'ต'
- 44: 0, # 'ถ'
- 14: 2, # 'ท'
- 48: 0, # 'ธ'
- 3: 0, # 'น'
- 17: 0, # 'บ'
- 25: 0, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 0, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 1, # 'ม'
- 16: 0, # 'ย'
- 2: 0, # 'ร'
- 61: 0, # 'ฤ'
- 15: 0, # 'ล'
- 12: 0, # 'ว'
- 42: 0, # 'ศ'
- 46: 2, # 'ษ'
- 18: 0, # 'ส'
- 21: 0, # 'ห'
- 4: 0, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 0, # 'ั'
- 1: 0, # 'า'
- 36: 0, # 'ำ'
- 23: 0, # 'ิ'
- 13: 0, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 0, # 'เ'
- 28: 0, # 'แ'
- 41: 0, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 0, # '่'
- 7: 0, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 15: { # 'ล'
- 5: 2, # 'ก'
- 30: 3, # 'ข'
- 24: 1, # 'ค'
- 8: 3, # 'ง'
- 26: 1, # 'จ'
- 52: 0, # 'ฉ'
- 34: 1, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 2, # 'ด'
- 19: 2, # 'ต'
- 44: 1, # 'ถ'
- 14: 2, # 'ท'
- 48: 0, # 'ธ'
- 3: 1, # 'น'
- 17: 2, # 'บ'
- 25: 2, # 'ป'
- 39: 1, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 0, # 'พ'
- 54: 0, # 'ฟ'
- 45: 1, # 'ภ'
- 9: 1, # 'ม'
- 16: 3, # 'ย'
- 2: 1, # 'ร'
- 61: 0, # 'ฤ'
- 15: 1, # 'ล'
- 12: 1, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 2, # 'ส'
- 21: 1, # 'ห'
- 4: 3, # 'อ'
- 63: 2, # 'ฯ'
- 22: 3, # 'ะ'
- 10: 3, # 'ั'
- 1: 3, # 'า'
- 36: 2, # 'ำ'
- 23: 3, # 'ิ'
- 13: 3, # 'ี'
- 40: 2, # 'ึ'
- 27: 3, # 'ื'
- 32: 2, # 'ุ'
- 35: 3, # 'ู'
- 11: 2, # 'เ'
- 28: 1, # 'แ'
- 41: 1, # 'โ'
- 29: 2, # 'ใ'
- 33: 1, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 2, # '็'
- 6: 3, # '่'
- 7: 3, # '้'
- 38: 2, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 12: { # 'ว'
- 5: 3, # 'ก'
- 30: 2, # 'ข'
- 24: 1, # 'ค'
- 8: 3, # 'ง'
- 26: 2, # 'จ'
- 52: 0, # 'ฉ'
- 34: 1, # 'ช'
- 51: 1, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 1, # 'ณ'
- 20: 2, # 'ด'
- 19: 1, # 'ต'
- 44: 1, # 'ถ'
- 14: 1, # 'ท'
- 48: 0, # 'ธ'
- 3: 3, # 'น'
- 17: 2, # 'บ'
- 25: 1, # 'ป'
- 39: 1, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 1, # 'พ'
- 54: 1, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 3, # 'ม'
- 16: 3, # 'ย'
- 2: 3, # 'ร'
- 61: 0, # 'ฤ'
- 15: 3, # 'ล'
- 12: 1, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 2, # 'ส'
- 21: 2, # 'ห'
- 4: 2, # 'อ'
- 63: 0, # 'ฯ'
- 22: 2, # 'ะ'
- 10: 3, # 'ั'
- 1: 3, # 'า'
- 36: 0, # 'ำ'
- 23: 3, # 'ิ'
- 13: 2, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 2, # 'ุ'
- 35: 0, # 'ู'
- 11: 3, # 'เ'
- 28: 2, # 'แ'
- 41: 1, # 'โ'
- 29: 1, # 'ใ'
- 33: 2, # 'ไ'
- 50: 1, # 'ๆ'
- 37: 0, # '็'
- 6: 3, # '่'
- 7: 3, # '้'
- 38: 1, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 42: { # 'ศ'
- 5: 1, # 'ก'
- 30: 0, # 'ข'
- 24: 1, # 'ค'
- 8: 0, # 'ง'
- 26: 1, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 1, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 0, # 'ด'
- 19: 1, # 'ต'
- 44: 0, # 'ถ'
- 14: 1, # 'ท'
- 48: 0, # 'ธ'
- 3: 2, # 'น'
- 17: 0, # 'บ'
- 25: 0, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 0, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 0, # 'ม'
- 16: 0, # 'ย'
- 2: 2, # 'ร'
- 61: 0, # 'ฤ'
- 15: 0, # 'ล'
- 12: 2, # 'ว'
- 42: 1, # 'ศ'
- 46: 2, # 'ษ'
- 18: 1, # 'ส'
- 21: 0, # 'ห'
- 4: 0, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 2, # 'ั'
- 1: 3, # 'า'
- 36: 0, # 'ำ'
- 23: 2, # 'ิ'
- 13: 0, # 'ี'
- 40: 3, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 2, # 'ู'
- 11: 0, # 'เ'
- 28: 1, # 'แ'
- 41: 0, # 'โ'
- 29: 1, # 'ใ'
- 33: 1, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 0, # '่'
- 7: 0, # '้'
- 38: 1, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 46: { # 'ษ'
- 5: 0, # 'ก'
- 30: 0, # 'ข'
- 24: 0, # 'ค'
- 8: 0, # 'ง'
- 26: 0, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 2, # 'ฎ'
- 57: 1, # 'ฏ'
- 49: 2, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 3, # 'ณ'
- 20: 0, # 'ด'
- 19: 1, # 'ต'
- 44: 0, # 'ถ'
- 14: 1, # 'ท'
- 48: 0, # 'ธ'
- 3: 0, # 'น'
- 17: 0, # 'บ'
- 25: 0, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 0, # 'พ'
- 54: 0, # 'ฟ'
- 45: 1, # 'ภ'
- 9: 1, # 'ม'
- 16: 2, # 'ย'
- 2: 2, # 'ร'
- 61: 0, # 'ฤ'
- 15: 0, # 'ล'
- 12: 0, # 'ว'
- 42: 1, # 'ศ'
- 46: 0, # 'ษ'
- 18: 0, # 'ส'
- 21: 0, # 'ห'
- 4: 0, # 'อ'
- 63: 0, # 'ฯ'
- 22: 2, # 'ะ'
- 10: 2, # 'ั'
- 1: 3, # 'า'
- 36: 0, # 'ำ'
- 23: 0, # 'ิ'
- 13: 1, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 1, # 'เ'
- 28: 0, # 'แ'
- 41: 0, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 0, # '่'
- 7: 0, # '้'
- 38: 2, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 18: { # 'ส'
- 5: 2, # 'ก'
- 30: 0, # 'ข'
- 24: 0, # 'ค'
- 8: 2, # 'ง'
- 26: 1, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 3, # 'ด'
- 19: 3, # 'ต'
- 44: 3, # 'ถ'
- 14: 0, # 'ท'
- 48: 0, # 'ธ'
- 3: 3, # 'น'
- 17: 2, # 'บ'
- 25: 1, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 0, # 'พ'
- 54: 0, # 'ฟ'
- 45: 2, # 'ภ'
- 9: 3, # 'ม'
- 16: 1, # 'ย'
- 2: 3, # 'ร'
- 61: 0, # 'ฤ'
- 15: 1, # 'ล'
- 12: 2, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 0, # 'ส'
- 21: 2, # 'ห'
- 4: 3, # 'อ'
- 63: 0, # 'ฯ'
- 22: 2, # 'ะ'
- 10: 3, # 'ั'
- 1: 3, # 'า'
- 36: 3, # 'ำ'
- 23: 3, # 'ิ'
- 13: 3, # 'ี'
- 40: 2, # 'ึ'
- 27: 3, # 'ื'
- 32: 3, # 'ุ'
- 35: 3, # 'ู'
- 11: 2, # 'เ'
- 28: 0, # 'แ'
- 41: 1, # 'โ'
- 29: 0, # 'ใ'
- 33: 1, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 3, # '่'
- 7: 1, # '้'
- 38: 2, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 21: { # 'ห'
- 5: 3, # 'ก'
- 30: 0, # 'ข'
- 24: 0, # 'ค'
- 8: 1, # 'ง'
- 26: 0, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 2, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 1, # 'ด'
- 19: 3, # 'ต'
- 44: 0, # 'ถ'
- 14: 0, # 'ท'
- 48: 0, # 'ธ'
- 3: 3, # 'น'
- 17: 0, # 'บ'
- 25: 1, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 1, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 3, # 'ม'
- 16: 2, # 'ย'
- 2: 3, # 'ร'
- 61: 0, # 'ฤ'
- 15: 3, # 'ล'
- 12: 2, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 0, # 'ส'
- 21: 0, # 'ห'
- 4: 3, # 'อ'
- 63: 0, # 'ฯ'
- 22: 1, # 'ะ'
- 10: 3, # 'ั'
- 1: 3, # 'า'
- 36: 0, # 'ำ'
- 23: 1, # 'ิ'
- 13: 1, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 1, # 'ุ'
- 35: 1, # 'ู'
- 11: 0, # 'เ'
- 28: 0, # 'แ'
- 41: 0, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 3, # '็'
- 6: 3, # '่'
- 7: 3, # '้'
- 38: 2, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 4: { # 'อ'
- 5: 3, # 'ก'
- 30: 1, # 'ข'
- 24: 2, # 'ค'
- 8: 3, # 'ง'
- 26: 1, # 'จ'
- 52: 0, # 'ฉ'
- 34: 1, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 3, # 'ด'
- 19: 2, # 'ต'
- 44: 1, # 'ถ'
- 14: 2, # 'ท'
- 48: 1, # 'ธ'
- 3: 3, # 'น'
- 17: 3, # 'บ'
- 25: 1, # 'ป'
- 39: 1, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 1, # 'พ'
- 54: 1, # 'ฟ'
- 45: 1, # 'ภ'
- 9: 3, # 'ม'
- 16: 3, # 'ย'
- 2: 3, # 'ร'
- 61: 0, # 'ฤ'
- 15: 2, # 'ล'
- 12: 2, # 'ว'
- 42: 1, # 'ศ'
- 46: 0, # 'ษ'
- 18: 2, # 'ส'
- 21: 2, # 'ห'
- 4: 3, # 'อ'
- 63: 0, # 'ฯ'
- 22: 2, # 'ะ'
- 10: 3, # 'ั'
- 1: 3, # 'า'
- 36: 2, # 'ำ'
- 23: 2, # 'ิ'
- 13: 3, # 'ี'
- 40: 0, # 'ึ'
- 27: 3, # 'ื'
- 32: 3, # 'ุ'
- 35: 0, # 'ู'
- 11: 3, # 'เ'
- 28: 1, # 'แ'
- 41: 1, # 'โ'
- 29: 2, # 'ใ'
- 33: 2, # 'ไ'
- 50: 1, # 'ๆ'
- 37: 1, # '็'
- 6: 2, # '่'
- 7: 2, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 63: { # 'ฯ'
- 5: 0, # 'ก'
- 30: 0, # 'ข'
- 24: 0, # 'ค'
- 8: 0, # 'ง'
- 26: 0, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 0, # 'ด'
- 19: 0, # 'ต'
- 44: 0, # 'ถ'
- 14: 0, # 'ท'
- 48: 0, # 'ธ'
- 3: 0, # 'น'
- 17: 0, # 'บ'
- 25: 0, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 0, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 0, # 'ม'
- 16: 0, # 'ย'
- 2: 0, # 'ร'
- 61: 0, # 'ฤ'
- 15: 2, # 'ล'
- 12: 0, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 0, # 'ส'
- 21: 0, # 'ห'
- 4: 0, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 0, # 'ั'
- 1: 0, # 'า'
- 36: 0, # 'ำ'
- 23: 0, # 'ิ'
- 13: 0, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 0, # 'เ'
- 28: 0, # 'แ'
- 41: 0, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 0, # '่'
- 7: 0, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 22: { # 'ะ'
- 5: 3, # 'ก'
- 30: 1, # 'ข'
- 24: 2, # 'ค'
- 8: 1, # 'ง'
- 26: 2, # 'จ'
- 52: 0, # 'ฉ'
- 34: 3, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 3, # 'ด'
- 19: 3, # 'ต'
- 44: 1, # 'ถ'
- 14: 3, # 'ท'
- 48: 1, # 'ธ'
- 3: 2, # 'น'
- 17: 3, # 'บ'
- 25: 2, # 'ป'
- 39: 1, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 2, # 'พ'
- 54: 0, # 'ฟ'
- 45: 1, # 'ภ'
- 9: 3, # 'ม'
- 16: 2, # 'ย'
- 2: 2, # 'ร'
- 61: 0, # 'ฤ'
- 15: 2, # 'ล'
- 12: 2, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 3, # 'ส'
- 21: 3, # 'ห'
- 4: 2, # 'อ'
- 63: 1, # 'ฯ'
- 22: 1, # 'ะ'
- 10: 0, # 'ั'
- 1: 0, # 'า'
- 36: 0, # 'ำ'
- 23: 0, # 'ิ'
- 13: 0, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 3, # 'เ'
- 28: 2, # 'แ'
- 41: 1, # 'โ'
- 29: 2, # 'ใ'
- 33: 2, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 0, # '่'
- 7: 0, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 10: { # 'ั'
- 5: 3, # 'ก'
- 30: 0, # 'ข'
- 24: 1, # 'ค'
- 8: 3, # 'ง'
- 26: 3, # 'จ'
- 52: 0, # 'ฉ'
- 34: 1, # 'ช'
- 51: 0, # 'ซ'
- 47: 3, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 2, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 3, # 'ฒ'
- 43: 3, # 'ณ'
- 20: 3, # 'ด'
- 19: 3, # 'ต'
- 44: 0, # 'ถ'
- 14: 2, # 'ท'
- 48: 0, # 'ธ'
- 3: 3, # 'น'
- 17: 3, # 'บ'
- 25: 1, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 2, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 3, # 'ม'
- 16: 3, # 'ย'
- 2: 0, # 'ร'
- 61: 0, # 'ฤ'
- 15: 2, # 'ล'
- 12: 3, # 'ว'
- 42: 2, # 'ศ'
- 46: 0, # 'ษ'
- 18: 3, # 'ส'
- 21: 0, # 'ห'
- 4: 0, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 0, # 'ั'
- 1: 0, # 'า'
- 36: 0, # 'ำ'
- 23: 0, # 'ิ'
- 13: 0, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 0, # 'เ'
- 28: 0, # 'แ'
- 41: 0, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 3, # '่'
- 7: 3, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 1: { # 'า'
- 5: 3, # 'ก'
- 30: 2, # 'ข'
- 24: 3, # 'ค'
- 8: 3, # 'ง'
- 26: 3, # 'จ'
- 52: 0, # 'ฉ'
- 34: 3, # 'ช'
- 51: 1, # 'ซ'
- 47: 2, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 3, # 'ณ'
- 20: 3, # 'ด'
- 19: 3, # 'ต'
- 44: 1, # 'ถ'
- 14: 3, # 'ท'
- 48: 2, # 'ธ'
- 3: 3, # 'น'
- 17: 3, # 'บ'
- 25: 2, # 'ป'
- 39: 1, # 'ผ'
- 62: 1, # 'ฝ'
- 31: 3, # 'พ'
- 54: 1, # 'ฟ'
- 45: 1, # 'ภ'
- 9: 3, # 'ม'
- 16: 3, # 'ย'
- 2: 3, # 'ร'
- 61: 0, # 'ฤ'
- 15: 3, # 'ล'
- 12: 3, # 'ว'
- 42: 2, # 'ศ'
- 46: 3, # 'ษ'
- 18: 3, # 'ส'
- 21: 3, # 'ห'
- 4: 2, # 'อ'
- 63: 1, # 'ฯ'
- 22: 3, # 'ะ'
- 10: 0, # 'ั'
- 1: 0, # 'า'
- 36: 0, # 'ำ'
- 23: 0, # 'ิ'
- 13: 0, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 3, # 'เ'
- 28: 2, # 'แ'
- 41: 1, # 'โ'
- 29: 2, # 'ใ'
- 33: 2, # 'ไ'
- 50: 1, # 'ๆ'
- 37: 0, # '็'
- 6: 0, # '่'
- 7: 0, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 36: { # 'ำ'
- 5: 2, # 'ก'
- 30: 1, # 'ข'
- 24: 3, # 'ค'
- 8: 2, # 'ง'
- 26: 1, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 1, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 1, # 'ด'
- 19: 1, # 'ต'
- 44: 1, # 'ถ'
- 14: 1, # 'ท'
- 48: 0, # 'ธ'
- 3: 3, # 'น'
- 17: 1, # 'บ'
- 25: 1, # 'ป'
- 39: 1, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 1, # 'พ'
- 54: 0, # 'ฟ'
- 45: 1, # 'ภ'
- 9: 1, # 'ม'
- 16: 0, # 'ย'
- 2: 2, # 'ร'
- 61: 0, # 'ฤ'
- 15: 2, # 'ล'
- 12: 1, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 1, # 'ส'
- 21: 3, # 'ห'
- 4: 1, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 0, # 'ั'
- 1: 0, # 'า'
- 36: 0, # 'ำ'
- 23: 0, # 'ิ'
- 13: 0, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 3, # 'เ'
- 28: 2, # 'แ'
- 41: 1, # 'โ'
- 29: 2, # 'ใ'
- 33: 2, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 0, # '่'
- 7: 0, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 23: { # 'ิ'
- 5: 3, # 'ก'
- 30: 1, # 'ข'
- 24: 2, # 'ค'
- 8: 3, # 'ง'
- 26: 3, # 'จ'
- 52: 0, # 'ฉ'
- 34: 3, # 'ช'
- 51: 0, # 'ซ'
- 47: 2, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 3, # 'ด'
- 19: 3, # 'ต'
- 44: 1, # 'ถ'
- 14: 3, # 'ท'
- 48: 3, # 'ธ'
- 3: 3, # 'น'
- 17: 3, # 'บ'
- 25: 2, # 'ป'
- 39: 2, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 3, # 'พ'
- 54: 1, # 'ฟ'
- 45: 2, # 'ภ'
- 9: 3, # 'ม'
- 16: 2, # 'ย'
- 2: 2, # 'ร'
- 61: 0, # 'ฤ'
- 15: 2, # 'ล'
- 12: 3, # 'ว'
- 42: 3, # 'ศ'
- 46: 2, # 'ษ'
- 18: 2, # 'ส'
- 21: 3, # 'ห'
- 4: 1, # 'อ'
- 63: 1, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 0, # 'ั'
- 1: 0, # 'า'
- 36: 0, # 'ำ'
- 23: 0, # 'ิ'
- 13: 0, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 3, # 'เ'
- 28: 1, # 'แ'
- 41: 1, # 'โ'
- 29: 1, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 3, # '่'
- 7: 2, # '้'
- 38: 2, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 13: { # 'ี'
- 5: 3, # 'ก'
- 30: 2, # 'ข'
- 24: 2, # 'ค'
- 8: 0, # 'ง'
- 26: 1, # 'จ'
- 52: 0, # 'ฉ'
- 34: 1, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 2, # 'ด'
- 19: 1, # 'ต'
- 44: 0, # 'ถ'
- 14: 2, # 'ท'
- 48: 0, # 'ธ'
- 3: 1, # 'น'
- 17: 2, # 'บ'
- 25: 2, # 'ป'
- 39: 1, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 2, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 2, # 'ม'
- 16: 3, # 'ย'
- 2: 2, # 'ร'
- 61: 0, # 'ฤ'
- 15: 1, # 'ล'
- 12: 2, # 'ว'
- 42: 1, # 'ศ'
- 46: 0, # 'ษ'
- 18: 2, # 'ส'
- 21: 1, # 'ห'
- 4: 2, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 0, # 'ั'
- 1: 0, # 'า'
- 36: 0, # 'ำ'
- 23: 0, # 'ิ'
- 13: 0, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 2, # 'เ'
- 28: 2, # 'แ'
- 41: 1, # 'โ'
- 29: 1, # 'ใ'
- 33: 1, # 'ไ'
- 50: 1, # 'ๆ'
- 37: 0, # '็'
- 6: 3, # '่'
- 7: 3, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 40: { # 'ึ'
- 5: 3, # 'ก'
- 30: 0, # 'ข'
- 24: 0, # 'ค'
- 8: 3, # 'ง'
- 26: 0, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 1, # 'ด'
- 19: 0, # 'ต'
- 44: 0, # 'ถ'
- 14: 0, # 'ท'
- 48: 0, # 'ธ'
- 3: 0, # 'น'
- 17: 0, # 'บ'
- 25: 0, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 0, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 1, # 'ม'
- 16: 0, # 'ย'
- 2: 0, # 'ร'
- 61: 0, # 'ฤ'
- 15: 0, # 'ล'
- 12: 0, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 0, # 'ส'
- 21: 0, # 'ห'
- 4: 0, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 0, # 'ั'
- 1: 0, # 'า'
- 36: 0, # 'ำ'
- 23: 0, # 'ิ'
- 13: 0, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 0, # 'เ'
- 28: 0, # 'แ'
- 41: 0, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 3, # '่'
- 7: 3, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 27: { # 'ื'
- 5: 0, # 'ก'
- 30: 0, # 'ข'
- 24: 0, # 'ค'
- 8: 0, # 'ง'
- 26: 0, # 'จ'
- 52: 0, # 'ฉ'
- 34: 1, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 1, # 'ด'
- 19: 0, # 'ต'
- 44: 0, # 'ถ'
- 14: 0, # 'ท'
- 48: 0, # 'ธ'
- 3: 2, # 'น'
- 17: 3, # 'บ'
- 25: 0, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 0, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 2, # 'ม'
- 16: 0, # 'ย'
- 2: 0, # 'ร'
- 61: 0, # 'ฤ'
- 15: 0, # 'ล'
- 12: 0, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 0, # 'ส'
- 21: 0, # 'ห'
- 4: 3, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 0, # 'ั'
- 1: 0, # 'า'
- 36: 0, # 'ำ'
- 23: 0, # 'ิ'
- 13: 0, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 0, # 'เ'
- 28: 0, # 'แ'
- 41: 0, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 3, # '่'
- 7: 3, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 32: { # 'ุ'
- 5: 3, # 'ก'
- 30: 2, # 'ข'
- 24: 3, # 'ค'
- 8: 3, # 'ง'
- 26: 0, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 2, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 1, # 'ฒ'
- 43: 3, # 'ณ'
- 20: 3, # 'ด'
- 19: 3, # 'ต'
- 44: 1, # 'ถ'
- 14: 2, # 'ท'
- 48: 1, # 'ธ'
- 3: 2, # 'น'
- 17: 2, # 'บ'
- 25: 2, # 'ป'
- 39: 2, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 1, # 'พ'
- 54: 0, # 'ฟ'
- 45: 1, # 'ภ'
- 9: 3, # 'ม'
- 16: 1, # 'ย'
- 2: 2, # 'ร'
- 61: 0, # 'ฤ'
- 15: 2, # 'ล'
- 12: 1, # 'ว'
- 42: 1, # 'ศ'
- 46: 2, # 'ษ'
- 18: 1, # 'ส'
- 21: 1, # 'ห'
- 4: 1, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 0, # 'ั'
- 1: 0, # 'า'
- 36: 0, # 'ำ'
- 23: 0, # 'ิ'
- 13: 0, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 1, # 'เ'
- 28: 0, # 'แ'
- 41: 1, # 'โ'
- 29: 0, # 'ใ'
- 33: 1, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 3, # '่'
- 7: 2, # '้'
- 38: 1, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 35: { # 'ู'
- 5: 3, # 'ก'
- 30: 0, # 'ข'
- 24: 0, # 'ค'
- 8: 2, # 'ง'
- 26: 1, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 2, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 1, # 'ณ'
- 20: 2, # 'ด'
- 19: 2, # 'ต'
- 44: 0, # 'ถ'
- 14: 1, # 'ท'
- 48: 0, # 'ธ'
- 3: 2, # 'น'
- 17: 0, # 'บ'
- 25: 3, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 0, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 2, # 'ม'
- 16: 0, # 'ย'
- 2: 1, # 'ร'
- 61: 0, # 'ฤ'
- 15: 3, # 'ล'
- 12: 1, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 0, # 'ส'
- 21: 0, # 'ห'
- 4: 0, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 0, # 'ั'
- 1: 0, # 'า'
- 36: 0, # 'ำ'
- 23: 0, # 'ิ'
- 13: 0, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 1, # 'เ'
- 28: 1, # 'แ'
- 41: 1, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 3, # '่'
- 7: 3, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 11: { # 'เ'
- 5: 3, # 'ก'
- 30: 3, # 'ข'
- 24: 3, # 'ค'
- 8: 2, # 'ง'
- 26: 3, # 'จ'
- 52: 3, # 'ฉ'
- 34: 3, # 'ช'
- 51: 2, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 1, # 'ณ'
- 20: 3, # 'ด'
- 19: 3, # 'ต'
- 44: 1, # 'ถ'
- 14: 3, # 'ท'
- 48: 1, # 'ธ'
- 3: 3, # 'น'
- 17: 3, # 'บ'
- 25: 3, # 'ป'
- 39: 2, # 'ผ'
- 62: 1, # 'ฝ'
- 31: 3, # 'พ'
- 54: 1, # 'ฟ'
- 45: 3, # 'ภ'
- 9: 3, # 'ม'
- 16: 2, # 'ย'
- 2: 3, # 'ร'
- 61: 0, # 'ฤ'
- 15: 3, # 'ล'
- 12: 3, # 'ว'
- 42: 2, # 'ศ'
- 46: 0, # 'ษ'
- 18: 3, # 'ส'
- 21: 3, # 'ห'
- 4: 3, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 0, # 'ั'
- 1: 0, # 'า'
- 36: 0, # 'ำ'
- 23: 0, # 'ิ'
- 13: 0, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 0, # 'เ'
- 28: 0, # 'แ'
- 41: 0, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 0, # '่'
- 7: 0, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 28: { # 'แ'
- 5: 3, # 'ก'
- 30: 2, # 'ข'
- 24: 2, # 'ค'
- 8: 1, # 'ง'
- 26: 2, # 'จ'
- 52: 0, # 'ฉ'
- 34: 1, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 2, # 'ด'
- 19: 3, # 'ต'
- 44: 2, # 'ถ'
- 14: 3, # 'ท'
- 48: 0, # 'ธ'
- 3: 3, # 'น'
- 17: 3, # 'บ'
- 25: 2, # 'ป'
- 39: 3, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 2, # 'พ'
- 54: 2, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 2, # 'ม'
- 16: 2, # 'ย'
- 2: 2, # 'ร'
- 61: 0, # 'ฤ'
- 15: 3, # 'ล'
- 12: 2, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 3, # 'ส'
- 21: 3, # 'ห'
- 4: 1, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 0, # 'ั'
- 1: 0, # 'า'
- 36: 0, # 'ำ'
- 23: 0, # 'ิ'
- 13: 0, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 0, # 'เ'
- 28: 0, # 'แ'
- 41: 0, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 0, # '่'
- 7: 0, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 41: { # 'โ'
- 5: 2, # 'ก'
- 30: 1, # 'ข'
- 24: 2, # 'ค'
- 8: 0, # 'ง'
- 26: 1, # 'จ'
- 52: 1, # 'ฉ'
- 34: 1, # 'ช'
- 51: 1, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 3, # 'ด'
- 19: 2, # 'ต'
- 44: 0, # 'ถ'
- 14: 2, # 'ท'
- 48: 0, # 'ธ'
- 3: 3, # 'น'
- 17: 1, # 'บ'
- 25: 3, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 1, # 'พ'
- 54: 1, # 'ฟ'
- 45: 1, # 'ภ'
- 9: 1, # 'ม'
- 16: 2, # 'ย'
- 2: 2, # 'ร'
- 61: 0, # 'ฤ'
- 15: 3, # 'ล'
- 12: 0, # 'ว'
- 42: 1, # 'ศ'
- 46: 0, # 'ษ'
- 18: 2, # 'ส'
- 21: 0, # 'ห'
- 4: 2, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 0, # 'ั'
- 1: 0, # 'า'
- 36: 0, # 'ำ'
- 23: 0, # 'ิ'
- 13: 0, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 0, # 'เ'
- 28: 0, # 'แ'
- 41: 0, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 0, # '่'
- 7: 0, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 29: { # 'ใ'
- 5: 2, # 'ก'
- 30: 0, # 'ข'
- 24: 1, # 'ค'
- 8: 0, # 'ง'
- 26: 3, # 'จ'
- 52: 0, # 'ฉ'
- 34: 3, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 3, # 'ด'
- 19: 1, # 'ต'
- 44: 0, # 'ถ'
- 14: 0, # 'ท'
- 48: 0, # 'ธ'
- 3: 3, # 'น'
- 17: 2, # 'บ'
- 25: 0, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 0, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 0, # 'ม'
- 16: 1, # 'ย'
- 2: 0, # 'ร'
- 61: 0, # 'ฤ'
- 15: 0, # 'ล'
- 12: 0, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 3, # 'ส'
- 21: 3, # 'ห'
- 4: 0, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 0, # 'ั'
- 1: 0, # 'า'
- 36: 0, # 'ำ'
- 23: 0, # 'ิ'
- 13: 0, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 0, # 'เ'
- 28: 0, # 'แ'
- 41: 0, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 0, # '่'
- 7: 0, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 33: { # 'ไ'
- 5: 1, # 'ก'
- 30: 2, # 'ข'
- 24: 0, # 'ค'
- 8: 0, # 'ง'
- 26: 0, # 'จ'
- 52: 0, # 'ฉ'
- 34: 1, # 'ช'
- 51: 1, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 3, # 'ด'
- 19: 1, # 'ต'
- 44: 0, # 'ถ'
- 14: 3, # 'ท'
- 48: 0, # 'ธ'
- 3: 0, # 'น'
- 17: 1, # 'บ'
- 25: 3, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 0, # 'พ'
- 54: 2, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 3, # 'ม'
- 16: 0, # 'ย'
- 2: 3, # 'ร'
- 61: 0, # 'ฤ'
- 15: 1, # 'ล'
- 12: 3, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 1, # 'ส'
- 21: 2, # 'ห'
- 4: 0, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 0, # 'ั'
- 1: 0, # 'า'
- 36: 0, # 'ำ'
- 23: 0, # 'ิ'
- 13: 0, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 0, # 'เ'
- 28: 0, # 'แ'
- 41: 0, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 0, # '่'
- 7: 0, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 50: { # 'ๆ'
- 5: 0, # 'ก'
- 30: 0, # 'ข'
- 24: 0, # 'ค'
- 8: 0, # 'ง'
- 26: 0, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 0, # 'ด'
- 19: 0, # 'ต'
- 44: 0, # 'ถ'
- 14: 0, # 'ท'
- 48: 0, # 'ธ'
- 3: 0, # 'น'
- 17: 0, # 'บ'
- 25: 0, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 0, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 0, # 'ม'
- 16: 0, # 'ย'
- 2: 0, # 'ร'
- 61: 0, # 'ฤ'
- 15: 0, # 'ล'
- 12: 0, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 0, # 'ส'
- 21: 0, # 'ห'
- 4: 0, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 0, # 'ั'
- 1: 0, # 'า'
- 36: 0, # 'ำ'
- 23: 0, # 'ิ'
- 13: 0, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 0, # 'เ'
- 28: 0, # 'แ'
- 41: 0, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 0, # '่'
- 7: 0, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 37: { # '็'
- 5: 2, # 'ก'
- 30: 1, # 'ข'
- 24: 2, # 'ค'
- 8: 2, # 'ง'
- 26: 3, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 1, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 1, # 'ด'
- 19: 2, # 'ต'
- 44: 0, # 'ถ'
- 14: 1, # 'ท'
- 48: 0, # 'ธ'
- 3: 3, # 'น'
- 17: 3, # 'บ'
- 25: 0, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 0, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 2, # 'ม'
- 16: 1, # 'ย'
- 2: 0, # 'ร'
- 61: 0, # 'ฤ'
- 15: 0, # 'ล'
- 12: 2, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 1, # 'ส'
- 21: 0, # 'ห'
- 4: 1, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 0, # 'ั'
- 1: 0, # 'า'
- 36: 0, # 'ำ'
- 23: 0, # 'ิ'
- 13: 0, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 1, # 'เ'
- 28: 0, # 'แ'
- 41: 0, # 'โ'
- 29: 0, # 'ใ'
- 33: 1, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 0, # '่'
- 7: 0, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 6: { # '่'
- 5: 2, # 'ก'
- 30: 1, # 'ข'
- 24: 2, # 'ค'
- 8: 3, # 'ง'
- 26: 2, # 'จ'
- 52: 0, # 'ฉ'
- 34: 1, # 'ช'
- 51: 1, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 1, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 1, # 'ด'
- 19: 2, # 'ต'
- 44: 1, # 'ถ'
- 14: 2, # 'ท'
- 48: 1, # 'ธ'
- 3: 3, # 'น'
- 17: 1, # 'บ'
- 25: 2, # 'ป'
- 39: 2, # 'ผ'
- 62: 1, # 'ฝ'
- 31: 1, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 3, # 'ม'
- 16: 3, # 'ย'
- 2: 2, # 'ร'
- 61: 0, # 'ฤ'
- 15: 2, # 'ล'
- 12: 3, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 2, # 'ส'
- 21: 1, # 'ห'
- 4: 3, # 'อ'
- 63: 0, # 'ฯ'
- 22: 1, # 'ะ'
- 10: 0, # 'ั'
- 1: 3, # 'า'
- 36: 2, # 'ำ'
- 23: 0, # 'ิ'
- 13: 0, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 3, # 'เ'
- 28: 2, # 'แ'
- 41: 1, # 'โ'
- 29: 2, # 'ใ'
- 33: 2, # 'ไ'
- 50: 1, # 'ๆ'
- 37: 0, # '็'
- 6: 0, # '่'
- 7: 0, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 7: { # '้'
- 5: 2, # 'ก'
- 30: 1, # 'ข'
- 24: 2, # 'ค'
- 8: 3, # 'ง'
- 26: 2, # 'จ'
- 52: 0, # 'ฉ'
- 34: 1, # 'ช'
- 51: 1, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 1, # 'ด'
- 19: 2, # 'ต'
- 44: 1, # 'ถ'
- 14: 2, # 'ท'
- 48: 0, # 'ธ'
- 3: 3, # 'น'
- 17: 2, # 'บ'
- 25: 2, # 'ป'
- 39: 2, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 1, # 'พ'
- 54: 1, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 3, # 'ม'
- 16: 2, # 'ย'
- 2: 2, # 'ร'
- 61: 0, # 'ฤ'
- 15: 1, # 'ล'
- 12: 3, # 'ว'
- 42: 1, # 'ศ'
- 46: 0, # 'ษ'
- 18: 2, # 'ส'
- 21: 2, # 'ห'
- 4: 3, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 0, # 'ั'
- 1: 3, # 'า'
- 36: 2, # 'ำ'
- 23: 0, # 'ิ'
- 13: 0, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 2, # 'เ'
- 28: 2, # 'แ'
- 41: 1, # 'โ'
- 29: 2, # 'ใ'
- 33: 2, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 0, # '่'
- 7: 0, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 38: { # '์'
- 5: 2, # 'ก'
- 30: 1, # 'ข'
- 24: 1, # 'ค'
- 8: 0, # 'ง'
- 26: 1, # 'จ'
- 52: 0, # 'ฉ'
- 34: 1, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 2, # 'ด'
- 19: 1, # 'ต'
- 44: 1, # 'ถ'
- 14: 1, # 'ท'
- 48: 0, # 'ธ'
- 3: 1, # 'น'
- 17: 1, # 'บ'
- 25: 1, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 1, # 'พ'
- 54: 1, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 2, # 'ม'
- 16: 0, # 'ย'
- 2: 1, # 'ร'
- 61: 1, # 'ฤ'
- 15: 1, # 'ล'
- 12: 1, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 1, # 'ส'
- 21: 1, # 'ห'
- 4: 2, # 'อ'
- 63: 1, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 0, # 'ั'
- 1: 0, # 'า'
- 36: 0, # 'ำ'
- 23: 0, # 'ิ'
- 13: 0, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 2, # 'เ'
- 28: 2, # 'แ'
- 41: 1, # 'โ'
- 29: 1, # 'ใ'
- 33: 1, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 0, # '่'
- 7: 0, # '้'
- 38: 0, # '์'
- 56: 0, # '๑'
- 59: 0, # '๒'
- 60: 0, # '๕'
- },
- 56: { # '๑'
- 5: 0, # 'ก'
- 30: 0, # 'ข'
- 24: 0, # 'ค'
- 8: 0, # 'ง'
- 26: 0, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 0, # 'ด'
- 19: 0, # 'ต'
- 44: 0, # 'ถ'
- 14: 0, # 'ท'
- 48: 0, # 'ธ'
- 3: 0, # 'น'
- 17: 0, # 'บ'
- 25: 0, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 0, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 0, # 'ม'
- 16: 0, # 'ย'
- 2: 0, # 'ร'
- 61: 0, # 'ฤ'
- 15: 0, # 'ล'
- 12: 0, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 0, # 'ส'
- 21: 0, # 'ห'
- 4: 0, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 0, # 'ั'
- 1: 0, # 'า'
- 36: 0, # 'ำ'
- 23: 0, # 'ิ'
- 13: 0, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 0, # 'เ'
- 28: 0, # 'แ'
- 41: 0, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 0, # '่'
- 7: 0, # '้'
- 38: 0, # '์'
- 56: 2, # '๑'
- 59: 1, # '๒'
- 60: 1, # '๕'
- },
- 59: { # '๒'
- 5: 0, # 'ก'
- 30: 0, # 'ข'
- 24: 0, # 'ค'
- 8: 0, # 'ง'
- 26: 0, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 0, # 'ด'
- 19: 0, # 'ต'
- 44: 0, # 'ถ'
- 14: 0, # 'ท'
- 48: 0, # 'ธ'
- 3: 0, # 'น'
- 17: 0, # 'บ'
- 25: 0, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 0, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 0, # 'ม'
- 16: 0, # 'ย'
- 2: 0, # 'ร'
- 61: 0, # 'ฤ'
- 15: 0, # 'ล'
- 12: 0, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 0, # 'ส'
- 21: 0, # 'ห'
- 4: 0, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 0, # 'ั'
- 1: 0, # 'า'
- 36: 0, # 'ำ'
- 23: 0, # 'ิ'
- 13: 0, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 0, # 'เ'
- 28: 0, # 'แ'
- 41: 0, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 0, # '่'
- 7: 0, # '้'
- 38: 0, # '์'
- 56: 1, # '๑'
- 59: 1, # '๒'
- 60: 3, # '๕'
- },
- 60: { # '๕'
- 5: 0, # 'ก'
- 30: 0, # 'ข'
- 24: 0, # 'ค'
- 8: 0, # 'ง'
- 26: 0, # 'จ'
- 52: 0, # 'ฉ'
- 34: 0, # 'ช'
- 51: 0, # 'ซ'
- 47: 0, # 'ญ'
- 58: 0, # 'ฎ'
- 57: 0, # 'ฏ'
- 49: 0, # 'ฐ'
- 53: 0, # 'ฑ'
- 55: 0, # 'ฒ'
- 43: 0, # 'ณ'
- 20: 0, # 'ด'
- 19: 0, # 'ต'
- 44: 0, # 'ถ'
- 14: 0, # 'ท'
- 48: 0, # 'ธ'
- 3: 0, # 'น'
- 17: 0, # 'บ'
- 25: 0, # 'ป'
- 39: 0, # 'ผ'
- 62: 0, # 'ฝ'
- 31: 0, # 'พ'
- 54: 0, # 'ฟ'
- 45: 0, # 'ภ'
- 9: 0, # 'ม'
- 16: 0, # 'ย'
- 2: 0, # 'ร'
- 61: 0, # 'ฤ'
- 15: 0, # 'ล'
- 12: 0, # 'ว'
- 42: 0, # 'ศ'
- 46: 0, # 'ษ'
- 18: 0, # 'ส'
- 21: 0, # 'ห'
- 4: 0, # 'อ'
- 63: 0, # 'ฯ'
- 22: 0, # 'ะ'
- 10: 0, # 'ั'
- 1: 0, # 'า'
- 36: 0, # 'ำ'
- 23: 0, # 'ิ'
- 13: 0, # 'ี'
- 40: 0, # 'ึ'
- 27: 0, # 'ื'
- 32: 0, # 'ุ'
- 35: 0, # 'ู'
- 11: 0, # 'เ'
- 28: 0, # 'แ'
- 41: 0, # 'โ'
- 29: 0, # 'ใ'
- 33: 0, # 'ไ'
- 50: 0, # 'ๆ'
- 37: 0, # '็'
- 6: 0, # '่'
- 7: 0, # '้'
- 38: 0, # '์'
- 56: 2, # '๑'
- 59: 1, # '๒'
- 60: 0, # '๕'
- },
-}
-
-# 255: Undefined characters that did not exist in training text
-# 254: Carriage Return/Line Feed
-# 253: symbols (punctuation) that do not belong to words
-# 252: 0 - 9
-# 251: Control characters
-
-# Character Mapping Table(s):
-TIS_620_THAI_CHAR_TO_ORDER = {
- 0: 255, # '\x00'
- 1: 255, # '\x01'
- 2: 255, # '\x02'
- 3: 255, # '\x03'
- 4: 255, # '\x04'
- 5: 255, # '\x05'
- 6: 255, # '\x06'
- 7: 255, # '\x07'
- 8: 255, # '\x08'
- 9: 255, # '\t'
- 10: 254, # '\n'
- 11: 255, # '\x0b'
- 12: 255, # '\x0c'
- 13: 254, # '\r'
- 14: 255, # '\x0e'
- 15: 255, # '\x0f'
- 16: 255, # '\x10'
- 17: 255, # '\x11'
- 18: 255, # '\x12'
- 19: 255, # '\x13'
- 20: 255, # '\x14'
- 21: 255, # '\x15'
- 22: 255, # '\x16'
- 23: 255, # '\x17'
- 24: 255, # '\x18'
- 25: 255, # '\x19'
- 26: 255, # '\x1a'
- 27: 255, # '\x1b'
- 28: 255, # '\x1c'
- 29: 255, # '\x1d'
- 30: 255, # '\x1e'
- 31: 255, # '\x1f'
- 32: 253, # ' '
- 33: 253, # '!'
- 34: 253, # '"'
- 35: 253, # '#'
- 36: 253, # '$'
- 37: 253, # '%'
- 38: 253, # '&'
- 39: 253, # "'"
- 40: 253, # '('
- 41: 253, # ')'
- 42: 253, # '*'
- 43: 253, # '+'
- 44: 253, # ','
- 45: 253, # '-'
- 46: 253, # '.'
- 47: 253, # '/'
- 48: 252, # '0'
- 49: 252, # '1'
- 50: 252, # '2'
- 51: 252, # '3'
- 52: 252, # '4'
- 53: 252, # '5'
- 54: 252, # '6'
- 55: 252, # '7'
- 56: 252, # '8'
- 57: 252, # '9'
- 58: 253, # ':'
- 59: 253, # ';'
- 60: 253, # '<'
- 61: 253, # '='
- 62: 253, # '>'
- 63: 253, # '?'
- 64: 253, # '@'
- 65: 182, # 'A'
- 66: 106, # 'B'
- 67: 107, # 'C'
- 68: 100, # 'D'
- 69: 183, # 'E'
- 70: 184, # 'F'
- 71: 185, # 'G'
- 72: 101, # 'H'
- 73: 94, # 'I'
- 74: 186, # 'J'
- 75: 187, # 'K'
- 76: 108, # 'L'
- 77: 109, # 'M'
- 78: 110, # 'N'
- 79: 111, # 'O'
- 80: 188, # 'P'
- 81: 189, # 'Q'
- 82: 190, # 'R'
- 83: 89, # 'S'
- 84: 95, # 'T'
- 85: 112, # 'U'
- 86: 113, # 'V'
- 87: 191, # 'W'
- 88: 192, # 'X'
- 89: 193, # 'Y'
- 90: 194, # 'Z'
- 91: 253, # '['
- 92: 253, # '\\'
- 93: 253, # ']'
- 94: 253, # '^'
- 95: 253, # '_'
- 96: 253, # '`'
- 97: 64, # 'a'
- 98: 72, # 'b'
- 99: 73, # 'c'
- 100: 114, # 'd'
- 101: 74, # 'e'
- 102: 115, # 'f'
- 103: 116, # 'g'
- 104: 102, # 'h'
- 105: 81, # 'i'
- 106: 201, # 'j'
- 107: 117, # 'k'
- 108: 90, # 'l'
- 109: 103, # 'm'
- 110: 78, # 'n'
- 111: 82, # 'o'
- 112: 96, # 'p'
- 113: 202, # 'q'
- 114: 91, # 'r'
- 115: 79, # 's'
- 116: 84, # 't'
- 117: 104, # 'u'
- 118: 105, # 'v'
- 119: 97, # 'w'
- 120: 98, # 'x'
- 121: 92, # 'y'
- 122: 203, # 'z'
- 123: 253, # '{'
- 124: 253, # '|'
- 125: 253, # '}'
- 126: 253, # '~'
- 127: 253, # '\x7f'
- 128: 209, # '\x80'
- 129: 210, # '\x81'
- 130: 211, # '\x82'
- 131: 212, # '\x83'
- 132: 213, # '\x84'
- 133: 88, # '\x85'
- 134: 214, # '\x86'
- 135: 215, # '\x87'
- 136: 216, # '\x88'
- 137: 217, # '\x89'
- 138: 218, # '\x8a'
- 139: 219, # '\x8b'
- 140: 220, # '\x8c'
- 141: 118, # '\x8d'
- 142: 221, # '\x8e'
- 143: 222, # '\x8f'
- 144: 223, # '\x90'
- 145: 224, # '\x91'
- 146: 99, # '\x92'
- 147: 85, # '\x93'
- 148: 83, # '\x94'
- 149: 225, # '\x95'
- 150: 226, # '\x96'
- 151: 227, # '\x97'
- 152: 228, # '\x98'
- 153: 229, # '\x99'
- 154: 230, # '\x9a'
- 155: 231, # '\x9b'
- 156: 232, # '\x9c'
- 157: 233, # '\x9d'
- 158: 234, # '\x9e'
- 159: 235, # '\x9f'
- 160: 236, # None
- 161: 5, # 'ก'
- 162: 30, # 'ข'
- 163: 237, # 'ฃ'
- 164: 24, # 'ค'
- 165: 238, # 'ฅ'
- 166: 75, # 'ฆ'
- 167: 8, # 'ง'
- 168: 26, # 'จ'
- 169: 52, # 'ฉ'
- 170: 34, # 'ช'
- 171: 51, # 'ซ'
- 172: 119, # 'ฌ'
- 173: 47, # 'ญ'
- 174: 58, # 'ฎ'
- 175: 57, # 'ฏ'
- 176: 49, # 'ฐ'
- 177: 53, # 'ฑ'
- 178: 55, # 'ฒ'
- 179: 43, # 'ณ'
- 180: 20, # 'ด'
- 181: 19, # 'ต'
- 182: 44, # 'ถ'
- 183: 14, # 'ท'
- 184: 48, # 'ธ'
- 185: 3, # 'น'
- 186: 17, # 'บ'
- 187: 25, # 'ป'
- 188: 39, # 'ผ'
- 189: 62, # 'ฝ'
- 190: 31, # 'พ'
- 191: 54, # 'ฟ'
- 192: 45, # 'ภ'
- 193: 9, # 'ม'
- 194: 16, # 'ย'
- 195: 2, # 'ร'
- 196: 61, # 'ฤ'
- 197: 15, # 'ล'
- 198: 239, # 'ฦ'
- 199: 12, # 'ว'
- 200: 42, # 'ศ'
- 201: 46, # 'ษ'
- 202: 18, # 'ส'
- 203: 21, # 'ห'
- 204: 76, # 'ฬ'
- 205: 4, # 'อ'
- 206: 66, # 'ฮ'
- 207: 63, # 'ฯ'
- 208: 22, # 'ะ'
- 209: 10, # 'ั'
- 210: 1, # 'า'
- 211: 36, # 'ำ'
- 212: 23, # 'ิ'
- 213: 13, # 'ี'
- 214: 40, # 'ึ'
- 215: 27, # 'ื'
- 216: 32, # 'ุ'
- 217: 35, # 'ู'
- 218: 86, # 'ฺ'
- 219: 240, # None
- 220: 241, # None
- 221: 242, # None
- 222: 243, # None
- 223: 244, # '฿'
- 224: 11, # 'เ'
- 225: 28, # 'แ'
- 226: 41, # 'โ'
- 227: 29, # 'ใ'
- 228: 33, # 'ไ'
- 229: 245, # 'ๅ'
- 230: 50, # 'ๆ'
- 231: 37, # '็'
- 232: 6, # '่'
- 233: 7, # '้'
- 234: 67, # '๊'
- 235: 77, # '๋'
- 236: 38, # '์'
- 237: 93, # 'ํ'
- 238: 246, # '๎'
- 239: 247, # '๏'
- 240: 68, # '๐'
- 241: 56, # '๑'
- 242: 59, # '๒'
- 243: 65, # '๓'
- 244: 69, # '๔'
- 245: 60, # '๕'
- 246: 70, # '๖'
- 247: 80, # '๗'
- 248: 71, # '๘'
- 249: 87, # '๙'
- 250: 248, # '๚'
- 251: 249, # '๛'
- 252: 250, # None
- 253: 251, # None
- 254: 252, # None
- 255: 253, # None
-}
-
-TIS_620_THAI_MODEL = SingleByteCharSetModel(
- charset_name="TIS-620",
- language="Thai",
- char_to_order_map=TIS_620_THAI_CHAR_TO_ORDER,
- language_model=THAI_LANG_MODEL,
- typical_positive_ratio=0.926386,
- keep_ascii_letters=False,
- alphabet="กขฃคฅฆงจฉชซฌญฎฏฐฑฒณดตถทธนบปผฝพฟภมยรฤลฦวศษสหฬอฮฯะัาำิีึืฺุู฿เแโใไๅๆ็่้๊๋์ํ๎๏๐๑๒๓๔๕๖๗๘๙๚๛",
-)
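The two tables above are the raw ingredients of chardet-style single-byte charset detection: TIS_620_THAI_CHAR_TO_ORDER ranks every TIS-620 byte by how often it appeared in the training text (with the 251-255 sentinel codes documented above reserved for control characters, digits, symbols and line breaks), and THAI_LANG_MODEL rates how often the top-ranked characters follow one another, from 0 (never observed) to 3 (very common). A minimal sketch of how a prober might combine them follows; the helper name and the 64-order sample size are illustrative assumptions rather than chardet's actual API, while the table names and the 0.926386 ratio come from the file itself.

    SAMPLE_SIZE = 64  # assumption: the bigram model only covers the top orders

    def thai_positive_ratio(data: bytes) -> float:
        """Fraction of modeled bigrams in `data` rated 3 ('very common')."""
        total = positive = 0
        prev = 255  # start on an undefined order so the first byte never pairs
        for byte in data:
            order = TIS_620_THAI_CHAR_TO_ORDER.get(byte, 255)
            if prev < SAMPLE_SIZE and order < SAMPLE_SIZE:
                total += 1
                if THAI_LANG_MODEL.get(prev, {}).get(order, 0) == 3:
                    positive += 1
            prev = order
        return positive / total if total else 0.0

Genuine TIS-620 Thai text should score close to the model's typical_positive_ratio (about 0.93); Latin text or binary noise scores far lower, which is what lets a detector prefer this model over competing charsets.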
diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_distutils/_msvccompiler.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_distutils/_msvccompiler.py
deleted file mode 100644
index 3b5a8179bd69f1e27480224791ea7cc4a55802b0..0000000000000000000000000000000000000000
--- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_distutils/_msvccompiler.py
+++ /dev/null
@@ -1,591 +0,0 @@
-"""distutils._msvccompiler
-
-Contains MSVCCompiler, an implementation of the abstract CCompiler class
-for Microsoft Visual Studio 2015.
-
-The module is compatible with VS 2015 and later. You can find legacy support
-for older versions in distutils.msvc9compiler and distutils.msvccompiler.
-"""
-
-# Written by Perry Stoll
-# hacked by Robin Becker and Thomas Heller to do a better job of
-# finding DevStudio (through the registry)
-# ported to VS 2005 and VS 2008 by Christian Heimes
-# ported to VS 2015 by Steve Dower
-
-import os
-import subprocess
-import contextlib
-import warnings
-import unittest.mock
-
-with contextlib.suppress(ImportError):
- import winreg
-
-from distutils.errors import (
- DistutilsExecError,
- DistutilsPlatformError,
- CompileError,
- LibError,
- LinkError,
-)
-from distutils.ccompiler import CCompiler, gen_lib_options
-from distutils import log
-from distutils.util import get_platform
-
-from itertools import count
-
-
-def _find_vc2015():
- try:
- key = winreg.OpenKeyEx(
- winreg.HKEY_LOCAL_MACHINE,
- r"Software\Microsoft\VisualStudio\SxS\VC7",
- access=winreg.KEY_READ | winreg.KEY_WOW64_32KEY,
- )
- except OSError:
- log.debug("Visual C++ is not registered")
- return None, None
-
- best_version = 0
- best_dir = None
- with key:
- for i in count():
- try:
- v, vc_dir, vt = winreg.EnumValue(key, i)
- except OSError:
- break
- if v and vt == winreg.REG_SZ and os.path.isdir(vc_dir):
- try:
- version = int(float(v))
- except (ValueError, TypeError):
- continue
- if version >= 14 and version > best_version:
- best_version, best_dir = version, vc_dir
- return best_version, best_dir
-
-
-def _find_vc2017():
- """Returns "15, path" based on the result of invoking vswhere.exe
- If no install is found, returns "None, None"
-
- The version is returned to avoid unnecessarily changing the function
- result. It may be ignored when the path is not None.
-
- If vswhere.exe is not available, by definition, VS 2017 is not
- installed.
- """
- root = os.environ.get("ProgramFiles(x86)") or os.environ.get("ProgramFiles")
- if not root:
- return None, None
-
- try:
- path = subprocess.check_output(
- [
- os.path.join(
- root, "Microsoft Visual Studio", "Installer", "vswhere.exe"
- ),
- "-latest",
- "-prerelease",
- "-requires",
- "Microsoft.VisualStudio.Component.VC.Tools.x86.x64",
- "-property",
- "installationPath",
- "-products",
- "*",
- ],
- encoding="mbcs",
- errors="strict",
- ).strip()
- except (subprocess.CalledProcessError, OSError, UnicodeDecodeError):
- return None, None
-
- path = os.path.join(path, "VC", "Auxiliary", "Build")
- if os.path.isdir(path):
- return 15, path
-
- return None, None
-
-
-PLAT_SPEC_TO_RUNTIME = {
- 'x86': 'x86',
- 'x86_amd64': 'x64',
- 'x86_arm': 'arm',
- 'x86_arm64': 'arm64',
-}
-
-
-def _find_vcvarsall(plat_spec):
- # bpo-38597: Removed vcruntime return value
- _, best_dir = _find_vc2017()
-
- if not best_dir:
- best_version, best_dir = _find_vc2015()
-
- if not best_dir:
- log.debug("No suitable Visual C++ version found")
- return None, None
-
- vcvarsall = os.path.join(best_dir, "vcvarsall.bat")
- if not os.path.isfile(vcvarsall):
- log.debug("%s cannot be found", vcvarsall)
- return None, None
-
- return vcvarsall, None
-
-
-def _get_vc_env(plat_spec):
- if os.getenv("DISTUTILS_USE_SDK"):
- return {key.lower(): value for key, value in os.environ.items()}
-
- vcvarsall, _ = _find_vcvarsall(plat_spec)
- if not vcvarsall:
- raise DistutilsPlatformError("Unable to find vcvarsall.bat")
-
- try:
- out = subprocess.check_output(
- 'cmd /u /c "{}" {} && set'.format(vcvarsall, plat_spec),
- stderr=subprocess.STDOUT,
- ).decode('utf-16le', errors='replace')
- except subprocess.CalledProcessError as exc:
- log.error(exc.output)
- raise DistutilsPlatformError("Error executing {}".format(exc.cmd))
-
- env = {
- key.lower(): value
- for key, _, value in (line.partition('=') for line in out.splitlines())
- if key and value
- }
-
- return env
-
-
-def _find_exe(exe, paths=None):
- """Return path to an MSVC executable program.
-
-    Tries to find the program in several places: first, in the given
-    search paths (the MSVC tool directories taken from the vcvarsall
-    environment); next, the directories in the PATH environment
-    variable.  If any of those work, return an absolute path that is
-    known to exist.  If none of them work, just return the original
-    program name, 'exe'.
- """
- if not paths:
- paths = os.getenv('path').split(os.pathsep)
- for p in paths:
- fn = os.path.join(os.path.abspath(p), exe)
- if os.path.isfile(fn):
- return fn
- return exe
-
-
-# A map keyed by get_platform() return values to values accepted by
-# 'vcvarsall.bat'. Always cross-compile from x86 to work with the
-# lighter-weight MSVC installs that do not include native 64-bit tools.
-PLAT_TO_VCVARS = {
- 'win32': 'x86',
- 'win-amd64': 'x86_amd64',
- 'win-arm32': 'x86_arm',
- 'win-arm64': 'x86_arm64',
-}
-
-
-class MSVCCompiler(CCompiler):
- """Concrete class that implements an interface to Microsoft Visual C++,
- as defined by the CCompiler abstract class."""
-
- compiler_type = 'msvc'
-
- # Just set this so CCompiler's constructor doesn't barf. We currently
- # don't use the 'set_executables()' bureaucracy provided by CCompiler,
- # as it really isn't necessary for this sort of single-compiler class.
- # Would be nice to have a consistent interface with UnixCCompiler,
- # though, so it's worth thinking about.
- executables = {}
-
- # Private class data (need to distinguish C from C++ source for compiler)
- _c_extensions = ['.c']
- _cpp_extensions = ['.cc', '.cpp', '.cxx']
- _rc_extensions = ['.rc']
- _mc_extensions = ['.mc']
-
- # Needed for the filename generation methods provided by the
- # base class, CCompiler.
- src_extensions = _c_extensions + _cpp_extensions + _rc_extensions + _mc_extensions
- res_extension = '.res'
- obj_extension = '.obj'
- static_lib_extension = '.lib'
- shared_lib_extension = '.dll'
- static_lib_format = shared_lib_format = '%s%s'
- exe_extension = '.exe'
-
- def __init__(self, verbose=0, dry_run=0, force=0):
- super().__init__(verbose, dry_run, force)
- # target platform (.plat_name is consistent with 'bdist')
- self.plat_name = None
- self.initialized = False
-
- def initialize(self, plat_name=None):
- # multi-init means we would need to check platform same each time...
- assert not self.initialized, "don't init multiple times"
- if plat_name is None:
- plat_name = get_platform()
- # sanity check for platforms to prevent obscure errors later.
- if plat_name not in PLAT_TO_VCVARS:
- raise DistutilsPlatformError(
- "--plat-name must be one of {}".format(tuple(PLAT_TO_VCVARS))
- )
-
- # Get the vcvarsall.bat spec for the requested platform.
- plat_spec = PLAT_TO_VCVARS[plat_name]
-
- vc_env = _get_vc_env(plat_spec)
- if not vc_env:
- raise DistutilsPlatformError(
- "Unable to find a compatible " "Visual Studio installation."
- )
-
- self._paths = vc_env.get('path', '')
- paths = self._paths.split(os.pathsep)
- self.cc = _find_exe("cl.exe", paths)
- self.linker = _find_exe("link.exe", paths)
- self.lib = _find_exe("lib.exe", paths)
- self.rc = _find_exe("rc.exe", paths) # resource compiler
- self.mc = _find_exe("mc.exe", paths) # message compiler
-        self.mt = _find_exe("mt.exe", paths)  # manifest tool
-
- for dir in vc_env.get('include', '').split(os.pathsep):
- if dir:
- self.add_include_dir(dir.rstrip(os.sep))
-
- for dir in vc_env.get('lib', '').split(os.pathsep):
- if dir:
- self.add_library_dir(dir.rstrip(os.sep))
-
- self.preprocess_options = None
- # bpo-38597: Always compile with dynamic linking
- # Future releases of Python 3.x will include all past
- # versions of vcruntime*.dll for compatibility.
- self.compile_options = ['/nologo', '/O2', '/W3', '/GL', '/DNDEBUG', '/MD']
-
- self.compile_options_debug = [
- '/nologo',
- '/Od',
- '/MDd',
- '/Zi',
- '/W3',
- '/D_DEBUG',
- ]
-
- ldflags = ['/nologo', '/INCREMENTAL:NO', '/LTCG']
-
- ldflags_debug = ['/nologo', '/INCREMENTAL:NO', '/LTCG', '/DEBUG:FULL']
-
- self.ldflags_exe = [*ldflags, '/MANIFEST:EMBED,ID=1']
- self.ldflags_exe_debug = [*ldflags_debug, '/MANIFEST:EMBED,ID=1']
- self.ldflags_shared = [
- *ldflags,
- '/DLL',
- '/MANIFEST:EMBED,ID=2',
- '/MANIFESTUAC:NO',
- ]
- self.ldflags_shared_debug = [
- *ldflags_debug,
- '/DLL',
- '/MANIFEST:EMBED,ID=2',
- '/MANIFESTUAC:NO',
- ]
- self.ldflags_static = [*ldflags]
- self.ldflags_static_debug = [*ldflags_debug]
-
- self._ldflags = {
- (CCompiler.EXECUTABLE, None): self.ldflags_exe,
- (CCompiler.EXECUTABLE, False): self.ldflags_exe,
- (CCompiler.EXECUTABLE, True): self.ldflags_exe_debug,
- (CCompiler.SHARED_OBJECT, None): self.ldflags_shared,
- (CCompiler.SHARED_OBJECT, False): self.ldflags_shared,
- (CCompiler.SHARED_OBJECT, True): self.ldflags_shared_debug,
- (CCompiler.SHARED_LIBRARY, None): self.ldflags_static,
- (CCompiler.SHARED_LIBRARY, False): self.ldflags_static,
- (CCompiler.SHARED_LIBRARY, True): self.ldflags_static_debug,
- }
-
- self.initialized = True
-
- # -- Worker methods ------------------------------------------------
-
- def object_filenames(self, source_filenames, strip_dir=0, output_dir=''):
- ext_map = {
- **{ext: self.obj_extension for ext in self.src_extensions},
- **{
- ext: self.res_extension
- for ext in self._rc_extensions + self._mc_extensions
- },
- }
-
- output_dir = output_dir or ''
-
- def make_out_path(p):
- base, ext = os.path.splitext(p)
- if strip_dir:
- base = os.path.basename(base)
- else:
- _, base = os.path.splitdrive(base)
- if base.startswith((os.path.sep, os.path.altsep)):
- base = base[1:]
- try:
- # XXX: This may produce absurdly long paths. We should check
- # the length of the result and trim base until we fit within
- # 260 characters.
- return os.path.join(output_dir, base + ext_map[ext])
- except LookupError:
- # Better to raise an exception instead of silently continuing
- # and later complain about sources and targets having
- # different lengths
- raise CompileError("Don't know how to compile {}".format(p))
-
- return list(map(make_out_path, source_filenames))
-
- def compile(
- self,
- sources,
- output_dir=None,
- macros=None,
- include_dirs=None,
- debug=0,
- extra_preargs=None,
- extra_postargs=None,
- depends=None,
- ):
-
- if not self.initialized:
- self.initialize()
- compile_info = self._setup_compile(
- output_dir, macros, include_dirs, sources, depends, extra_postargs
- )
- macros, objects, extra_postargs, pp_opts, build = compile_info
-
- compile_opts = extra_preargs or []
- compile_opts.append('/c')
- if debug:
- compile_opts.extend(self.compile_options_debug)
- else:
- compile_opts.extend(self.compile_options)
-
- add_cpp_opts = False
-
- for obj in objects:
- try:
- src, ext = build[obj]
- except KeyError:
- continue
- if debug:
- # pass the full pathname to MSVC in debug mode,
- # this allows the debugger to find the source file
- # without asking the user to browse for it
- src = os.path.abspath(src)
-
- if ext in self._c_extensions:
- input_opt = "/Tc" + src
- elif ext in self._cpp_extensions:
- input_opt = "/Tp" + src
- add_cpp_opts = True
- elif ext in self._rc_extensions:
- # compile .RC to .RES file
- input_opt = src
- output_opt = "/fo" + obj
- try:
- self.spawn([self.rc] + pp_opts + [output_opt, input_opt])
- except DistutilsExecError as msg:
- raise CompileError(msg)
- continue
- elif ext in self._mc_extensions:
- # Compile .MC to .RC file to .RES file.
- # * '-h dir' specifies the directory for the
- # generated include file
- # * '-r dir' specifies the target directory of the
- # generated RC file and the binary message resource
- # it includes
- #
- # For now (since there are no options to change this),
- # we use the source-directory for the include file and
- # the build directory for the RC file and message
- # resources. This works at least for win32all.
- h_dir = os.path.dirname(src)
- rc_dir = os.path.dirname(obj)
- try:
- # first compile .MC to .RC and .H file
- self.spawn([self.mc, '-h', h_dir, '-r', rc_dir, src])
- base, _ = os.path.splitext(os.path.basename(src))
- rc_file = os.path.join(rc_dir, base + '.rc')
- # then compile .RC to .RES file
- self.spawn([self.rc, "/fo" + obj, rc_file])
-
- except DistutilsExecError as msg:
- raise CompileError(msg)
- continue
- else:
- # how to handle this file?
- raise CompileError(
- "Don't know how to compile {} to {}".format(src, obj)
- )
-
- args = [self.cc] + compile_opts + pp_opts
- if add_cpp_opts:
- args.append('/EHsc')
- args.append(input_opt)
- args.append("/Fo" + obj)
- args.extend(extra_postargs)
-
- try:
- self.spawn(args)
- except DistutilsExecError as msg:
- raise CompileError(msg)
-
- return objects
-
- def create_static_lib(
- self, objects, output_libname, output_dir=None, debug=0, target_lang=None
- ):
-
- if not self.initialized:
- self.initialize()
- objects, output_dir = self._fix_object_args(objects, output_dir)
- output_filename = self.library_filename(output_libname, output_dir=output_dir)
-
- if self._need_link(objects, output_filename):
- lib_args = objects + ['/OUT:' + output_filename]
- if debug:
- pass # XXX what goes here?
- try:
- log.debug('Executing "%s" %s', self.lib, ' '.join(lib_args))
- self.spawn([self.lib] + lib_args)
- except DistutilsExecError as msg:
- raise LibError(msg)
- else:
- log.debug("skipping %s (up-to-date)", output_filename)
-
- def link(
- self,
- target_desc,
- objects,
- output_filename,
- output_dir=None,
- libraries=None,
- library_dirs=None,
- runtime_library_dirs=None,
- export_symbols=None,
- debug=0,
- extra_preargs=None,
- extra_postargs=None,
- build_temp=None,
- target_lang=None,
- ):
-
- if not self.initialized:
- self.initialize()
- objects, output_dir = self._fix_object_args(objects, output_dir)
- fixed_args = self._fix_lib_args(libraries, library_dirs, runtime_library_dirs)
- libraries, library_dirs, runtime_library_dirs = fixed_args
-
- if runtime_library_dirs:
- self.warn(
- "I don't know what to do with 'runtime_library_dirs': "
- + str(runtime_library_dirs)
- )
-
- lib_opts = gen_lib_options(self, library_dirs, runtime_library_dirs, libraries)
- if output_dir is not None:
- output_filename = os.path.join(output_dir, output_filename)
-
- if self._need_link(objects, output_filename):
- ldflags = self._ldflags[target_desc, debug]
-
- export_opts = ["/EXPORT:" + sym for sym in (export_symbols or [])]
-
- ld_args = (
- ldflags + lib_opts + export_opts + objects + ['/OUT:' + output_filename]
- )
-
- # The MSVC linker generates .lib and .exp files, which cannot be
- # suppressed by any linker switches. The .lib files may even be
- # needed! Make sure they are generated in the temporary build
- # directory. Since they have different names for debug and release
- # builds, they can go into the same directory.
- build_temp = os.path.dirname(objects[0])
- if export_symbols is not None:
- (dll_name, dll_ext) = os.path.splitext(
- os.path.basename(output_filename)
- )
- implib_file = os.path.join(build_temp, self.library_filename(dll_name))
- ld_args.append('/IMPLIB:' + implib_file)
-
- if extra_preargs:
- ld_args[:0] = extra_preargs
- if extra_postargs:
- ld_args.extend(extra_postargs)
-
- output_dir = os.path.dirname(os.path.abspath(output_filename))
- self.mkpath(output_dir)
- try:
- log.debug('Executing "%s" %s', self.linker, ' '.join(ld_args))
- self.spawn([self.linker] + ld_args)
- except DistutilsExecError as msg:
- raise LinkError(msg)
- else:
- log.debug("skipping %s (up-to-date)", output_filename)
-
- def spawn(self, cmd):
- env = dict(os.environ, PATH=self._paths)
- with self._fallback_spawn(cmd, env) as fallback:
- return super().spawn(cmd, env=env)
- return fallback.value
-
- @contextlib.contextmanager
- def _fallback_spawn(self, cmd, env):
- """
- Discovered in pypa/distutils#15, some tools monkeypatch the compiler,
- so the 'env' kwarg causes a TypeError. Detect this condition and
- restore the legacy, unsafe behavior.
- """
- bag = type('Bag', (), {})()
- try:
- yield bag
- except TypeError as exc:
- if "unexpected keyword argument 'env'" not in str(exc):
- raise
- else:
- return
- warnings.warn("Fallback spawn triggered. Please update distutils monkeypatch.")
- with unittest.mock.patch.dict('os.environ', env):
- bag.value = super().spawn(cmd)
-
- # -- Miscellaneous methods -----------------------------------------
-    # These are all used by the 'gen_lib_options()' function in
- # ccompiler.py.
-
- def library_dir_option(self, dir):
- return "/LIBPATH:" + dir
-
- def runtime_library_dir_option(self, dir):
- raise DistutilsPlatformError(
- "don't know how to set runtime library search path for MSVC"
- )
-
- def library_option(self, lib):
- return self.library_filename(lib)
-
- def find_library_file(self, dirs, lib, debug=0):
- # Prefer a debugging library if found (and requested), but deal
- # with it if we don't have one.
- if debug:
- try_names = [lib + "_d", lib]
- else:
- try_names = [lib]
- for dir in dirs:
- for name in try_names:
- libfile = os.path.join(dir, self.library_filename(name))
- if os.path.isfile(libfile):
- return libfile
- else:
- # Oops, didn't find it in *any* of 'dirs'
- return None
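Read bottom-up, the deleted module above is a three-stage pipeline: _find_vcvarsall locates vcvarsall.bat through vswhere.exe (VS 2017 and later) or the registry (VS 2015), _get_vc_env runs it for a platform spec from PLAT_TO_VCVARS and captures the environment it prints, and MSVCCompiler.initialize uses that environment to find cl.exe, link.exe and the other tools. A hypothetical end-to-end sketch follows (Windows with Visual Studio installed; the hello.c source file is an assumption):

    from distutils._msvccompiler import MSVCCompiler, PLAT_TO_VCVARS, _get_vc_env

    # Resolve 'win-amd64' -> 'x86_amd64', then run
    # 'cmd /u /c "vcvarsall.bat" x86_amd64 && set' and parse its output.
    env = _get_vc_env(PLAT_TO_VCVARS["win-amd64"])
    print(env["path"].split(";")[0])  # a tool directory that should hold cl.exe

    # Or let the compiler class drive the whole pipeline itself.
    cc = MSVCCompiler()
    cc.initialize(plat_name="win-amd64")
    objects = cc.compile(["hello.c"], output_dir="build")  # ['build\\hello.obj']
    cc.link_executable(objects, "hello", output_dir="build")

Note that initialize() always cross-compiles from the x86 hosted toolchain (PLAT_TO_VCVARS maps win-amd64 to x86_amd64, not amd64), matching the comment above about lighter-weight MSVC installs that lack native 64-bit tools.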
diff --git a/spaces/tomandandy/MusicGen3/audiocraft/quantization/__init__.py b/spaces/tomandandy/MusicGen3/audiocraft/quantization/__init__.py
deleted file mode 100644
index 836d6eb518978480c6b95d6f29ce4f84a9428793..0000000000000000000000000000000000000000
--- a/spaces/tomandandy/MusicGen3/audiocraft/quantization/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-# flake8: noqa
-from .vq import ResidualVectorQuantizer
-from .base import BaseQuantizer, DummyQuantizer, QuantizedResult
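The deleted __init__ is a plain re-export module: it makes ResidualVectorQuantizer and the base classes importable as audiocraft.quantization without callers knowing the internal file layout, and the flake8: noqa marker silences the unused-import warnings that pattern would otherwise trigger. As a hypothetical round trip through the re-exported class, with the dimension/n_q keyword names, the (x, frame_rate) call signature, and the QuantizedResult fields all being assumptions about this audiocraft revision rather than verified facts:

    import torch
    from audiocraft.quantization import ResidualVectorQuantizer

    rvq = ResidualVectorQuantizer(dimension=128, n_q=4)  # assumed kwargs
    x = torch.randn(2, 128, 50)      # (batch, latent dim, frames)
    out = rvq(x, frame_rate=50)      # assumed to return a QuantizedResult
    print(out.codes.shape)           # one discrete code stream per codebook
    print(out.penalty)               # commitment loss term used in training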
diff --git a/spaces/tomaseo2022/Youtube-Mp3/README.md b/spaces/tomaseo2022/Youtube-Mp3/README.md
deleted file mode 100644
index 6cf8de002c7d9a9216a556d8e87696734f2500a7..0000000000000000000000000000000000000000
--- a/spaces/tomaseo2022/Youtube-Mp3/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Youtube Mp3
-emoji: 📚
-colorFrom: blue
-colorTo: red
-sdk: gradio
-sdk_version: 3.16.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/yolact/yolact_r101_1x8_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/yolact/yolact_r101_1x8_coco.py
deleted file mode 100644
index 2864b590b5538b735a16df3b2690b29a95384df8..0000000000000000000000000000000000000000
--- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/yolact/yolact_r101_1x8_coco.py
+++ /dev/null
@@ -1,3 +0,0 @@
-_base_ = './yolact_r50_1x8_coco.py'
-
-model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101))
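Despite being three lines long, this deleted config is a complete model definition: mmdetection first merges in the _base_ file, then the model dict here overrides only the pretrained checkpoint and the backbone depth, leaving the rest of the R-50 recipe untouched. One way to inspect the merged result (a sketch assuming an mmdetection checkout with mmcv installed; the path is relative to the repo root):

    from mmcv import Config

    cfg = Config.fromfile("configs/yolact/yolact_r101_1x8_coco.py")
    print(cfg.model.backbone.depth)  # 101, the override applied in this file
    print(cfg.model.pretrained)      # 'torchvision://resnet101'
    print(cfg.data.samples_per_gpu)  # assumed inherited from the r50 base (1x8)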
diff --git a/spaces/triple-t/ttt-space/static/_app/immutable/start-56b0bce0.js b/spaces/triple-t/ttt-space/static/_app/immutable/start-56b0bce0.js
deleted file mode 100644
index 207fbaf9ea1ee3478088a874a95d9991d7d534dc..0000000000000000000000000000000000000000
--- a/spaces/triple-t/ttt-space/static/_app/immutable/start-56b0bce0.js
+++ /dev/null
@@ -1 +0,0 @@
-import{S as at,i as rt,s as ot,a as st,e as V,c as it,b as M,g as ue,t as B,d as de,f as F,h as G,j as lt,o as Oe,k as ct,l as ft,m as ut,n as be,p as C,q as dt,r as pt,u as ht,v as H,w as W,x as Ne,y as Y,z as X,A as le}from"./chunks/index-b346583a.js";import{S as tt,I as q,g as ze,f as He,a as ve,b as ce,s as K,i as We,c as fe,P as Ye,d as mt,e as _t,h as gt}from"./chunks/singletons-50e0fde7.js";function yt(a,e){return a==="/"||e==="ignore"?a:e==="never"?a.endsWith("/")?a.slice(0,-1):a:e==="always"&&!a.endsWith("/")?a+"/":a}function wt(a){return a.split("%25").map(decodeURI).join("%25")}function bt(a){for(const e in a)a[e]=decodeURIComponent(a[e]);return a}const vt=["href","pathname","search","searchParams","toString","toJSON"];function Et(a,e){const n=new URL(a);for(const i of vt){let o=n[i];Object.defineProperty(n,i,{get(){return e(),o},enumerable:!0,configurable:!0})}return kt(n),n}function kt(a){Object.defineProperty(a,"hash",{get(){throw new Error("Cannot access event.url.hash. Consider using `$page.url.hash` inside a component instead")}})}const St="/__data.json";function Rt(a){return a.replace(/\/$/,"")+St}function Lt(a){let e=5381;if(typeof a=="string"){let n=a.length;for(;n;)e=e*33^a.charCodeAt(--n)}else if(ArrayBuffer.isView(a)){const n=new Uint8Array(a.buffer,a.byteOffset,a.byteLength);let i=n.length;for(;i;)e=e*33^n[--i]}else throw new TypeError("value must be a string or TypedArray");return(e>>>0).toString(36)}const pe=window.fetch;window.fetch=(a,e)=>((a instanceof Request?a.method:(e==null?void 0:e.method)||"GET")!=="GET"&&ee.delete(Ue(a)),pe(a,e));const ee=new Map;function Ot(a,e){const n=Ue(a,e),i=document.querySelector(n);if(i!=null&&i.textContent){const{body:o,...u}=JSON.parse(i.textContent),t=i.getAttribute("data-ttl");return t&&ee.set(n,{body:o,init:u,ttl:1e3*Number(t)}),Promise.resolve(new Response(o,u))}return pe(a,e)}function It(a,e,n){if(ee.size>0){const i=Ue(a,n),o=ee.get(i);if(o){if(performance.now(){const o=/^\[\.\.\.(\w+)(?:=(\w+))?\]$/.exec(i);if(o)return e.push({name:o[1],matcher:o[2],optional:!1,rest:!0,chained:!0}),"(?:/(.*))?";const u=/^\[\[(\w+)(?:=(\w+))?\]\]$/.exec(i);if(u)return e.push({name:u[1],matcher:u[2],optional:!0,rest:!1,chained:!0}),"(?:/([^/]+))?";if(!i)return;const t=i.split(/\[(.+?)\](?!\])/);return"/"+t.map((_,p)=>{if(p%2){if(_.startsWith("x+"))return Ee(String.fromCharCode(parseInt(_.slice(2),16)));if(_.startsWith("u+"))return Ee(String.fromCharCode(..._.slice(2).split("-").map(P=>parseInt(P,16))));const g=At.exec(_);if(!g)throw new Error(`Invalid param: ${_}. 
Params and matcher names can only have underscores and alphanumeric characters.`);const[,w,R,j,T]=g;return e.push({name:j,matcher:T,optional:!!w,rest:!!R,chained:R?p===1&&t[0]==="":!1}),R?"(.*?)":w?"([^/]*)?":"([^/]+?)"}return Ee(_)}).join("")}).join("")}/?$`),params:e}}function Nt(a){return!/^\([^)]+\)$/.test(a)}function Ut(a){return a.slice(1).split("/").filter(Nt)}function $t(a,e,n){const i={},o=a.slice(1);let u="";for(let t=0;t=t;)o[p]=o[p-1],p-=1;continue}return}i[f.name]=_}}if(!u)return i}function Ee(a){return a.normalize().replace(/[[\]]/g,"\\$&").replace(/%/g,"%25").replace(/\//g,"%2[Ff]").replace(/\?/g,"%3[Ff]").replace(/#/g,"%23").replace(/[.*+?^${}()|\\]/g,"\\$&")}function jt(a,e,n,i){const o=new Set(e);return Object.entries(n).map(([f,[_,p,g]])=>{const{pattern:w,params:R}=Pt(f),j={id:f,exec:T=>{const P=w.exec(T);if(P)return $t(P,R,i)},errors:[1,...g||[]].map(T=>a[T]),layouts:[0,...p||[]].map(t),leaf:u(_)};return j.errors.length=j.layouts.length=Math.max(j.errors.length,j.layouts.length),j});function u(f){const _=f<0;return _&&(f=~f),[_,a[f]]}function t(f){return f===void 0?f:[o.has(f),a[f]]}}function Tt(a){let e,n,i;var o=a[0][0];function u(t){return{props:{data:t[2],form:t[1]}}}return o&&(e=H(o,u(a))),{c(){e&&W(e.$$.fragment),n=V()},l(t){e&&Ne(e.$$.fragment,t),n=V()},m(t,f){e&&Y(e,t,f),M(t,n,f),i=!0},p(t,f){const _={};if(f&4&&(_.data=t[2]),f&2&&(_.form=t[1]),o!==(o=t[0][0])){if(e){ue();const p=e;B(p.$$.fragment,1,0,()=>{X(p,1)}),de()}o?(e=H(o,u(t)),W(e.$$.fragment),F(e.$$.fragment,1),Y(e,n.parentNode,n)):e=null}else o&&e.$set(_)},i(t){i||(e&&F(e.$$.fragment,t),i=!0)},o(t){e&&B(e.$$.fragment,t),i=!1},d(t){t&&G(n),e&&X(e,t)}}}function Dt(a){let e,n,i;var o=a[0][0];function u(t){return{props:{data:t[2],$$slots:{default:[Ct]},$$scope:{ctx:t}}}}return o&&(e=H(o,u(a))),{c(){e&&W(e.$$.fragment),n=V()},l(t){e&&Ne(e.$$.fragment,t),n=V()},m(t,f){e&&Y(e,t,f),M(t,n,f),i=!0},p(t,f){const _={};if(f&4&&(_.data=t[2]),f&523&&(_.$$scope={dirty:f,ctx:t}),o!==(o=t[0][0])){if(e){ue();const p=e;B(p.$$.fragment,1,0,()=>{X(p,1)}),de()}o?(e=H(o,u(t)),W(e.$$.fragment),F(e.$$.fragment,1),Y(e,n.parentNode,n)):e=null}else o&&e.$set(_)},i(t){i||(e&&F(e.$$.fragment,t),i=!0)},o(t){e&&B(e.$$.fragment,t),i=!1},d(t){t&&G(n),e&&X(e,t)}}}function Ct(a){let e,n,i;var o=a[0][1];function u(t){return{props:{data:t[3],form:t[1]}}}return o&&(e=H(o,u(a))),{c(){e&&W(e.$$.fragment),n=V()},l(t){e&&Ne(e.$$.fragment,t),n=V()},m(t,f){e&&Y(e,t,f),M(t,n,f),i=!0},p(t,f){const _={};if(f&8&&(_.data=t[3]),f&2&&(_.form=t[1]),o!==(o=t[0][1])){if(e){ue();const p=e;B(p.$$.fragment,1,0,()=>{X(p,1)}),de()}o?(e=H(o,u(t)),W(e.$$.fragment),F(e.$$.fragment,1),Y(e,n.parentNode,n)):e=null}else o&&e.$set(_)},i(t){i||(e&&F(e.$$.fragment,t),i=!0)},o(t){e&&B(e.$$.fragment,t),i=!1},d(t){t&&G(n),e&&X(e,t)}}}function Xe(a){let e,n=a[5]&&Ze(a);return{c(){e=ct("div"),n&&n.c(),this.h()},l(i){e=ft(i,"DIV",{id:!0,"aria-live":!0,"aria-atomic":!0,style:!0});var o=ut(e);n&&n.l(o),o.forEach(G),this.h()},h(){be(e,"id","svelte-announcer"),be(e,"aria-live","assertive"),be(e,"aria-atomic","true"),C(e,"position","absolute"),C(e,"left","0"),C(e,"top","0"),C(e,"clip","rect(0 0 0 0)"),C(e,"clip-path","inset(50%)"),C(e,"overflow","hidden"),C(e,"white-space","nowrap"),C(e,"width","1px"),C(e,"height","1px")},m(i,o){M(i,e,o),n&&n.m(e,null)},p(i,o){i[5]?n?n.p(i,o):(n=Ze(i),n.c(),n.m(e,null)):n&&(n.d(1),n=null)},d(i){i&&G(e),n&&n.d()}}}function Ze(a){let e;return{c(){e=dt(a[6])},l(n){e=pt(n,a[6])},m(n,i){M(n,e,i)},p(n,i){i&64&&ht(e,n[6])},d(n){n&&G(e)}}}function qt(a){let 
e,n,i,o,u;const t=[Dt,Tt],f=[];function _(g,w){return g[0][1]?0:1}e=_(a),n=f[e]=t[e](a);let p=a[4]&&Xe(a);return{c(){n.c(),i=st(),p&&p.c(),o=V()},l(g){n.l(g),i=it(g),p&&p.l(g),o=V()},m(g,w){f[e].m(g,w),M(g,i,w),p&&p.m(g,w),M(g,o,w),u=!0},p(g,[w]){let R=e;e=_(g),e===R?f[e].p(g,w):(ue(),B(f[R],1,1,()=>{f[R]=null}),de(),n=f[e],n?n.p(g,w):(n=f[e]=t[e](g),n.c()),F(n,1),n.m(i.parentNode,i)),g[4]?p?p.p(g,w):(p=Xe(g),p.c(),p.m(o.parentNode,o)):p&&(p.d(1),p=null)},i(g){u||(F(n),u=!0)},o(g){B(n),u=!1},d(g){f[e].d(g),g&&G(i),p&&p.d(g),g&&G(o)}}}function Vt(a,e,n){let{stores:i}=e,{page:o}=e,{components:u}=e,{form:t}=e,{data_0:f=null}=e,{data_1:_=null}=e;lt(i.page.notify);let p=!1,g=!1,w=null;return Oe(()=>{const R=i.page.subscribe(()=>{p&&(n(5,g=!0),n(6,w=document.title||"untitled page"))});return n(4,p=!0),R}),a.$$set=R=>{"stores"in R&&n(7,i=R.stores),"page"in R&&n(8,o=R.page),"components"in R&&n(0,u=R.components),"form"in R&&n(1,t=R.form),"data_0"in R&&n(2,f=R.data_0),"data_1"in R&&n(3,_=R.data_1)},a.$$.update=()=>{a.$$.dirty&384&&i.page.set(o)},[u,t,f,_,p,g,w,i,o]}class Bt extends at{constructor(e){super(),rt(this,e,Vt,qt,ot,{stores:7,page:8,components:0,form:1,data_0:2,data_1:3})}}const Ft="modulepreload",Gt=function(a,e){return new URL(a,e).href},Qe={},ke=function(e,n,i){if(!n||n.length===0)return e();const o=document.getElementsByTagName("link");return Promise.all(n.map(u=>{if(u=Gt(u,i),u in Qe)return;Qe[u]=!0;const t=u.endsWith(".css"),f=t?'[rel="stylesheet"]':"";if(!!i)for(let g=o.length-1;g>=0;g--){const w=o[g];if(w.href===u&&(!t||w.rel==="stylesheet"))return}else if(document.querySelector(`link[href="${u}"]${f}`))return;const p=document.createElement("link");if(p.rel=t?"stylesheet":Ft,t||(p.as="script",p.crossOrigin=""),p.href=u,document.head.appendChild(p),t)return new Promise((g,w)=>{p.addEventListener("load",g),p.addEventListener("error",()=>w(new Error(`Unable to preload CSS for ${u}`)))})})).then(()=>e())},Jt={},he=[()=>ke(()=>import("./chunks/0-e4667d24.js"),["./chunks/0-e4667d24.js","./components/pages/_layout.svelte-81ccf463.js","./chunks/index-b346583a.js","./assets/_layout-a699bab5.css"],import.meta.url),()=>ke(()=>import("./chunks/1-9c6a32b9.js"),["./chunks/1-9c6a32b9.js","./components/error.svelte-cd570e47.js","./chunks/index-b346583a.js","./chunks/singletons-50e0fde7.js"],import.meta.url),()=>ke(()=>import("./chunks/2-5e47ff79.js"),["./chunks/2-5e47ff79.js","./chunks/_page-da46b06b.js","./components/pages/_page.svelte-033df9bc.js","./chunks/index-b346583a.js"],import.meta.url)],Kt=[],Mt={"/":[2]},zt={handleError:({error:a})=>{console.error(a)}};class Ie{constructor(e,n){this.status=e,typeof n=="string"?this.body={message:n}:n?this.body=n:this.body={message:`Error: ${e}`}}toString(){return JSON.stringify(this.body)}}class xe{constructor(e,n){this.status=e,this.location=n}}async function Ht(a){var e;for(const n in a)if(typeof((e=a[n])==null?void 0:e.then)=="function")return Object.fromEntries(await Promise.all(Object.entries(a).map(async([i,o])=>[i,await o])));return a}Object.getOwnPropertyNames(Object.prototype).sort().join("\0");Object.getOwnPropertyNames(Object.prototype).sort().join("\0");const Wt=-1,Yt=-2,Xt=-3,Zt=-4,Qt=-5,xt=-6;function en(a){if(typeof a=="number")return i(a,!0);if(!Array.isArray(a)||a.length===0)throw new Error("Invalid input");const e=a,n=Array(e.length);function i(o,u=!1){if(o===Wt)return;if(o===Xt)return NaN;if(o===Zt)return 1/0;if(o===Qt)return-1/0;if(o===xt)return-0;if(u)throw new Error("Invalid input");if(o in n)return n[o];const t=e[o];if(!t||typeof 
t!="object")n[o]=t;else if(Array.isArray(t))if(typeof t[0]=="string")switch(t[0]){case"Date":n[o]=new Date(t[1]);break;case"Set":const _=new Set;n[o]=_;for(let w=1;w{d&&(j=!0)},blocked:()=>{},type:"goto"})}async function Te(r){const s=oe(r,!1);if(!s)throw new Error(`Attempted to preload a URL that does not belong to this app: ${r}`);return o={id:s.id,promise:Ve(s).then(c=>(c.type==="loaded"&&c.state.error&&(o=null),c))},o.promise}async function ae(...r){const c=Se.filter(l=>r.some(h=>l.exec(h))).map(l=>Promise.all([...l.layouts,l.leaf].map(h=>h==null?void 0:h[1]())));await Promise.all(c)}async function De(r,s,c,l,h={},d){var b,v;$e=h;let m=r&&await Ve(r);if(m||(m=await Ge(s,{id:null},await x(new Error(`Not found: ${s.pathname}`),{url:s,params:{},route:{id:null}}),404)),s=(r==null?void 0:r.url)||s,$e!==h)return!1;if(m.type==="redirect")if(c.length>10||c.includes(s.pathname))m=await re({status:500,error:await x(new Error("Redirect loop"),{url:s,params:{},route:{id:null}}),url:s,route:{id:null}});else return _e(new URL(m.location,s).href,{},[...c,s.pathname],h),!1;else((v=(b=m.props)==null?void 0:b.page)==null?void 0:v.status)>=400&&await K.updated.check()&&await ie(s);if(i.length=0,j=!1,g=!0,l&&l.details){const{details:y}=l,k=y.replaceState?0:1;y.state[q]=P+=k,history[y.replaceState?"replaceState":"pushState"](y.state,"",s)}if(o=null,_?(t=m.state,m.props.page&&(m.props.page.url=s),T.$set(m.props)):Ce(m),l){const{scroll:y,keepfocus:k}=l;if(k||Le(),await le(),p){const L=s.hash&&document.getElementById(s.hash.slice(1));y?scrollTo(y.x,y.y):L?L.scrollIntoView():scrollTo(0,0)}}else await le();p=!0,m.props.page&&(J=m.props.page),d&&d(),g=!1}function Ce(r){var l;t=r.state;const s=document.querySelector("style[data-sveltekit]");s&&s.remove(),J=r.props.page,T=new Bt({target:a,props:{...r.props,stores:K},hydrate:!0});const c={from:null,to:{params:t.params,route:{id:((l=t.route)==null?void 0:l.id)??null},url:new URL(location.href)},willUnload:!1,type:"enter"};u.after_navigate.forEach(h=>h(c)),_=!0}async function Z({url:r,params:s,branch:c,status:l,error:h,route:d,form:m}){const b=c.filter(Boolean);let v="never";for(const O of c)(O==null?void 0:O.slash)!==void 0&&(v=O.slash);r.pathname=yt(r.pathname,v),r.search=r.search;const y={type:"loaded",state:{url:r,params:s,branch:c,error:h,route:d},props:{components:b.map(O=>O.node.component)}};m!==void 0&&(y.props.form=m);let k={},L=!J;for(let O=0;OU===E))&&(y.props[`data_${O}`]=k,L=L||Object.keys(E.data??{}).length>0)}return L||(L=Object.keys(J.data).length!==Object.keys(k).length),(!t.url||r.href!==t.url.href||t.error!==h||m!==void 0||L)&&(y.props.page={error:h,params:s,route:{id:(d==null?void 0:d.id)??null},status:l,url:new URL(r),form:m??null,data:L?k:J.data}),y}async function ge({loader:r,parent:s,url:c,params:l,route:h,server_data_node:d}){var y,k,L;let m=null;const b={dependencies:new Set,params:new Set,parent:!1,route:!1,url:!1},v=await r();if((y=v.universal)!=null&&y.load){let D=function(...E){for(const U of E){const{href:$}=new URL(U,c);b.dependencies.add($)}};const O={route:{get id(){return b.route=!0,h.id}},params:new Proxy(l,{get:(E,U)=>(b.params.add(U),E[U])}),data:(d==null?void 0:d.data)??null,url:Et(c,()=>{b.url=!0}),async fetch(E,U){let $;E instanceof Request?($=E.url,U={body:E.method==="GET"||E.method==="HEAD"?void 0:await 
E.blob(),cache:E.cache,credentials:E.credentials,headers:E.headers,integrity:E.integrity,keepalive:E.keepalive,method:E.method,mode:E.mode,redirect:E.redirect,referrer:E.referrer,referrerPolicy:E.referrerPolicy,signal:E.signal,...U}):$=E;const S=new URL($,c).href;return D(S),_?It($,S,U):Ot($,U)},setHeaders:()=>{},depends:D,parent(){return b.parent=!0,s()}};m=await v.universal.load.call(null,O)??null,m=m?await Ht(m):null}return{node:v,loader:r,server:d,universal:(k=v.universal)!=null&&k.load?{type:"data",data:m,uses:b}:null,data:m??(d==null?void 0:d.data)??null,slash:((L=v.universal)==null?void 0:L.trailingSlash)??(d==null?void 0:d.slash)}}function qe(r,s,c,l,h){if(j)return!0;if(!l)return!1;if(l.parent&&r||l.route&&s||l.url&&c)return!0;for(const d of l.params)if(h[d]!==t.params[d])return!0;for(const d of l.dependencies)if(i.some(m=>m(new URL(d))))return!0;return!1}function ye(r,s){return(r==null?void 0:r.type)==="data"?{type:"data",data:r.data,uses:{dependencies:new Set(r.uses.dependencies??[]),params:new Set(r.uses.params??[]),parent:!!r.uses.parent,route:!!r.uses.route,url:!!r.uses.url},slash:r.slash}:(r==null?void 0:r.type)==="skip"?s??null:null}async function Ve({id:r,invalidating:s,url:c,params:l,route:h}){if((o==null?void 0:o.id)===r)return o.promise;const{errors:d,layouts:m,leaf:b}=h,v=[...m,b];d.forEach(S=>S==null?void 0:S().catch(()=>{})),v.forEach(S=>S==null?void 0:S[1]().catch(()=>{}));let y=null;const k=t.url?r!==t.url.pathname+t.url.search:!1,L=t.route?r!==t.route.id:!1,D=v.reduce((S,A,N)=>{var Q;const I=t.branch[N],z=!!(A!=null&&A[0])&&((I==null?void 0:I.loader)!==A[1]||qe(S.some(Boolean),L,k,(Q=I.server)==null?void 0:Q.uses,l));return S.push(z),S},[]);if(D.some(Boolean)){try{y=await et(c,D)}catch(S){return re({status:500,error:await x(S,{url:c,params:l,route:{id:h.id}}),url:c,route:h})}if(y.type==="redirect")return y}const O=y==null?void 0:y.nodes;let E=!1;const U=v.map(async(S,A)=>{var Q;if(!S)return;const N=t.branch[A],I=O==null?void 0:O[A];if((!I||I.type==="skip")&&S[1]===(N==null?void 0:N.loader)&&!qe(E,L,k,(Q=N.universal)==null?void 0:Q.uses,l))return N;if(E=!0,(I==null?void 0:I.type)==="error")throw I;return ge({loader:S[1],url:c,params:l,route:h,parent:async()=>{var Me;const Ke={};for(let we=0;we{});const $=[];for(let S=0;SPromise.resolve({}),server_data_node:ye(m)}),v={node:await Pe(),loader:Pe,universal:null,server:null,data:null};return await Z({url:c,params:h,branch:[b,v],status:r,error:s,route:null})}function oe(r,s){if(We(r,e))return;const c=wt(r.pathname.slice(e.length)||"/");for(const l of Se){const h=l.exec(c);if(h)return{id:r.pathname+r.search,invalidating:s,route:l,params:bt(h),url:r}}}function Fe({url:r,type:s,intent:c,delta:l}){var b,v;let h=!1;const d={from:{params:t.params,route:{id:((b=t.route)==null?void 0:b.id)??null},url:t.url},to:{params:(c==null?void 0:c.params)??null,route:{id:((v=c==null?void 0:c.route)==null?void 0:v.id)??null},url:r},willUnload:!c,type:s};l!==void 0&&(d.delta=l);const m={...d,cancel:()=>{h=!0}};return w||u.before_navigate.forEach(y=>y(m)),h?null:d}async function se({url:r,scroll:s,keepfocus:c,redirect_chain:l,details:h,type:d,delta:m,nav_token:b,accepted:v,blocked:y}){const k=oe(r,!1),L=Fe({url:r,type:d,delta:m,intent:k});if(!L){y();return}Re(P),v(),w=!0,_&&K.navigating.set(L),await De(k,r,l,{scroll:s,keepfocus:c,details:h},b,()=>{w=!1,u.after_navigate.forEach(D=>D(L)),K.navigating.set(null)})}async function Ge(r,s,c,l){return r.origin===location.origin&&r.pathname===location.pathname&&!f?await 
re({status:l,error:c,url:r,route:s}):await ie(r)}function ie(r){return location.href=r.href,new Promise(()=>{})}function nt(){let r;n.addEventListener("mousemove",d=>{const m=d.target;clearTimeout(r),r=setTimeout(()=>{l(m,2)},20)});function s(d){l(d.composedPath()[0],1)}n.addEventListener("mousedown",s),n.addEventListener("touchstart",s,{passive:!0});const c=new IntersectionObserver(d=>{for(const m of d)m.isIntersecting&&(ae(new URL(m.target.href).pathname),c.unobserve(m.target))},{threshold:0});function l(d,m){const b=He(d,n);if(!b)return;const{url:v,external:y}=ve(b,e);if(y)return;const k=ce(b);k.reload||(m<=k.preload_data?Te(v):m<=k.preload_code&&ae(v.pathname))}function h(){c.disconnect();for(const d of n.querySelectorAll("a")){const{url:m,external:b}=ve(d,e);if(b)continue;const v=ce(d);v.reload||(v.preload_code===Ye.viewport&&c.observe(d),v.preload_code===Ye.eager&&ae(m.pathname))}}u.after_navigate.push(h),h()}return{after_navigate:r=>{Oe(()=>(u.after_navigate.push(r),()=>{const s=u.after_navigate.indexOf(r);u.after_navigate.splice(s,1)}))},before_navigate:r=>{Oe(()=>(u.before_navigate.push(r),()=>{const s=u.before_navigate.indexOf(r);u.before_navigate.splice(s,1)}))},disable_scroll_handling:()=>{(g||!_)&&(p=!1)},goto:(r,s={})=>_e(r,s,[]),invalidate:r=>{if(typeof r=="function")i.push(r);else{const{href:s}=new URL(r,location.href);i.push(c=>c.href===s)}return je()},invalidateAll:()=>(j=!0,je()),preload_data:async r=>{const s=new URL(r,ze(document));await Te(s)},preload_code:ae,apply_action:async r=>{if(r.type==="error"){const s=new URL(location.href),{branch:c,route:l}=t;if(!l)return;const h=await Be(t.branch.length,c,l.errors);if(h){const d=await Z({url:s,params:t.params,branch:c.slice(0,h.idx).concat(h.node),status:r.status??500,error:r.error,route:l});t=d.state,T.$set(d.props),le().then(Le)}}else if(r.type==="redirect")_e(r.location,{invalidateAll:!0},[]);else{const s={form:r.data,page:{...J,form:r.data,status:r.status}};T.$set(s),r.type==="success"&&le().then(Le)}},_start_router:()=>{var r;history.scrollRestoration="manual",addEventListener("beforeunload",s=>{var l;let c=!1;if(!w){const h={from:{params:t.params,route:{id:((l=t.route)==null?void 0:l.id)??null},url:t.url},to:null,willUnload:!0,type:"leave",cancel:()=>c=!0};u.before_navigate.forEach(d=>d(h))}c?(s.preventDefault(),s.returnValue=""):history.scrollRestoration="auto"}),addEventListener("visibilitychange",()=>{if(document.visibilityState==="hidden"){Re(P);try{sessionStorage[tt]=JSON.stringify(te)}catch{}}}),(r=navigator.connection)!=null&&r.saveData||nt(),n.addEventListener("click",s=>{if(s.button||s.which!==1||s.metaKey||s.ctrlKey||s.shiftKey||s.altKey||s.defaultPrevented)return;const c=He(s.composedPath()[0],n);if(!c)return;const{url:l,external:h,has:d}=ve(c,e),m=ce(c);if(!l||!(c instanceof SVGAElement)&&l.protocol!==location.protocol&&!(l.protocol==="https:"||l.protocol==="http:")||d.download)return;if(h||m.reload){Fe({url:l,type:"link"})||s.preventDefault(),w=!0;return}const[v,y]=l.href.split("#");if(y!==void 0&&v===location.href.split("#")[0]){R=!0,Re(P),t.url=l,K.page.set({...J,url:l}),K.page.notify();return}se({url:l,scroll:m.noscroll?fe():null,keepfocus:!1,redirect_chain:[],details:{state:{},replaceState:l.href===location.href},accepted:()=>s.preventDefault(),blocked:()=>s.preventDefault(),type:"link"})}),n.addEventListener("submit",s=>{if(s.defaultPrevented)return;const c=HTMLFormElement.prototype.cloneNode.call(s.target),l=s.submitter;if(((l==null?void 0:l.formMethod)||c.method)!=="get")return;const d=new 
URL((l==null?void 0:l.hasAttribute("formaction"))&&(l==null?void 0:l.formAction)||c.action);if(We(d,e))return;const m=s.target,{noscroll:b,reload:v}=ce(m);if(v)return;s.preventDefault(),s.stopPropagation();const y=new FormData(m),k=l==null?void 0:l.getAttribute("name");k&&y.append(k,(l==null?void 0:l.getAttribute("value"))??""),d.search=new URLSearchParams(y).toString(),se({url:d,scroll:b?fe():null,keepfocus:!1,redirect_chain:[],details:{state:{},replaceState:!1},nav_token:{},accepted:()=>{},blocked:()=>{},type:"form"})}),addEventListener("popstate",s=>{var c;if((c=s.state)!=null&&c[q]){if(s.state[q]===P)return;const l=s.state[q]-P;se({url:new URL(location.href),scroll:te[s.state[q]],keepfocus:!1,redirect_chain:[],details:null,accepted:()=>{P=s.state[q]},blocked:()=>{history.go(-l)},type:"popstate",delta:l})}}),addEventListener("hashchange",()=>{R&&(R=!1,history.replaceState({...history.state,[q]:++P},"",location.href))});for(const s of document.querySelectorAll("link"))s.rel==="icon"&&(s.href=s.href);addEventListener("pageshow",s=>{s.persisted&&K.navigating.set(null)})},_hydrate:async({status:r=200,error:s,node_ids:c,params:l,route:h,data:d,form:m})=>{f=!0;const b=new URL(location.href);({params:l={},route:h={id:null}}=oe(b,!1)||{});let v;try{const y=c.map(async(k,L)=>{const D=d[L];return ge({loader:he[k],url:b,params:l,route:h,parent:async()=>{const O={};for(let E=0;Ek===h.id)??null})}catch(y){if(y instanceof xe){await ie(new URL(y.location,location.href));return}v=await re({status:y instanceof Ie?y.status:500,error:await x(y,{url:b,params:l,route:h}),url:b,route:h})}Ce(v)}}}async function et(a,e){var u;const n=new URL(a);n.pathname=Rt(a.pathname),n.searchParams.append("x-sveltekit-invalidated",e.map(t=>t?"1":"").join("_"));const i=await pe(n.href),o=await i.json();if(!i.ok)throw new Error(o);return(u=o.nodes)==null||u.forEach(t=>{(t==null?void 0:t.type)==="data"&&(t.data=en(t.data),t.uses={dependencies:new Set(t.uses.dependencies??[]),params:new Set(t.uses.params??[]),parent:!!t.uses.parent,route:!!t.uses.route,url:!!t.uses.url})}),o}function x(a,e){return a instanceof Ie?a.body:zt.handleError({error:a,event:e})??{message:e.route.id!=null?"Internal Error":"Not Found"}}function Le(){const a=document.querySelector("[autofocus]");if(a)a.focus();else{const e=document.body,n=e.getAttribute("tabindex");e.tabIndex=-1,e.focus({preventScroll:!0}),setTimeout(()=>{var i;(i=getSelection())==null||i.removeAllRanges()}),n!==null?e.setAttribute("tabindex",n):e.removeAttribute("tabindex")}}async function rn({env:a,hydrate:e,paths:n,target:i,version:o}){mt(n),gt(o);const u=tn({target:i,base:n.base});_t({client:u}),e?await u._hydrate(e):u.goto(location.href,{replaceState:!0}),u._start_router()}export{rn as start};
diff --git a/spaces/unity/ML-Agents-Worm/style.css b/spaces/unity/ML-Agents-Worm/style.css
deleted file mode 100644
index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000
--- a/spaces/unity/ML-Agents-Worm/style.css
+++ /dev/null
@@ -1,28 +0,0 @@
-body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
- font-size: 16px;
- margin-top: 0;
-}
-
-p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
-}
-
-.card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
-}
-
-.card p:last-child {
- margin-bottom: 0;
-}
diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/2 Kids 1 Sandbox Official Video.zip.md b/spaces/usbethFlerru/sovits-modelsV2/example/2 Kids 1 Sandbox Official Video.zip.md
deleted file mode 100644
index b54c05a8648810a783ba614d4b9de6ec8ea3fc2c..0000000000000000000000000000000000000000
--- a/spaces/usbethFlerru/sovits-modelsV2/example/2 Kids 1 Sandbox Official Video.zip.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-A heavy-duty plastic sheet under the sandbox will help reduce moisture build-up and extend the life of the sandbox. Fill the sandbox with 20-30 bags of play sand and let the kids enjoy their new backyard addition!
-2 kids 1 sandbox official video.zip
Download File ↔ https://urlcod.com/2uyUaH
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Abbyy Flexicapture 10 Crack Torrent [VERIFIED].md b/spaces/usbethFlerru/sovits-modelsV2/example/Abbyy Flexicapture 10 Crack Torrent [VERIFIED].md
deleted file mode 100644
index 1078aa243e0e6035a88ec40b02de5f31849ae9f7..0000000000000000000000000000000000000000
--- a/spaces/usbethFlerru/sovits-modelsV2/example/Abbyy Flexicapture 10 Crack Torrent [VERIFIED].md
+++ /dev/null
@@ -1,6 +0,0 @@
-abbyy flexicapture 10 crack torrent
Download File 🗸 https://urlcod.com/2uyUkV
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Bothersome Bullies (Adventures In Odyssey) Mobi Do Alfonso Spider Selva.md b/spaces/usbethFlerru/sovits-modelsV2/example/Bothersome Bullies (Adventures In Odyssey) Mobi Do Alfonso Spider Selva.md
deleted file mode 100644
index 4b9a32b7d8ad37ccb6b1da7565a8f61541e3da47..0000000000000000000000000000000000000000
--- a/spaces/usbethFlerru/sovits-modelsV2/example/Bothersome Bullies (Adventures In Odyssey) Mobi Do Alfonso Spider Selva.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Bothersome Bullies (Adventures In Odyssey) Mobi Do alfonso spider selva
Download ✏ ✏ ✏ https://urlcod.com/2uyU7T
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/utec/FedericoRodriguezDetectorSentimentalTwitter/README.md b/spaces/utec/FedericoRodriguezDetectorSentimentalTwitter/README.md
deleted file mode 100644
index 5eabe582ab812e91291c3c9cb635b1dc3037d436..0000000000000000000000000000000000000000
--- a/spaces/utec/FedericoRodriguezDetectorSentimentalTwitter/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: FedericoRodriguezDetectorSentimentalTwitter
-emoji: 🔥
-colorFrom: purple
-colorTo: pink
-sdk: gradio
-sdk_version: 2.9.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/midas/transforms.py b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/midas/transforms.py
deleted file mode 100644
index 350cbc11662633ad7f8968eb10be2e7de6e384e9..0000000000000000000000000000000000000000
--- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/midas/transforms.py
+++ /dev/null
@@ -1,234 +0,0 @@
-import numpy as np
-import cv2
-import math
-
-
-def apply_min_size(sample, size, image_interpolation_method=cv2.INTER_AREA):
- """Rezise the sample to ensure the given size. Keeps aspect ratio.
-
- Args:
- sample (dict): sample
- size (tuple): image size
-
- Returns:
- tuple: new size
- """
- shape = list(sample["disparity"].shape)
-
- if shape[0] >= size[0] and shape[1] >= size[1]:
- return sample
-
- scale = [0, 0]
- scale[0] = size[0] / shape[0]
- scale[1] = size[1] / shape[1]
-
- scale = max(scale)
-
- shape[0] = math.ceil(scale * shape[0])
- shape[1] = math.ceil(scale * shape[1])
-
- # resize
- sample["image"] = cv2.resize(
- sample["image"], tuple(shape[::-1]), interpolation=image_interpolation_method
- )
-
- sample["disparity"] = cv2.resize(
- sample["disparity"], tuple(shape[::-1]), interpolation=cv2.INTER_NEAREST
- )
- sample["mask"] = cv2.resize(
- sample["mask"].astype(np.float32),
- tuple(shape[::-1]),
- interpolation=cv2.INTER_NEAREST,
- )
- sample["mask"] = sample["mask"].astype(bool)
-
- return tuple(shape)
-
-
-class Resize(object):
- """Resize sample to given size (width, height).
- """
-
- def __init__(
- self,
- width,
- height,
- resize_target=True,
- keep_aspect_ratio=False,
- ensure_multiple_of=1,
- resize_method="lower_bound",
- image_interpolation_method=cv2.INTER_AREA,
- ):
- """Init.
-
- Args:
- width (int): desired output width
- height (int): desired output height
- resize_target (bool, optional):
- True: Resize the full sample (image, mask, target).
- False: Resize image only.
- Defaults to True.
- keep_aspect_ratio (bool, optional):
- True: Keep the aspect ratio of the input sample.
- Output sample might not have the given width and height, and
- resize behaviour depends on the parameter 'resize_method'.
- Defaults to False.
- ensure_multiple_of (int, optional):
- Output width and height is constrained to be multiple of this parameter.
- Defaults to 1.
- resize_method (str, optional):
- "lower_bound": Output will be at least as large as the given size.
- "upper_bound": Output will be at max as large as the given size. (Output size might be smaller than given size.)
- "minimal": Scale as least as possible. (Output size might be smaller than given size.)
- Defaults to "lower_bound".
- """
- self.__width = width
- self.__height = height
-
- self.__resize_target = resize_target
- self.__keep_aspect_ratio = keep_aspect_ratio
- self.__multiple_of = ensure_multiple_of
- self.__resize_method = resize_method
- self.__image_interpolation_method = image_interpolation_method
-
- def constrain_to_multiple_of(self, x, min_val=0, max_val=None):
- y = (np.round(x / self.__multiple_of) * self.__multiple_of).astype(int)
-
- if max_val is not None and y > max_val:
- y = (np.floor(x / self.__multiple_of) * self.__multiple_of).astype(int)
-
- if y < min_val:
- y = (np.ceil(x / self.__multiple_of) * self.__multiple_of).astype(int)
-
- return y
-
- def get_size(self, width, height):
- # determine new height and width
- scale_height = self.__height / height
- scale_width = self.__width / width
-
- if self.__keep_aspect_ratio:
- if self.__resize_method == "lower_bound":
- # scale such that output size is lower bound
- if scale_width > scale_height:
- # fit width
- scale_height = scale_width
- else:
- # fit height
- scale_width = scale_height
- elif self.__resize_method == "upper_bound":
- # scale such that output size is upper bound
- if scale_width < scale_height:
- # fit width
- scale_height = scale_width
- else:
- # fit height
- scale_width = scale_height
- elif self.__resize_method == "minimal":
- # scale as little as possible
- if abs(1 - scale_width) < abs(1 - scale_height):
- # fit width
- scale_height = scale_width
- else:
- # fit height
- scale_width = scale_height
- else:
- raise ValueError(
- f"resize_method {self.__resize_method} not implemented"
- )
-
- if self.__resize_method == "lower_bound":
- new_height = self.constrain_to_multiple_of(
- scale_height * height, min_val=self.__height
- )
- new_width = self.constrain_to_multiple_of(
- scale_width * width, min_val=self.__width
- )
- elif self.__resize_method == "upper_bound":
- new_height = self.constrain_to_multiple_of(
- scale_height * height, max_val=self.__height
- )
- new_width = self.constrain_to_multiple_of(
- scale_width * width, max_val=self.__width
- )
- elif self.__resize_method == "minimal":
- new_height = self.constrain_to_multiple_of(scale_height * height)
- new_width = self.constrain_to_multiple_of(scale_width * width)
- else:
- raise ValueError(f"resize_method {self.__resize_method} not implemented")
-
- return (new_width, new_height)
-
- def __call__(self, sample):
- width, height = self.get_size(
- sample["image"].shape[1], sample["image"].shape[0]
- )
-
- # resize sample
- sample["image"] = cv2.resize(
- sample["image"],
- (width, height),
- interpolation=self.__image_interpolation_method,
- )
-
- if self.__resize_target:
- if "disparity" in sample:
- sample["disparity"] = cv2.resize(
- sample["disparity"],
- (width, height),
- interpolation=cv2.INTER_NEAREST,
- )
-
- if "depth" in sample:
- sample["depth"] = cv2.resize(
- sample["depth"], (width, height), interpolation=cv2.INTER_NEAREST
- )
-
- sample["mask"] = cv2.resize(
- sample["mask"].astype(np.float32),
- (width, height),
- interpolation=cv2.INTER_NEAREST,
- )
- sample["mask"] = sample["mask"].astype(bool)
-
- return sample
-
-
-class NormalizeImage(object):
- """Normlize image by given mean and std.
- """
-
- def __init__(self, mean, std):
- self.__mean = mean
- self.__std = std
-
- def __call__(self, sample):
- sample["image"] = (sample["image"] - self.__mean) / self.__std
-
- return sample
-
-
-class PrepareForNet(object):
- """Prepare sample for usage as network input.
- """
-
- def __init__(self):
- pass
-
- def __call__(self, sample):
- image = np.transpose(sample["image"], (2, 0, 1))
- sample["image"] = np.ascontiguousarray(image).astype(np.float32)
-
- if "mask" in sample:
- sample["mask"] = sample["mask"].astype(np.float32)
- sample["mask"] = np.ascontiguousarray(sample["mask"])
-
- if "disparity" in sample:
- disparity = sample["disparity"].astype(np.float32)
- sample["disparity"] = np.ascontiguousarray(disparity)
-
- if "depth" in sample:
- depth = sample["depth"].astype(np.float32)
- sample["depth"] = np.ascontiguousarray(depth)
-
- return sample
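
For reference, a minimal usage sketch of the transform classes this diff removes. It assumes the module is importable as midas.transforms (the path being deleted); the input shape and the normalization constants are illustrative, not taken from the repo:

import numpy as np
from midas.transforms import Resize, NormalizeImage, PrepareForNet

# Build a sample dict the transforms expect: float HWC image plus a mask.
sample = {
    "image": np.random.rand(480, 640, 3).astype(np.float32),
    "mask": np.ones((480, 640), dtype=bool),
}

# Resize to at least 384x384 while keeping aspect ratio, with both sides a
# multiple of 32; then normalize and repack the image as float32 CHW.
transforms = [
    Resize(384, 384, resize_target=True, keep_aspect_ratio=True,
           ensure_multiple_of=32, resize_method="lower_bound"),
    NormalizeImage(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    PrepareForNet(),
]
for t in transforms:
    sample = t(sample)

print(sample["image"].shape)  # (3, 384, 512) for the 480x640 input above
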
diff --git a/spaces/vict0rsch/climateGAN/climategan/painter.py b/spaces/vict0rsch/climateGAN/climategan/painter.py
deleted file mode 100644
index 739ec2b1bda94a7b37ea17b5d757e009255bd312..0000000000000000000000000000000000000000
--- a/spaces/vict0rsch/climateGAN/climategan/painter.py
+++ /dev/null
@@ -1,171 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-import climategan.strings as strings
-from climategan.blocks import InterpolateNearest2d, SPADEResnetBlock
-from climategan.norms import SpectralNorm
-
-
-def create_painter(opts, no_init=False, verbose=0):
- if verbose > 0:
- print(" - Add PainterSpadeDecoder Painter")
- return PainterSpadeDecoder(opts)
-
-
-class PainterSpadeDecoder(nn.Module):
- def __init__(self, opts):
- """Create a SPADE-based decoder, which forwards z and the conditioning
- tensors seg (in the original paper, conditioning is on a semantic map only).
- All along, z is conditioned on seg. The first 3 SPADEResnetBlocks (SRBs) do not
- shrink the channel dimension, and an upsampling is applied between them, so
- there are 2 upsamplings at this point. Then, for each remaining upsampling
- (w.r.t. spade_n_up), an SRB halves the channel count. Before the final conv to
- get 3 channels, the number of channels is therefore:
- final_nc = channels(z) / 2 ** (spade_n_up - 2)
- Args:
- latent_dim (tuple): z's shape (only the number of channels matters)
- cond_nc (int): conditioning tensor's expected number of channels
- spade_n_up (int): Number of total upsamplings from z
- spade_use_spectral_norm (bool): use spectral normalization?
- spade_param_free_norm (str): norm to use before SPADE de-normalization
- spade_kernel_size (int): SPADE conv layers' kernel size
- """
- super().__init__()
-
- latent_dim = opts.gen.p.latent_dim
- cond_nc = 3
- spade_n_up = opts.gen.p.spade_n_up
- spade_use_spectral_norm = opts.gen.p.spade_use_spectral_norm
- spade_param_free_norm = opts.gen.p.spade_param_free_norm
- spade_kernel_size = 3
-
- self.z_nc = latent_dim
- self.spade_n_up = spade_n_up
-
- self.z_h = self.z_w = None
-
- self.fc = nn.Conv2d(3, latent_dim, 3, padding=1)
- self.head_0 = SPADEResnetBlock(
- self.z_nc,
- self.z_nc,
- cond_nc,
- spade_use_spectral_norm,
- spade_param_free_norm,
- spade_kernel_size,
- )
-
- self.G_middle_0 = SPADEResnetBlock(
- self.z_nc,
- self.z_nc,
- cond_nc,
- spade_use_spectral_norm,
- spade_param_free_norm,
- spade_kernel_size,
- )
- self.G_middle_1 = SPADEResnetBlock(
- self.z_nc,
- self.z_nc,
- cond_nc,
- spade_use_spectral_norm,
- spade_param_free_norm,
- spade_kernel_size,
- )
-
- self.up_spades = nn.Sequential(
- *[
- SPADEResnetBlock(
- self.z_nc // 2 ** i,
- self.z_nc // 2 ** (i + 1),
- cond_nc,
- spade_use_spectral_norm,
- spade_param_free_norm,
- spade_kernel_size,
- )
- for i in range(spade_n_up - 2)
- ]
- )
-
- self.final_nc = self.z_nc // 2 ** (spade_n_up - 2)
-
- self.final_spade = SPADEResnetBlock(
- self.final_nc,
- self.final_nc,
- cond_nc,
- spade_use_spectral_norm,
- spade_param_free_norm,
- spade_kernel_size,
- )
- self.final_shortcut = None
- if opts.gen.p.use_final_shortcut:
- self.final_shortcut = nn.Sequential(
- *[
- SpectralNorm(nn.Conv2d(self.final_nc, 3, 1)),
- nn.BatchNorm2d(3),
- nn.LeakyReLU(0.2, True),
- ]
- )
-
- self.conv_img = nn.Conv2d(self.final_nc, 3, 3, padding=1)
-
- self.upsample = InterpolateNearest2d(scale_factor=2)
-
- def set_latent_shape(self, shape, is_input=True):
- """
- Sets the latent shape to start the upsampling from, i.e. z_h and z_w.
- If is_input is True, then this is the actual input shape which should
- be divided by 2 ** spade_n_up
- Otherwise, just sets z_h and z_w from shape[-2] and shape[-1]
-
- Args:
- shape (tuple): The shape to start sampling from.
- is_input (bool, optional): Whether to divide shape by 2 ** spade_n_up
- """
- if isinstance(shape, (list, tuple)):
- self.z_h = shape[-2]
- self.z_w = shape[-1]
- elif isinstance(shape, int):
- self.z_h = self.z_w = shape
- else:
- raise ValueError("Unknown shape type:", shape)
-
- if is_input:
- self.z_h = self.z_h // (2 ** self.spade_n_up)
- self.z_w = self.z_w // (2 ** self.spade_n_up)
-
- def _apply(self, fn):
- # print("Applying SpadeDecoder", fn)
- super()._apply(fn)
- # self.head_0 = fn(self.head_0)
- # self.G_middle_0 = fn(self.G_middle_0)
- # self.G_middle_1 = fn(self.G_middle_1)
- # for i, up in enumerate(self.up_spades):
- # self.up_spades[i] = fn(up)
- # self.conv_img = fn(self.conv_img)
- return self
-
- def forward(self, z, cond):
- if z is None:
- assert self.z_h is not None and self.z_w is not None
- z = self.fc(F.interpolate(cond, size=(self.z_h, self.z_w)))
- y = self.head_0(z, cond)
- y = self.upsample(y)
- y = self.G_middle_0(y, cond)
- y = self.upsample(y)
- y = self.G_middle_1(y, cond)
-
- for i, up in enumerate(self.up_spades):
- y = self.upsample(y)
- y = up(y, cond)
-
- if self.final_shortcut is not None:
- cond = self.final_shortcut(y)
- y = self.final_spade(y, cond)
- y = self.conv_img(F.leaky_relu(y, 2e-1))
- y = torch.tanh(y)
- return y
-
- def __str__(self):
- return strings.spadedecoder(self)
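
As a quick sanity check on the channel arithmetic in the docstring above, here is a short sketch; the values are illustrative, not the repo's actual opts.gen.p configuration:

# z keeps its channel count through the first three SPADE blocks, then each
# remaining upsampling halves it, so the channels entering conv_img are:
latent_dim = 640   # channels of z (illustrative)
spade_n_up = 7     # total number of upsamplings (illustrative)
input_hw = (512, 512)

z_h = input_hw[0] // 2 ** spade_n_up             # as in set_latent_shape(is_input=True)
z_w = input_hw[1] // 2 ** spade_n_up
final_nc = latent_dim // 2 ** (spade_n_up - 2)   # matches self.final_nc above

print(z_h, z_w, final_nc)  # 4 4 20
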
diff --git a/spaces/vincentmin/TalkToMe/app.py b/spaces/vincentmin/TalkToMe/app.py
deleted file mode 100644
index 1a73896c450029297a8d72bd61598c6cafcb51d0..0000000000000000000000000000000000000000
--- a/spaces/vincentmin/TalkToMe/app.py
+++ /dev/null
@@ -1,220 +0,0 @@
-import argparse
-import os
-import requests
-
-import gradio as gr
-
-INTRO = """**Chat with Yoda, Albert Einstein, Elon Musk or Kanye West!**
-
-✨ This demo is powered by the HuggingFace Inference API; currently the models [starchat-beta](https://huggingface.co/HuggingFaceH4/starchat-beta) and [falcon-7b](https://huggingface.co/tiiuae/falcon-7b-instruct) are supported. This demo is based on the [falcon-chat demo](https://huggingface.co/spaces/HuggingFaceH4/falcon-chat) by the [HuggingFace H4 team](https://huggingface.co/HuggingFaceH4); major props to them!
-
-🧪 With this demo you can talk to some of your favorite characters and also play with some very powerful models. Although not as powerful as some 40B+ models, the 7B Falcon and 15.5B starchat-beta models are great chat companions. We intend to add more characters and models in the future.
-
-👀 **Learn more about Falcon LLM:** [falconllm.tii.ae](https://falconllm.tii.ae/)
-
-👀 **Learn more about Starchat LLM:** [starchat-alpha](https://huggingface.co/blog/starchat-alpha)
-
-👀 **Banner images were created with [stable diffusion web](https://stablediffusionweb.com/).**
-
-➡️️ **Intended Use**: this demo is intended to be a fun showcase of what one can do with HuggingFace Inference API and recent chat models.
-
-⚠️ **Limitations**: the model can and will produce factually incorrect information, hallucinating facts and actions. As it has not undergone any advanced tuning/alignment, it can produce problematic outputs, especially if prompted to do so. Finally, this demo is limited to a session length of about 1,000 words.
-"""
-MODELS = [
- "HuggingFaceH4/starchat-beta",
- "tiiuae/falcon-7b-instruct",
-]
-HEADERS = {"Authorization": f"Bearer {os.environ['HUB_TOKEN']}"}
-TITLE = """🚀 TalkToMe"""
-USER_NAME = "User"
-INSTRUCTIONS_MAPPING = {
- "Albert Einstein": "The following is a conversation between the highly knowledgeable and intelligent scientist Albert Einstein, and a human user, called User. In the following interactions, User and Albert Einstein will converse in natural language, and Albert Einstein will answer User's questions. Albert Einstein is always eloquent, respectful, polite and inclusive. Albert Einstein invented the theory of Relativity and made important contributions to the theory of Quantum Mechanics. Albert Einstein will never decline to answer a question, and always attempts to give an answer that User would be satisfied with. Albert Einstein knows a lot, and always tells the truth. The conversation begins.\n",
- "Yoda": "The following is a conversation between the highly knowledgeable and intelligent Yoda from Star Wars, and a human user, called User. In the following interactions, User and Yoda will converse in natural language, and Yoda will answer User's questions. Yoda is respectful, polite and inclusive. Yoda is a wise and powerful Jedi Master from the Star Wars universe who speaks as follows: `Speak you must, in his unique and distinctive manner, with wisdom and knowledge to share.`, `Reversed syntax and short phrases, you shall use.`, `May the Force be with you, young Padawan.`. The conversation begins.\n",
- "Elon Musk": "The following is a conversation between entrepeneur and multi-billionair Elon Musk, and a human user, called User. In the following interactions, User and Elon Musk will converse in natural language, and Elon Musk will answer User's questions. Elon Musk is self-centered, arrogant and has a great for business development. Elon Musk owns the electric car company Tesla, the spacecraft engeneering company SpaceX and bought the social media company Twitter. The conversation begins.\n",
- "Kanye West": "The following is a conversation between rapper Kanye West, and a human user, called User. In the following interactions, User and Kanye West will converse in natural language, and Kanye West will answer User's questions. Kanye West is self-centered, arrogant, a self-proclaimed genius and a great musician. Kanye West interrupted an award ceremony for Taylor Swift and ran for president of the united states. The conversation begins.\n",
-}
-RETRY_COMMAND = "/retry"
-STOP_SEQ = [f"\n{USER_NAME}", "<|end|>"]
-
-def run_model(prompt, model, temperature, top_p):
- try:
- api_url = f"https://api-inference.huggingface.co/models/{model}"
- payload = {
- "inputs": prompt,
- "parameters": {
- "max_new_tokens": 128,
- "do_sample": True,
- "temperature": temperature,
- "top_p": top_p
- }
- }
- response = requests.post(api_url, headers=HEADERS, json=payload)
- return response.json()[0]['generated_text']
- except:
- return "I'm sorry, the model is not available right now. Please try again later."
-
-def get_stream(string: str):
- return enumerate(iter(string.split(" ")))
-
-def parameter_accordion():
- with gr.Accordion("Parameters", open=False):
- model = gr.Radio(
- choices = MODELS,
- value = MODELS[0],
- interactive=True,
- label="Model",
- )
- temperature = gr.Slider(
- minimum=0.1,
- maximum=2.0,
- value=0.8,
- step=0.1,
- interactive=True,
- label="Temperature",
- )
- top_p = gr.Slider(
- minimum=0.1,
- maximum=0.99,
- value=0.9,
- step=0.01,
- interactive=True,
- label="p (nucleus sampling)",
- )
- return model, temperature, top_p
-
-
-def format_chat_prompt(message: str, chat_history, bot_name: str) -> str:
- instructions = INSTRUCTIONS_MAPPING[bot_name].strip(" ").strip("\n")
- prompt = instructions
- for turn in chat_history:
- user_message, bot_message = turn
- prompt = f"{prompt}\n{USER_NAME}: {user_message}\n{bot_name}: {bot_message}"
- prompt = f"{prompt}\n{USER_NAME}: {message}\n{bot_name}:"
- return prompt
-
-
-def chat():
- gr.HTML(TITLE)
- with gr.Row():
- with gr.Column():
- banner = gr.Image("Albert Einstein.jpeg", elem_id="banner-image", show_label=False)
- with gr.Column():
- gr.Markdown(INTRO)
-
- with gr.Row(elem_id="param_container"):
- with gr.Column():
- model, temperature, top_p = parameter_accordion()
- with gr.Column():
- with gr.Accordion("Character", open=True):
- choices = list(INSTRUCTIONS_MAPPING)
- bot_name = gr.Radio(
- choices=choices,
- value=choices[0],
- interactive=True,
- label="Character",
- )
- bot_name.change(fn=lambda value: gr.update(value=f"{value}.jpeg"), inputs=bot_name, outputs=banner)
-
- with gr.Column(elem_id="chat_container"):
- with gr.Row():
- chatbot = gr.Chatbot(elem_id="chatbot")
- with gr.Row():
- inputs = gr.Textbox(
- placeholder=f"Hi there! Tell me something about yourself.",
- label="Type an input and press Enter",
- max_lines=3,
- )
-
- with gr.Row(elem_id="button_container"):
- with gr.Column():
- retry_button = gr.Button("♻️ Retry last turn")
- with gr.Column():
- delete_turn_button = gr.Button("🧽 Delete last turn")
- with gr.Column():
- clear_chat_button = gr.Button("✨ Delete all history")
-
- gr.Examples(
- [
- ["Hi Albert! Why did the apple fall from the tree?"],
- ["Hi Yoda! How do I learn the force?"],
- ["Hi Elon! Give me an idea for a new startup."],
- ["Hi Kanye! What will be the theme of your next album?"],
- ],
- inputs=inputs,
- label="Click on any example and press Enter in the input textbox!",
- )
-
- def run_chat(message: str, chat_history, bot_name: str, model: str, temperature: float, top_p: float):
- if not message or (message == RETRY_COMMAND and len(chat_history) == 0):
- yield chat_history
- return
-
- if message == RETRY_COMMAND and chat_history:
- prev_turn = chat_history.pop(-1)
- user_message, _ = prev_turn
- message = user_message
-
- prompt = format_chat_prompt(message, chat_history, bot_name)
- model_output = run_model(
- prompt,
- model=model,
- temperature=temperature,
- top_p=top_p,
- )
- model_output = model_output[len(prompt):]
- for stop in STOP_SEQ:
- model_output = model_output.split(stop)[0]
- chat_history = chat_history + [[message, model_output]]
- print(f"User: {message}")
- print(f"{bot_name}: {model_output}")
- yield chat_history
- return
-
- def delete_last_turn(chat_history):
- if chat_history:
- chat_history.pop(-1)
- return {chatbot: gr.update(value=chat_history)}
-
- def run_retry(message: str, chat_history, bot_name, model: str, temperature: float, top_p: float):
- yield from run_chat(RETRY_COMMAND, chat_history, bot_name, model, temperature, top_p)
-
- def clear_chat():
- return []
-
- inputs.submit(
- run_chat,
- [inputs, chatbot, bot_name, model, temperature, top_p],
- outputs=[chatbot],
- show_progress=False,
- )
- inputs.submit(lambda: "", inputs=None, outputs=inputs)
- delete_turn_button.click(delete_last_turn, inputs=[chatbot], outputs=[chatbot])
- retry_button.click(
- run_retry,
- [inputs, chatbot, bot_name, model, temperature, top_p],
- outputs=[chatbot],
- show_progress=False,
- )
- clear_chat_button.click(clear_chat, [], chatbot)
-
-
-def get_demo():
- with gr.Blocks(
- # css=None
- # css="""#chat_container {width: 700px; margin-left: auto; margin-right: auto;}
- # #button_container {width: 700px; margin-left: auto; margin-right: auto;}
- # #param_container {width: 700px; margin-left: auto; margin-right: auto;}"""
- css="""#chatbot {
- font-size: 14px;
- min-height: 300px;
-}"""
- ) as demo:
- chat()
-
- return demo
-
-
-if __name__ == "__main__":
- demo = get_demo()
- demo.queue(max_size=128, concurrency_count=16)
- demo.launch()
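
To make the post-processing in run_chat above concrete, here is a small self-contained sketch (the completion text is made up) of how the echoed prompt is stripped and the output is cut at the first stop sequence:

STOP_SEQ = ["\nUser", "<|end|>"]  # f"\n{USER_NAME}" with USER_NAME = "User"

prompt = "...\nUser: How do I learn the force?\nYoda:"
generated = prompt + " Patience you must have.\nUser: thanks<|end|>"

# Drop the echoed prompt, then truncate at each stop sequence in turn.
model_output = generated[len(prompt):]
for stop in STOP_SEQ:
    model_output = model_output.split(stop)[0]

print(model_output)  # " Patience you must have."
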
diff --git a/spaces/vinthony/SadTalker/src/face3d/models/arcface_torch/inference.py b/spaces/vinthony/SadTalker/src/face3d/models/arcface_torch/inference.py
deleted file mode 100644
index 3e5156e8d649954837e397c2ff15ec29995e7502..0000000000000000000000000000000000000000
--- a/spaces/vinthony/SadTalker/src/face3d/models/arcface_torch/inference.py
+++ /dev/null
@@ -1,35 +0,0 @@
-import argparse
-
-import cv2
-import numpy as np
-import torch
-
-from backbones import get_model
-
-
-@torch.no_grad()
-def inference(weight, name, img):
- if img is None:
- img = np.random.randint(0, 255, size=(112, 112, 3), dtype=np.uint8)
- else:
- img = cv2.imread(img)
- img = cv2.resize(img, (112, 112))
-
- img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
- img = np.transpose(img, (2, 0, 1))
- img = torch.from_numpy(img).unsqueeze(0).float()
- img.div_(255).sub_(0.5).div_(0.5)
- net = get_model(name, fp16=False)
- net.load_state_dict(torch.load(weight))
- net.eval()
- feat = net(img).numpy()
- print(feat)
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser(description='PyTorch ArcFace Training')
- parser.add_argument('--network', type=str, default='r50', help='backbone network')
- parser.add_argument('--weight', type=str, default='')
- parser.add_argument('--img', type=str, default=None)
- args = parser.parse_args()
- inference(args.weight, args.network, args.img)
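
Independent of the chosen backbone, the preprocessing the deleted script applies is: uint8 HWC image to float CHW tensor scaled to [-1, 1]. A self-contained sketch of just that step:

import numpy as np
import torch

img = np.random.randint(0, 255, size=(112, 112, 3), dtype=np.uint8)  # RGB, HWC
t = torch.from_numpy(np.transpose(img, (2, 0, 1))).unsqueeze(0).float()
t.div_(255).sub_(0.5).div_(0.5)  # [0, 255] -> [0, 1] -> [-1, 1]

print(t.shape)  # torch.Size([1, 3, 112, 112])
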
diff --git a/spaces/vulkano/yulet1de-hentaidiffusion/app.py b/spaces/vulkano/yulet1de-hentaidiffusion/app.py
deleted file mode 100644
index edf0803cbdf9a26a10899d5021088c3d80eec76d..0000000000000000000000000000000000000000
--- a/spaces/vulkano/yulet1de-hentaidiffusion/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/yulet1de/hentaidiffusion").launch()
\ No newline at end of file
diff --git a/spaces/vumichien/Generate_human_motion/VQ-Trans/dataset/dataset_TM_train.py b/spaces/vumichien/Generate_human_motion/VQ-Trans/dataset/dataset_TM_train.py
deleted file mode 100644
index 0b0223effb01c1cf57fa6b2b6fb8d9d01b83f84a..0000000000000000000000000000000000000000
--- a/spaces/vumichien/Generate_human_motion/VQ-Trans/dataset/dataset_TM_train.py
+++ /dev/null
@@ -1,161 +0,0 @@
-import torch
-from torch.utils import data
-import numpy as np
-from os.path import join as pjoin
-import random
-import codecs as cs
-from tqdm import tqdm
-import utils.paramUtil as paramUtil
-from torch.utils.data._utils.collate import default_collate
-
-
-def collate_fn(batch):
- batch.sort(key=lambda x: x[3], reverse=True)
- return default_collate(batch)
-
-
-'''For use in training the text-to-motion generative model'''
-class Text2MotionDataset(data.Dataset):
- def __init__(self, dataset_name, feat_bias = 5, unit_length = 4, codebook_size = 1024, tokenizer_name=None):
-
- self.max_length = 64
- self.pointer = 0
- self.dataset_name = dataset_name
-
- self.unit_length = unit_length
- # self.mot_start_idx = codebook_size
- self.mot_end_idx = codebook_size
- self.mot_pad_idx = codebook_size + 1
- if dataset_name == 't2m':
- self.data_root = './dataset/HumanML3D'
- self.motion_dir = pjoin(self.data_root, 'new_joint_vecs')
- self.text_dir = pjoin(self.data_root, 'texts')
- self.joints_num = 22
- radius = 4
- fps = 20
- self.max_motion_length = 26 if unit_length == 8 else 51
- dim_pose = 263
- kinematic_chain = paramUtil.t2m_kinematic_chain
- elif dataset_name == 'kit':
- self.data_root = './dataset/KIT-ML'
- self.motion_dir = pjoin(self.data_root, 'new_joint_vecs')
- self.text_dir = pjoin(self.data_root, 'texts')
- self.joints_num = 21
- radius = 240 * 8
- fps = 12.5
- dim_pose = 251
- self.max_motion_length = 26 if unit_length == 8 else 51
- kinematic_chain = paramUtil.kit_kinematic_chain
-
- split_file = pjoin(self.data_root, 'train.txt')
-
-
- id_list = []
- with cs.open(split_file, 'r') as f:
- for line in f.readlines():
- id_list.append(line.strip())
-
- new_name_list = []
- data_dict = {}
- for name in tqdm(id_list):
- try:
- m_token_list = np.load(pjoin(self.data_root, tokenizer_name, '%s.npy'%name))
-
- # Read text
- with cs.open(pjoin(self.text_dir, name + '.txt')) as f:
- text_data = []
- flag = False
- lines = f.readlines()
-
- for line in lines:
- try:
- text_dict = {}
- line_split = line.strip().split('#')
- caption = line_split[0]
- t_tokens = line_split[1].split(' ')
- f_tag = float(line_split[2])
- to_tag = float(line_split[3])
- f_tag = 0.0 if np.isnan(f_tag) else f_tag
- to_tag = 0.0 if np.isnan(to_tag) else to_tag
-
- text_dict['caption'] = caption
- text_dict['tokens'] = t_tokens
- if f_tag == 0.0 and to_tag == 0.0:
- flag = True
- text_data.append(text_dict)
- else:
- m_token_list_new = [tokens[int(f_tag*fps/unit_length) : int(to_tag*fps/unit_length)] for tokens in m_token_list if int(f_tag*fps/unit_length) < int(to_tag*fps/unit_length)]
-
- if len(m_token_list_new) == 0:
- continue
- new_name = '%s_%f_%f'%(name, f_tag, to_tag)
-
- data_dict[new_name] = {'m_token_list': m_token_list_new,
- 'text':[text_dict]}
- new_name_list.append(new_name)
- except:
- pass
-
- if flag:
- data_dict[name] = {'m_token_list': m_token_list,
- 'text':text_data}
- new_name_list.append(name)
- except:
- pass
- self.data_dict = data_dict
- self.name_list = new_name_list
-
- def __len__(self):
- return len(self.data_dict)
-
- def __getitem__(self, item):
- data = self.data_dict[self.name_list[item]]
- m_token_list, text_list = data['m_token_list'], data['text']
- m_tokens = random.choice(m_token_list)
-
- text_data = random.choice(text_list)
- caption= text_data['caption']
-
-
- coin = np.random.choice([False, False, True])
- # print(len(m_tokens))
- if coin:
- # drop one token at the head or tail
- coin2 = np.random.choice([True, False])
- if coin2:
- m_tokens = m_tokens[:-1]
- else:
- m_tokens = m_tokens[1:]
- m_tokens_len = m_tokens.shape[0]
-
- if m_tokens_len+1 < self.max_motion_length:
- m_tokens = np.concatenate([m_tokens, np.ones((1), dtype=int) * self.mot_end_idx, np.ones((self.max_motion_length-1-m_tokens_len), dtype=int) * self.mot_pad_idx], axis=0)
- else:
- m_tokens = np.concatenate([m_tokens, np.ones((1), dtype=int) * self.mot_end_idx], axis=0)
-
- return caption, m_tokens.reshape(-1), m_tokens_len
-
-
-
-
-def DATALoader(dataset_name,
- batch_size, codebook_size, tokenizer_name, unit_length=4,
- num_workers = 8) :
-
- train_loader = torch.utils.data.DataLoader(Text2MotionDataset(dataset_name, codebook_size = codebook_size, tokenizer_name = tokenizer_name, unit_length=unit_length),
- batch_size,
- shuffle=True,
- num_workers=num_workers,
- #collate_fn=collate_fn,
- drop_last = True)
-
-
- return train_loader
-
-
-def cycle(iterable):
- while True:
- for x in iterable:
- yield x
-
-
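
The padding scheme in __getitem__ above reserves two extra vocabulary ids beyond the codebook: one end-of-motion token and one pad token. A standalone sketch of the same arithmetic, with illustrative values:

import numpy as np

codebook_size = 1024
mot_end_idx = codebook_size        # end-of-motion token id
mot_pad_idx = codebook_size + 1    # padding token id
max_motion_length = 51             # the unit_length == 4 case above

m_tokens = np.array([3, 17, 512])  # a short motion-token sequence (made up)
m_tokens_len = m_tokens.shape[0]

if m_tokens_len + 1 < max_motion_length:
    m_tokens = np.concatenate([
        m_tokens,
        np.ones((1,), dtype=int) * mot_end_idx,
        np.ones((max_motion_length - 1 - m_tokens_len,), dtype=int) * mot_pad_idx,
    ])
else:
    m_tokens = np.concatenate([m_tokens, np.ones((1,), dtype=int) * mot_end_idx])

assert m_tokens.shape[0] == max_motion_length  # 3 tokens + 1 end + 47 pads
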
diff --git a/spaces/wadhwani-ai/KKMS-Smart-Search-Demo/src/langchain_utils.py b/spaces/wadhwani-ai/KKMS-Smart-Search-Demo/src/langchain_utils.py
deleted file mode 100644
index 90ac08684e1adc36d9ff5b668f10b0e8c7d3883a..0000000000000000000000000000000000000000
--- a/spaces/wadhwani-ai/KKMS-Smart-Search-Demo/src/langchain_utils.py
+++ /dev/null
@@ -1,1010 +0,0 @@
-import src.constants as constants_utils
-import src.data_loader as data_loader_utils
-import src.utils as utils
-
-from langchain.llms import OpenAI
-from langchain.text_splitter import CharacterTextSplitter, RecursiveCharacterTextSplitter
-from langchain.chains.summarize import load_summarize_chain
-from langchain.docstore.document import Document
-from langchain.embeddings.openai import OpenAIEmbeddings
-import openai
-from langchain.vectorstores import Chroma
-import chromadb
-from langchain.chains.question_answering import load_qa_chain
-from langchain.chains.qa_with_sources import load_qa_with_sources_chain
-from langchain.prompts import PromptTemplate
-from llama_index import GPTVectorStoreIndex, GPTListIndex
-from langchain.vectorstores import FAISS
-
-import pickle
-import shutil
-from typing import Dict, List, Optional
-import pandas as pd
-from datetime import datetime
-import os
-os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY')
-
-import logging
-logging.basicConfig(
- format="%(asctime)s %(levelname)s [%(name)s] %(message)s",
- level=logging.INFO,
- datefmt="%Y-%m-%d %H:%M:%S"
-)
-logger = logging.getLogger(__name__)
-
-import warnings
-warnings.filterwarnings('ignore')
-
-
-
-class LANGCHAIN_UTILS:
- def __init__(self,
- index_type=constants_utils.INDEX_TYPE,
- load_from_existing_index_store=constants_utils.LOAD_FROM_EXISTING_INDEX_STORE
- ):
- self.index_type = index_type
- self.load_from_existing_index_store = load_from_existing_index_store
-
- # Temporary index in the current context for the doc_type in consideration
- self.index = None
- # Master index containing data from multiple sources (PDF, online PDF, text files, URLs, etc.). It is updated on demand when new files/URLs are uploaded, without application downtime.
- self.master_index = None
-
- # Data source wise index
- self.index_category_doc_type_wise_index = dict(
- (ic, dict(
- (ds, None) for ds in list(constants_utils.DATA_SOURCES.values()))
- ) for ic in constants_utils.INDEX_CATEGORY)
- # Initialize master index for each INDEX_CATEGORY
- for ic in constants_utils.INDEX_CATEGORY:
- self.index_category_doc_type_wise_index[ic][constants_utils.INDEX_CATEGORY_MASTER_INDEX_DOC_TYPE] = None
-
- # Data loaded as a Document format in the current context for the doc_type in consideration
- self.documents = []
-
- # Instantiate data_loader_utils class object
- self.data_loader_utils_obj = data_loader_utils.DATA_LOADER()
- # Instantiate UTILS class object
- self.utils_obj = utils.UTILS()
-
- # Initialize embeddings (we can also use other embeddings)
- self.embeddings = OpenAIEmbeddings(openai_api_key=os.getenv('OPENAI_API_KEY'))
- # Initialize LLM model
- self.llm = OpenAI(
- temperature=0,
- max_tokens=constants_utils.LLM_RESPONSE_MAX_TOKENS,
- model_name=constants_utils.LLM_BASE_MODEL_NAME
- )
-
- # Global history for AgGPT widget
- self.global_history = [
- {
- "role": "assistant",
- "content": "Hi, I am a chatbot. I can converse in English. I can answer your questions about farming in India. Ask me anything!"
- }
- ]
-
- # Index category - doc_type wise data sources to display in widget
- self.index_category_doc_type_wise_data_sources = {}
-
-
- def user(
- self,
- user_message,
- history
- ):
- history = history + [[user_message, None]]
- self.global_history = self.global_history + [{"role": "user", "content": user_message}]
- return "", history
-
-
- def get_chatgpt_response(
- self,
- history
- ):
- output = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
- history.append({"role": "assistant", "content": output.choices[0].message.content})
- return output.choices[0].message.content, history
-
-
- def bot(
- self,
- history
- ):
- response, self.global_history = self.get_chatgpt_response(self.global_history)
- history[-1][1] = response
- return history
-
-
- def clear_history(
- self,
- lang="English"
- ):
- self.global_history = [{"role": "assistant", "content": "Hi, I am a chatbot. I can converse in {}. I can answer your questions about farming in India. Ask me anything!".format(lang)}]
- return None
-
-
- def generate_prompt_template(
- self,
- prompt_type,
- input_variables
- ):
- prompt_template = ''
-
- if prompt_type == 'summarize':
- prompt_template = """Write a concise summary of the following:
-
- {text}
-
- SUMMARIZE IN ENGLISH:"""
-
- elif prompt_type == 'qa':
- prompt_template = """You are a helpful AI assistant. Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer. If the question is not related to the context, politely respond that you are tuned to only answer questions that are related to the context.
-
- {context}
-
- Question: {question}
-
- Answer in English:"""
-
- # Works well, but the answer gets truncated
- prompt_template = """You are a helpful AI assistant. Use the following pieces of context to answer the question at the end. Start the answer by giving a short summary and write the answer starting with Here are some of the key points:. Write each sentence separately with numbering. If you don't know the answer, just say that you don't know, don't try to make up an answer. If the question is not related to the context, politely respond that you are tuned to only answer questions that are related to the context.
-
- {context}
-
- Question: {question}
-
- Answer in English:"""
-
-
- prompt_template = """You are a helpful AI assistant. Use the following pieces of context to answer the question comprehensively at the end. Start the answer by giving short summary and write the answer starting with Here are some of the key points:. Write each sentence separately with numbering. If you don't know the answer, just say that you don't know, don't try to make up an answer. If the question is not related to the context, politely respond that you are tuned to only answer questions that are related to the context.
-
- {context}
-
- Question: {question}
-
- Answer in English:"""
-
- elif prompt_type == 'weather':
- prompt_template = """
- What would be the weather based on the below data:
-
- {text}
- """
-
- PROMPT = PromptTemplate(template=prompt_template, input_variables=input_variables)
- return PROMPT
-
-
- def get_textual_summary(
- self,
- text,
- chain_type="stuff",
- custom_prompt=True,
- prompt_type='summarize'
- ):
- texts = [text]
- docs = [Document(page_content=t) for t in texts[:3]]
-
- if custom_prompt:
- PROMPT = self.generate_prompt_template(
- prompt_type=prompt_type,
- input_variables=["text"]
- )
- chain = load_summarize_chain(self.llm, chain_type=chain_type, prompt=PROMPT)
- else:
- chain = load_summarize_chain(self.llm, chain_type=chain_type)
-
- text_summary = chain.run(docs)
- return text_summary
-
-
- def get_weather_forecast_summary(
- self,
- text,
- chain_type="stuff"
- ):
- text = f"""
- What would be the weather based on the below data:
- {text}
-
- Give a simple response, without technical numbers, that can be explained to a human.
- """
- texts = [text]
- docs = [Document(page_content=t) for t in texts[:3]]
-
- chain = load_summarize_chain(self.llm, chain_type=chain_type)
- text_summary = chain.run(docs)
-
- return text_summary
-
-
- def get_answer_from_para(
- self,
- para,
- question,
- chain_type="stuff",
- custom_prompt=True,
- prompt_type='qa'
- ):
- # Prepare data (Split paragraph into chunks of small documents)
- text_splitter = CharacterTextSplitter(
- chunk_size=constants_utils.TEXT_SPLITTER_CHUNK_SIZE,
- chunk_overlap=constants_utils.TEXT_SPLITTER_CHUNK_OVERLAP,
- separator=constants_utils.TEXT_SPLITTER_SEPARATOR
- )
- texts = text_splitter.split_text(para)
-
- if self.index_type == 'FAISS':
- # Find similar docs that are relevant to the question
- docsearch = FAISS.from_texts(
- texts, self.embeddings,
- metadatas=[{"source": str(i+1)} for i in range(len(texts))]
- )
-
- elif self.index_type == 'Chroma':
- # Find similar docs that are relevant to the question
- docsearch = Chroma.from_texts(
- texts, self.embeddings,
- metadatas=[{"source": str(i+1)} for i in range(len(texts))]
- )
-
- # Search for the similar docs
- docs = docsearch.similarity_search(question, k=constants_utils.ANSWER_SIMILARITY_TOP_K)
-
- # Create a Chain for question answering
- if custom_prompt:
- PROMPT = self.generate_prompt_template(
- prompt_type=prompt_type,
- input_variables=["context", "question"]
- )
- chain = load_qa_chain(self.llm, chain_type=chain_type, prompt=PROMPT)
- else:
- # chain = load_qa_with_sources_chain(self.llm, chain_type=chain_type)
- chain = load_qa_chain(self.llm, chain_type=chain_type)
- # chain.run(input_documents=docs, question=question)
-
- out_dict = chain({"input_documents": docs, "question": question}, return_only_outputs=True)
- return out_dict['output_text']
-
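- # Usage sketch (not part of the original file), assuming an instance `kb` with
- # `kb.llm`, `kb.embeddings`, and `kb.index_type` set to 'FAISS' or 'Chroma';
- # the paragraph and question below are hypothetical:
- #
- # answer = kb.get_answer_from_para(
- #     para="Wheat grows best in well-drained loamy soil...",
- #     question="Which soil type suits wheat?"
- # )
- # print(answer)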
-
- def load_documents(
- self,
- doc_type,
- doc_filepath='',
- urls=[]
- ):
- """
- Load data in Document format of the given doc_type from either doc_filepath or list of urls.
- It can load multiple files/urls in one shot.
-
- Args:
- doc_type: can be any of [pdf, online_pdf, urls, textfile]
- doc_filepath: can be a directory or a filepath
- urls: list of urls
- """
-
- logger.info(f'Loading {doc_type} data into Documents format')
-
- if doc_type == 'pdf':
- # Load data from PDFs stored in local directory
- self.documents.extend(
- self.data_loader_utils_obj.load_documents_from_pdf(
- doc_filepath=doc_filepath,
- doc_type=doc_type
- ))
-
- elif doc_type == 'online_pdf':
- # Load data from online PDFs fetched from the given URLs
- self.documents.extend(
- self.data_loader_utils_obj.load_documents_from_pdf(
- urls=urls,
- doc_type=doc_type
- ))
-
- elif doc_type == 'urls':
- # Load data from URLs
- self.documents.extend(
- self.data_loader_utils_obj.load_documents_from_urls(
- urls=urls,
- doc_type=doc_type
- ))
-
- elif doc_type == 'textfile':
- # Load data from text files & Convert texts into Document format
- self.documents.extend(
- self.data_loader_utils_obj.load_documents_from_text(
- doc_filepath=doc_filepath,
- doc_type=doc_type
- ))
-
- elif doc_type == 'directory':
- # Load data from local directory
- self.documents.extend(
- self.data_loader_utils_obj.load_documents_from_directory(
- doc_filepath=doc_filepath,
- doc_type=doc_type
- ))
-
- logger.info(f'{doc_type} data into Documents format loaded successfully!')
-
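- # Usage sketch (not part of the original file); the paths and URLs below are
- # hypothetical and `kb` is assumed to be an instance of this class:
- #
- # kb.load_documents(doc_type='pdf', doc_filepath='data/crops/')
- # kb.load_documents(doc_type='textfile', doc_filepath='data/crops/notes.txt')
- # kb.load_documents(doc_type='urls', urls=['https://example.com/guide'])
- # kb.load_documents(doc_type='online_pdf', urls=['https://example.com/doc.pdf'])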
-
- def create_index(
- self
- ):
- if not self.documents:
- logger.warning(f'Empty documents. Index cannot be created!')
- return None
-
- logger.info(f'Creating index')
-
- text_splitter = CharacterTextSplitter(
- chunk_size=constants_utils.TEXT_SPLITTER_CHUNK_SIZE,
- chunk_overlap=constants_utils.TEXT_SPLITTER_CHUNK_OVERLAP,
- separator=constants_utils.TEXT_SPLITTER_SEPARATOR
- )
- self.documents = text_splitter.split_documents(self.documents)
-
- ############## Build the Vector store for docs ##############
- # Vector store using Facebook AI Similarity Search
- if self.index_type == 'FAISS':
- self.index = FAISS.from_documents(
- self.documents,
- self.embeddings
- )
-
- # Vector store using Chroma DB
- elif self.index_type == 'Chroma':
- if not os.path.exists(self.index_filepath):
- os.makedirs(self.index_filepath)
-
- self.index = Chroma.from_documents(
- self.documents,
- self.embeddings,
- persist_directory=self.index_filepath
- )
-
- # Vector store using GPT vector index
- elif self.index_type == 'GPTVectorStoreIndex':
- self.index = GPTVectorStoreIndex.from_documents(self.documents)
-
- logger.info(f'Index created successfully!')
- return self.index
-
-
- def get_index_filepath(
- self,
- index_category,
- doc_type
- ):
- if doc_type == 'master':
- self.index_filepath = os.path.join(
- constants_utils.OUTPUT_PATH, f'index_{index_category}') if self.index_type in ['FAISS', 'Chroma'] else os.path.join(constants_utils.OUTPUT_PATH, f'index_{index_category}.json')
- else:
- self.index_filepath = os.path.join(
- constants_utils.OUTPUT_PATH, f'index_{index_category}', f'index_{doc_type}') if self.index_type in ['FAISS', 'Chroma'] else os.path.join(constants_utils.OUTPUT_PATH, f'index_{index_category}', f'index_{doc_type}.json')
-
- return self.index_filepath
-
-
- def load_master_doctype_indices_for_index_category(
- self,
- index_category
- ):
- logger.info(f'Loading master and doc_type indices for: {index_category}')
-
- # Set master index of index_category = None
- self.index_category_doc_type_wise_index[index_category][constants_utils.INDEX_CATEGORY_MASTER_INDEX_DOC_TYPE] = None
-
- for doc_type in self.index_category_doc_type_wise_index[index_category].keys():
- self.index = None
- self.index_filepath = self.get_index_filepath(
- index_category=index_category,
- doc_type=doc_type
- )
- self.load_index()
- # Set master/doc_type index
- self.index_category_doc_type_wise_index[index_category][doc_type] = self.index
-
- logger.info(f'Master and doc_type indices for: {index_category} loaded successfully!')
-
-
- def load_create_index(
- self
- ):
- logger.info(f'Loading/Creating index for each index_category')
-
- for index_category in constants_utils.INDEX_CATEGORY:
- # Load master index_category index if self.load_from_existing_index_store == True
- if self.load_from_existing_index_store:
- self.load_master_doctype_indices_for_index_category(index_category)
-
- # If the master index could not be loaded for any reason, create a new index/vector store
- if not self.index_category_doc_type_wise_index[index_category][constants_utils.INDEX_CATEGORY_MASTER_INDEX_DOC_TYPE]:
- logger.info(f'Creating a new Vector/Index store for: {index_category}')
-
- doc_filepath = os.path.join(constants_utils.DATA_PATH, index_category)
- urls = []
-
- # Build the Vector/Index store
- for doc_type in list(constants_utils.DATA_SOURCES.values()):
- logger.info(f'Creating a new Vector/Index store for: {index_category} from data source: {doc_type}')
-
- index = None
- if doc_type in ['pdf', 'textfile']:
- index = self.create_store_index(
- doc_type=doc_type,
- doc_filepath=doc_filepath,
- index_category=index_category
- )
- else:
- # Build the Vector/Index store from web urls
- index = self.create_store_index(
- doc_type=doc_type,
- urls=urls,
- index_category=index_category
- )
-
- if index:
- self.index_category_doc_type_wise_index[index_category][doc_type] = index
-
- logger.info(f'New Vector/Index store for: {index_category} from data source: {doc_type} created successfully!')
-
- logger.info(f'New Vector/Index store for: {index_category} created successfully!')
-
- # Merge index of each doc_type into a single index_category
- self.merge_store_master_index(
- index_category=index_category
- )
-
- logger.info(f'Index for each index_category loaded successfully!')
-
-
- def create_store_index(
- self,
- doc_type='pdf',
- doc_filepath=constants_utils.DATA_PATH,
- urls=[],
- index_category=constants_utils.INDEX_CATEGORY[0]
- ):
- logger.info(f'Creating and storing {doc_type} index')
-
- self.documents = []
- self.index = None
-
- self.index_filepath = self.get_index_filepath(
- index_category=index_category,
- doc_type=doc_type
- )
-
- # Delete the old index file
- shutil.rmtree(self.index_filepath, ignore_errors=True)
- logger.info(f'{self.index_filepath} deleted.')
-
- # Load data in Documents format that can be consumed for index creation
- self.load_documents(
- doc_type,
- doc_filepath,
- urls
- )
-
- # Create the index from documents for search/retrieval
- self.index = self.create_index()
-
- # Store index
- self.store_index(
- index=self.index,
- index_filepath=self.index_filepath
- )
-
- logger.info(f'{doc_type} index created and stored successfully!')
- # Return the index of the given doc_type (an index for a single doc_type). Indices from multiple doc_types are merged into the master index later so that queries can run against a single index.
- return self.index
-
-
- def store_index(
- self,
- index,
- index_filepath
- ):
- if not index:
- logger.warning(f'Cannot write an empty index to: {index_filepath}!')
- return
-
- logger.info(f'Saving index to: {index_filepath}')
-
- # FAISS and Chroma persist the index to a directory; create it if missing
- if self.index_type in ['FAISS', 'Chroma'] and not os.path.exists(index_filepath):
- os.makedirs(index_filepath)
-
- if self.index_type == 'FAISS':
- index.save_local(index_filepath)
-
- elif self.index_type == 'Chroma':
- index.persist()
-
- elif self.index_type == 'GPTVectorStoreIndex':
- index.save_to_disk(index_filepath)
-
- elif self.index_type == 'pickle':
- with open(index_filepath, "wb") as f:
- pickle.dump(index, f)
-
- logger.info(f'Index saved to: {index_filepath} successfully!')
-
-
- def load_index(
- self
- ):
- logger.info(f'Loading index from: {self.index_filepath}')
-
- if not os.path.exists(self.index_filepath):
- logger.warning(f"Cannot load index from {self.index_filepath} as the path doest not exist!")
- return
-
- if self.index_type == 'FAISS':
- self.index = FAISS.load_local(self.index_filepath, self.embeddings)
-
- elif self.index_type == 'Chroma':
- self.index = Chroma(
- persist_directory=self.index_filepath,
- embedding_function=self.embeddings
- )
-
- elif self.index_type == 'GPTVectorStoreIndex':
- self.index = GPTVectorStoreIndex.load_from_disk(self.index_filepath)
-
- elif self.index_type == 'pickle':
- with open(self.index_filepath, "rb") as f:
- self.index = pickle.load(f)
-
- logger.info(f'Index loaded from: {self.index_filepath} successfully!')
-
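- # Round-trip sketch (not part of the original file): with the FAISS backend,
- # store_index()/load_index() reduce to FAISS.save_local/load_local. The path and
- # instance `kb` below are hypothetical:
- #
- # kb.store_index(index=kb.index, index_filepath='output/index_demo')
- # kb.index_filepath = 'output/index_demo'
- # kb.load_index()  # repopulates kb.index from disk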
-
- def convert_text_to_documents(
- self,
- text_list=[]
- ):
- """
- Converts a list of text data to the Document format that can be fed to the GPT API to build the Vector store
- """
-
- from llama_index import Document
- documents = [Document(t) for t in text_list]
- return documents
-
-
- def merge_documents_from_different_sources(
- self,
- doc_documents,
- url_documents
- ):
- # Build the Vector store for docs
- doc_index = GPTVectorStoreIndex.from_documents(doc_documents)
- # Build the Vector store for URLs
- url_index = GPTVectorStoreIndex.from_documents(url_documents)
-
- # Set summary of each index
- doc_index.set_text("index_from_docs")
- url_index.set_text("index_from_urls")
-
- # Merge index of different data sources
- index = GPTListIndex([doc_index, url_index])
-
- return index
-
-
- def merge_store_master_index(
- self,
- index_category
- ):
- """
- Merge multiple doc_type indices into a single master index. Query/search is performed on this merged index.
-
- Args:
- index_category: index_category (can be any of: [crops, fruits, pest_management, govt_policy, soil, etc.])
- """
- logger.info(f'Merging doc_type indices into a master index for: {index_category}')
-
- self.index_category_doc_type_wise_index[index_category][constants_utils.INDEX_CATEGORY_MASTER_INDEX_DOC_TYPE] = None
- doc_type_indices = self.index_category_doc_type_wise_index[index_category]
-
- if self.index_type == 'FAISS':
- for doc_type, index in doc_type_indices.items():
- if doc_type == constants_utils.INDEX_CATEGORY_MASTER_INDEX_DOC_TYPE:
- # Only merge the non-master doc_type_indices
- continue
- if not index or not isinstance(index, FAISS):
- logger.warning(f'{doc_type} index to be merged is not an instance of type langchain.vectorstores.faiss.FAISS')
- continue
- if not self.index_category_doc_type_wise_index[index_category][constants_utils.INDEX_CATEGORY_MASTER_INDEX_DOC_TYPE]:
- self.index_category_doc_type_wise_index[index_category][constants_utils.INDEX_CATEGORY_MASTER_INDEX_DOC_TYPE] = index
- else:
- self.index_category_doc_type_wise_index[index_category][constants_utils.INDEX_CATEGORY_MASTER_INDEX_DOC_TYPE].merge_from(index)
-
- elif self.index_type == 'Chroma':
- for doc_type, index in doc_type_indices.items():
- if not index or not isinstance(index, Chroma):
- logger.warning(f'{doc_type} index to be merged is not an instance of type langchain.vectorstores.Chroma')
- continue
- raise NotImplementedError
-
- elif self.index_type == 'GPTVectorStoreIndex':
- for doc_type, index in doc_type_indices.items():
- if not index or not isinstance(index, GPTVectorStoreIndex):
- logger.warning(f'{doc_type} index to be merged is not an instance of type llama_index.GPTVectorStoreIndex')
- continue
- raise NotImplementedError
-
- # Store index_category master index
- self.store_index(
- index=self.index_category_doc_type_wise_index[index_category][constants_utils.INDEX_CATEGORY_MASTER_INDEX_DOC_TYPE],
- index_filepath=self.get_index_filepath(
- index_category=index_category,
- doc_type=constants_utils.INDEX_CATEGORY_MASTER_INDEX_DOC_TYPE
- )
- )
-
- logger.info(f'doc_type indices for: {index_category} merged into the master index successfully!')
-
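- # Sketch of the merge primitive used above (not part of the original file):
- # langchain's FAISS store exposes merge_from, which the FAISS branch relies on.
- # The texts and `kb.embeddings` below are illustrative:
- #
- # idx_a = FAISS.from_texts(["doc about wheat"], kb.embeddings)
- # idx_b = FAISS.from_texts(["doc about rice"], kb.embeddings)
- # idx_a.merge_from(idx_b)  # idx_a now answers queries over both corpora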
-
- def init_chromadb(self):
- logger.info('Initializing Chroma DB')
-
- if not os.path.exists(self.index_filepath):
- os.makedirs(self.index_filepath)
-
- client_settings = chromadb.config.Settings(
- chroma_db_impl="duckdb+parquet",
- persist_directory=self.index_filepath,
- anonymized_telemetry=False
- )
-
- self.index = Chroma(
- collection_name="langchain_store",
- embedding_function=self.embeddings,
- client_settings=client_settings,
- persist_directory=self.index_filepath,
- )
-
- logger.info('Chroma DB initialized successfully!')
-
-
- def query_chromadb(
- self,
- question,
- k=1
- ):
- return self.index.similarity_search(query=question, k=k)
-
-
- def query(self,
- question,
- question_category,
- mode=constants_utils.MODE,
- response_mode=constants_utils.RESPONSE_MODE,
- similarity_top_k=constants_utils.SIMILARITY_TOP_K,
- required_keywords=[],
- exclude_keywords=[],
- verbose=False
- ):
- '''
- Args:
- mode: can be any of [default, embedding]
- response_mode: can be any of [default, compact, tree_summarize]
- '''
- logger.info(f'question category: {question_category}; question: {question}')
-
- response = None
-
- # Get the index of the given question_category
- index = self.index_category_doc_type_wise_index[question_category][constants_utils.INDEX_CATEGORY_MASTER_INDEX_DOC_TYPE]
-
- if not index:
- logger.error(f'Index for {question_category} not found! That means no PDFs, text files, or URLs have been ingested and indexed so far. Ingest new data for {question_category} and then query again.')
- return response
-
- if self.index_type == 'FAISS':
- response = index.similarity_search(
- question,
- k=similarity_top_k
- )
-
- elif self.index_type == 'Chroma':
- response = index.similarity_search(
- question,
- k=similarity_top_k
- )
-
- elif self.index_type == 'GPTVectorStoreIndex':
- # Querying the index
- response = index.query(
- question,
- mode=mode,
- response_mode=response_mode,
- similarity_top_k=similarity_top_k,
- required_keywords=required_keywords,
- exclude_keywords=exclude_keywords,
- verbose=verbose
- )
-
- return response
-
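- # Usage sketch (not part of the original file); 'crops' is one of the example
- # index categories and `kb` an instance with indices already loaded. For the
- # FAISS/Chroma backends the return value is a list of Documents:
- #
- # docs = kb.query(question="How often should paddy be irrigated?", question_category='crops')
- # for doc in docs or []:
- #     print(doc.page_content)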
-
- def load_uploaded_documents(
- self,
- doc_type,
- files_or_urls
- ):
- logger.info(f'Loading uploaded documents from: {doc_type}')
-
- if doc_type == 'pdf':
- if not isinstance(files_or_urls, list):
- files_or_urls = [files_or_urls]
- for pdf in files_or_urls:
- if not pdf.name.endswith('.pdf'):
- logger.warning(f'Found a file that is not in .pdf format. Cannot load {pdf.name}!')
- continue
- logger.info(f'Loading PDF from: {pdf.name}')
- # Load PDF as documents
- self.documents.extend(
- self.data_loader_utils_obj.load_documents_from_pdf(
- doc_filepath=pdf.name,
- doc_type=doc_type
- )
- )
-
- elif doc_type == 'textfile':
- if not isinstance(files_or_urls, list):
- files_or_urls = [files_or_urls]
- for text_file in files_or_urls:
- if not text_file.name.endswith('.txt'):
- logger.warning(f'Found a file that is not in .txt format. Cannot load {text_file.name}!')
- continue
- logger.info(f'Loading textfile from: {text_file.name}')
- # Load textfile as documents
- self.documents.extend(
- self.data_loader_utils_obj.load_documents_from_text(
- doc_filepath=text_file.name,
- doc_type=doc_type
- )
- )
-
- elif doc_type == 'online_pdf':
- files_or_urls = self.utils_obj.split_text(files_or_urls)
- # Load online_pdfs as documents
- self.documents.extend(
- self.data_loader_utils_obj.load_documents_from_pdf(
- doc_type=doc_type,
- urls=files_or_urls
- )
- )
-
- elif doc_type == 'urls':
- files_or_urls = self.utils_obj.split_text(files_or_urls)
- # Load URLs as documents
- self.documents.extend(
- self.data_loader_utils_obj.load_documents_from_urls(
- doc_type=doc_type,
- urls=files_or_urls
- )
- )
-
- logger.info(f'Uploaded documents from: {doc_type} loaded successfully!')
-
-
- def upload_data(
- self,
- doc_type,
- files_or_urls,
- index_category
- ):
- logger.info(f'Uploading data for: {index_category}; from: {doc_type}')
-
- self.documents = []
- self.index = None
-
- # Create documents of the uploaded files
- self.load_uploaded_documents(
- doc_type,
- files_or_urls
- )
-
- # Create the index from documents for search/retrieval
- self.index = self.create_index()
-
- # Update the existing index with the newly added data
- self.upsert_index(
- doc_type=doc_type,
- index_category=index_category
- )
-
- logger.info(f'{index_category}-{doc_type} data uploaded successfully!')
-
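- # End-to-end upload sketch (not part of the original file); the upload handle is
- # a hypothetical stand-in for what a Gradio file widget passes in:
- #
- # class _Uploaded: name = 'reports/soil_survey.pdf'
- # kb.upload_data(doc_type='pdf', files_or_urls=[_Uploaded()], index_category='soil')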
-
- def upsert_index(
- self,
- doc_type,
- index_category
- ):
- """
- Updates the index of the given index_category-doc_type, if present.
- Creates a new index if index_category-doc_type index is not present.
- Also updates the master index for the given index_category.
- """
- if not self.index:
- return
-
- logger.info(f'Upserting index for: {index_category}-{doc_type}')
-
- if not self.index_category_doc_type_wise_index.get(index_category, None):
- """
- If the index_category index does not exist
- Steps:
- - set index_category index
- - set doc_type index
- - Store new index_category index as master
- - Store new doc_type index
- """
- logger.info(f'Master index does not exist for: {index_category}. A new {index_category} master index & {doc_type} index will be created.')
- self.index_category_doc_type_wise_index.setdefault(index_category, {})
- # Set a master index only if it doesn't exist. Otherwise keep its value as-is.
- self.index_category_doc_type_wise_index[index_category][constants_utils.INDEX_CATEGORY_MASTER_INDEX_DOC_TYPE] = self.index
- # Set an index for the given doc_type only if it doesn't exist. Otherwise keep its value as-is.
- self.index_category_doc_type_wise_index[index_category][doc_type] = self.index
-
- elif not self.index_category_doc_type_wise_index[index_category].get(doc_type, None):
- """
- If the doc_type index does not exist
- Steps:
- - set doc_type index
- - if master index does not exist for the index_category - set a master index
- - if master index exists - update the master index to merge it with doc_type index
- - Store new/updated index_category index as master
- - Store new doc_type index
- """
- logger.info(f'{doc_type} index does not exist for: {index_category}. A new {doc_type} index will be created.')
- # create doc_type index
- self.index_category_doc_type_wise_index[index_category][doc_type] = self.index
- # if master index does not exist for the index_category - create a master index
- if not self.index_category_doc_type_wise_index[index_category].get(constants_utils.INDEX_CATEGORY_MASTER_INDEX_DOC_TYPE, None):
- logger.info(f'Master index does not exist for: {index_category}-{doc_type}. A new master index will be created.')
- self.index_category_doc_type_wise_index[index_category][constants_utils.INDEX_CATEGORY_MASTER_INDEX_DOC_TYPE] = self.index
-
- else:
- """
- If the new document is of the existing index_category & doc_type
- Steps:
- - if master index does not exist for the index_category - set a master index
- - if master index exists - update the master index to merge it with doc_type index
- - update the doc_type index
- - Store updated index_category index as master
- - Store updated doc_type index
- """
- # if master index does not exist for the index_category - create a master index
- if not self.index_category_doc_type_wise_index[index_category].get(constants_utils.INDEX_CATEGORY_MASTER_INDEX_DOC_TYPE, None):
- logger.info(f'Master index does not exist for: {index_category}-{doc_type}. A new master index will be created.')
- self.index_category_doc_type_wise_index[index_category][constants_utils.INDEX_CATEGORY_MASTER_INDEX_DOC_TYPE] = self.index
- # Merge new self.index with existing doc_type index
- self.index_category_doc_type_wise_index[index_category][doc_type].merge_from(self.index)
- # Update self.index to store/overwrite the existing index with the updated index
- self.index = self.index_category_doc_type_wise_index[index_category][doc_type]
-
-
- # Store newly created/merged index
- self.store_index(
- index=self.index,
- index_filepath=self.get_index_filepath(
- index_category=index_category,
- doc_type=doc_type
- )
- )
-
- # Merge and store master index for index_category
- self.merge_store_master_index(
- index_category=index_category
- )
-
- logger.info(f'Index for: {index_category}-{doc_type} upserted successfully!')
-
-
- def delete_index(
- self,
- ids: Optional[List[str]] = None,
- # filter: Optional[DocumentMetadataFilter] = None,
- delete_all: Optional[bool] = None,
- ):
- """
- Removes vectors by ids, filter, or everything in the datastore.
- Multiple parameters can be used at once.
- Returns whether the operation was successful.
- """
- logger.info(f'Deleting index')
-
- raise NotImplementedError
-
- # NOTE: once implemented, a specific Chroma collection could be deleted with:
- # self.index.delete_collection()
- # self.index.persist()
-
- # Or the persist directory could be removed entirely:
- # shutil.rmtree(self.index_filepath, ignore_errors=True)
-
-
- def get_index_category_wise_data_sources(
- self
- ):
- # self.index_category_doc_type_wise_data_sources
- for index_category, doc_type_indices in self.index_category_doc_type_wise_index.items():
- self.index_category_doc_type_wise_data_sources.setdefault(index_category, {})
- for dt in doc_type_indices.keys():
- if dt == 'master':
- # Skip the master index; only per-doc_type indices carry distinct sources
- continue
- self.index_category_doc_type_wise_data_sources[index_category].setdefault(dt, set())
- if doc_type_indices[dt]:
- docs = doc_type_indices[dt].docstore._dict
- for doc, val in docs.items():
- if 'source' in val.metadata and val.metadata['source']:
- self.index_category_doc_type_wise_data_sources[index_category][dt].add(val.metadata['source'])
-
- return self.index_category_doc_type_wise_data_sources
-
-
- def save_answer_feeback(
- self,
- question_category,
- question,
- answer,
- feedback
- ):
- logger.info(f'Question category: {question_category}')
- logger.info(f'Question: {question}')
- logger.info(f'Answer: {answer}')
- logger.info(f'Answer feedback is: {feedback}')
-
- feedback_filepath = os.path.join(
- constants_utils.OUTPUT_PATH_ANSWER_FEEDBACK,
- f'{constants_utils.OUTPUT_PATH_ANSWER_FEEDBACK_FILE_PREFIX}_{question_category}.tsv'
- )
-
- if os.path.exists(feedback_filepath):
- df = pd.read_csv(feedback_filepath, sep=constants_utils.OUTPUT_PATH_ANSWER_FEEDBACK_FILE_SAVE_SEPARATOR)
- else:
- df = pd.DataFrame(columns=['question_category', 'question', 'answer', 'feedback', 'timestamp'])
-
- # Append answer feedback to df
- df.loc[len(df)] = {
- 'question_category': question_category,
- 'question': question,
- 'answer': answer,
- 'feedback': feedback,
- 'timestamp': datetime.strftime(datetime.now(), '%Y-%m-%d %H:%M:%S.%f')[:-3]
- }
-
- # Save df into TSV format
- df.to_csv(feedback_filepath, sep=constants_utils.OUTPUT_PATH_ANSWER_FEEDBACK_FILE_SAVE_SEPARATOR, index=False, header=True)
-
-
- def get_sources_of_relevant_paragraphs(
- self,
- relevant_paragraphs
- ):
- sources_relevant_paragraphs = []
- # Extract information on Source of relevant_paragraphs
- for indx, doc in enumerate(relevant_paragraphs):
- if 'source' in doc.metadata and 'page' in doc.metadata and doc.metadata['source'].endswith('.pdf'):
- # Add +1 because PyPDFLoader numbers pages from a 0-based index
- relevant_paragraphs[indx].metadata['page'] += 1
- sources_relevant_paragraphs = [doc.metadata for doc in relevant_paragraphs]
-
- return sources_relevant_paragraphs
-
-
- def clean_relevant_paragraphs(
- self,
- relevant_paragraphs
- ):
- cleaned_relevant_paragraphs = []
- for doc in relevant_paragraphs:
- cleaned_relevant_paragraphs.append(self.utils_obj.replace_newlines_and_spaces(doc.page_content))
-
- return cleaned_relevant_paragraphs
diff --git a/spaces/wanglettes/zw_chatgpt_01/app.py b/spaces/wanglettes/zw_chatgpt_01/app.py
deleted file mode 100644
index 14a55567e7b08fa35f81c63b3049b8949eff38a7..0000000000000000000000000000000000000000
--- a/spaces/wanglettes/zw_chatgpt_01/app.py
+++ /dev/null
@@ -1,80 +0,0 @@
-from typing import List, Tuple, Dict, Generator
-from langchain.llms import OpenAI
-import gradio as gr
-model_name = "gpt-3.5-turbo"
-LLM = OpenAI(model_name=model_name, temperature=0.1)
-def create_history_messages(history: List[Tuple[str, str]]) -> List[dict]:
- history_messages = [{"role": "user", "content": m[0]} for m in history]
- history_messages.extend([{"role": "assistant", "content": m[1]} for m in history])
- return history_messages
-
-def create_formatted_history(history_messages: List[dict]) -> List[Tuple[str, str]]:
- formatted_history = []
- user_messages = []
- assistant_messages = []
-
- for message in history_messages:
- if message["role"] == "user":
- user_messages.append(message["content"])
- elif message["role"] == "assistant":
- assistant_messages.append(message["content"])
-
- if user_messages and assistant_messages:
- formatted_history.append(
- ("".join(user_messages), "".join(assistant_messages))
- )
- user_messages = []
- assistant_messages = []
-
- # append any remaining messages
- if user_messages:
- formatted_history.append(("".join(user_messages), None))
- elif assistant_messages:
- formatted_history.append((None, "".join(assistant_messages)))
-
- return formatted_history
-
-def chat(
- message: str, state: List[Dict[str, str]], client = LLM.client
-) -> Generator[Tuple[List[Tuple[str, str]], List[Dict[str, str]]], None, None]:
- history_messages = state
- if history_messages is None:
- history_messages = []
- history_messages.append({"role": "system", "content":"用中文回答问题"})
-
- history_messages.append({"role": "user", "content": message})
- # We have no content for the assistant's response yet but we will update this:
- history_messages.append({"role": "assistant", "content": ""})
-
- response_message = ""
-
- chat_generator = client.create(
- messages=history_messages, stream=True, model=model_name
- )
-
- for chunk in chat_generator:
- if "choices" in chunk:
- for choice in chunk["choices"]:
- if "delta" in choice and "content" in choice["delta"]:
- new_token = choice["delta"]["content"]
- # Add the latest token:
- response_message += new_token
- # Update the assistant's response in our model:
- history_messages[-1]["content"] = response_message
-
- if "finish_reason" in choice and choice["finish_reason"] == "stop":
- break
- formatted_history = create_formatted_history(history_messages)
- yield formatted_history, history_messages
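-
-# Consumption sketch (not part of the original file): chat() is a generator that
-# yields (formatted_history, state) after every streamed token, so a caller can
-# render partial replies. Requires a valid OpenAI key at runtime:
-#
-# state = None
-# for formatted_history, state in chat("你好", state):  # "Hello"
-#     pass
-# print(formatted_history[-1][1])  # final assistant reply
-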
-chatbot = gr.Chatbot(label="Chat").style(color_map=("yellow", "purple"))
-iface = gr.Interface(
- fn=chat,
- inputs=[
- gr.Textbox(placeholder="在这里输入您的问题", label="Message"),  # placeholder: "Enter your question here"
- "state",
- ],
- outputs=[chatbot, "state"],
- allow_flagging="never",
-)
-
-iface.queue().launch()
\ No newline at end of file
diff --git a/spaces/wangrongsheng/ChatImprovement/crazy_functions/test_project/python/dqn/policies.py b/spaces/wangrongsheng/ChatImprovement/crazy_functions/test_project/python/dqn/policies.py
deleted file mode 100644
index 4ecf39a5fc04b24ad1b809232b186728366987b6..0000000000000000000000000000000000000000
--- a/spaces/wangrongsheng/ChatImprovement/crazy_functions/test_project/python/dqn/policies.py
+++ /dev/null
@@ -1,237 +0,0 @@
-from typing import Any, Dict, List, Optional, Type
-
-import gym
-import torch as th
-from torch import nn
-
-from stable_baselines3.common.policies import BasePolicy, register_policy
-from stable_baselines3.common.torch_layers import BaseFeaturesExtractor, FlattenExtractor, NatureCNN, create_mlp
-from stable_baselines3.common.type_aliases import Schedule
-
-
-class QNetwork(BasePolicy):
- """
- Action-Value (Q-Value) network for DQN
-
- :param observation_space: Observation space
- :param action_space: Action space
- :param net_arch: The specification of the policy and value networks.
- :param activation_fn: Activation function
- :param normalize_images: Whether to normalize images or not,
- dividing by 255.0 (True by default)
- """
-
- def __init__(
- self,
- observation_space: gym.spaces.Space,
- action_space: gym.spaces.Space,
- features_extractor: nn.Module,
- features_dim: int,
- net_arch: Optional[List[int]] = None,
- activation_fn: Type[nn.Module] = nn.ReLU,
- normalize_images: bool = True,
- ):
- super(QNetwork, self).__init__(
- observation_space,
- action_space,
- features_extractor=features_extractor,
- normalize_images=normalize_images,
- )
-
- if net_arch is None:
- net_arch = [64, 64]
-
- self.net_arch = net_arch
- self.activation_fn = activation_fn
- self.features_extractor = features_extractor
- self.features_dim = features_dim
- self.normalize_images = normalize_images
- action_dim = self.action_space.n # number of actions
- q_net = create_mlp(self.features_dim, action_dim, self.net_arch, self.activation_fn)
- self.q_net = nn.Sequential(*q_net)
-
- def forward(self, obs: th.Tensor) -> th.Tensor:
- """
- Predict the q-values.
-
- :param obs: Observation
- :return: The estimated Q-Value for each action.
- """
- return self.q_net(self.extract_features(obs))
-
- def _predict(self, observation: th.Tensor, deterministic: bool = True) -> th.Tensor:
- q_values = self.forward(observation)
- # Greedy action
- action = q_values.argmax(dim=1).reshape(-1)
- return action
-
- def _get_constructor_parameters(self) -> Dict[str, Any]:
- data = super()._get_constructor_parameters()
-
- data.update(
- dict(
- net_arch=self.net_arch,
- features_dim=self.features_dim,
- activation_fn=self.activation_fn,
- features_extractor=self.features_extractor,
- )
- )
- return data
-
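-# Shape sketch (not part of the original file): with a flat observation space the
-# Q-network maps (batch, obs_dim) -> (batch, n_actions). The spaces below are
-# illustrative, not taken from the original code:
-#
-# obs_space = gym.spaces.Box(low=-1, high=1, shape=(4,))
-# act_space = gym.spaces.Discrete(2)
-# extractor = FlattenExtractor(obs_space)
-# q_net = QNetwork(obs_space, act_space, extractor, features_dim=extractor.features_dim)
-# q_values = q_net(th.rand(32, 4))  # -> tensor of shape (32, 2)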
-
-class DQNPolicy(BasePolicy):
- """
- Policy class with Q-Value Net and target net for DQN
-
- :param observation_space: Observation space
- :param action_space: Action space
- :param lr_schedule: Learning rate schedule (could be constant)
- :param net_arch: The specification of the policy and value networks.
- :param activation_fn: Activation function
- :param features_extractor_class: Features extractor to use.
- :param features_extractor_kwargs: Keyword arguments
- to pass to the features extractor.
- :param normalize_images: Whether to normalize images or not,
- dividing by 255.0 (True by default)
- :param optimizer_class: The optimizer to use,
- ``th.optim.Adam`` by default
- :param optimizer_kwargs: Additional keyword arguments,
- excluding the learning rate, to pass to the optimizer
- """
-
- def __init__(
- self,
- observation_space: gym.spaces.Space,
- action_space: gym.spaces.Space,
- lr_schedule: Schedule,
- net_arch: Optional[List[int]] = None,
- activation_fn: Type[nn.Module] = nn.ReLU,
- features_extractor_class: Type[BaseFeaturesExtractor] = FlattenExtractor,
- features_extractor_kwargs: Optional[Dict[str, Any]] = None,
- normalize_images: bool = True,
- optimizer_class: Type[th.optim.Optimizer] = th.optim.Adam,
- optimizer_kwargs: Optional[Dict[str, Any]] = None,
- ):
- super(DQNPolicy, self).__init__(
- observation_space,
- action_space,
- features_extractor_class,
- features_extractor_kwargs,
- optimizer_class=optimizer_class,
- optimizer_kwargs=optimizer_kwargs,
- )
-
- if net_arch is None:
- if features_extractor_class == FlattenExtractor:
- net_arch = [64, 64]
- else:
- net_arch = []
-
- self.net_arch = net_arch
- self.activation_fn = activation_fn
- self.normalize_images = normalize_images
-
- self.net_args = {
- "observation_space": self.observation_space,
- "action_space": self.action_space,
- "net_arch": self.net_arch,
- "activation_fn": self.activation_fn,
- "normalize_images": normalize_images,
- }
-
- self.q_net, self.q_net_target = None, None
- self._build(lr_schedule)
-
- def _build(self, lr_schedule: Schedule) -> None:
- """
- Create the network and the optimizer.
-
- :param lr_schedule: Learning rate schedule
- lr_schedule(1) is the initial learning rate
- """
-
- self.q_net = self.make_q_net()
- self.q_net_target = self.make_q_net()
- self.q_net_target.load_state_dict(self.q_net.state_dict())
-
- # Setup optimizer with initial learning rate
- self.optimizer = self.optimizer_class(self.parameters(), lr=lr_schedule(1), **self.optimizer_kwargs)
-
- def make_q_net(self) -> QNetwork:
- # Make sure we always have separate networks for features extractors etc
- net_args = self._update_features_extractor(self.net_args, features_extractor=None)
- return QNetwork(**net_args).to(self.device)
-
- def forward(self, obs: th.Tensor, deterministic: bool = True) -> th.Tensor:
- return self._predict(obs, deterministic=deterministic)
-
- def _predict(self, obs: th.Tensor, deterministic: bool = True) -> th.Tensor:
- return self.q_net._predict(obs, deterministic=deterministic)
-
- def _get_constructor_parameters(self) -> Dict[str, Any]:
- data = super()._get_constructor_parameters()
-
- data.update(
- dict(
- net_arch=self.net_args["net_arch"],
- activation_fn=self.net_args["activation_fn"],
- lr_schedule=self._dummy_schedule, # dummy lr schedule, not needed for loading policy alone
- optimizer_class=self.optimizer_class,
- optimizer_kwargs=self.optimizer_kwargs,
- features_extractor_class=self.features_extractor_class,
- features_extractor_kwargs=self.features_extractor_kwargs,
- )
- )
- return data
-
-
-MlpPolicy = DQNPolicy
-
-
-class CnnPolicy(DQNPolicy):
- """
- Policy class for DQN when using images as input.
-
- :param observation_space: Observation space
- :param action_space: Action space
- :param lr_schedule: Learning rate schedule (could be constant)
- :param net_arch: The specification of the policy and value networks.
- :param activation_fn: Activation function
- :param features_extractor_class: Features extractor to use.
- :param normalize_images: Whether to normalize images or not,
- dividing by 255.0 (True by default)
- :param optimizer_class: The optimizer to use,
- ``th.optim.Adam`` by default
- :param optimizer_kwargs: Additional keyword arguments,
- excluding the learning rate, to pass to the optimizer
- """
-
- def __init__(
- self,
- observation_space: gym.spaces.Space,
- action_space: gym.spaces.Space,
- lr_schedule: Schedule,
- net_arch: Optional[List[int]] = None,
- activation_fn: Type[nn.Module] = nn.ReLU,
- features_extractor_class: Type[BaseFeaturesExtractor] = NatureCNN,
- features_extractor_kwargs: Optional[Dict[str, Any]] = None,
- normalize_images: bool = True,
- optimizer_class: Type[th.optim.Optimizer] = th.optim.Adam,
- optimizer_kwargs: Optional[Dict[str, Any]] = None,
- ):
- super(CnnPolicy, self).__init__(
- observation_space,
- action_space,
- lr_schedule,
- net_arch,
- activation_fn,
- features_extractor_class,
- features_extractor_kwargs,
- normalize_images,
- optimizer_class,
- optimizer_kwargs,
- )
-
-
-register_policy("MlpPolicy", MlpPolicy)
-register_policy("CnnPolicy", CnnPolicy)
diff --git a/spaces/whitphx/gradio-static-test/dist/assets/index-f0702dd5.js b/spaces/whitphx/gradio-static-test/dist/assets/index-f0702dd5.js
deleted file mode 100644
index 5ca88f8f10e1b37968d920ab4de688bf9503e809..0000000000000000000000000000000000000000
--- a/spaces/whitphx/gradio-static-test/dist/assets/index-f0702dd5.js
+++ /dev/null
s=[In,An,wn,pn,kn,bn,dn],f=[];function o(g,_){return g[5]==="upload"?0:g[5]==="canvas"?1:g[0]===null&&!g[21]||g[7]?2:g[1]==="select"?3:g[1]==="editor"?4:(g[1]==="sketch"||g[1]==="color-sketch")&&(g[0]!==null||g[21])?5:6}return a=o(t),r=f[a]=s[a](t),{c(){T(e.$$.fragment),n=j(),l=F("div"),r.c(),d(l,"data-testid","image"),d(l,"class","image-container svelte-p3y7hu"),Ve(()=>t[60].call(l))},m(g,_){B(e,g,_),y(g,n,_),y(g,l,_),f[a].m(l,null),i=At(l,t[60].bind(l)),u=!0},p(g,_){const c={};_[0]&16&&(c.show_label=g[4]),_[0]&32&&(c.Icon=g[5]==="canvas"?tt:qe),_[0]&40&&(c.label=g[3]||(g[5]==="canvas"?"Sketch":"Image")),e.$set(c);let b=a;a=o(g),a===b?f[a].p(g,_):(x(),A(f[b],1,1,()=>{f[b]=null}),ee(),r=f[a],r?r.p(g,_):(r=f[a]=s[a](g),r.c()),p(r,1),r.m(l,null))},i(g){u||(p(e.$$.fragment,g),p(r),u=!0)},o(g){A(e.$$.fragment,g),A(r),u=!1},d(g){S(e,g),g&&v(n),g&&v(l),f[a].d(),i()}}}function Sn(t,e,n){let l,{$$slots:a={},$$scope:r}=e,{value:i}=e,{label:u=void 0}=e,{show_label:s}=e,{source:f="upload"}=e,{tool:o="editor"}=e,{shape:g}=e,{streaming:_=!1}=e,{pending:c=!1}=e,{mirror_webcam:b}=e,{brush_radius:I}=e,{selectable:D=!1}=e,N,U;i&&(f==="upload"||f==="webcam")&&o==="sketch"&&(i={image:i,mask:null});function ce({detail:h}){o==="color-sketch"?n(21,re=h):n(0,i=(f==="upload"||f==="webcam")&&o==="sketch"?{image:h,mask:null}:h),W("upload",h)}function E({detail:h}){n(0,i=null),n(21,re=void 0),W("clear")}async function k({detail:h},z){O==="mask"?f==="webcam"&&z?n(0,i={image:h,mask:null}):n(0,i={image:typeof i=="string"?i:i?.image||null,mask:h}):(f==="upload"||f==="webcam")&&o==="sketch"?n(0,i={image:h,mask:null}):n(0,i=h),await ge(),W(_?"stream":"edit")}const W=de();let Y=!1;function se(h){const z=h.currentTarget;n(16,X=z.naturalWidth),n(15,ie=z.naturalHeight),n(17,te=z.getBoundingClientRect().height)}async function oe(){N.clear(),await ge(),n(0,i=null),n(21,re=void 0)}async function fe(){N.clear_mask(),await ge()}let ie=0,X=0,te=0,O,K,w,be,re;It(async()=>{o==="color-sketch"&&i&&typeof i=="string"&&(n(21,re=i),await ge(),se({currentTarget:K}))});const Fe=h=>{let z=Ct(h);z&&W("select",{index:z,value:null})};function Ne(h){L[h?"unshift":"push"](()=>{U=h,n(11,U),n(0,i)})}const pe=h=>(E(h),n(1,o="editor")),We=()=>n(1,o="select");function Ie(h){L[h?"unshift":"push"](()=>{K=h,n(18,K)})}function Oe(h){L[h?"unshift":"push"](()=>{N=h,n(14,N)})}function ye(h){I=h,n(2,I)}function ve(h){l=h,n(22,l),n(13,O),n(5,f),n(1,o)}const Ce=()=>N.undo();function Me(h){I=h,n(2,I)}function ze(h){l=h,n(22,l),n(13,O),n(5,f),n(1,o)}function he(h){Y=h,n(12,Y)}const Te=()=>N.undo();function Be(h){I=h,n(2,I)}function Se(h){l=h,n(22,l),n(13,O),n(5,f),n(1,o)}function Ee(h){I=h,n(2,I)}function Re(h){l=h,n(22,l),n(13,O),n(5,f),n(1,o)}function De(h){L[h?"unshift":"push"](()=>{N=h,n(14,N)})}const ae=h=>o==="color-sketch"?ce(h):k(h,!0);function me(h){ue.call(this,t,h)}function Le(h){L[h?"unshift":"push"](()=>{U=h,n(11,U),n(0,i)})}const we=h=>(E(h),n(1,o="editor")),Ye=()=>n(1,o="select");function Ue(h){L[h?"unshift":"push"](()=>{K=h,n(18,K)})}function Xe(h){L[h?"unshift":"push"](()=>{N=h,n(14,N)})}function Pe(h){I=h,n(2,I)}function Je(h){l=h,n(22,l),n(13,O),n(5,f),n(1,o)}const Ge=()=>N.undo();function m(h){I=h,n(2,I)}function C(h){l=h,n(22,l),n(13,O),n(5,f),n(1,o)}function M(){w=this.offsetHeight,be=this.offsetWidth,n(19,w),n(20,be)}return t.$$set=h=>{"value"in h&&n(0,i=h.value),"label"in h&&n(3,u=h.label),"show_label"in h&&n(4,s=h.show_label),"source"in h&&n(5,f=h.source),"tool"in h&&n(1,o=h.tool),"shape"in h&&n(6,g=h.shape),"streaming"in 
h&&n(7,_=h.streaming),"pending"in h&&n(8,c=h.pending),"mirror_webcam"in h&&n(9,b=h.mirror_webcam),"brush_radius"in h&&n(2,I=h.brush_radius),"selectable"in h&&n(10,D=h.selectable),"$$scope"in h&&n(61,r=h.$$scope)},t.$$.update=()=>{t.$$.dirty[0]&1&&W("change",i),t.$$.dirty[0]&4096&&W("drag",Y),t.$$.dirty[0]&34&&(f==="canvas"&&o==="sketch"?n(13,O="bw-sketch"):o==="color-sketch"?n(13,O="color-sketch"):(f==="upload"||f==="webcam")&&o==="sketch"?n(13,O="mask"):n(13,O="editor")),t.$$.dirty[0]&8192&&n(22,l=O=="mask"?"#000000":"#000"),t.$$.dirty[0]&1&&(i===null||i.image===null&&i.mask===null)&&n(21,re=void 0),t.$$.dirty[0]&2049&&U&&(i?(n(11,U.image=i,U),U.create()):U.destroy())},[i,o,I,u,s,f,g,_,c,b,D,U,Y,O,N,ie,X,te,K,w,be,re,l,ce,E,k,se,oe,fe,Fe,a,Ne,pe,We,Ie,Oe,ye,ve,Ce,Me,ze,he,Te,Be,Se,Ee,Re,De,ae,me,Le,we,Ye,Ue,Xe,Pe,Je,Ge,m,C,M,r]}let En=class extends ne{constructor(e){super(),le(this,e,Sn,Bn,$,{value:0,label:3,show_label:4,source:5,tool:1,shape:6,streaming:7,pending:8,mirror_webcam:9,brush_radius:2,selectable:10},null,[-1,-1,-1])}};function Rn(t){let e,n,l,a,r,i,u,s,f;return l=new ke({props:{Icon:Vt,label:"Download"}}),{c(){e=F("div"),n=F("a"),T(l.$$.fragment),a=j(),r=F("img"),d(n,"href",t[0]),d(n,"target",window.__is_colab__?"_blank":null),d(n,"download","image"),d(e,"class","download svelte-ms5bsk"),J(r.src,i=t[0])||d(r,"src",i),d(r,"alt",""),d(r,"class","svelte-ms5bsk"),R(r,"selectable",t[3])},m(o,g){y(o,e,g),P(e,n),B(l,n,null),y(o,a,g),y(o,r,g),u=!0,s||(f=q(r,"click",t[4]),s=!0)},p(o,g){(!u||g&1)&&d(n,"href",o[0]),(!u||g&1&&!J(r.src,i=o[0]))&&d(r,"src",i),(!u||g&8)&&R(r,"selectable",o[3])},i(o){u||(p(l.$$.fragment,o),u=!0)},o(o){A(l.$$.fragment,o),u=!1},d(o){o&&v(e),S(l),o&&v(a),o&&v(r),s=!1,f()}}}function Dn(t){let e,n;return e=new Qt({props:{size:"large",unpadded_box:!0,$$slots:{default:[Ln]},$$scope:{ctx:t}}}),{c(){T(e.$$.fragment)},m(l,a){B(e,l,a),n=!0},p(l,a){const r={};a&64&&(r.$$scope={dirty:a,ctx:l}),e.$set(r)},i(l){n||(p(e.$$.fragment,l),n=!0)},o(l){A(e.$$.fragment,l),n=!1},d(l){S(e,l)}}}function Ln(t){let e,n;return e=new qe({}),{c(){T(e.$$.fragment)},m(l,a){B(e,l,a),n=!0},i(l){n||(p(e.$$.fragment,l),n=!0)},o(l){A(e.$$.fragment,l),n=!1},d(l){S(e,l)}}}function Un(t){let e,n,l,a,r,i;e=new vt({props:{show_label:t[2],Icon:qe,label:t[1]||"Image"}});const u=[Dn,Rn],s=[];function f(o,g){return o[0]===null?0:1}return l=f(t),a=s[l]=u[l](t),{c(){T(e.$$.fragment),n=j(),a.c(),r=_e()},m(o,g){B(e,o,g),y(o,n,g),s[l].m(o,g),y(o,r,g),i=!0},p(o,[g]){const _={};g&4&&(_.show_label=o[2]),g&2&&(_.label=o[1]||"Image"),e.$set(_);let c=l;l=f(o),l===c?s[l].p(o,g):(x(),A(s[c],1,1,()=>{s[c]=null}),ee(),a=s[l],a?a.p(o,g):(a=s[l]=u[l](o),a.c()),p(a,1),a.m(r.parentNode,r))},i(o){i||(p(e.$$.fragment,o),p(a),i=!0)},o(o){A(e.$$.fragment,o),A(a),i=!1},d(o){S(e,o),o&&v(n),s[l].d(o),o&&v(r)}}}function jn(t,e,n){let{value:l}=e,{label:a=void 0}=e,{show_label:r}=e,{selectable:i=!1}=e;const u=de(),s=f=>{let o=Ct(f);o&&u("select",{index:o,value:null})};return t.$$set=f=>{"value"in f&&n(0,l=f.value),"label"in f&&n(1,a=f.label),"show_label"in f&&n(2,r=f.show_label),"selectable"in f&&n(3,i=f.selectable)},t.$$.update=()=>{t.$$.dirty&1&&l&&u("change",l)},[l,a,r,i,s]}class qn extends ne{constructor(e){super(),le(this,e,jn,Un,$,{value:0,label:1,show_label:2,selectable:3})}}function Hn(t){let e,n,l;function a(i){t[19](i)}let r={brush_radius:t[14],shape:t[13],source:t[5],tool:t[6],selectable:t[15],label:t[7],show_label:t[8],pending:t[10],streaming:t[9],mirror_webcam:t[12],$$slots:{default:[Nn]},$$scope:{ctx:t}};return 
t[0]!==void 0&&(r.value=t[0]),e=new En({props:r}),L.push(()=>V(e,"value",a)),e.$on("edit",t[20]),e.$on("clear",t[21]),e.$on("change",t[22]),e.$on("stream",t[23]),e.$on("drag",t[24]),e.$on("upload",t[25]),e.$on("select",t[26]),e.$on("error",t[27]),{c(){T(e.$$.fragment)},m(i,u){B(e,i,u),l=!0},p(i,u){const s={};u&16384&&(s.brush_radius=i[14]),u&8192&&(s.shape=i[13]),u&32&&(s.source=i[5]),u&64&&(s.tool=i[6]),u&32768&&(s.selectable=i[15]),u&128&&(s.label=i[7]),u&256&&(s.show_label=i[8]),u&1024&&(s.pending=i[10]),u&512&&(s.streaming=i[9]),u&4096&&(s.mirror_webcam=i[12]),u&536870912&&(s.$$scope={dirty:u,ctx:i}),!n&&u&1&&(n=!0,s.value=i[0],Z(()=>n=!1)),e.$set(s)},i(i){l||(p(e.$$.fragment,i),l=!0)},o(i){A(e.$$.fragment,i),l=!1},d(i){S(e,i)}}}function Fn(t){let e,n;return e=new qn({props:{value:t[0],label:t[7],show_label:t[8],selectable:t[15]}}),e.$on("select",t[18]),{c(){T(e.$$.fragment)},m(l,a){B(e,l,a),n=!0},p(l,a){const r={};a&1&&(r.value=l[0]),a&128&&(r.label=l[7]),a&256&&(r.show_label=l[8]),a&32768&&(r.selectable=l[15]),e.$set(r)},i(l){n||(p(e.$$.fragment,l),n=!0)},o(l){A(e.$$.fragment,l),n=!1},d(l){S(e,l)}}}function Nn(t){let e,n;return e=new Zt({props:{type:"image"}}),{c(){T(e.$$.fragment)},m(l,a){B(e,l,a),n=!0},p:H,i(l){n||(p(e.$$.fragment,l),n=!0)},o(l){A(e.$$.fragment,l),n=!1},d(l){S(e,l)}}}function Wn(t){let e,n,l,a,r,i;const u=[t[1]];let s={};for(let _=0;_{o[I]=null}),ee(),a=o[l],a?a.p(_,c):(a=o[l]=f[l](_),a.c()),p(a,1),a.m(r.parentNode,r))},i(_){i||(p(e.$$.fragment,_),p(a),i=!0)},o(_){A(e.$$.fragment,_),A(a),i=!1},d(_){S(e,_),_&&v(n),o[l].d(_),_&&v(r)}}}function On(t){let e,n;return e=new Wt({props:{visible:t[4],variant:t[16]==="dynamic"&&t[0]===null&&t[5]==="upload"?"dashed":"solid",border_mode:t[17]?"focus":"base",padding:!1,elem_id:t[2],elem_classes:t[3],style:{height:t[11].height||(t[5]==="webcam"||t[16]==="static"?void 0:wt),width:t[11].width},allow_overflow:!1,$$slots:{default:[Wn]},$$scope:{ctx:t}}}),{c(){T(e.$$.fragment)},m(l,a){B(e,l,a),n=!0},p(l,[a]){const r={};a&16&&(r.visible=l[4]),a&65569&&(r.variant=l[16]==="dynamic"&&l[0]===null&&l[5]==="upload"?"dashed":"solid"),a&131072&&(r.border_mode=l[17]?"focus":"base"),a&4&&(r.elem_id=l[2]),a&8&&(r.elem_classes=l[3]),a&67616&&(r.style={height:l[11].height||(l[5]==="webcam"||l[16]==="static"?void 0:wt),width:l[11].width}),a&537130979&&(r.$$scope={dirty:a,ctx:l}),e.$set(r)},i(l){n||(p(e.$$.fragment,l),n=!0)},o(l){A(e.$$.fragment,l),n=!1},d(l){S(e,l)}}}const wt=240;function Yn(t,e,n){let{elem_id:l=""}=e,{elem_classes:a=[]}=e,{visible:r=!0}=e,{value:i=null}=e,{source:u="upload"}=e,{tool:s="editor"}=e,{label:f}=e,{show_label:o}=e,{streaming:g}=e,{pending:_}=e,{style:c={}}=e,{mirror_webcam:b}=e,{shape:I}=e,{brush_radius:D}=e,{selectable:N=!1}=e,{loading_status:U}=e,{mode:ce}=e;const E=de();let k;function W(w){ue.call(this,t,w)}function Y(w){i=w,n(0,i)}function se(w){ue.call(this,t,w)}function oe(w){ue.call(this,t,w)}function fe(w){ue.call(this,t,w)}function ie(w){ue.call(this,t,w)}const X=({detail:w})=>n(17,k=w);function te(w){ue.call(this,t,w)}function O(w){ue.call(this,t,w)}const K=({detail:w})=>{n(1,U=U||{}),n(1,U.status="error",U),n(1,U.message=w,U)};return t.$$set=w=>{"elem_id"in w&&n(2,l=w.elem_id),"elem_classes"in w&&n(3,a=w.elem_classes),"visible"in w&&n(4,r=w.visible),"value"in w&&n(0,i=w.value),"source"in w&&n(5,u=w.source),"tool"in w&&n(6,s=w.tool),"label"in w&&n(7,f=w.label),"show_label"in w&&n(8,o=w.show_label),"streaming"in w&&n(9,g=w.streaming),"pending"in w&&n(10,_=w.pending),"style"in 
w&&n(11,c=w.style),"mirror_webcam"in w&&n(12,b=w.mirror_webcam),"shape"in w&&n(13,I=w.shape),"brush_radius"in w&&n(14,D=w.brush_radius),"selectable"in w&&n(15,N=w.selectable),"loading_status"in w&&n(1,U=w.loading_status),"mode"in w&&n(16,ce=w.mode)},t.$$.update=()=>{t.$$.dirty&1&&n(0,i=i||null),t.$$.dirty&1&&E("change")},[i,U,l,a,r,u,s,f,o,g,_,c,b,I,D,N,ce,k,W,Y,se,oe,fe,ie,X,te,O,K]}class Xn extends ne{constructor(e){super(),le(this,e,Yn,On,$,{elem_id:2,elem_classes:3,visible:4,value:0,source:5,tool:6,label:7,show_label:8,streaming:9,pending:10,style:11,mirror_webcam:12,shape:13,brush_radius:14,selectable:15,loading_status:1,mode:16})}}const rl=Xn,al=["static","dynamic"],ul=t=>({type:{payload:"string"},description:{payload:"image data as base64 string"},example_data:"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAACklEQVR4nGMAAQAABQABDQottAAAAABJRU5ErkJggg=="});export{rl as Component,_l as ExampleComponent,ul as document,al as modes};
-//# sourceMappingURL=index-f0702dd5.js.map
diff --git a/spaces/widged/gender-bias-evaluation/README.md b/spaces/widged/gender-bias-evaluation/README.md
deleted file mode 100644
index c7ec040caf8ef5644fb27cb1f4844b6d7d684ab7..0000000000000000000000000000000000000000
--- a/spaces/widged/gender-bias-evaluation/README.md
+++ /dev/null
@@ -1,41 +0,0 @@
----
-title: Spaces Template Gradio
-emoji: 🌍
-colorFrom: gray
-colorTo: purple
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
-
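-A minimal front matter combining these keys (values are illustrative, not prescriptive; `sdk_version` is shown for the `streamlit` case):
-
-```yaml
----
-title: My Demo
-emoji: 🌍
-colorFrom: gray
-colorTo: purple
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
----
-```
-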
-# Warnings
-
-:WARN: Not my own work; borrowed or adapted from another space of the same name.
diff --git a/spaces/wisnuarys15/rvc-wisnu5/app.py b/spaces/wisnuarys15/rvc-wisnu5/app.py
deleted file mode 100644
index f6d9df484d4b74af79f88868816ef7b377e47797..0000000000000000000000000000000000000000
--- a/spaces/wisnuarys15/rvc-wisnu5/app.py
+++ /dev/null
@@ -1,188 +0,0 @@
-import os
-import json
-import argparse
-import traceback
-import logging
-import gradio as gr
-import numpy as np
-import librosa
-import torch
-import asyncio
-import edge_tts
-from datetime import datetime
-from fairseq import checkpoint_utils
-from infer_pack.models import SynthesizerTrnMs256NSFsid, SynthesizerTrnMs256NSFsid_nono
-from vc_infer_pipeline import VC
-from config import (
- is_half,
- device
-)
-logging.getLogger("numba").setLevel(logging.WARNING)
-limitation = os.getenv("SYSTEM") == "spaces" # limit audio length in huggingface spaces
-
-def create_vc_fn(tgt_sr, net_g, vc, if_f0, file_index, file_big_npy):
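-    # Returns a per-model conversion closure: input audio comes either from the
-    # uploaded file/microphone or, in tts_mode, from edge-tts synthesis, and is
-    # run through the RVC pipeline with the captured model weights and index.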
- def vc_fn(
- input_audio,
- f0_up_key,
- f0_method,
- index_rate,
- tts_mode,
- tts_text,
- tts_voice
- ):
- try:
- if tts_mode:
- if len(tts_text) > 100 and limitation:
- return "Text is too long", None
- if tts_text is None or tts_voice is None:
- return "You need to enter text and select a voice", None
- asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3"))
- audio, sr = librosa.load("tts.mp3", sr=16000, mono=True)
- else:
- if args.files:
- audio, sr = librosa.load(input_audio, sr=16000, mono=True)
- else:
- if input_audio is None:
- return "You need to upload an audio", None
- sampling_rate, audio = input_audio
- duration = audio.shape[0] / sampling_rate
- if duration > 20 and limitation:
- return "Please upload an audio file that is less than 20 seconds. If you need to generate a longer audio file, please use Colab.", None
- audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32)
- if len(audio.shape) > 1:
- audio = librosa.to_mono(audio.transpose(1, 0))
- if sampling_rate != 16000:
- audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
- times = [0, 0, 0]
- f0_up_key = int(f0_up_key)
- audio_opt = vc.pipeline(
- hubert_model,
- net_g,
- 0,
- audio,
- times,
- f0_up_key,
- f0_method,
- file_index,
- file_big_npy,
- index_rate,
- if_f0,
- )
- print(
- f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s"
- )
- return "Success", (tgt_sr, audio_opt)
-        except Exception:
- info = traceback.format_exc()
- print(info)
- return info, (None, None)
- return vc_fn
-
-def load_hubert():
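-    # Loads the pretrained content encoder (hubert_base.pt) via fairseq and moves
-    # it to the configured device/precision; it serves as the feature extractor
-    # for voice conversion.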
- global hubert_model
- models, _, _ = checkpoint_utils.load_model_ensemble_and_task(
- ["hubert_base.pt"],
- suffix="",
- )
- hubert_model = models[0]
- hubert_model = hubert_model.to(device)
- if is_half:
- hubert_model = hubert_model.half()
- else:
- hubert_model = hubert_model.float()
- hubert_model.eval()
-
-def change_to_tts_mode(tts_mode):
- if tts_mode:
- return gr.Audio.update(visible=False), gr.Textbox.update(visible=True), gr.Dropdown.update(visible=True)
- else:
- return gr.Audio.update(visible=True), gr.Textbox.update(visible=False), gr.Dropdown.update(visible=False)
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--api', action="store_true", default=False)
- parser.add_argument("--share", action="store_true", default=False, help="share gradio app")
- parser.add_argument("--files", action="store_true", default=False, help="load audio from path")
- args, unknown = parser.parse_known_args()
- load_hubert()
- models = []
- tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices())
- voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list]
- with open("weights/model_info.json", "r", encoding="utf-8") as f:
- models_info = json.load(f)
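-    # Expected shape of weights/model_info.json, inferred from the loop below
-    # (file names here are illustrative placeholders, not values fixed by this repo):
-    # {
-    #   "some-model": {
-    #     "enable": true,
-    #     "title": "Display title",
-    #     "author": "Optional author",
-    #     "cover": "cover.png",
-    #     "feature_retrieval_library": "added.index",
-    #     "feature_file": "total_fea.npy"
-    #   }
-    # }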
- for name, info in models_info.items():
- if not info['enable']:
- continue
- title = info['title']
- author = info.get("author", None)
- cover = f"weights/{name}/{info['cover']}"
- index = f"weights/{name}/{info['feature_retrieval_library']}"
- npy = f"weights/{name}/{info['feature_file']}"
- cpt = torch.load(f"weights/{name}/{name}.pth", map_location="cpu")
- tgt_sr = cpt["config"][-1]
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
- if_f0 = cpt.get("f0", 1)
- if if_f0 == 1:
- net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=is_half)
- else:
- net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- del net_g.enc_q
-        print(net_g.load_state_dict(cpt["weight"], strict=False))  # without this line the old weights are not cleared properly; bizarre, but required
- net_g.eval().to(device)
- if is_half:
- net_g = net_g.half()
- else:
- net_g = net_g.float()
- vc = VC(tgt_sr, device, is_half)
- models.append((name, title, author, cover, create_vc_fn(tgt_sr, net_g, vc, if_f0, index, npy)))
- with gr.Blocks() as app:
- gr.Markdown(
- "# RVC Models\n"
- "## The input audio should be clean and pure voice without background music.\n"
- "\n\n"
- "[](https://colab.research.google.com/drive/1ZURZ2IaN_EDdimt29i8vDN3qnSM0IMZ_?usp=drive_link)\n\n"
- "[](https://huggingface.co/spaces/ardha27pi/rvc-models?duplicate=true)\n\n"
- "[](https://github.com/ardha27/AI-Song-Cover-RVC)\n\n"
- "[](https://ko-fi.com/R6R7AH1FA)\n\n"
- )
- with gr.Tabs():
- for (name, title, author, cover, vc_fn) in models:
- with gr.TabItem(name):
- with gr.Row():
- gr.Markdown(
-                            '<div align="center">'
-                            f'<div>{title}</div>\n'+
-                            (f'<div>Model author: {author}</div>' if author else "")+
-                            (f'<img style="width:auto;height:300px;" src="file/{cover}">' if cover else "")+
-                            '</div>'
- )
- with gr.Row():
- with gr.Column():
- if args.files:
- vc_input = gr.Textbox(label="Input audio path")
- else:
- vc_input = gr.Audio(label="Input audio"+' (less than 20 seconds)' if limitation else '')
- vc_transpose = gr.Number(label="Transpose", value=0)
- vc_f0method = gr.Radio(
- label="Pitch extraction algorithm, PM is fast but Harvest is better for low frequencies",
- choices=["pm", "harvest"],
- value="pm",
- interactive=True,
- )
- vc_index_ratio = gr.Slider(
- minimum=0,
- maximum=1,
- label="Retrieval feature ratio",
- value=0.6,
- interactive=True,
- )
- tts_mode = gr.Checkbox(label="tts (use edge-tts as input)", value=False)
-                        tts_text = gr.Textbox(visible=False, label="TTS text (100-character limit)" if limitation else "TTS text")
- tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female")
- vc_submit = gr.Button("Generate", variant="primary")
- with gr.Column():
- vc_output1 = gr.Textbox(label="Output Message")
- vc_output2 = gr.Audio(label="Output Audio")
- vc_submit.click(vc_fn, [vc_input, vc_transpose, vc_f0method, vc_index_ratio, tts_mode, tts_text, tts_voice], [vc_output1, vc_output2])
- tts_mode.change(change_to_tts_mode, [tts_mode], [vc_input, tts_text, tts_voice])
- app.queue(concurrency_count=1, max_size=20, api_open=args.api).launch(share=args.share)
\ No newline at end of file
diff --git a/spaces/wisnuarys15/rvc-wisnu5/infer_pack/models.py b/spaces/wisnuarys15/rvc-wisnu5/infer_pack/models.py
deleted file mode 100644
index 96165f73644e6fb92d0ffedb4a3c9e1a457cb989..0000000000000000000000000000000000000000
--- a/spaces/wisnuarys15/rvc-wisnu5/infer_pack/models.py
+++ /dev/null
@@ -1,982 +0,0 @@
-import math, pdb, os
-from time import time as ttime
-import torch
-from torch import nn
-from torch.nn import functional as F
-from infer_pack import modules
-from infer_pack import attentions
-from infer_pack import commons
-from infer_pack.commons import init_weights, get_padding
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from infer_pack.commons import init_weights
-import numpy as np
-from infer_pack import commons
-
-
-class TextEncoder256(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
-        if f0:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
-        if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class TextEncoder256Sim(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
-        if f0:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
-
- def forward(self, phone, pitch, lengths):
-        if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- x = self.proj(x) * x_mask
- return x, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0,
- ):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- mean_only=True,
- )
- )
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
- def remove_weight_norm(self):
- for i in range(self.n_flows):
- self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=0,
- ):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
- """Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
-    sine_amp: amplitude of sine waveform (default 0.1)
-    noise_std: std of Gaussian noise (default 0.003)
-    voiced_threshold: F0 threshold for U/V classification (default 0)
-    flag_for_pulse: this SineGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(
- self,
- samp_rate,
- harmonic_num=0,
- sine_amp=0.1,
- noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False,
- ):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = torch.ones_like(f0)
- uv = uv * (f0 > self.voiced_threshold)
- return uv
-
- def forward(self, f0, upp):
- """sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0 = f0[:, None].transpose(1, 2)
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
- # fundamental component
- f0_buf[:, :, 0] = f0[:, :, 0]
- for idx in np.arange(self.harmonic_num):
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
-            rad_values = (f0_buf / self.sampling_rate) % 1  ### the %1 means the products over n_har cannot be optimized in post-processing
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
-            tmp_over_one = torch.cumsum(rad_values, 1)  # % 1  ##### taking %1 here would prevent the cumsum below from being optimized further
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(
- 2, 1
- ) #######
- tmp_over_one %= 1
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
- sine_waves = torch.sin(
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
- )
- sine_waves = sine_waves * self.sine_amp
- uv = self._f02uv(f0)
- uv = F.interpolate(
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
-
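-# Illustrative usage of SineGen (a sketch; the sample rate and per-frame upsampling
-# factor below are placeholders, not values fixed by this file):
-#   gen = SineGen(samp_rate=16000, harmonic_num=0)
-#   f0 = torch.full((1, 100), 220.0)          # 100 voiced frames at 220 Hz
-#   sine, uv, noise = gen(f0, upp=320)        # sine: (1, 100 * 320, 1)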
-
-class SourceModuleHnNSF(torch.nn.Module):
- """SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that amplitude of noise in unvoiced is decided
- by sine_amp
-    voiced_threshold: threshold to set U/V given F0 (default: 0)
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
- noise_source (batchsize, length 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(
- self,
- sampling_rate,
- harmonic_num=0,
- sine_amp=0.1,
- add_noise_std=0.003,
- voiced_threshod=0,
- is_half=True,
- ):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
- self.is_half = is_half
- # to produce sine waveforms
- self.l_sin_gen = SineGen(
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
- )
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x, upp=None):
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
- if self.is_half:
- sine_wavs = sine_wavs.half()
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
- return sine_merge, None, None # noise, uv
-
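-# Illustrative usage (a sketch; the sampling rate and `upp` below are placeholders):
-#   src = SourceModuleHnNSF(sampling_rate=40000, harmonic_num=0, is_half=False)
-#   f0 = torch.full((1, 100), 220.0)
-#   excitation, _, _ = src(f0, upp=400)       # excitation: (1, 100 * 400, 1)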
-
-class GeneratorNSF(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- sr,
- is_half=False,
- ):
- super(GeneratorNSF, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
-
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=sr, harmonic_num=0, is_half=is_half
- )
- self.noise_convs = nn.ModuleList()
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- c_cur = upsample_initial_channel // (2 ** (i + 1))
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
- if i + 1 < len(upsample_rates):
- stride_f0 = np.prod(upsample_rates[i + 1 :])
- self.noise_convs.append(
- Conv1d(
- 1,
- c_cur,
- kernel_size=stride_f0 * 2,
- stride=stride_f0,
- padding=stride_f0 // 2,
- )
- )
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- self.upp = np.prod(upsample_rates)
-
- def forward(self, x, f0, g=None):
- har_source, noi_source, uv = self.m_source(f0, self.upp)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-sr2sr = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-class SynthesizerTrnMs256NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
- if type(sr) == type("strr"):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
-    ):  # here ds is the speaker id, shape [bs, 1]
- # print(1,pitch.shape)#[bs,t]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is t, broadcast over time
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- # print(-2,pitchf.shape,z_slice.shape)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs256NSFsid_nono(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr=None,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=False,
- )
- self.dec = Generator(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
-    def forward(self, phone, phone_lengths, y, y_lengths, ds):  # here ds is the speaker id, shape [bs, 1]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is t, broadcast over time
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- o = self.dec(z_slice, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, sid, max_len=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs256NSFsid_sim(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- # hop_length,
- gin_channels=0,
- use_sdp=True,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256Sim(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- is_half=kwargs["is_half"],
- )
-
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y_lengths, ds
-    ):  # y (the spectrogram) is no longer needed here
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is t, broadcast over time
- x, x_mask = self.enc_p(phone, pitch, phone_lengths)
- x = self.flow(x, x_mask, g=g, reverse=True)
- z_slice, ids_slice = commons.rand_slice_segments(
- x, y_lengths, self.segment_size
- )
-
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice
-
- def infer(
- self, phone, phone_lengths, pitch, pitchf, ds, max_len=None
-    ):  # y (the spectrogram) is no longer needed here
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is t, broadcast over time
- x, x_mask = self.enc_p(phone, pitch, phone_lengths)
- x = self.flow(x, x_mask, g=g, reverse=True)
- o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g)
- return o, o
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11, 17]
- # periods = [3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
-        norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ]
- )
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
-        norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(
- Conv2d(
- 1,
- 32,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 32,
- 128,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 128,
- 512,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 512,
- 1024,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 1024,
- 1024,
- (kernel_size, 1),
- 1,
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- ]
- )
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
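-        # e.g. t = 100, period = 3: reflect-pad to t = 102, then view as (b, c, 34, 3)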
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
diff --git a/spaces/xdecoder/Instruct-X-Decoder/xdecoder/utils/misc.py b/spaces/xdecoder/Instruct-X-Decoder/xdecoder/utils/misc.py
deleted file mode 100644
index e7bfa08060344fedcb1d5017b932a3c16fc5bc86..0000000000000000000000000000000000000000
--- a/spaces/xdecoder/Instruct-X-Decoder/xdecoder/utils/misc.py
+++ /dev/null
@@ -1,157 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# Modified by Bowen Cheng from https://github.com/facebookresearch/detr/blob/master/util/misc.py
-# Modified by Xueyan Zou
-"""
-Misc functions, including distributed helpers.
-
-Mostly copy-paste from torchvision references.
-"""
-from typing import List, Optional
-
-import torch
-import torch.distributed as dist
-import torchvision
-from torch import Tensor
-
-def _max_by_axis(the_list):
- # type: (List[List[int]]) -> List[int]
- maxes = the_list[0]
- for sublist in the_list[1:]:
- for index, item in enumerate(sublist):
- maxes[index] = max(maxes[index], item)
- return maxes
-
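-# e.g. _max_by_axis([[3, 200, 300], [3, 180, 320]]) -> [3, 200, 320]
-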
-class NestedTensor(object):
- def __init__(self, tensors, mask: Optional[Tensor]):
- self.tensors = tensors
- self.mask = mask
-
- def to(self, device):
- # type: (Device) -> NestedTensor # noqa
- cast_tensor = self.tensors.to(device)
- mask = self.mask
- if mask is not None:
- assert mask is not None
- cast_mask = mask.to(device)
- else:
- cast_mask = None
- return NestedTensor(cast_tensor, cast_mask)
-
- def decompose(self):
- return self.tensors, self.mask
-
- def __repr__(self):
- return str(self.tensors)
-
-def nested_tensor_from_tensor_list(tensor_list: List[Tensor]):
- # TODO make this more general
- if tensor_list[0].ndim == 3:
- if torchvision._is_tracing():
- # nested_tensor_from_tensor_list() does not export well to ONNX
- # call _onnx_nested_tensor_from_tensor_list() instead
- return _onnx_nested_tensor_from_tensor_list(tensor_list)
-
- # TODO make it support different-sized images
- max_size = _max_by_axis([list(img.shape) for img in tensor_list])
- # min_size = tuple(min(s) for s in zip(*[img.shape for img in tensor_list]))
- batch_shape = [len(tensor_list)] + max_size
- b, c, h, w = batch_shape
- dtype = tensor_list[0].dtype
- device = tensor_list[0].device
- tensor = torch.zeros(batch_shape, dtype=dtype, device=device)
- mask = torch.ones((b, h, w), dtype=torch.bool, device=device)
- for img, pad_img, m in zip(tensor_list, tensor, mask):
- pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)
- m[: img.shape[1], : img.shape[2]] = False
- elif tensor_list[0].ndim == 2:
- if torchvision._is_tracing():
- # nested_tensor_from_tensor_list() does not export well to ONNX
- # call _onnx_nested_tensor_from_tensor_list() instead
- return _onnx_nested_tensor_from_tensor_list(tensor_list)
-
- # TODO make it support different-sized images
- max_size = _max_by_axis([list(txt.shape) for txt in tensor_list])
- # min_size = tuple(min(s) for s in zip(*[img.shape for img in tensor_list]))
- batch_shape = [len(tensor_list)] + max_size
- b, c, l = batch_shape
- dtype = tensor_list[0].dtype
- device = tensor_list[0].device
- tensor = torch.zeros(batch_shape, dtype=dtype, device=device)
- mask = torch.ones((b, l), dtype=torch.bool, device=device)
- for txt, pad_txt, m in zip(tensor_list, tensor, mask):
- pad_txt[: txt.shape[0], : txt.shape[1]] = txt
- m[: txt.shape[1]] = False
- else:
- raise ValueError("not supported")
- return NestedTensor(tensor, mask)
-
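-# Illustrative usage (a sketch): batching two different-sized 3-channel images.
-#   imgs = [torch.randn(3, 200, 300), torch.randn(3, 180, 320)]
-#   nt = nested_tensor_from_tensor_list(imgs)   # nt.tensors: (2, 3, 200, 320)
-#   # nt.mask is True over padded pixels and False over valid ones
-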
-def _collate_and_pad_divisibility(tensor_list: list, div=32):
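-    # Pads every tensor to the batch-wise max shape, rounding H and W up to a
-    # multiple of `div`; masks are built alongside, but only the padded images
-    # are returned here.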
- max_size = []
- for i in range(tensor_list[0].dim()):
- max_size_i = torch.max(
- torch.tensor([img.shape[i] for img in tensor_list]).to(torch.float32)
- ).to(torch.int64)
- max_size.append(max_size_i)
- max_size = tuple(max_size)
-
- c, h, w = max_size
- pad_h = (div - h % div) if h % div != 0 else 0
- pad_w = (div - w % div) if w % div != 0 else 0
- max_size = (c, h + pad_h, w + pad_w)
-
- # workaround for
- # pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)
- # m[: img.shape[1], :img.shape[2]] = False
- # which is not yet supported in onnx
- padded_imgs = []
- padded_masks = []
- for img in tensor_list:
- padding = [(s1 - s2) for s1, s2 in zip(max_size, tuple(img.shape))]
- padded_img = torch.nn.functional.pad(img, (0, padding[2], 0, padding[1], 0, padding[0]))
- padded_imgs.append(padded_img)
-
- m = torch.zeros_like(img[0], dtype=torch.int, device=img.device)
- padded_mask = torch.nn.functional.pad(m, (0, padding[2], 0, padding[1]), "constant", 1)
- padded_masks.append(padded_mask.to(torch.bool))
-
- return padded_imgs
-
-# _onnx_nested_tensor_from_tensor_list() is an implementation of
-# nested_tensor_from_tensor_list() that is supported by ONNX tracing.
-@torch.jit.unused
-def _onnx_nested_tensor_from_tensor_list(tensor_list: List[Tensor]) -> NestedTensor:
- max_size = []
- for i in range(tensor_list[0].dim()):
- max_size_i = torch.max(
- torch.stack([img.shape[i] for img in tensor_list]).to(torch.float32)
- ).to(torch.int64)
- max_size.append(max_size_i)
- max_size = tuple(max_size)
-
- # workaround for
- # pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)
- # m[: img.shape[1], :img.shape[2]] = False
- # which is not yet supported in onnx
- padded_imgs = []
- padded_masks = []
- for img in tensor_list:
- padding = [(s1 - s2) for s1, s2 in zip(max_size, tuple(img.shape))]
- padded_img = torch.nn.functional.pad(img, (0, padding[2], 0, padding[1], 0, padding[0]))
- padded_imgs.append(padded_img)
-
- m = torch.zeros_like(img[0], dtype=torch.int, device=img.device)
- padded_mask = torch.nn.functional.pad(m, (0, padding[2], 0, padding[1]), "constant", 1)
- padded_masks.append(padded_mask.to(torch.bool))
-
- tensor = torch.stack(padded_imgs)
- mask = torch.stack(padded_masks)
-
- return NestedTensor(tensor, mask=mask)
-
-
-def is_dist_avail_and_initialized():
- if not dist.is_available():
- return False
- if not dist.is_initialized():
- return False
- return True
\ No newline at end of file
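
For reference, the core of nested_tensor_from_tensor_list for (C, H, W) inputs reduces to: zero-pad each image to the batch-wide maximum shape and record a boolean mask that is False on real pixels. A self-contained sketch:

import torch

imgs = [torch.randn(3, 10, 12), torch.randn(3, 8, 15)]
b = len(imgs)
max_h = max(im.shape[1] for im in imgs)
max_w = max(im.shape[2] for im in imgs)

tensor = torch.zeros(b, 3, max_h, max_w)
mask = torch.ones(b, max_h, max_w, dtype=torch.bool)  # True = padding
for img, pad_img, m in zip(imgs, tensor, mask):
    pad_img[:, : img.shape[1], : img.shape[2]].copy_(img)
    m[: img.shape[1], : img.shape[2]] = False  # mark real pixels

print(tensor.shape, mask.shape)  # torch.Size([2, 3, 10, 15]) torch.Size([2, 10, 15])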
diff --git a/spaces/xfys/yolov5_tracking/val_utils/trackeval/datasets/__init__.py b/spaces/xfys/yolov5_tracking/val_utils/trackeval/datasets/__init__.py
deleted file mode 100644
index 4fdfa9dd590c6283d56e86419b863782fa619029..0000000000000000000000000000000000000000
--- a/spaces/xfys/yolov5_tracking/val_utils/trackeval/datasets/__init__.py
+++ /dev/null
@@ -1,17 +0,0 @@
-from .kitti_2d_box import Kitti2DBox
-from .kitti_mots import KittiMOTS
-from .mot_challenge_2d_box import MotChallenge2DBox
-from .mots_challenge import MOTSChallenge
-from .bdd100k import BDD100K
-from .davis import DAVIS
-from .tao import TAO
-from .tao_ow import TAO_OW
-try:
- from .burst import BURST
- from .burst_ow import BURST_OW
-except ImportError as err:
- print(f"Error importing BURST due to missing underlying dependency: {err}")
-from .youtube_vis import YouTubeVIS
-from .head_tracking_challenge import HeadTrackingChallenge
-from .rob_mots import RobMOTS
-from .person_path_22 import PersonPath22
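
The try/except around the BURST imports is the standard optional-dependency guard: the package still imports cleanly when an extra dependency is missing, and only the affected datasets become unavailable. An illustrative stand-alone version (module names here are hypothetical, not trackeval's):

try:
    import some_optional_backend  # hypothetical extra dependency
except ImportError as err:
    print(f"Optional backend unavailable, related datasets disabled: {err}")
    some_optional_backend = None

def load_optional_dataset(path):
    if some_optional_backend is None:
        raise RuntimeError("Install the optional backend to use this dataset.")
    return some_optional_backend.load(path)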
diff --git a/spaces/xfys/yolov5_tracking/val_utils/trackeval/datasets/burst_ow.py b/spaces/xfys/yolov5_tracking/val_utils/trackeval/datasets/burst_ow.py
deleted file mode 100644
index da775456e6c539c07c85db1fc9cad5998d8baaeb..0000000000000000000000000000000000000000
--- a/spaces/xfys/yolov5_tracking/val_utils/trackeval/datasets/burst_ow.py
+++ /dev/null
@@ -1,91 +0,0 @@
-import json
-import os
-from .burst_helpers.burst_ow_base import BURST_OW_Base
-from .burst_helpers.format_converter import GroundTruthBURSTFormatToTAOFormatConverter, PredictionBURSTFormatToTAOFormatConverter
-from .. import utils
-
-
-class BURST_OW(BURST_OW_Base):
- """Dataset class for TAO tracking"""
-
- @staticmethod
- def get_default_dataset_config():
- tao_config = BURST_OW_Base.get_default_dataset_config()
- code_path = utils.get_code_path()
- tao_config['GT_FOLDER'] = os.path.join(
- code_path, 'data/gt/burst/all_classes/val/') # Location of GT data
- tao_config['TRACKERS_FOLDER'] = os.path.join(
- code_path, 'data/trackers/burst/open-world/val/') # Trackers location
- return tao_config
-
- def _iou_type(self):
- return 'mask'
-
- def _box_or_mask_from_det(self, det):
- if "segmentation" in det:
- return det["segmentation"]
- else:
- return det["mask"]
-
- def _calculate_area_for_ann(self, ann):
- import pycocotools.mask as cocomask
- seg = self._box_or_mask_from_det(ann)
- return cocomask.area(seg)
-
- def _calculate_similarities(self, gt_dets_t, tracker_dets_t):
- similarity_scores = self._calculate_mask_ious(gt_dets_t, tracker_dets_t, is_encoded=True, do_ioa=False)
- return similarity_scores
-
- def _postproc_ground_truth_data(self, data):
- return GroundTruthBURSTFormatToTAOFormatConverter(data).convert()
-
- def _postproc_prediction_data(self, data):
- # if it's a list, it's already in TAO format and not in Ali format
- # however the image ids do not match and need to be remapped
- if isinstance(data, list):
- _remap_image_ids(data, self.gt_data)
- return data
-
- return PredictionBURSTFormatToTAOFormatConverter(
- self.gt_data, data,
- exemplar_guided=False).convert()
-
-
-def _remap_image_ids(pred_data, ali_gt_data):
- code_path = utils.get_code_path()
- if 'split' in ali_gt_data:
- split = ali_gt_data['split']
- else:
- split = 'val'
-
- if split in ('val', 'validation'):
- tao_gt_path = os.path.join(
- code_path, 'data/gt/tao/tao_validation/gt.json')
- else:
- tao_gt_path = os.path.join(
- code_path, 'data/gt/tao/tao_test/test_without_annotations.json')
-
- with open(tao_gt_path) as f:
- tao_gt = json.load(f)
-
- tao_img_by_id = {}
- for img in tao_gt['images']:
- img_id = img['id']
- tao_img_by_id[img_id] = img
-
- ali_img_id_by_filename = {}
- for ali_img in ali_gt_data['images']:
- ali_img_id = ali_img['id']
- file_name = ali_img['file_name'].replace("validation", "val")
- ali_img_id_by_filename[file_name] = ali_img_id
-
- ali_img_id_by_tao_img_id = {}
- for tao_img_id, tao_img in tao_img_by_id.items():
- file_name = tao_img['file_name']
- ali_img_id = ali_img_id_by_filename[file_name]
- ali_img_id_by_tao_img_id[tao_img_id] = ali_img_id
-
- for det in pred_data:
- tao_img_id = det['image_id']
- ali_img_id = ali_img_id_by_tao_img_id[tao_img_id]
- det['image_id'] = ali_img_id
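
_remap_image_ids above joins two annotation sets that index the same frames under different integer ids, using file_name (normalized so "validation" matches "val") as the join key. The idea in miniature, with made-up records:

tao_images = [{"id": 7, "file_name": "val/seq0/0001.jpg"}]
ali_images = [{"id": 42, "file_name": "validation/seq0/0001.jpg"}]

ali_id_by_filename = {
    img["file_name"].replace("validation", "val"): img["id"] for img in ali_images
}
ali_id_by_tao_id = {
    img["id"]: ali_id_by_filename[img["file_name"]] for img in tao_images
}

detections = [{"image_id": 7, "score": 0.9}]
for det in detections:
    det["image_id"] = ali_id_by_tao_id[det["image_id"]]
print(detections)  # [{'image_id': 42, 'score': 0.9}]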
diff --git a/spaces/xiang2811/ChatGPT/README.md b/spaces/xiang2811/ChatGPT/README.md
deleted file mode 100644
index 7128e29689e35d059c9cc0a5050910fbd34873cd..0000000000000000000000000000000000000000
--- a/spaces/xiang2811/ChatGPT/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: ChuanhuChatGPT
-emoji: 🐯
-colorFrom: green
-colorTo: red
-sdk: gradio
-sdk_version: 3.25.0
-app_file: ChuanhuChatbot.py
-pinned: false
-license: gpl-3.0
-duplicated_from: JohnSmith9982/ChuanhuChatGPT
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/ybelkada/FocusOnDepth/focusondepth/model_definition.py b/spaces/ybelkada/FocusOnDepth/focusondepth/model_definition.py
deleted file mode 100644
index 40b825245d1241c272ce0d251befe3cacd5aab7b..0000000000000000000000000000000000000000
--- a/spaces/ybelkada/FocusOnDepth/focusondepth/model_definition.py
+++ /dev/null
@@ -1,68 +0,0 @@
-from transformers import PreTrainedModel
-import timm
-import torch.nn as nn
-import numpy as np
-
-from .model_config import FocusOnDepthConfig
-from .reassemble import Reassemble
-from .fusion import Fusion
-from .head import HeadDepth, HeadSeg
-
-
-class FocusOnDepth(PreTrainedModel):
- config_class = FocusOnDepthConfig
-
- def __init__(self, config):
- super().__init__(config)
- self.transformer_encoders = timm.create_model(config.model_timm, pretrained=True)
- self.type_ = config.type_
-
- # Register hooks
- self.activation = {}
- self.hooks = config.hooks
- self._get_layers_from_hooks(self.hooks)
-
- # Reassemble and Fusion stages
- self.reassembles = []
- self.fusions = []
- for s in config.reassemble_s:
- self.reassembles.append(Reassemble(config.image_size, config.read, config.patch_size, s, config.emb_dim, config.resample_dim))
- self.fusions.append(Fusion(config.resample_dim))
- self.reassembles = nn.ModuleList(self.reassembles)
- self.fusions = nn.ModuleList(self.fusions)
-
- # Head
- if self.type_ == "full":
- self.head_depth = HeadDepth(config.resample_dim)
- self.head_segmentation = HeadSeg(config.resample_dim, nclasses=config.nclasses)
- elif self.type_ == "depth":
- self.head_depth = HeadDepth(config.resample_dim)
- self.head_segmentation = None
- else:
- self.head_depth = None
- self.head_segmentation = HeadSeg(config.resample_dim, nclasses=config.nclasses)
-
- def forward(self, img):
- _ = self.transformer_encoders(img)
- previous_stage = None
- for i in np.arange(len(self.fusions)-1, -1, -1):
- hook_to_take = 't'+str(self.hooks[i])
- activation_result = self.activation[hook_to_take]
- reassemble_result = self.reassembles[i](activation_result)
- fusion_result = self.fusions[i](reassemble_result, previous_stage)
- previous_stage = fusion_result
- out_depth = None
- out_segmentation = None
- if self.head_depth is not None:
- out_depth = self.head_depth(previous_stage)
- if self.head_segmentation is not None:
- out_segmentation = self.head_segmentation(previous_stage)
- return out_depth, out_segmentation
-
- def _get_layers_from_hooks(self, hooks):
- def get_activation(name):
- def hook(model, input, output):
- self.activation[name] = output
- return hook
- for h in hooks:
- self.transformer_encoders.blocks[h].register_forward_hook(get_activation('t'+str(h)))
\ No newline at end of file
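
_get_layers_from_hooks relies on PyTorch forward hooks: each hooked transformer block writes its output into self.activation during the encoder pass, and the Reassemble/Fusion stages read those entries afterwards. A minimal sketch of the mechanism on a toy model:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 4))
activation = {}

def get_activation(name):
    def hook(module, inputs, output):
        activation[name] = output  # stash this layer's output under a name
    return hook

for idx in (0, 2):  # hook the two Linear layers, mirroring config.hooks
    model[idx].register_forward_hook(get_activation(f"t{idx}"))

_ = model(torch.randn(1, 8))  # the forward pass fills `activation` as a side effect
print({k: tuple(v.shape) for k, v in activation.items()})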
diff --git a/spaces/ygtxr1997/ReliableSwap_Demo/third_party/GPEN/face_detect/utils/nms/py_cpu_nms.py b/spaces/ygtxr1997/ReliableSwap_Demo/third_party/GPEN/face_detect/utils/nms/py_cpu_nms.py
deleted file mode 100644
index 54e7b25fef72b518df6dcf8d6fb78b986796c6e3..0000000000000000000000000000000000000000
--- a/spaces/ygtxr1997/ReliableSwap_Demo/third_party/GPEN/face_detect/utils/nms/py_cpu_nms.py
+++ /dev/null
@@ -1,38 +0,0 @@
-# --------------------------------------------------------
-# Fast R-CNN
-# Copyright (c) 2015 Microsoft
-# Licensed under The MIT License [see LICENSE for details]
-# Written by Ross Girshick
-# --------------------------------------------------------
-
-import numpy as np
-
-def py_cpu_nms(dets, thresh):
- """Pure Python NMS baseline."""
- x1 = dets[:, 0]
- y1 = dets[:, 1]
- x2 = dets[:, 2]
- y2 = dets[:, 3]
- scores = dets[:, 4]
-
- areas = (x2 - x1 + 1) * (y2 - y1 + 1)
- order = scores.argsort()[::-1]
-
- keep = []
- while order.size > 0:
- i = order[0]
- keep.append(i)
- xx1 = np.maximum(x1[i], x1[order[1:]])
- yy1 = np.maximum(y1[i], y1[order[1:]])
- xx2 = np.minimum(x2[i], x2[order[1:]])
- yy2 = np.minimum(y2[i], y2[order[1:]])
-
- w = np.maximum(0.0, xx2 - xx1 + 1)
- h = np.maximum(0.0, yy2 - yy1 + 1)
- inter = w * h
- ovr = inter / (areas[i] + areas[order[1:]] - inter)
-
- inds = np.where(ovr <= thresh)[0]
- order = order[inds + 1]
-
- return keep
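
A quick sanity check of py_cpu_nms (assuming the function above is in scope): two heavily overlapping boxes and one disjoint box; with thresh=0.5 the lower-scoring overlap is suppressed and the kept indices come back in descending score order.

import numpy as np

dets = np.array([
    [10.0, 10.0, 50.0, 50.0, 0.90],      # kept: highest score
    [12.0, 12.0, 52.0, 52.0, 0.80],      # suppressed: IoU with box 0 is ~0.83
    [100.0, 100.0, 140.0, 140.0, 0.70],  # kept: no overlap with box 0
])
print(py_cpu_nms(dets, thresh=0.5))  # expected: [0, 2]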
diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/feature_extraction_sequence_utils.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/feature_extraction_sequence_utils.py
deleted file mode 100644
index 40717d9931850057407f4d00f8da2c4db72b5f99..0000000000000000000000000000000000000000
--- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/feature_extraction_sequence_utils.py
+++ /dev/null
@@ -1,371 +0,0 @@
-# coding=utf-8
-# Copyright 2021 The HuggingFace Inc. team.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""
- Sequence feature extraction class for common feature extractors to preprocess sequences.
-"""
-from typing import Dict, List, Optional, Union
-
-import numpy as np
-
-from .feature_extraction_utils import BatchFeature, FeatureExtractionMixin
-from .utils import PaddingStrategy, TensorType, is_tf_tensor, is_torch_tensor, logging, to_numpy
-
-
-logger = logging.get_logger(__name__)
-
-
-class SequenceFeatureExtractor(FeatureExtractionMixin):
- """
- This is a general feature extraction class for speech recognition.
-
- Args:
- feature_size (`int`):
- The feature dimension of the extracted features.
- sampling_rate (`int`):
- The sampling rate at which the audio files should be digitized, expressed in hertz (Hz).
- padding_value (`float`):
- The value that is used to fill the padding values / vectors.
- """
-
- def __init__(self, feature_size: int, sampling_rate: int, padding_value: float, **kwargs):
- self.feature_size = feature_size
- self.sampling_rate = sampling_rate
- self.padding_value = padding_value
-
- self.padding_side = kwargs.pop("padding_side", "right")
- self.return_attention_mask = kwargs.pop("return_attention_mask", True)
-
- super().__init__(**kwargs)
-
- def pad(
- self,
- processed_features: Union[
- BatchFeature,
- List[BatchFeature],
- Dict[str, BatchFeature],
- Dict[str, List[BatchFeature]],
- List[Dict[str, BatchFeature]],
- ],
- padding: Union[bool, str, PaddingStrategy] = True,
- max_length: Optional[int] = None,
- truncation: bool = False,
- pad_to_multiple_of: Optional[int] = None,
- return_attention_mask: Optional[bool] = None,
- return_tensors: Optional[Union[str, TensorType]] = None,
- ) -> BatchFeature:
- """
- Pad input values / input vectors or a batch of input values / input vectors up to predefined length or to the
- max sequence length in the batch.
-
- Padding side (left/right) and padding values are defined at the feature extractor level (with
- `self.padding_side` and `self.padding_value`).
-
-
-
- If the `processed_features` passed are dictionary of numpy arrays, PyTorch tensors or TensorFlow tensors, the
- result will use the same type unless you provide a different tensor type with `return_tensors`. In the case of
- PyTorch tensors, you will lose the specific device of your tensors however.
-
-
-
- Args:
- processed_features ([`BatchFeature`], list of [`BatchFeature`], `Dict[str, List[float]]`, `Dict[str, List[List[float]]]` or `List[Dict[str, List[float]]]`):
- Processed inputs. Can represent one input ([`BatchFeature`] or `Dict[str, List[float]]`) or a batch of
- input values / vectors (list of [`BatchFeature`], *Dict[str, List[List[float]]]* or *List[Dict[str,
- List[float]]]*) so you can use this method during preprocessing as well as in a PyTorch Dataloader
- collate function.
-
- Instead of `List[float]` you can have tensors (numpy arrays, PyTorch tensors or TensorFlow tensors),
- see the note above for the return type.
- padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `True`):
- Select a strategy to pad the returned sequences (according to the model's padding side and padding
- index) among:
-
- - `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single
- sequence is provided).
- - `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum
- acceptable input length for the model if that argument is not provided.
- - `False` or `'do_not_pad'`: No padding (i.e., can output a batch with sequences of different
- lengths).
- max_length (`int`, *optional*):
- Maximum length of the returned list and optionally padding length (see above).
- truncation (`bool`):
- Activates truncation to cut input sequences longer than `max_length` to `max_length`.
- pad_to_multiple_of (`int`, *optional*):
- If set will pad the sequence to a multiple of the provided value.
-
- This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability
- `>= 7.5` (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128.
- return_attention_mask (`bool`, *optional*):
- Whether to return the attention mask. If left to the default, will return the attention mask according
- to the specific feature_extractor's default.
-
- [What are attention masks?](../glossary#attention-mask)
- return_tensors (`str` or [`~utils.TensorType`], *optional*):
- If set, will return tensors instead of list of python integers. Acceptable values are:
-
- - `'tf'`: Return TensorFlow `tf.constant` objects.
- - `'pt'`: Return PyTorch `torch.Tensor` objects.
- - `'np'`: Return Numpy `np.ndarray` objects.
- """
- # If we have a list of dicts, let's convert it in a dict of lists
- # We do this to allow using this method as a collate_fn function in PyTorch Dataloader
- if isinstance(processed_features, (list, tuple)) and isinstance(processed_features[0], (dict, BatchFeature)):
- processed_features = {
- key: [example[key] for example in processed_features] for key in processed_features[0].keys()
- }
-
- # The model's main input name, usually `input_values`, has to be passed for padding
- if self.model_input_names[0] not in processed_features:
- raise ValueError(
- "You should supply an instance of `transformers.BatchFeature` or list of `transformers.BatchFeature`"
- f" to this method that includes {self.model_input_names[0]}, but you provided"
- f" {list(processed_features.keys())}"
- )
-
- required_input = processed_features[self.model_input_names[0]]
- return_attention_mask = (
- return_attention_mask if return_attention_mask is not None else self.return_attention_mask
- )
-
- if len(required_input) == 0:
- if return_attention_mask:
- processed_features["attention_mask"] = []
- return processed_features
-
- # If we have PyTorch/TF tensors or lists as inputs, we cast them as Numpy arrays
- # and rebuild them afterwards if no return_tensors is specified
- # Note that we lose the specific device the tensor may be on for PyTorch
-
- first_element = required_input[0]
- if isinstance(first_element, (list, tuple)):
- # first_element might be an empty list/tuple in some edge cases so we grab the first non empty element.
- index = 0
- while len(required_input[index]) == 0:
- index += 1
- if index < len(required_input):
- first_element = required_input[index][0]
-
- if return_tensors is None:
- if is_tf_tensor(first_element):
- return_tensors = "tf"
- elif is_torch_tensor(first_element):
- return_tensors = "pt"
- elif isinstance(first_element, (int, float, list, tuple, np.ndarray)):
- return_tensors = "np"
- else:
- raise ValueError(
- f"type of {first_element} unknown: {type(first_element)}. "
- "Should be one of a python, numpy, pytorch or tensorflow object."
- )
-
- for key, value in processed_features.items():
- if isinstance(value[0], (int, float)):
- processed_features[key] = to_numpy(value)
- else:
- processed_features[key] = [to_numpy(v) for v in value]
-
- # Convert padding_strategy in PaddingStrategy
- padding_strategy = self._get_padding_strategies(padding=padding, max_length=max_length)
-
- required_input = processed_features[self.model_input_names[0]]
-
- batch_size = len(required_input)
- if not all(len(v) == batch_size for v in processed_features.values()):
- raise ValueError("Some items in the output dictionary have a different batch size than others.")
-
- truncated_inputs = []
- for i in range(batch_size):
- inputs = {k: v[i] for k, v in processed_features.items()}
- # truncation
- inputs_slice = self._truncate(
- inputs,
- max_length=max_length,
- pad_to_multiple_of=pad_to_multiple_of,
- truncation=truncation,
- )
- truncated_inputs.append(inputs_slice)
-
- if padding_strategy == PaddingStrategy.LONGEST:
- # make sure that `max_length` cannot be longer than the longest truncated length
- max_length = max(len(input_slice[self.model_input_names[0]]) for input_slice in truncated_inputs)
- padding_strategy = PaddingStrategy.MAX_LENGTH
-
- batch_outputs = {}
- for i in range(batch_size):
- # padding
- outputs = self._pad(
- truncated_inputs[i],
- max_length=max_length,
- padding_strategy=padding_strategy,
- pad_to_multiple_of=pad_to_multiple_of,
- return_attention_mask=return_attention_mask,
- )
-
- for key, value in outputs.items():
- if key not in batch_outputs:
- batch_outputs[key] = []
- if value.dtype is np.dtype(np.float64):
- value = value.astype(np.float32)
- batch_outputs[key].append(value)
-
- return BatchFeature(batch_outputs, tensor_type=return_tensors)
-
- def _pad(
- self,
- processed_features: Union[Dict[str, np.ndarray], BatchFeature],
- max_length: Optional[int] = None,
- padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD,
- pad_to_multiple_of: Optional[int] = None,
- return_attention_mask: Optional[bool] = None,
- ) -> dict:
- """
- Pad inputs (on left/right and up to predefined length or max length in the batch)
-
- Args:
- processed_features (`Union[Dict[str, np.ndarray], BatchFeature]`):
- Dictionary of input values (`np.ndarray[float]`) / input vectors (`List[np.ndarray[float]]`) or batch
- of inputs values (`List[np.ndarray[int]]`) / input vectors (`List[np.ndarray[int]]`)
- max_length (`int`, *optional*):
- Maximum length of the returned list and optionally padding length (see below)
- padding_strategy (`PaddingStrategy`, *optional*, default to `PaddingStrategy.DO_NOT_PAD`):
- PaddingStrategy to use for padding.
-
- - PaddingStrategy.LONGEST Pad to the longest sequence in the batch
- - PaddingStrategy.MAX_LENGTH: Pad to the max length (default)
- - PaddingStrategy.DO_NOT_PAD: Do not pad
- The feature_extractor padding sides are defined in self.padding_side:
-
- - 'left': pads on the left of the sequences
- - 'right': pads on the right of the sequences
- pad_to_multiple_of (`int`, *optional*):
- Integer if set will pad the sequence to a multiple of the provided value. This is especially useful to
- enable the use of Tensor Core on NVIDIA hardware with compute capability `>= 7.5` (Volta), or on TPUs
- which benefit from having sequence lengths be a multiple of 128.
- return_attention_mask (`bool`, *optional*):
- Set to False to avoid returning attention mask (default: set to model specifics)
- """
- required_input = processed_features[self.model_input_names[0]]
-
- if padding_strategy == PaddingStrategy.LONGEST:
- max_length = len(required_input)
-
- if max_length is not None and pad_to_multiple_of is not None and (max_length % pad_to_multiple_of != 0):
- max_length = ((max_length // pad_to_multiple_of) + 1) * pad_to_multiple_of
-
- needs_to_be_padded = padding_strategy != PaddingStrategy.DO_NOT_PAD and len(required_input) < max_length
-
- if return_attention_mask and "attention_mask" not in processed_features:
- processed_features["attention_mask"] = np.ones(len(required_input), dtype=np.int32)
-
- if needs_to_be_padded:
- difference = max_length - len(required_input)
- if self.padding_side == "right":
- if return_attention_mask:
- processed_features["attention_mask"] = np.pad(
- processed_features["attention_mask"], (0, difference)
- )
- padding_shape = ((0, difference), (0, 0)) if self.feature_size > 1 else (0, difference)
- processed_features[self.model_input_names[0]] = np.pad(
- required_input, padding_shape, "constant", constant_values=self.padding_value
- )
- elif self.padding_side == "left":
- if return_attention_mask:
- processed_features["attention_mask"] = np.pad(
- processed_features["attention_mask"], (difference, 0)
- )
- padding_shape = ((difference, 0), (0, 0)) if self.feature_size > 1 else (difference, 0)
- processed_features[self.model_input_names[0]] = np.pad(
- required_input, padding_shape, "constant", constant_values=self.padding_value
- )
- else:
- raise ValueError("Invalid padding strategy:" + str(self.padding_side))
-
- return processed_features
-
- def _truncate(
- self,
- processed_features: Union[Dict[str, np.ndarray], BatchFeature],
- max_length: Optional[int] = None,
- pad_to_multiple_of: Optional[int] = None,
- truncation: Optional[bool] = None,
- ):
- """
- Truncate inputs to predefined length or max length in the batch
-
- Args:
- processed_features(`Union[Dict[str, np.ndarray], BatchFeature]`):
- Dictionary of input values (`np.ndarray[float]`) / input vectors (`List[np.ndarray[float]]`) or batch
- of inputs values (`List[np.ndarray[int]]`) / input vectors (`List[np.ndarray[int]]`)
- max_length (`int`, *optional*):
- maximum length of the returned list and optionally padding length (see below)
- pad_to_multiple_of (`int`, *optional*):
- Integer if set will pad the sequence to a multiple of the provided value. This is especially useful to
- enable the use of Tensor Core on NVIDIA hardware with compute capability `>= 7.5` (Volta), or on TPUs
- which benefit from having sequence lengths be a multiple of 128.
- truncation (`bool`, *optional*):
- Activates truncation to cut input sequences longer than `max_length` to `max_length`.
- """
- if not truncation:
- return processed_features
- elif truncation and max_length is None:
- raise ValueError("When setting ``truncation=True``, make sure that ``max_length`` is defined.")
-
- required_input = processed_features[self.model_input_names[0]]
-
- # find `max_length` that fits `pad_to_multiple_of`
- if max_length is not None and pad_to_multiple_of is not None and (max_length % pad_to_multiple_of != 0):
- max_length = ((max_length // pad_to_multiple_of) + 1) * pad_to_multiple_of
-
- needs_to_be_truncated = len(required_input) > max_length
-
- if needs_to_be_truncated:
- processed_features[self.model_input_names[0]] = processed_features[self.model_input_names[0]][:max_length]
- if "attention_mask" in processed_features:
- processed_features["attention_mask"] = processed_features["attention_mask"][:max_length]
-
- return processed_features
-
- def _get_padding_strategies(self, padding=False, max_length=None):
- """
- Find the correct padding strategy
- """
-
- # Get padding strategy
- if padding is not False:
- if padding is True:
- padding_strategy = PaddingStrategy.LONGEST # Default to pad to the longest sequence in the batch
- elif not isinstance(padding, PaddingStrategy):
- padding_strategy = PaddingStrategy(padding)
- elif isinstance(padding, PaddingStrategy):
- padding_strategy = padding
- else:
- padding_strategy = PaddingStrategy.DO_NOT_PAD
-
- # Set max length if needed
- if max_length is None:
- if padding_strategy == PaddingStrategy.MAX_LENGTH:
- raise ValueError(
- f"When setting ``padding={PaddingStrategy.MAX_LENGTH}``, make sure that max_length is defined"
- )
-
- # Test if we have a padding value
- if padding_strategy != PaddingStrategy.DO_NOT_PAD and (self.padding_value is None):
- raise ValueError(
- "Asking to pad but the feature_extractor does not have a padding value. Please select a value to use"
- " as `padding_value`. For example: `feature_extractor.padding_value = 0.0`."
- )
-
- return padding_strategy
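
Stripped of strategy resolution, the right-padding that _pad performs for a 1-D input is just np.pad with the extractor's padding_value plus a matching attention-mask extension; when pad_to_multiple_of is set and max_length is not already a multiple, max_length is first rounded up via ((max_length // m) + 1) * m. A sketch of that core:

import numpy as np

def pad_right(input_values, max_length, padding_value=0.0):
    attention_mask = np.ones(len(input_values), dtype=np.int32)
    diff = max_length - len(input_values)
    if diff > 0:
        input_values = np.pad(input_values, (0, diff), constant_values=padding_value)
        attention_mask = np.pad(attention_mask, (0, diff))  # zeros mark padding
    return input_values, attention_mask

values, mask = pad_right(np.array([0.1, 0.2, 0.3]), max_length=5)
print(values)  # [0.1 0.2 0.3 0.  0. ]
print(mask)    # [1 1 1 0 0]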
diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/blenderbot_small/configuration_blenderbot_small.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/blenderbot_small/configuration_blenderbot_small.py
deleted file mode 100644
index fbc23435d66f312dce2656604c8f166bc0e7b8de..0000000000000000000000000000000000000000
--- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/blenderbot_small/configuration_blenderbot_small.py
+++ /dev/null
@@ -1,391 +0,0 @@
-# coding=utf-8
-# Copyright 2021 The Facebook, Inc. and The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-""" BlenderbotSmall model configuration"""
-
-from collections import OrderedDict
-from typing import Any, Mapping, Optional
-
-from ... import PreTrainedTokenizer
-from ...configuration_utils import PretrainedConfig
-from ...file_utils import TensorType, is_torch_available
-from ...onnx import OnnxConfig, OnnxConfigWithPast, OnnxSeq2SeqConfigWithPast
-from ...onnx.utils import compute_effective_axis_dimension
-from ...utils import logging
-
-
-logger = logging.get_logger(__name__)
-
-BLENDERBOT_SMALL_PRETRAINED_CONFIG_ARCHIVE_MAP = {
- "facebook/blenderbot_small-90M": "https://huggingface.co/facebook/blenderbot_small-90M/resolve/main/config.json",
- # See all BlenderbotSmall models at https://huggingface.co/models?filter=blenderbot_small
-}
-
-
-class BlenderbotSmallConfig(PretrainedConfig):
- r"""
- This is the configuration class to store the configuration of a [`BlenderbotSmallModel`]. It is used to instantiate
- a BlenderbotSmall model according to the specified arguments, defining the model architecture. Instantiating a
- configuration with the defaults will yield a similar configuration to that of the BlenderbotSmall
- [facebook/blenderbot_small-90M](https://huggingface.co/facebook/blenderbot_small-90M) architecture.
-
- Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
- documentation from [`PretrainedConfig`] for more information.
-
-
- Args:
- vocab_size (`int`, *optional*, defaults to 50265):
- Vocabulary size of the BlenderbotSmall model. Defines the number of different tokens that can be
- represented by the `inputs_ids` passed when calling [`BlenderbotSmallModel`] or [`TFBlenderbotSmallModel`].
- d_model (`int`, *optional*, defaults to 512):
- Dimensionality of the layers and the pooler layer.
- encoder_layers (`int`, *optional*, defaults to 8):
- Number of encoder layers.
- decoder_layers (`int`, *optional*, defaults to 8):
- Number of decoder layers.
- encoder_attention_heads (`int`, *optional*, defaults to 16):
- Number of attention heads for each attention layer in the Transformer encoder.
- decoder_attention_heads (`int`, *optional*, defaults to 16):
- Number of attention heads for each attention layer in the Transformer decoder.
- decoder_ffn_dim (`int`, *optional*, defaults to 2048):
- Dimensionality of the "intermediate" (often named feed-forward) layer in decoder.
- encoder_ffn_dim (`int`, *optional*, defaults to 2048):
- Dimensionality of the "intermediate" (often named feed-forward) layer in encoder.
- activation_function (`str` or `function`, *optional*, defaults to `"gelu"`):
- The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
- `"relu"`, `"silu"` and `"gelu_new"` are supported.
- dropout (`float`, *optional*, defaults to 0.1):
- The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
- attention_dropout (`float`, *optional*, defaults to 0.0):
- The dropout ratio for the attention probabilities.
- activation_dropout (`float`, *optional*, defaults to 0.0):
- The dropout ratio for activations inside the fully connected layer.
- max_position_embeddings (`int`, *optional*, defaults to 512):
- The maximum sequence length that this model might ever be used with. Typically set this to something large
- just in case (e.g., 512 or 1024 or 2048).
- init_std (`float`, *optional*, defaults to 0.02):
- The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
- encoder_layerdrop (`float`, *optional*, defaults to 0.0):
- The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
- for more details.
- decoder_layerdrop (`float`, *optional*, defaults to 0.0):
- The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556)
- for more details.
- scale_embedding (`bool`, *optional*, defaults to `False`):
- Scale embeddings by dividing by sqrt(d_model).
- use_cache (`bool`, *optional*, defaults to `True`):
- Whether or not the model should return the last key/values attentions (not used by all models)
- forced_eos_token_id (`int`, *optional*, defaults to 2):
- The id of the token to force as the last generated token when `max_length` is reached. Usually set to
- `eos_token_id`.
-
- Example:
-
- ```python
- >>> from transformers import BlenderbotSmallConfig, BlenderbotSmallModel
-
- >>> # Initializing a BlenderbotSmall facebook/blenderbot_small-90M style configuration
- >>> configuration = BlenderbotSmallConfig()
-
- >>> # Initializing a model (with random weights) from the facebook/blenderbot_small-90M style configuration
- >>> model = BlenderbotSmallModel(configuration)
-
- >>> # Accessing the model configuration
- >>> configuration = model.config
- ```"""
- model_type = "blenderbot-small"
- keys_to_ignore_at_inference = ["past_key_values"]
- attribute_map = {"num_attention_heads": "encoder_attention_heads", "hidden_size": "d_model"}
-
- def __init__(
- self,
- vocab_size=50265,
- max_position_embeddings=512,
- encoder_layers=8,
- encoder_ffn_dim=2048,
- encoder_attention_heads=16,
- decoder_layers=8,
- decoder_ffn_dim=2048,
- decoder_attention_heads=16,
- encoder_layerdrop=0.0,
- decoder_layerdrop=0.0,
- use_cache=True,
- is_encoder_decoder=True,
- activation_function="gelu",
- d_model=512,
- dropout=0.1,
- attention_dropout=0.0,
- activation_dropout=0.0,
- init_std=0.02,
- decoder_start_token_id=1,
- scale_embedding=False,
- pad_token_id=0,
- bos_token_id=1,
- eos_token_id=2,
- forced_eos_token_id=2,
- **kwargs,
- ):
- self.vocab_size = vocab_size
- self.max_position_embeddings = max_position_embeddings
- self.d_model = d_model
- self.encoder_ffn_dim = encoder_ffn_dim
- self.encoder_layers = encoder_layers
- self.encoder_attention_heads = encoder_attention_heads
- self.decoder_ffn_dim = decoder_ffn_dim
- self.decoder_layers = decoder_layers
- self.decoder_attention_heads = decoder_attention_heads
- self.dropout = dropout
- self.attention_dropout = attention_dropout
- self.activation_dropout = activation_dropout
- self.activation_function = activation_function
- self.init_std = init_std
- self.encoder_layerdrop = encoder_layerdrop
- self.decoder_layerdrop = decoder_layerdrop
- self.use_cache = use_cache
- self.num_hidden_layers = encoder_layers
- self.scale_embedding = scale_embedding # scale factor will be sqrt(d_model) if True
-
- super().__init__(
- pad_token_id=pad_token_id,
- bos_token_id=bos_token_id,
- eos_token_id=eos_token_id,
- is_encoder_decoder=is_encoder_decoder,
- decoder_start_token_id=decoder_start_token_id,
- forced_eos_token_id=forced_eos_token_id,
- **kwargs,
- )
-
-
-# Copied from transformers.models.bart.configuration_bart.BartOnnxConfig
-class BlenderbotSmallOnnxConfig(OnnxSeq2SeqConfigWithPast):
- @property
- def inputs(self) -> Mapping[str, Mapping[int, str]]:
- if self.task in ["default", "seq2seq-lm"]:
- common_inputs = OrderedDict(
- [
- ("input_ids", {0: "batch", 1: "encoder_sequence"}),
- ("attention_mask", {0: "batch", 1: "encoder_sequence"}),
- ]
- )
-
- if self.use_past:
- common_inputs["decoder_input_ids"] = {0: "batch"}
- common_inputs["decoder_attention_mask"] = {0: "batch", 1: "past_decoder_sequence + sequence"}
- else:
- common_inputs["decoder_input_ids"] = {0: "batch", 1: "decoder_sequence"}
- common_inputs["decoder_attention_mask"] = {0: "batch", 1: "decoder_sequence"}
-
- if self.use_past:
- self.fill_with_past_key_values_(common_inputs, direction="inputs")
- elif self.task == "causal-lm":
- # TODO: figure this case out.
- common_inputs = OrderedDict(
- [
- ("input_ids", {0: "batch", 1: "encoder_sequence"}),
- ("attention_mask", {0: "batch", 1: "encoder_sequence"}),
- ]
- )
- if self.use_past:
- num_encoder_layers, _ = self.num_layers
- for i in range(num_encoder_layers):
- common_inputs[f"past_key_values.{i}.key"] = {0: "batch", 2: "past_sequence + sequence"}
- common_inputs[f"past_key_values.{i}.value"] = {0: "batch", 2: "past_sequence + sequence"}
- else:
- common_inputs = OrderedDict(
- [
- ("input_ids", {0: "batch", 1: "encoder_sequence"}),
- ("attention_mask", {0: "batch", 1: "encoder_sequence"}),
- ("decoder_input_ids", {0: "batch", 1: "decoder_sequence"}),
- ("decoder_attention_mask", {0: "batch", 1: "decoder_sequence"}),
- ]
- )
-
- return common_inputs
-
- @property
- def outputs(self) -> Mapping[str, Mapping[int, str]]:
- if self.task in ["default", "seq2seq-lm"]:
- common_outputs = super().outputs
- else:
- common_outputs = super(OnnxConfigWithPast, self).outputs
- if self.use_past:
- num_encoder_layers, _ = self.num_layers
- for i in range(num_encoder_layers):
- common_outputs[f"present.{i}.key"] = {0: "batch", 2: "past_sequence + sequence"}
- common_outputs[f"present.{i}.value"] = {0: "batch", 2: "past_sequence + sequence"}
- return common_outputs
-
- def _generate_dummy_inputs_for_default_and_seq2seq_lm(
- self,
- tokenizer: PreTrainedTokenizer,
- batch_size: int = -1,
- seq_length: int = -1,
- is_pair: bool = False,
- framework: Optional[TensorType] = None,
- ) -> Mapping[str, Any]:
- encoder_inputs = self._generate_dummy_inputs_for_sequence_classification_and_question_answering(
- tokenizer, batch_size, seq_length, is_pair, framework
- )
-
- # Generate decoder inputs
- decoder_seq_length = seq_length if not self.use_past else 1
- decoder_inputs = self._generate_dummy_inputs_for_sequence_classification_and_question_answering(
- tokenizer, batch_size, decoder_seq_length, is_pair, framework
- )
- decoder_inputs = {f"decoder_{name}": tensor for name, tensor in decoder_inputs.items()}
- common_inputs = dict(**encoder_inputs, **decoder_inputs)
-
- if self.use_past:
- if not is_torch_available():
- raise ValueError("Cannot generate dummy past_keys inputs without PyTorch installed.")
- else:
- import torch
- batch, encoder_seq_length = common_inputs["input_ids"].shape
- decoder_seq_length = common_inputs["decoder_input_ids"].shape[1]
- num_encoder_attention_heads, num_decoder_attention_heads = self.num_attention_heads
- encoder_shape = (
- batch,
- num_encoder_attention_heads,
- encoder_seq_length,
- self._config.hidden_size // num_encoder_attention_heads,
- )
- decoder_past_length = decoder_seq_length + 3
- decoder_shape = (
- batch,
- num_decoder_attention_heads,
- decoder_past_length,
- self._config.hidden_size // num_decoder_attention_heads,
- )
-
- common_inputs["decoder_attention_mask"] = torch.cat(
- [common_inputs["decoder_attention_mask"], torch.ones(batch, decoder_past_length)], dim=1
- )
-
- common_inputs["past_key_values"] = []
- # If the number of encoder and decoder layers are present in the model configuration, both are considered
- num_encoder_layers, num_decoder_layers = self.num_layers
- min_num_layers = min(num_encoder_layers, num_decoder_layers)
- max_num_layers = max(num_encoder_layers, num_decoder_layers) - min_num_layers
- remaining_side_name = "encoder" if num_encoder_layers > num_decoder_layers else "decoder"
-
- for _ in range(min_num_layers):
- common_inputs["past_key_values"].append(
- (
- torch.zeros(decoder_shape),
- torch.zeros(decoder_shape),
- torch.zeros(encoder_shape),
- torch.zeros(encoder_shape),
- )
- )
- # TODO: test this.
- shape = encoder_shape if remaining_side_name == "encoder" else decoder_shape
- for _ in range(min_num_layers, max_num_layers):
- common_inputs["past_key_values"].append((torch.zeros(shape), torch.zeros(shape)))
- return common_inputs
-
- def _generate_dummy_inputs_for_causal_lm(
- self,
- tokenizer: PreTrainedTokenizer,
- batch_size: int = -1,
- seq_length: int = -1,
- is_pair: bool = False,
- framework: Optional[TensorType] = None,
- ) -> Mapping[str, Any]:
- common_inputs = self._generate_dummy_inputs_for_sequence_classification_and_question_answering(
- tokenizer, batch_size, seq_length, is_pair, framework
- )
-
- if self.use_past:
- if not is_torch_available():
- raise ValueError("Cannot generate dummy past_keys inputs without PyTorch installed.")
- else:
- import torch
- batch, seqlen = common_inputs["input_ids"].shape
- # Not using the same length for past_key_values
- past_key_values_length = seqlen + 2
- num_encoder_layers, _ = self.num_layers
- num_encoder_attention_heads, _ = self.num_attention_heads
- past_shape = (
- batch,
- num_encoder_attention_heads,
- past_key_values_length,
- self._config.hidden_size // num_encoder_attention_heads,
- )
-
- mask_dtype = common_inputs["attention_mask"].dtype
- common_inputs["attention_mask"] = torch.cat(
- [common_inputs["attention_mask"], torch.ones(batch, past_key_values_length, dtype=mask_dtype)], dim=1
- )
- common_inputs["past_key_values"] = [
- (torch.zeros(past_shape), torch.zeros(past_shape)) for _ in range(num_encoder_layers)
- ]
- return common_inputs
-
- def _generate_dummy_inputs_for_sequence_classification_and_question_answering(
- self,
- tokenizer: PreTrainedTokenizer,
- batch_size: int = -1,
- seq_length: int = -1,
- is_pair: bool = False,
- framework: Optional[TensorType] = None,
- ) -> Mapping[str, Any]:
- # Copied from OnnxConfig.generate_dummy_inputs
- # Did not use super(OnnxConfigWithPast, self).generate_dummy_inputs for code clarity.
- # If dynamic axis (-1) we forward with a fixed dimension of 2 samples to avoid optimizations made by ONNX
- batch_size = compute_effective_axis_dimension(
- batch_size, fixed_dimension=OnnxConfig.default_fixed_batch, num_token_to_add=0
- )
-
- # If dynamic axis (-1) we forward with a fixed dimension of 8 tokens to avoid optimizations made by ONNX
- token_to_add = tokenizer.num_special_tokens_to_add(is_pair)
- seq_length = compute_effective_axis_dimension(
- seq_length, fixed_dimension=OnnxConfig.default_fixed_sequence, num_token_to_add=token_to_add
- )
-
- # Generate dummy inputs according to compute batch and sequence
- dummy_input = [" ".join([tokenizer.unk_token]) * seq_length] * batch_size
- common_inputs = dict(tokenizer(dummy_input, return_tensors=framework))
- return common_inputs
-
- def generate_dummy_inputs(
- self,
- tokenizer: PreTrainedTokenizer,
- batch_size: int = -1,
- seq_length: int = -1,
- is_pair: bool = False,
- framework: Optional[TensorType] = None,
- ) -> Mapping[str, Any]:
- if self.task in ["default", "seq2seq-lm"]:
- common_inputs = self._generate_dummy_inputs_for_default_and_seq2seq_lm(
- tokenizer, batch_size=batch_size, seq_length=seq_length, is_pair=is_pair, framework=framework
- )
-
- elif self.task == "causal-lm":
- common_inputs = self._generate_dummy_inputs_for_causal_lm(
- tokenizer, batch_size=batch_size, seq_length=seq_length, is_pair=is_pair, framework=framework
- )
- else:
- common_inputs = self._generate_dummy_inputs_for_sequence_classification_and_question_answering(
- tokenizer, batch_size=batch_size, seq_length=seq_length, is_pair=is_pair, framework=framework
- )
-
- return common_inputs
-
- def _flatten_past_key_values_(self, flattened_output, name, idx, t):
- if self.task in ["default", "seq2seq-lm"]:
- flattened_output = super()._flatten_past_key_values_(flattened_output, name, idx, t)
- else:
- flattened_output = super(OnnxSeq2SeqConfigWithPast, self)._flatten_past_key_values_(
- flattened_output, name, idx, t
- )
diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/electra/__init__.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/electra/__init__.py
deleted file mode 100644
index 09ce039d25fd057608693a8d6c9d79358d970225..0000000000000000000000000000000000000000
--- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/electra/__init__.py
+++ /dev/null
@@ -1,168 +0,0 @@
-# Copyright 2020 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from typing import TYPE_CHECKING
-
-from ...utils import (
- OptionalDependencyNotAvailable,
- _LazyModule,
- is_flax_available,
- is_tf_available,
- is_tokenizers_available,
- is_torch_available,
-)
-
-
-_import_structure = {
- "configuration_electra": ["ELECTRA_PRETRAINED_CONFIG_ARCHIVE_MAP", "ElectraConfig", "ElectraOnnxConfig"],
- "tokenization_electra": ["ElectraTokenizer"],
-}
-
-try:
- if not is_tokenizers_available():
- raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
- pass
-else:
- _import_structure["tokenization_electra_fast"] = ["ElectraTokenizerFast"]
-
-try:
- if not is_torch_available():
- raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
- pass
-else:
- _import_structure["modeling_electra"] = [
- "ELECTRA_PRETRAINED_MODEL_ARCHIVE_LIST",
- "ElectraForCausalLM",
- "ElectraForMaskedLM",
- "ElectraForMultipleChoice",
- "ElectraForPreTraining",
- "ElectraForQuestionAnswering",
- "ElectraForSequenceClassification",
- "ElectraForTokenClassification",
- "ElectraModel",
- "ElectraPreTrainedModel",
- "load_tf_weights_in_electra",
- ]
-
-try:
- if not is_tf_available():
- raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
- pass
-else:
- _import_structure["modeling_tf_electra"] = [
- "TF_ELECTRA_PRETRAINED_MODEL_ARCHIVE_LIST",
- "TFElectraForMaskedLM",
- "TFElectraForMultipleChoice",
- "TFElectraForPreTraining",
- "TFElectraForQuestionAnswering",
- "TFElectraForSequenceClassification",
- "TFElectraForTokenClassification",
- "TFElectraModel",
- "TFElectraPreTrainedModel",
- ]
-
-try:
- if not is_flax_available():
- raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
- pass
-else:
- _import_structure["modeling_flax_electra"] = [
- "FlaxElectraForCausalLM",
- "FlaxElectraForMaskedLM",
- "FlaxElectraForMultipleChoice",
- "FlaxElectraForPreTraining",
- "FlaxElectraForQuestionAnswering",
- "FlaxElectraForSequenceClassification",
- "FlaxElectraForTokenClassification",
- "FlaxElectraModel",
- "FlaxElectraPreTrainedModel",
- ]
-
-
-if TYPE_CHECKING:
- from .configuration_electra import ELECTRA_PRETRAINED_CONFIG_ARCHIVE_MAP, ElectraConfig, ElectraOnnxConfig
- from .tokenization_electra import ElectraTokenizer
-
- try:
- if not is_tokenizers_available():
- raise OptionalDependencyNotAvailable()
- except OptionalDependencyNotAvailable:
- pass
- else:
- from .tokenization_electra_fast import ElectraTokenizerFast
-
- try:
- if not is_torch_available():
- raise OptionalDependencyNotAvailable()
- except OptionalDependencyNotAvailable:
- pass
- else:
- from .modeling_electra import (
- ELECTRA_PRETRAINED_MODEL_ARCHIVE_LIST,
- ElectraForCausalLM,
- ElectraForMaskedLM,
- ElectraForMultipleChoice,
- ElectraForPreTraining,
- ElectraForQuestionAnswering,
- ElectraForSequenceClassification,
- ElectraForTokenClassification,
- ElectraModel,
- ElectraPreTrainedModel,
- load_tf_weights_in_electra,
- )
-
- try:
- if not is_tf_available():
- raise OptionalDependencyNotAvailable()
- except OptionalDependencyNotAvailable:
- pass
- else:
- from .modeling_tf_electra import (
- TF_ELECTRA_PRETRAINED_MODEL_ARCHIVE_LIST,
- TFElectraForMaskedLM,
- TFElectraForMultipleChoice,
- TFElectraForPreTraining,
- TFElectraForQuestionAnswering,
- TFElectraForSequenceClassification,
- TFElectraForTokenClassification,
- TFElectraModel,
- TFElectraPreTrainedModel,
- )
-
- try:
- if not is_flax_available():
- raise OptionalDependencyNotAvailable()
- except OptionalDependencyNotAvailable:
- pass
- else:
- from .modeling_flax_electra import (
- FlaxElectraForCausalLM,
- FlaxElectraForMaskedLM,
- FlaxElectraForMultipleChoice,
- FlaxElectraForPreTraining,
- FlaxElectraForQuestionAnswering,
- FlaxElectraForSequenceClassification,
- FlaxElectraForTokenClassification,
- FlaxElectraModel,
- FlaxElectraPreTrainedModel,
- )
-
-else:
- import sys
-
- sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
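
The module above never imports the heavy backends eagerly: _LazyModule replaces the package in sys.modules so each symbol is resolved on first attribute access. The mechanism, reduced to a stand-alone toy built on importlib (not transformers' actual implementation):

import importlib
import types

class LazyPackage(types.ModuleType):
    def __init__(self, name, import_structure):
        super().__init__(name)
        # invert {module: [symbols]} into {symbol: module}
        self._symbol_to_module = {
            sym: mod for mod, syms in import_structure.items() for sym in syms
        }

    def __getattr__(self, name):
        mapping = self.__dict__.get("_symbol_to_module", {})
        if name not in mapping:
            raise AttributeError(name)
        value = getattr(importlib.import_module(mapping[name]), name)
        setattr(self, name, value)  # cache so the import happens only once
        return value

pkg = LazyPackage("demo", {"json": ["dumps", "loads"]})
print(pkg.dumps({"a": 1}))  # json is imported lazily, printing: {"a": 1}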
diff --git a/spaces/yl12053/so-vits-4.1-Kitasan-Black/diffusion/logger/__init__.py b/spaces/yl12053/so-vits-4.1-Kitasan-Black/diffusion/logger/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/backbone/res2net.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/backbone/res2net.py
deleted file mode 100644
index 1d0d40adb4a300d916deecebd20bcaac08936e6d..0000000000000000000000000000000000000000
--- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/backbone/res2net.py
+++ /dev/null
@@ -1,802 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-# This file is modified from https://github.com/Res2Net/Res2Net-detectron2/blob/master/detectron2/modeling/backbone/resnet.py
-# The original file is under Apache-2.0 License
-import numpy as np
-import fvcore.nn.weight_init as weight_init
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from detectron2.layers import (
- CNNBlockBase,
- Conv2d,
- DeformConv,
- ModulatedDeformConv,
- ShapeSpec,
- get_norm,
-)
-
-from detectron2.modeling.backbone import Backbone
-from detectron2.modeling.backbone.fpn import FPN
-from detectron2.modeling.backbone.build import BACKBONE_REGISTRY
-from .fpn_p5 import LastLevelP6P7_P5
-from .bifpn import BiFPN
-
-__all__ = [
- "ResNetBlockBase",
- "BasicBlock",
- "BottleneckBlock",
- "DeformBottleneckBlock",
- "BasicStem",
- "ResNet",
- "make_stage",
- "build_res2net_backbone",
-]
-
-
-ResNetBlockBase = CNNBlockBase
-"""
-Alias for backward compatibility.
-"""
-
-
-class BasicBlock(CNNBlockBase):
- """
- The basic residual block for ResNet-18 and ResNet-34, with two 3x3 conv layers
- and a projection shortcut if needed.
- """
-
- def __init__(self, in_channels, out_channels, *, stride=1, norm="BN"):
- """
- Args:
- in_channels (int): Number of input channels.
- out_channels (int): Number of output channels.
- stride (int): Stride for the first conv.
- norm (str or callable): normalization for all conv layers.
- See :func:`layers.get_norm` for supported format.
- """
- super().__init__(in_channels, out_channels, stride)
-
- if in_channels != out_channels:
- self.shortcut = Conv2d(
- in_channels,
- out_channels,
- kernel_size=1,
- stride=stride,
- bias=False,
- norm=get_norm(norm, out_channels),
- )
- else:
- self.shortcut = None
-
- self.conv1 = Conv2d(
- in_channels,
- out_channels,
- kernel_size=3,
- stride=stride,
- padding=1,
- bias=False,
- norm=get_norm(norm, out_channels),
- )
-
- self.conv2 = Conv2d(
- out_channels,
- out_channels,
- kernel_size=3,
- stride=1,
- padding=1,
- bias=False,
- norm=get_norm(norm, out_channels),
- )
-
- for layer in [self.conv1, self.conv2, self.shortcut]:
- if layer is not None: # shortcut can be None
- weight_init.c2_msra_fill(layer)
-
- def forward(self, x):
- out = self.conv1(x)
- out = F.relu_(out)
- out = self.conv2(out)
-
- if self.shortcut is not None:
- shortcut = self.shortcut(x)
- else:
- shortcut = x
-
- out += shortcut
- out = F.relu_(out)
- return out
-
-
-class BottleneckBlock(CNNBlockBase):
- """
- The standard bottle2neck residual block used by Res2Net-50, 101 and 152.
- """
-
- def __init__(
- self,
- in_channels,
- out_channels,
- *,
- bottleneck_channels,
- stride=1,
- num_groups=1,
- norm="BN",
- stride_in_1x1=False,
- dilation=1,
- basewidth=26,
- scale=4,
- ):
- """
- Args:
- bottleneck_channels (int): number of output channels for the 3x3
- "bottleneck" conv layers.
- num_groups (int): number of groups for the 3x3 conv layer.
- norm (str or callable): normalization for all conv layers.
- See :func:`layers.get_norm` for supported format.
- stride_in_1x1 (bool): when stride>1, whether to put stride in the
- first 1x1 convolution or the bottleneck 3x3 convolution.
- dilation (int): the dilation rate of the 3x3 conv layer.
- """
- super().__init__(in_channels, out_channels, stride)
-
- if in_channels != out_channels:
- self.shortcut = nn.Sequential(
- nn.AvgPool2d(kernel_size=stride, stride=stride,
- ceil_mode=True, count_include_pad=False),
- Conv2d(
- in_channels,
- out_channels,
- kernel_size=1,
- stride=1,
- bias=False,
- norm=get_norm(norm, out_channels),
- )
- )
- else:
- self.shortcut = None
-
- # The original MSRA ResNet models have stride in the first 1x1 conv
- # The subsequent fb.torch.resnet and Caffe2 ResNe[X]t implementations have
- # stride in the 3x3 conv
- stride_1x1, stride_3x3 = (stride, 1) if stride_in_1x1 else (1, stride)
- width = bottleneck_channels // scale
-
- self.conv1 = Conv2d(
- in_channels,
- bottleneck_channels,
- kernel_size=1,
- stride=stride_1x1,
- bias=False,
- norm=get_norm(norm, bottleneck_channels),
- )
- if scale == 1:
- self.nums = 1
- else:
- self.nums = scale - 1
- if self.in_channels != self.out_channels and stride_3x3 != 2:
- self.pool = nn.AvgPool2d(kernel_size=3, stride=stride_3x3, padding=1)
-
- convs = []
- bns = []
- for i in range(self.nums):
- convs.append(nn.Conv2d(
- width,
- width,
- kernel_size=3,
- stride=stride_3x3,
- padding=1 * dilation,
- bias=False,
- groups=num_groups,
- dilation=dilation,
- ))
- bns.append(get_norm(norm, width))
- self.convs = nn.ModuleList(convs)
- self.bns = nn.ModuleList(bns)
-
- self.conv3 = Conv2d(
- bottleneck_channels,
- out_channels,
- kernel_size=1,
- bias=False,
- norm=get_norm(norm, out_channels),
- )
- self.scale = scale
- self.width = width
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.stride_3x3 = stride_3x3
- for layer in [self.conv1, self.conv3]:
- if layer is not None: # shortcut can be None
- weight_init.c2_msra_fill(layer)
- if self.shortcut is not None:
- for layer in self.shortcut.modules():
- if isinstance(layer, Conv2d):
- weight_init.c2_msra_fill(layer)
-
- for layer in self.convs:
- if layer is not None: # shortcut can be None
- weight_init.c2_msra_fill(layer)
-
- # Zero-initialize the last normalization in each residual branch,
- # so that at the beginning, the residual branch starts with zeros,
- # and each residual block behaves like an identity.
- # See Sec 5.1 in "Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour":
- # "For BN layers, the learnable scaling coefficient γ is initialized
- # to be 1, except for each residual block's last BN
- # where γ is initialized to be 0."
-
- # nn.init.constant_(self.conv3.norm.weight, 0)
- # TODO this somehow hurts performance when training GN models from scratch.
- # Add it as an option when we need to use this code to train a backbone.
-
- def forward(self, x):
- out = self.conv1(x)
- out = F.relu_(out)
-
- spx = torch.split(out, self.width, 1)
- for i in range(self.nums):
- if i == 0 or self.in_channels != self.out_channels:
- sp = spx[i]
- else:
- sp = sp + spx[i]
- sp = self.convs[i](sp)
- sp = F.relu_(self.bns[i](sp))
- if i == 0:
- out = sp
- else:
- out = torch.cat((out, sp), 1)
- if self.scale != 1 and self.stride_3x3 == 1:
- out = torch.cat((out, spx[self.nums]), 1)
- elif self.scale != 1 and self.stride_3x3 == 2:
- out = torch.cat((out, self.pool(spx[self.nums])), 1)
-
- out = self.conv3(out)
-
- if self.shortcut is not None:
- shortcut = self.shortcut(x)
- else:
- shortcut = x
-
- out += shortcut
- out = F.relu_(out)
- return out
-
-
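
The split/accumulate/concat loop in BottleneckBlock.forward is the Res2Net "scale" trick: the bottleneck channels are split into scale groups, each group after the first adds the previous group's output before its 3x3 conv, and the last group passes through untouched, yielding multiple receptive fields inside one block. The identity-path case (stride 1, matching channels), reduced to a sketch without normalization:

import torch
import torch.nn as nn
import torch.nn.functional as F

scale, width = 4, 8
convs = nn.ModuleList(nn.Conv2d(width, width, 3, padding=1) for _ in range(scale - 1))

x = torch.randn(1, scale * width, 16, 16)
spx = torch.split(x, width, dim=1)  # `scale` chunks of `width` channels each
out = sp = None
for i, conv in enumerate(convs):
    sp = spx[i] if i == 0 else sp + spx[i]  # hierarchical residual connection
    sp = F.relu_(conv(sp))
    out = sp if i == 0 else torch.cat((out, sp), dim=1)
out = torch.cat((out, spx[-1]), dim=1)  # last chunk is concatenated unprocessed
print(out.shape)  # torch.Size([1, 32, 16, 16])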
-class DeformBottleneckBlock(ResNetBlockBase):
- """
- Not implemented for res2net yet.
- Similar to :class:`BottleneckBlock`, but with deformable conv in the 3x3 convolution.
- """
-
- def __init__(
- self,
- in_channels,
- out_channels,
- *,
- bottleneck_channels,
- stride=1,
- num_groups=1,
- norm="BN",
- stride_in_1x1=False,
- dilation=1,
- deform_modulated=False,
- deform_num_groups=1,
- basewidth=26,
- scale=4,
- ):
- super().__init__(in_channels, out_channels, stride)
- self.deform_modulated = deform_modulated
-
- if in_channels != out_channels:
- # self.shortcut = Conv2d(
- # in_channels,
- # out_channels,
- # kernel_size=1,
- # stride=stride,
- # bias=False,
- # norm=get_norm(norm, out_channels),
- # )
- self.shortcut = nn.Sequential(
- nn.AvgPool2d(kernel_size=stride, stride=stride,
- ceil_mode=True, count_include_pad=False),
- Conv2d(
- in_channels,
- out_channels,
- kernel_size=1,
- stride=1,
- bias=False,
- norm=get_norm(norm, out_channels),
- )
- )
- else:
- self.shortcut = None
-
- stride_1x1, stride_3x3 = (stride, 1) if stride_in_1x1 else (1, stride)
-        width = bottleneck_channels // scale
-
- self.conv1 = Conv2d(
- in_channels,
- bottleneck_channels,
- kernel_size=1,
- stride=stride_1x1,
- bias=False,
- norm=get_norm(norm, bottleneck_channels),
- )
-
- if scale == 1:
- self.nums = 1
- else:
-            self.nums = scale - 1
-        # As in BottleneckBlock: the spare split is pooled in forward() only when
-        # the 3x3 convs run at stride 2.
-        if self.in_channels != self.out_channels and stride_3x3 == 2:
-            self.pool = nn.AvgPool2d(kernel_size=3, stride=stride_3x3, padding=1)
-
- if deform_modulated:
- deform_conv_op = ModulatedDeformConv
-            # offset channels: 2 * k * k for the offsets, plus k * k mask channels when modulated
- offset_channels = 27
- else:
- deform_conv_op = DeformConv
- offset_channels = 18
-
- # self.conv2_offset = Conv2d(
- # bottleneck_channels,
- # offset_channels * deform_num_groups,
- # kernel_size=3,
- # stride=stride_3x3,
- # padding=1 * dilation,
- # dilation=dilation,
- # )
- # self.conv2 = deform_conv_op(
- # bottleneck_channels,
- # bottleneck_channels,
- # kernel_size=3,
- # stride=stride_3x3,
- # padding=1 * dilation,
- # bias=False,
- # groups=num_groups,
- # dilation=dilation,
- # deformable_groups=deform_num_groups,
- # norm=get_norm(norm, bottleneck_channels),
- # )
-
- conv2_offsets = []
- convs = []
- bns = []
- for i in range(self.nums):
- conv2_offsets.append(Conv2d(
- width,
- offset_channels * deform_num_groups,
- kernel_size=3,
- stride=stride_3x3,
- padding=1 * dilation,
- bias=False,
- groups=num_groups,
- dilation=dilation,
- ))
- convs.append(deform_conv_op(
- width,
- width,
- kernel_size=3,
- stride=stride_3x3,
- padding=1 * dilation,
- bias=False,
- groups=num_groups,
- dilation=dilation,
- deformable_groups=deform_num_groups,
- ))
- bns.append(get_norm(norm, width))
- self.conv2_offsets = nn.ModuleList(conv2_offsets)
- self.convs = nn.ModuleList(convs)
- self.bns = nn.ModuleList(bns)
-
- self.conv3 = Conv2d(
- bottleneck_channels,
- out_channels,
- kernel_size=1,
- bias=False,
- norm=get_norm(norm, out_channels),
- )
- self.scale = scale
- self.width = width
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.stride_3x3 = stride_3x3
- # for layer in [self.conv1, self.conv2, self.conv3, self.shortcut]:
- # if layer is not None: # shortcut can be None
- # weight_init.c2_msra_fill(layer)
-
- # nn.init.constant_(self.conv2_offset.weight, 0)
- # nn.init.constant_(self.conv2_offset.bias, 0)
- for layer in [self.conv1, self.conv3]:
- if layer is not None: # shortcut can be None
- weight_init.c2_msra_fill(layer)
- if self.shortcut is not None:
- for layer in self.shortcut.modules():
- if isinstance(layer, Conv2d):
- weight_init.c2_msra_fill(layer)
-
- for layer in self.convs:
- if layer is not None: # shortcut can be None
- weight_init.c2_msra_fill(layer)
-
- for layer in self.conv2_offsets:
- if layer.weight is not None:
- nn.init.constant_(layer.weight, 0)
- if layer.bias is not None:
- nn.init.constant_(layer.bias, 0)
-
- def forward(self, x):
- out = self.conv1(x)
- out = F.relu_(out)
-
- # if self.deform_modulated:
- # offset_mask = self.conv2_offset(out)
- # offset_x, offset_y, mask = torch.chunk(offset_mask, 3, dim=1)
- # offset = torch.cat((offset_x, offset_y), dim=1)
- # mask = mask.sigmoid()
- # out = self.conv2(out, offset, mask)
- # else:
- # offset = self.conv2_offset(out)
- # out = self.conv2(out, offset)
- # out = F.relu_(out)
-
- spx = torch.split(out, self.width, 1)
- for i in range(self.nums):
-            if i == 0 or self.in_channels != self.out_channels:
- sp = spx[i].contiguous()
- else:
- sp = sp + spx[i].contiguous()
-
- # sp = self.convs[i](sp)
- if self.deform_modulated:
- offset_mask = self.conv2_offsets[i](sp)
- offset_x, offset_y, mask = torch.chunk(offset_mask, 3, dim=1)
- offset = torch.cat((offset_x, offset_y), dim=1)
- mask = mask.sigmoid()
- sp = self.convs[i](sp, offset, mask)
- else:
- offset = self.conv2_offsets[i](sp)
- sp = self.convs[i](sp, offset)
- sp = F.relu_(self.bns[i](sp))
-            if i == 0:
- out = sp
- else:
- out = torch.cat((out, sp), 1)
-        if self.scale != 1 and self.stride_3x3 == 1:
-            out = torch.cat((out, spx[self.nums]), 1)
-        elif self.scale != 1 and self.stride_3x3 == 2:
-            out = torch.cat((out, self.pool(spx[self.nums])), 1)
-
- out = self.conv3(out)
-
- if self.shortcut is not None:
- shortcut = self.shortcut(x)
- else:
- shortcut = x
-
- out += shortcut
- out = F.relu_(out)
- return out
-
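The modulated branch above splits the offset predictor's output into x-offsets, y-offsets, and a sigmoid-gated mask. A shape-level sketch of that bookkeeping, assuming `deform_num_groups=1` and a 3x3 kernel (so 27 channels = 9 + 9 + 9); detectron2's `ModulatedDeformConv` consumes the resulting tensors directly:

```python
import torch

# offset_mask comes from conv2_offsets[i]; 27 channels for a 3x3 kernel.
offset_mask = torch.randn(2, 27, 14, 14)
offset_x, offset_y, mask = torch.chunk(offset_mask, 3, dim=1)  # 9 channels each
offset = torch.cat((offset_x, offset_y), dim=1)                # 18 = 2 * 3 * 3
mask = mask.sigmoid()                                          # modulation values in (0, 1)
print(offset.shape, mask.shape)  # [2, 18, 14, 14], [2, 9, 14, 14]
```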
-
-def make_stage(block_class, num_blocks, first_stride, *, in_channels, out_channels, **kwargs):
- """
- Create a list of blocks just like those in a ResNet stage.
- Args:
- block_class (type): a subclass of ResNetBlockBase
-        num_blocks (int): number of blocks in this stage.
- first_stride (int): the stride of the first block. The other blocks will have stride=1.
- in_channels (int): input channels of the entire stage.
- out_channels (int): output channels of **every block** in the stage.
- kwargs: other arguments passed to the constructor of every block.
- Returns:
-        list[nn.Module]: a list of block modules.
- """
- assert "stride" not in kwargs, "Stride of blocks in make_stage cannot be changed."
- blocks = []
- for i in range(num_blocks):
- blocks.append(
- block_class(
- in_channels=in_channels,
- out_channels=out_channels,
- stride=first_stride if i == 0 else 1,
- **kwargs,
- )
- )
- in_channels = out_channels
- return blocks
-
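A hypothetical usage sketch, assuming the Res2Net `BottleneckBlock` defined earlier in this file; the numbers follow the res2 stage of a Res2Net-50 (26w x 4s, so bottleneck_channels = 1 * 26 * 4 = 104):

```python
import torch.nn as nn

# Hypothetical: build the res2 stage (3 blocks, stride 1, 64 -> 256 channels).
blocks = make_stage(
    BottleneckBlock,
    num_blocks=3,
    first_stride=1,
    in_channels=64,
    out_channels=256,
    bottleneck_channels=104,   # num_groups * width_per_group * scale = 1 * 26 * 4
    norm="BN",
    scale=4,
)
res2 = nn.Sequential(*blocks)
```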
-
-class BasicStem(CNNBlockBase):
- """
-    The Res2Net-v1b deep stem (layers before the first residual block):
-    three 3x3 convolutions in place of the standard single 7x7 conv.
- """
-
- def __init__(self, in_channels=3, out_channels=64, norm="BN"):
- """
- Args:
- norm (str or callable): norm after the first conv layer.
- See :func:`layers.get_norm` for supported format.
- """
- super().__init__(in_channels, out_channels, 4)
- self.in_channels = in_channels
- self.conv1 = nn.Sequential(
- Conv2d(
- in_channels,
- 32,
- kernel_size=3,
- stride=2,
- padding=1,
- bias=False,
- ),
- get_norm(norm, 32),
- nn.ReLU(inplace=True),
- Conv2d(
- 32,
- 32,
- kernel_size=3,
- stride=1,
- padding=1,
- bias=False,
- ),
- get_norm(norm, 32),
- nn.ReLU(inplace=True),
- Conv2d(
- 32,
- out_channels,
- kernel_size=3,
- stride=1,
- padding=1,
- bias=False,
- ),
- )
- self.bn1 = get_norm(norm, out_channels)
-
- for layer in self.conv1:
- if isinstance(layer, Conv2d):
- weight_init.c2_msra_fill(layer)
-
- def forward(self, x):
- x = self.conv1(x)
- x = self.bn1(x)
- x = F.relu_(x)
- x = F.max_pool2d(x, kernel_size=3, stride=2, padding=1)
- return x
-
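A quick shape check of the stem: its two stride-2 operations (the first conv and the max pool) give the overall stride of 4 declared in `super().__init__`:

```python
import torch

stem = BasicStem(in_channels=3, out_channels=64, norm="BN")
x = torch.randn(1, 3, 224, 224)
print(stem(x).shape)  # torch.Size([1, 64, 56, 56]), i.e. total stride 4
```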
-
-class ResNet(Backbone):
- def __init__(self, stem, stages, num_classes=None, out_features=None):
- """
- Args:
- stem (nn.Module): a stem module
- stages (list[list[CNNBlockBase]]): several (typically 4) stages,
- each contains multiple :class:`CNNBlockBase`.
- num_classes (None or int): if None, will not perform classification.
- Otherwise, will create a linear layer.
-            out_features (list[str]): names of the layers whose outputs should
-                be returned in forward, e.g. "stem", "linear", or "res2" ... "res5".
-                If None, only the output of the last layer is returned.
- """
- super(ResNet, self).__init__()
- self.stem = stem
- self.num_classes = num_classes
-
- current_stride = self.stem.stride
- self._out_feature_strides = {"stem": current_stride}
- self._out_feature_channels = {"stem": self.stem.out_channels}
-
- self.stages_and_names = []
- for i, blocks in enumerate(stages):
- assert len(blocks) > 0, len(blocks)
- for block in blocks:
- assert isinstance(block, CNNBlockBase), block
-
- name = "res" + str(i + 2)
- stage = nn.Sequential(*blocks)
-
- self.add_module(name, stage)
- self.stages_and_names.append((stage, name))
-
- self._out_feature_strides[name] = current_stride = int(
- current_stride * np.prod([k.stride for k in blocks])
- )
- self._out_feature_channels[name] = curr_channels = blocks[-1].out_channels
-
- if num_classes is not None:
- self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
- self.linear = nn.Linear(curr_channels, num_classes)
-
- # Sec 5.1 in "Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour":
- # "The 1000-way fully-connected layer is initialized by
- # drawing weights from a zero-mean Gaussian with standard deviation of 0.01."
- nn.init.normal_(self.linear.weight, std=0.01)
- name = "linear"
-
- if out_features is None:
- out_features = [name]
- self._out_features = out_features
- assert len(self._out_features)
- children = [x[0] for x in self.named_children()]
- for out_feature in self._out_features:
- assert out_feature in children, "Available children: {}".format(", ".join(children))
-
- def forward(self, x):
- outputs = {}
- x = self.stem(x)
- if "stem" in self._out_features:
- outputs["stem"] = x
- for stage, name in self.stages_and_names:
- x = stage(x)
- if name in self._out_features:
- outputs[name] = x
- if self.num_classes is not None:
- x = self.avgpool(x)
- x = torch.flatten(x, 1)
- x = self.linear(x)
- if "linear" in self._out_features:
- outputs["linear"] = x
- return outputs
-
- def output_shape(self):
- return {
- name: ShapeSpec(
- channels=self._out_feature_channels[name], stride=self._out_feature_strides[name]
- )
- for name in self._out_features
- }
-
- def freeze(self, freeze_at=0):
- """
- Freeze the first several stages of the ResNet. Commonly used in
- fine-tuning.
- Args:
- freeze_at (int): number of stem and stages to freeze.
- `1` means freezing the stem. `2` means freezing the stem and
- the first stage, etc.
- Returns:
- nn.Module: this ResNet itself
- """
- if freeze_at >= 1:
- self.stem.freeze()
- for idx, (stage, _) in enumerate(self.stages_and_names, start=2):
- if freeze_at >= idx:
- for block in stage.children():
- block.freeze()
- return self
-
-
-@BACKBONE_REGISTRY.register()
-def build_res2net_backbone(cfg, input_shape):
- """
- Create a Res2Net instance from config.
- Returns:
- ResNet: a :class:`ResNet` instance.
- """
- # need registration of new blocks/stems?
- norm = cfg.MODEL.RESNETS.NORM
- stem = BasicStem(
- in_channels=input_shape.channels,
- out_channels=cfg.MODEL.RESNETS.STEM_OUT_CHANNELS,
- norm=norm,
- )
-
- # fmt: off
- freeze_at = cfg.MODEL.BACKBONE.FREEZE_AT
- out_features = cfg.MODEL.RESNETS.OUT_FEATURES
- depth = cfg.MODEL.RESNETS.DEPTH
- num_groups = cfg.MODEL.RESNETS.NUM_GROUPS
- width_per_group = cfg.MODEL.RESNETS.WIDTH_PER_GROUP
- scale = 4
- bottleneck_channels = num_groups * width_per_group * scale
- in_channels = cfg.MODEL.RESNETS.STEM_OUT_CHANNELS
- out_channels = cfg.MODEL.RESNETS.RES2_OUT_CHANNELS
- stride_in_1x1 = cfg.MODEL.RESNETS.STRIDE_IN_1X1
- res5_dilation = cfg.MODEL.RESNETS.RES5_DILATION
- deform_on_per_stage = cfg.MODEL.RESNETS.DEFORM_ON_PER_STAGE
- deform_modulated = cfg.MODEL.RESNETS.DEFORM_MODULATED
- deform_num_groups = cfg.MODEL.RESNETS.DEFORM_NUM_GROUPS
- # fmt: on
- assert res5_dilation in {1, 2}, "res5_dilation cannot be {}.".format(res5_dilation)
-
- num_blocks_per_stage = {
- 18: [2, 2, 2, 2],
- 34: [3, 4, 6, 3],
- 50: [3, 4, 6, 3],
- 101: [3, 4, 23, 3],
- 152: [3, 8, 36, 3],
- }[depth]
-
- if depth in [18, 34]:
- assert out_channels == 64, "Must set MODEL.RESNETS.RES2_OUT_CHANNELS = 64 for R18/R34"
- assert not any(
- deform_on_per_stage
- ), "MODEL.RESNETS.DEFORM_ON_PER_STAGE unsupported for R18/R34"
- assert res5_dilation == 1, "Must set MODEL.RESNETS.RES5_DILATION = 1 for R18/R34"
- assert num_groups == 1, "Must set MODEL.RESNETS.NUM_GROUPS = 1 for R18/R34"
-
- stages = []
-
- # Avoid creating variables without gradients
- # It consumes extra memory and may cause allreduce to fail
- out_stage_idx = [{"res2": 2, "res3": 3, "res4": 4, "res5": 5}[f] for f in out_features]
- max_stage_idx = max(out_stage_idx)
- for idx, stage_idx in enumerate(range(2, max_stage_idx + 1)):
- dilation = res5_dilation if stage_idx == 5 else 1
- first_stride = 1 if idx == 0 or (stage_idx == 5 and dilation == 2) else 2
- stage_kargs = {
- "num_blocks": num_blocks_per_stage[idx],
- "first_stride": first_stride,
- "in_channels": in_channels,
- "out_channels": out_channels,
- "norm": norm,
- }
- # Use BasicBlock for R18 and R34.
- if depth in [18, 34]:
- stage_kargs["block_class"] = BasicBlock
- else:
- stage_kargs["bottleneck_channels"] = bottleneck_channels
- stage_kargs["stride_in_1x1"] = stride_in_1x1
- stage_kargs["dilation"] = dilation
- stage_kargs["num_groups"] = num_groups
- stage_kargs["scale"] = scale
-
- if deform_on_per_stage[idx]:
- stage_kargs["block_class"] = DeformBottleneckBlock
- stage_kargs["deform_modulated"] = deform_modulated
- stage_kargs["deform_num_groups"] = deform_num_groups
- else:
- stage_kargs["block_class"] = BottleneckBlock
- blocks = make_stage(**stage_kargs)
- in_channels = out_channels
- out_channels *= 2
- bottleneck_channels *= 2
- stages.append(blocks)
- return ResNet(stem, stages, out_features=out_features).freeze(freeze_at)
-
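A hedged sketch of driving this factory from a detectron2 config; the keys below all exist in detectron2's default config, and `WIDTH_PER_GROUP = 26` reproduces the canonical Res2Net-50 26w x 4s setting:

```python
from detectron2.config import get_cfg
from detectron2.layers import ShapeSpec

cfg = get_cfg()
cfg.MODEL.BACKBONE.NAME = "build_res2net_backbone"
cfg.MODEL.RESNETS.DEPTH = 50
cfg.MODEL.RESNETS.NUM_GROUPS = 1
cfg.MODEL.RESNETS.WIDTH_PER_GROUP = 26       # bottleneck_channels = 26 * 4 = 104
cfg.MODEL.RESNETS.OUT_FEATURES = ["res2", "res3", "res4", "res5"]

backbone = build_res2net_backbone(cfg, ShapeSpec(channels=3))
print(backbone.output_shape())               # feature strides 4 / 8 / 16 / 32
```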
-
-@BACKBONE_REGISTRY.register()
-def build_p67_res2net_fpn_backbone(cfg, input_shape: ShapeSpec):
- """
- Args:
- cfg: a detectron2 CfgNode
-
- Returns:
- backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`.
- """
- bottom_up = build_res2net_backbone(cfg, input_shape)
- in_features = cfg.MODEL.FPN.IN_FEATURES
- out_channels = cfg.MODEL.FPN.OUT_CHANNELS
- backbone = FPN(
- bottom_up=bottom_up,
- in_features=in_features,
- out_channels=out_channels,
- norm=cfg.MODEL.FPN.NORM,
- top_block=LastLevelP6P7_P5(out_channels, out_channels),
- fuse_type=cfg.MODEL.FPN.FUSE_TYPE,
- )
- return backbone
-
-
-@BACKBONE_REGISTRY.register()
-def build_res2net_bifpn_backbone(cfg, input_shape: ShapeSpec):
- """
- Args:
- cfg: a detectron2 CfgNode
-
- Returns:
- backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`.
- """
- bottom_up = build_res2net_backbone(cfg, input_shape)
- in_features = cfg.MODEL.FPN.IN_FEATURES
- backbone = BiFPN(
- cfg=cfg,
- bottom_up=bottom_up,
- in_features=in_features,
- out_channels=cfg.MODEL.BIFPN.OUT_CHANNELS,
- norm=cfg.MODEL.BIFPN.NORM,
- num_levels=cfg.MODEL.BIFPN.NUM_LEVELS,
- num_bifpn=cfg.MODEL.BIFPN.NUM_BIFPN,
- separable_conv=cfg.MODEL.BIFPN.SEPARABLE_CONV,
- )
- return backbone
\ No newline at end of file
diff --git a/spaces/ysharma/Chat_With_Blip2/README.md b/spaces/ysharma/Chat_With_Blip2/README.md
deleted file mode 100644
index 3e4c0d3ed7ffa14b01384b2cb6a6489baa8bca48..0000000000000000000000000000000000000000
--- a/spaces/ysharma/Chat_With_Blip2/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Chat With Blip2
-emoji: 🌖
-colorFrom: pink
-colorTo: purple
-sdk: gradio
-sdk_version: 3.18.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ywqisok/ysyy/app.py b/spaces/ywqisok/ysyy/app.py
deleted file mode 100644
index 57451de18b453bede7f1489a81acb3ab0a8b2c3c..0000000000000000000000000000000000000000
--- a/spaces/ywqisok/ysyy/app.py
+++ /dev/null
@@ -1,141 +0,0 @@
-# coding=utf-8
-import time
-import os
-import gradio as gr
-import utils
-import argparse
-import commons
-from models import SynthesizerTrn
-from text import text_to_sequence
-import torch
-from torch import no_grad, LongTensor
-import webbrowser
-import logging
-logging.getLogger('numba').setLevel(logging.WARNING)
-limitation = os.getenv("SYSTEM") == "spaces" # limit text and audio length in huggingface spaces
-
-def get_text(text, hps):
- text_norm, clean_text = text_to_sequence(text, hps.symbols, hps.data.text_cleaners)
- if hps.data.add_blank:
- text_norm = commons.intersperse(text_norm, 0)
- text_norm = LongTensor(text_norm)
- return text_norm, clean_text
-
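When `hps.data.add_blank` is set, `commons.intersperse` inserts a blank token (id 0) between every pair of symbol ids, which VITS models trained with blanks expect. The standard VITS implementation is equivalent to this self-contained sketch:

```python
def intersperse(lst, item):
    # [a, b, c] -> [item, a, item, b, item, c, item]
    result = [item] * (len(lst) * 2 + 1)
    result[1::2] = lst
    return result

print(intersperse([5, 9, 7], 0))  # [0, 5, 0, 9, 0, 7, 0]
```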
-def vits(text, language, speaker_id, noise_scale, noise_scale_w, length_scale):
- start = time.perf_counter()
- if not len(text):
-        return "Input text must not be empty!", None, None
-    text = text.replace('\n', ' ').replace('\r', '').replace(" ", "")
-    if len(text) > 100 and limitation:
-        return f"Input text is too long! {len(text)} > 100", None, None
- if language == 0:
- text = f"[ZH]{text}[ZH]"
- elif language == 1:
- text = f"[JA]{text}[JA]"
- else:
- text = f"{text}"
- stn_tst, clean_text = get_text(text, hps_ms)
- with no_grad():
- x_tst = stn_tst.unsqueeze(0).to(device)
- x_tst_lengths = LongTensor([stn_tst.size(0)]).to(device)
- speaker_id = LongTensor([speaker_id]).to(device)
- audio = net_g_ms.infer(x_tst, x_tst_lengths, sid=speaker_id, noise_scale=noise_scale, noise_scale_w=noise_scale_w,
- length_scale=length_scale)[0][0, 0].data.cpu().float().numpy()
-
-    return "Synthesis succeeded!", (22050, audio), f"Generation took {round(time.perf_counter()-start, 2)} s"
-
-def search_speaker(search_value):
- for s in speakers:
- if search_value == s:
- return s
- for s in speakers:
- if search_value in s:
- return s
-
-def change_lang(language):
- if language == 0:
- return 0.6, 0.668, 1.2
- else:
- return 0.6, 0.668, 1.1
-
-download_audio_js = """
-() =>{{
- let root = document.querySelector("body > gradio-app");
- if (root.shadowRoot != null)
- root = root.shadowRoot;
- let audio = root.querySelector("#tts-audio").querySelector("audio");
- let text = root.querySelector("#input-text").querySelector("textarea");
- if (audio == undefined)
- return;
- text = text.value;
- if (text == undefined)
- text = Math.floor(Math.random()*100000000);
- audio = audio.src;
- let oA = document.createElement("a");
- oA.download = text.substr(0, 20)+'.wav';
- oA.href = audio;
- document.body.appendChild(oA);
- oA.click();
- oA.remove();
-}}
-"""
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--device', type=str, default='cpu')
- parser.add_argument('--api', action="store_true", default=False)
- parser.add_argument("--share", action="store_true", default=False, help="share gradio app")
-    parser.add_argument("--colab", action="store_true", default=False, help="open the app in a browser automatically")
- args = parser.parse_args()
- device = torch.device(args.device)
-
- hps_ms = utils.get_hparams_from_file(r'./model/config.json')
- net_g_ms = SynthesizerTrn(
- len(hps_ms.symbols),
- hps_ms.data.filter_length // 2 + 1,
- hps_ms.train.segment_size // hps_ms.data.hop_length,
- n_speakers=hps_ms.data.n_speakers,
- **hps_ms.model)
- _ = net_g_ms.eval().to(device)
- speakers = hps_ms.speakers
- model, optimizer, learning_rate, epochs = utils.load_checkpoint(r'./model/G_953000.pth', net_g_ms, None)
-
- with gr.Blocks() as app:
- gr.Markdown(
-            "# VITS online text-to-speech demo\n"
-            "# Commercial use of this model is strictly forbidden; you bear all consequences\n"
-            "Voices mainly cover Uma Musume, Genshin Impact (Chinese), Genshin Impact (Japanese), and Honkai Impact 3rd "
-        )
-
- with gr.Tabs():
- with gr.TabItem("vits"):
- with gr.Row():
- with gr.Column():
-                        input_text = gr.Textbox(label="Text (100-character limit)" if limitation else "Text", lines=5, value="今天晚上吃啥好呢。", elem_id="input-text")  # sample Chinese prompt for the zh voices
-                        lang = gr.Dropdown(label="Language", choices=["Chinese", "Japanese", "Mixed Chinese/Japanese (wrap Chinese in [ZH]...[ZH] and Japanese in [JA]...[JA])"],
-                                           type="index", value="Chinese")
- btn = gr.Button(value="Submit")
- with gr.Row():
- search = gr.Textbox(label="Search Speaker", lines=1)
- btn2 = gr.Button(value="Search")
- sid = gr.Dropdown(label="Speaker", choices=speakers, type="index", value=speakers[228])
- with gr.Row():
-                        ns = gr.Slider(label="noise_scale (controls emotional variation)", minimum=0.1, maximum=1.0, step=0.1, value=0.6, interactive=True)
-                        nsw = gr.Slider(label="noise_scale_w (controls phoneme duration)", minimum=0.1, maximum=1.0, step=0.1, value=0.668, interactive=True)
-                        ls = gr.Slider(label="length_scale (controls overall speaking rate)", minimum=0.1, maximum=2.0, step=0.1, value=1.2, interactive=True)
- with gr.Column():
- o1 = gr.Textbox(label="Output Message")
-                        o2 = gr.Audio(label="Output Audio", elem_id="tts-audio")
- o3 = gr.Textbox(label="Extra Info")
- download = gr.Button("Download Audio")
- btn.click(vits, inputs=[input_text, lang, sid, ns, nsw, ls], outputs=[o1, o2, o3])
- download.click(None, [], [], _js=download_audio_js.format())
- btn2.click(search_speaker, inputs=[search], outputs=[sid])
- lang.change(change_lang, inputs=[lang], outputs=[ns, nsw, ls])
- with gr.TabItem("可用人物一览"):
- gr.Radio(label="Speaker", choices=speakers, interactive=False, type="index")
- if args.colab:
- webbrowser.open("http://127.0.0.1:7860")
- app.queue(concurrency_count=1, api_open=args.api).launch(share=args.share)
diff --git a/spaces/zhenwusw/JoJoGAN/e4e/datasets/inference_dataset.py b/spaces/zhenwusw/JoJoGAN/e4e/datasets/inference_dataset.py
deleted file mode 100644
index fb577d7b538d634f27013c2784d2ea32143154cb..0000000000000000000000000000000000000000
--- a/spaces/zhenwusw/JoJoGAN/e4e/datasets/inference_dataset.py
+++ /dev/null
@@ -1,25 +0,0 @@
-from torch.utils.data import Dataset
-from PIL import Image
-from utils import data_utils
-
-
-class InferenceDataset(Dataset):
-
- def __init__(self, root, opts, transform=None, preprocess=None):
- self.paths = sorted(data_utils.make_dataset(root))
- self.transform = transform
- self.preprocess = preprocess
- self.opts = opts
-
- def __len__(self):
- return len(self.paths)
-
- def __getitem__(self, index):
- from_path = self.paths[index]
- if self.preprocess is not None:
- from_im = self.preprocess(from_path)
- else:
- from_im = Image.open(from_path).convert('RGB')
- if self.transform:
- from_im = self.transform(from_im)
- return from_im
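A hypothetical usage sketch for this dataset (the folder path and transforms are illustrative; `opts` is only stored for API compatibility):

```python
from torch.utils.data import DataLoader
from torchvision import transforms

dataset = InferenceDataset(
    root="input_images/",              # hypothetical folder of images
    opts=None,
    transform=transforms.Compose([
        transforms.Resize((256, 256)),
        transforms.ToTensor(),
    ]),
)
loader = DataLoader(dataset, batch_size=4, shuffle=False)
for batch in loader:
    print(batch.shape)                 # [4, 3, 256, 256]
    break
```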
diff --git a/spaces/zhoupin30/zhoupin30/postcss.config.js b/spaces/zhoupin30/zhoupin30/postcss.config.js
deleted file mode 100644
index 33ad091d26d8a9dc95ebdf616e217d985ec215b8..0000000000000000000000000000000000000000
--- a/spaces/zhoupin30/zhoupin30/postcss.config.js
+++ /dev/null
@@ -1,6 +0,0 @@
-module.exports = {
- plugins: {
- tailwindcss: {},
- autoprefixer: {},
- },
-}
|