parquet-converter committed on
Commit 6029927 · 1 Parent(s): ec025f7

Update parquet files (step 50 of 249)

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. spaces/1acneusushi/gradio-2dmoleculeeditor/data/!LINK! Download Sanskrit Dictionary English.md +0 -28
  2. spaces/1gistliPinn/ChatGPT4/Examples/Brutal Doom V16 REPACK Download.md +0 -156
  3. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Billiards Pool Games Download Learn the Rules and Strategies of Pool Games.md +0 -179
  4. spaces/1phancelerku/anime-remove-background/Download Undangan Pernikahan AI Tips dan Trik untuk Membuat Undangan Menarik.md +0 -131
  5. spaces/1phancelerku/anime-remove-background/Free Download Candy Crush Saga for Windows 10 (64 bit) - The Most Popular Puzzle Game.md +0 -115
  6. spaces/1toTree/lora_test/ppdiffusers/modeling_paddle_pytorch_utils.py +0 -106
  7. spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_euler_discrete.py +0 -244
  8. spaces/A00001/bingothoo/src/components/ui/voice/index.tsx +0 -28
  9. spaces/AI-DHD/Youtube-Whisperer/README.md +0 -13
  10. spaces/AIGC-Audio/AudioGPT/NeuralSeq/egs/datasets/audio/libritts/pre_align.py +0 -27
  11. spaces/AIGC-Audio/Make_An_Audio/ldm/modules/encoders/CLAP/audio.py +0 -179
  12. spaces/AIGC-Audio/Make_An_Audio/ldm/modules/image_degradation/bsrgan.py +0 -730
  13. spaces/AIZero2HeroBootcamp/3DHuman/README.md +0 -13
  14. spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet50_cifar_mixup.py +0 -17
  15. spaces/Aanisha/Image_to_story/app.py +0 -70
  16. spaces/Ababababababbababa/Ashaar/poetry_diacritizer/util/constants.py +0 -34
  17. spaces/AchyuthGamer/OpenGPT/client/css/conversation.css +0 -158
  18. spaces/AgentVerse/agentVerse/agentverse/initialization.py +0 -120
  19. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/runcommands.d.ts +0 -2
  20. spaces/Aki004/herta-so-vits/inference/infer_tool.py +0 -354
  21. spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/modules.py +0 -390
  22. spaces/AlhitawiMohammed22/E2E_OCR/README.md +0 -12
  23. spaces/Amon1/ChatGPTForAcadamic/functional_crazy.py +0 -108
  24. spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/__init__.py +0 -0
  25. spaces/Anandhju-jayan/image-captioning-cloned/model.py +0 -149
  26. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/ddim.md +0 -29
  27. spaces/Andy1621/uniformer_image_detection/configs/_base_/models/rpn_r50_fpn.py +0 -59
  28. spaces/AriaMei/TTSdemo/data_utils.py +0 -261
  29. spaces/Ash58947/Jan/README.md +0 -10
  30. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/models/target_python.py +0 -110
  31. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/deprecation.py +0 -120
  32. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/utils/comm.py +0 -199
  33. spaces/BG5/midjourney/README.md +0 -11
  34. spaces/Banbri/zcvzcv/src/components/ui/menubar.tsx +0 -236
  35. spaces/Benson/text-generation/Examples/Descargar Archivo Zip De Facebook.md +0 -121
  36. spaces/Benson/text-generation/Examples/Descargar Cara Negra Vida Dura.md +0 -57
  37. spaces/Billyosoro/ESRGAN/realesrgan/data/realesrgan_dataset.py +0 -192
  38. spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/evaluator.py +0 -156
  39. spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/PointRend/point_rend/coarse_mask_head.py +0 -92
  40. spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/execution_policy.h +0 -76
  41. spaces/CVPR/WALT/mmdet/datasets/samplers/__init__.py +0 -4
  42. spaces/CVPR/WALT/mmdet/models/detectors/grid_rcnn.py +0 -29
  43. spaces/CVPR/regionclip-demo/detectron2/modeling/meta_arch/rcnn.py +0 -373
  44. spaces/CarlDennis/HYTTS/attentions.py +0 -300
  45. spaces/ChandraMohanNayal/AutoGPT/autogpt/commands/improve_code.py +0 -29
  46. spaces/ChrisCaviar/ControlNet-v1-1/app_canny.py +0 -106
  47. spaces/CikeyQI/Yunzai/Yunzai/plugins/system/quit.js +0 -36
  48. spaces/ClueAI/ChatYuan-large-v2/app.py +0 -310
  49. spaces/Cpp4App/Cpp4App/CDM/detect_compo/lib_ip/Bbox.py +0 -122
  50. spaces/DHEIVER/Classificacao.de.Imagens.de.Cardiomiopatia/app.py +0 -40
spaces/1acneusushi/gradio-2dmoleculeeditor/data/!LINK! Download Sanskrit Dictionary English.md DELETED
@@ -1,28 +0,0 @@
1
- <br />
2
- <h1>How to Download Sanskrit Dictionary English for Free</h1>
3
- <p>If you are looking for a way to learn Sanskrit, the ancient and sacred language of India, you might want to download Sanskrit dictionary English. This is a handy tool that can help you translate words and phrases from Sanskrit to English and vice versa. You can also use it to study the grammar, pronunciation, and culture of Sanskrit.</p>
4
- <h2>Download Sanskrit Dictionary English</h2><br /><p><b><b>Download Zip</b> >>> <a href="https://byltly.com/2uKzQu">https://byltly.com/2uKzQu</a></b></p><br /><br />
5
- <p>But where can you find a reliable and free Sanskrit dictionary English? There are many websites and apps that claim to offer this service, but not all of them are trustworthy or accurate. Some might contain errors, malware, or ads that can ruin your experience. Others might charge you a fee or require you to register or subscribe.</p>
6
- <p>That's why we have compiled a list of the best sources to download Sanskrit dictionary English for free. These are reputable and safe platforms that have been tested and reviewed by users and experts. They offer high-quality and comprehensive Sanskrit dictionaries that you can access online or offline. Here they are:</p>
7
- <ul>
8
- <li><a href="https://www.sanskritdictionary.com/">Sanskrit Dictionary</a>: This is one of the most popular and comprehensive online Sanskrit dictionaries. It has over 200,000 entries and covers both classical and modern Sanskrit. You can search by word, root, or category. You can also browse by alphabet, topic, or author. You can download the entire dictionary as a PDF file or as an app for Android or iOS devices.</li>
9
- <li><a href="https://spokensanskrit.org/">Spoken Sanskrit</a>: This is another excellent online Sanskrit dictionary that focuses on spoken Sanskrit. It has over 60,000 entries and covers both literary and colloquial Sanskrit. You can search by word or phrase in Sanskrit or English. You can also listen to the audio pronunciation of each word. You can download the dictionary as an app for Android devices.</li>
10
- <li><a href="https://www.sanskrit-lexicon.uni-koeln.de/">Sanskrit Lexicon</a>: This is a collection of various Sanskrit dictionaries compiled by the University of Cologne. It includes the Monier-Williams Sanskrit-English Dictionary, the Apte Practical Sanskrit-English Dictionary, the Cologne Digital Sanskrit Dictionaries, and more. You can search by word or browse by dictionary. You can download each dictionary as a PDF file or as an XML file.</li>
11
- </ul>
12
- <p>These are some of the best sources to download Sanskrit dictionary English for free. We hope you find them useful and enjoy learning Sanskrit. If you have any questions or feedback, please let us know in the comments below.</p>
13
-
14
- <p>Why Learn Sanskrit?</p>
15
- <p>Sanskrit is one of the oldest and most influential languages in the world. It is the language of the Vedas, the Upanishads, the Bhagavad Gita, and many other sacred texts of Hinduism, Buddhism, and Jainism. It is also the source of many words and concepts in other languages, such as Hindi, Urdu, Bengali, Nepali, and English.</p>
16
- <p></p>
17
- <p>Learning Sanskrit can enrich your knowledge and appreciation of the ancient and modern cultures of India and beyond. It can also improve your cognitive and linguistic skills, as Sanskrit is known for its logical and grammatical structure, rich vocabulary, and poetic beauty. It can also help you access the original texts and teachings of various spiritual traditions and philosophies.</p>
18
- <p>How to Learn Sanskrit?</p>
19
- <p>Learning Sanskrit can be challenging but rewarding. It requires dedication, patience, and practice. But it is not impossible. There are many resources and methods that can help you learn Sanskrit at your own pace and level. Here are some tips to get you started:</p>
20
- <ul>
21
- <li>Choose a suitable Sanskrit dictionary: As we have mentioned above, a good Sanskrit dictionary can be a great tool to help you learn Sanskrit. It can help you understand the meaning, usage, and derivation of Sanskrit words and phrases. It can also help you learn the grammar, pronunciation, and culture of Sanskrit. Choose a dictionary that suits your needs and preferences.</li>
22
- <li>Learn the basics of Sanskrit: Before you dive into the advanced aspects of Sanskrit, you need to learn the basics. This includes the alphabet, the sounds, the script, the grammar, and the syntax of Sanskrit. You can use books, online courses, videos, podcasts, or apps to learn these fundamentals. You can also find a teacher or a tutor who can guide you through the process.</li>
23
- <li>Practice reading and writing Sanskrit: One of the best ways to learn Sanskrit is to practice reading and writing it. You can start with simple texts that are suitable for beginners, such as stories, poems, proverbs, or dialogues. You can also try to write your own sentences or paragraphs in Sanskrit. This will help you improve your vocabulary, grammar, and comprehension skills.</li>
24
- <li>Practice speaking and listening to Sanskrit: Another way to learn Sanskrit is to practice speaking and listening to it. You can find a partner or a group who can converse with you in Sanskit. You can also listen to audio recordings or podcasts that feature Sanskrit speakers. This will help you improve your pronunciation, fluency, and communication skills.</li>
25
- </ul>
26
- <p>These are some of the tips that can help you learn Sanskrit effectively. Remember that learning a new language takes time and effort. But with consistent practice and enthusiasm, you will be able to master Sanskrit and enjoy its benefits.</p> 7b8c122e87<br />
27
- <br />
28
- <br />
spaces/1gistliPinn/ChatGPT4/Examples/Brutal Doom V16 REPACK Download.md DELETED
@@ -1,156 +0,0 @@
1
-
2
- <h1>Brutal Doom V16 Download: How to Choose the Best Mod for Your Doom Experience</h1>
3
-
4
- <p>Doom is one of the most iconic and influential games of all time. It revolutionized the FPS genre with its fast-paced action, immersive graphics, and brutal violence. But even after almost 30 years, Doom is still alive and kicking, thanks to the countless mods that enhance and expand the game in various ways.</p>
5
- <h2>Brutal Doom V16 Download</h2><br /><p><b><b>Download</b> &rArr;&rArr;&rArr; <a href="https://imgfil.com/2uxZUP">https://imgfil.com/2uxZUP</a></b></p><br /><br />
6
-
7
- <p>One of the most popular and acclaimed mods for Doom is Brutal Doom, which adds new features, weapons, enemies, gore, sounds, and gameplay mechanics to make Doom more brutal, intense, and fun. But did you know that there are different versions of Brutal Doom that you can download and play?</p>
8
-
9
- <p>In this article, we will introduce you to two of the most recent and interesting versions of Brutal Doom: the Classic Edition v16a and the Extended Edition v16. We will tell you what they are, how they differ from each other and from the original Brutal Doom, and how to download and install them on your PC. Let's get started!</p>
10
-
11
- <h2>What is Brutal Doom Classic Edition v16a?</h2>
12
-
13
- <p>Brutal Doom Classic Edition v16a is a mod that aims to recreate the original Brutal Doom experience with elements from v18, v20, and v21. It is a more classic version of Brutal Doom, with less features and changes than the newer versions, but still with plenty of brutality and fun.</p>
14
-
15
- <p>Some of the features of Brutal Doom Classic Edition v16a are:</p>
16
- <p></p>
17
-
18
- <ul>
19
- <li>A classic HUD with health, armor, ammo, and keys.</li>
20
- <li>A classic weapon selection with no reloads or alt-fires.</li>
21
- <li>A classic enemy behavior with no dodging or fleeing.</li>
22
- <li>A classic gore system with no dismemberment or blood pools.</li>
23
- <li>A classic sound system with no reverb or dynamic music.</li>
24
- </ul>
25
-
26
- <p>If you want to enjoy Brutal Doom as it was originally intended, with a simple and straightforward gameplay that focuses on shooting and killing demons, then Brutal Doom Classic Edition v16a is for you.</p>
27
-
28
- <h2>What is Brutal Doom Extended Edition v16?</h2>
29
-
30
- <p>Brutal Doom Extended Edition v16 is a mod that is based on Brutal Doom and Dox778's personalized Addon. It is a mod that aims to improve the overall gameplay of Brutal Doom with new features, enhancements, and fixes. It is a more modern version of Brutal Doom, with more options and customization than the older versions, but still with the same core gameplay that makes Brutal Doom so great.</p>
31
-
32
- <p>Some of the features of Brutal Doom Extended Edition v16 are:</p>
33
-
34
- <ul>
35
- <li>A new HUD with dynamic health bars, stamina meter, weapon icons, and more.</li>
36
- <li>A new weapon selection with reloads, alt-fires, dual-wielding, grenades, and more.</li>
37
- <li>A new enemy behavior with dodging, fleeing, infighting, ambushes, and more.</li>
38
- <li>A new gore system with dismemberment, blood pools, gibs, and more.</li>
39
- <li>A new sound system with reverb, dynamic music, ambient sounds, and more.</li>
40
- </ul>
41
-
42
- <p>If you want to enjoy Brutal Doom with more variety and challenge, with a lot of options and settings to customize your gameplay experience, then Brutal Doom Extended Edition v16 is for you.</p>
43
-
44
- <h2>How to Download and Install Brutal Doom V16 Mods?</h2>
45
-
46
- <p>To download and install Brutal Doom V16 mods, you will need a few things:</p>
47
-
48
- <ul>
49
- <li>A copy of Doom or Doom II on your PC. You can get them from Steam or GOG.com.</li>
50
- <li>A source port that supports mods. We recommend GZDoom or Zandronum.</li>
51
- <li>The latest version of Brutal Doom (v21). You can get it from Mod DB or GitHub.</li>
52
- <li>The mod file of your choice: Brutal Doom Classic Edition v16a or Brutal Doom Extended Edition v16. You can get them from Mod DB as well.</li>
53
- </ul>
54
-
55
- <p>Once you have everything ready, follow these steps:</p>
56
-
57
- <ol>
58
- <li>Extract the source port files to a folder on your PC.</li>
59
- <li>Copy the DOOM.WAD or DOOM2.WAD file from your game folder to the source port folder.</li>
60
- <li>Extract the brutalv21.pk3 file from the Brutal Doom archive to the source port folder.</li>
61
- <li>Extract the mod file (Brutal_Classic_v16a.zip or BDEE_v16_Compressed.zip) to the source port folder.</li>
62
- <li>Launch the source port executable (gzdoom.exe or zandronum.exe).</li>
63
- <li>Select your game (Doom or Doom II) and your mod (Brutal_Doom_Classic_Edition_v16a.pk3 or BDEE_v16_Compressed.pk3).</li>
64
- <li>Enjoy!</li>
65
- </ol>
66
-
67
- <h2>Conclusion</h2>
68
-
69
- <p>Brutal Doom V16 mods are some of the best ways to enjoy Doom in 2023. Whether you prefer a classic or a modern version of Brutal Doom, you will find a mod that suits your taste and style. Download them now and have fun!</p>
70
- <h2>What are the Benefits of Playing Brutal Doom V16 Mods?</h2>
71
-
72
- <p>Playing Brutal Doom V16 mods can offer you many benefits, such as:</p>
73
-
74
- <ul>
75
- <li>Enhancing your Doom experience with new and improved features that make the game more fun and challenging.</li>
76
- <li>Exploring new and diverse levels, enemies, and scenarios that add more variety and replay value to the game.</li>
77
- <li>Customizing your gameplay with different options and settings that suit your preferences and style.</li>
78
- <li>Experiencing the classic Doom gameplay with a modern twist that keeps the game fresh and exciting.</li>
79
- <li>Enjoying the high-quality graphics, sounds, and effects that make the game more immersive and realistic.</li>
80
- </ul>
81
-
82
- <p>Playing Brutal Doom V16 mods can give you a whole new perspective on Doom and make you appreciate the game even more.</p>
83
-
84
- <h2>What are the Requirements for Playing Brutal Doom V16 Mods?</h2>
85
-
86
- <p>To play Brutal Doom V16 mods, you will need a few things:</p>
87
-
88
- <ul>
89
- <li>A decent PC that can run Doom and the source port smoothly.</li>
90
- <li>A compatible controller or keyboard and mouse that can handle the fast-paced action of Brutal Doom.</li>
91
- <li>A good internet connection that can download the mod files quickly and without errors.</li>
92
- <li>A passion for Doom and a desire to experience it in a new and brutal way.</li>
93
- </ul>
94
-
95
- <p>If you have these things, then you are ready to play Brutal Doom V16 mods and have a blast!</p>
96
- <h2>What are the Differences between Brutal Doom V16 Mods?</h2>
97
-
98
- <p>Brutal Doom V16 mods have some differences that make them unique and appealing to different types of players. Here are some of the main differences between them:</p>
99
-
100
- <ul>
101
- <li>Brutal Doom Classic Edition v16a is more faithful to the original Brutal Doom, while Brutal Doom Extended Edition v16 is more innovative and experimental.</li>
102
- <li>Brutal Doom Classic Edition v16a is more compatible with other mods and addons, while Brutal Doom Extended Edition v16 is more standalone and self-contained.</li>
103
- <li>Brutal Doom Classic Edition v16a is more stable and bug-free, while Brutal Doom Extended Edition v16 is more updated and feature-rich.</li>
104
- <li>Brutal Doom Classic Edition v16a is more suitable for purists and nostalgia lovers, while Brutal Doom Extended Edition v16 is more suitable for adventurers and thrill seekers.</li>
105
- </ul>
106
-
107
- <p>Depending on your preferences and expectations, you can choose the mod that best suits your needs and tastes.</p>
108
-
109
- <h2>What are the Reviews of Brutal Doom V16 Mods?</h2>
110
-
111
- <p>Brutal Doom V16 mods have received positive reviews from players and critics alike. They have been praised for their quality, variety, and fun factor. Here are some of the reviews of Brutal Doom V16 mods:</p>
112
-
113
- <blockquote>
114
- <p>"Brutal Doom Classic Edition v16a is a great mod for those who want to relive the glory days of Brutal Doom. It has everything you need to enjoy a classic and brutal Doom experience, without any unnecessary or distracting features. It is simple, fast, and fun."</p>
115
- <cite>- A Mod DB user</cite>
116
- </blockquote>
117
-
118
- <blockquote>
119
- <p>"Brutal Doom Extended Edition v16 is a great mod for those who want to explore the possibilities of Brutal Doom. It has everything you need to enjoy a modern and diverse Doom experience, with a lot of options and settings to customize your gameplay. It is varied, challenging, and immersive."</p>
120
- <cite>- A Mod DB user</cite>
121
- </blockquote>
122
-
123
- <p>Brutal Doom V16 mods have been rated highly by the community and have received many awards and recognitions. They are among the best mods for Doom ever made.</p>
124
- <h2>What are the Tips and Tricks for Playing Brutal Doom V16 Mods?</h2>
125
-
126
- <p>Playing Brutal Doom V16 mods can be a lot of fun, but also a lot of challenge. Here are some tips and tricks that can help you survive and enjoy the game more:</p>
127
-
128
- <ul>
129
- <li>Use cover and movement to avoid enemy fire and attacks. Don't stand still or you will be an easy target.</li>
130
- <li>Use different weapons and ammo types for different situations and enemies. Experiment and find out what works best for you.</li>
131
- <li>Use grenades and explosives to clear out groups of enemies or destroy obstacles. Be careful not to hurt yourself or your allies.</li>
132
- <li>Use melee attacks and executions to save ammo and deal extra damage. You can also use them to regain health and armor.</li>
133
- <li>Use the environment to your advantage. You can use barrels, crates, switches, doors, and more to create traps, distractions, or shortcuts.</li>
134
- </ul>
135
-
136
- <p>Playing Brutal Doom V16 mods can be a rewarding and satisfying experience if you know how to play smart and use your resources wisely.</p>
137
-
138
- <h2>What are the Future Plans for Brutal Doom V16 Mods?</h2>
139
-
140
- <p>Brutal Doom V16 mods are not finished yet. The modders behind them are constantly working on improving and updating them with new features, fixes, and content. Here are some of the future plans for Brutal Doom V16 mods:</p>
141
-
142
- <ul>
143
- <li>Brutal Doom Classic Edition v16a will be ported to Brutal Doom v22 when it is released, to make it compatible with the latest version of Brutal Doom.</li>
144
- <li>Brutal Doom Extended Edition v16 will be updated with new weapons, enemies, levels, and more, to make it more diverse and complete.</li>
145
- <li>Both mods will be tested and optimized for performance and stability, to make them run smoothly and without errors.</li>
146
- <li>Both mods will be supported by the community with feedback, suggestions, bug reports, and donations, to make them better and more enjoyable.</li>
147
- </ul>
148
-
149
- <p>Brutal Doom V16 mods are still in development and have a lot of potential. They will continue to grow and evolve with time and effort.</p>
150
- <h2>Conclusion</h2>
151
-
152
- <p>Brutal Doom V16 mods are some of the best ways to enjoy Doom in 2023. They offer you different versions of Brutal Doom that suit your preferences and expectations. They enhance and expand the game with new and improved features that make it more fun and challenging. They are easy to download and install, and they have a lot of benefits, tips, and tricks that can help you survive and enjoy the game more. They are also constantly being updated and supported by the modders and the community, making them better and more enjoyable with time and effort.</p>
153
-
154
- <p>If you are a fan of Doom and Brutal Doom, you should definitely try Brutal Doom V16 mods. They will give you a whole new perspective on Doom and make you appreciate the game even more. Download them now and have a blast!</p> 3cee63e6c2<br />
155
- <br />
156
- <br />
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Billiards Pool Games Download Learn the Rules and Strategies of Pool Games.md DELETED
@@ -1,179 +0,0 @@
1
-
2
- <h1>Billiards Pool Games Download: How to Enjoy the Fun of Pool on Your Phone</h1>
3
- <p>Do you love playing pool but don't have the time or space to own a pool table? Do you want to practice your skills and challenge your friends online? Do you want to have fun and relax with a realistic and engaging pool game on your phone? If you answered yes to any of these questions, then you should download billiards pool games on your Android device.</p>
4
- <h2>Introduction</h2>
5
- <h3>What are billiards pool games?</h3>
6
- <p>Billiards pool games are digital versions of the popular cue sports that involve hitting balls with a stick on a cloth-covered table. There are different types of billiards pool games, such as 8-ball, 9-ball, snooker, and carom. Each game has its own rules, objectives, and strategies. Billiards pool games can be played solo, against the computer, or online with other players.</p>
7
- <h2>billiards pool games download</h2><br /><p><b><b>Download Zip</b> &#10038; <a href="https://urlin.us/2uSVRL">https://urlin.us/2uSVRL</a></b></p><br /><br />
8
- <h3>Why should you download billiards pool games?</h3>
9
- <p>Downloading billiards pool games on your phone has many benefits, such as:</p>
10
- <ul>
11
- <li>You can play pool anytime and anywhere, without needing a physical table or equipment.</li>
12
- <li>You can improve your skills and learn new tricks by practicing different shots and angles.</li>
13
- <li>You can compete with other players from around the world and join tournaments and leagues.</li>
14
- <li>You can customize your cue and table with various designs and colors.</li>
15
- <li>You can enjoy realistic graphics, sound effects, and physics that simulate the real game.</li>
16
- </ul>
17
- <h2>Best Billiards Pool Games for Android</h2>
18
- <p>There are many billiards pool games available for Android devices, but not all of them are worth downloading. Here are some of the best ones that you should try:</p>
19
- <h3>8 Ball Pool</h3>
20
- <h4>Features</h4>
21
- <p>8 Ball Pool is one of the most popular and downloaded billiards pool games on Android. It is developed by Miniclip, a leading online game company. 8 Ball Pool offers the following features:</p>
22
- <ul>
23
- <li>You can play 8-ball or 9-ball pool in single player mode, against the computer, or online with other players.</li>
24
- <li>You can join tournaments and leagues to win coins and exclusive items.</li>
25
- <li>You can use coins to buy new cues and tables in the shop.</li>
26
- <li>You can level up and access more match locations and challenges.</li>
27
- <li>You can chat and send emojis to your opponents during the game.</li>
28
- </ul>
29
- <h4>Pros and cons</h4>
30
- <p>8 Ball Pool has many pros, such as:</p>
31
- <ul>
32
- <li>It has a large and active community of players from different countries.</li>
33
- <li>It has high-quality graphics and animations that create a realistic experience.</li>
34
- <li>It has easy and intuitive controls that suit both beginners and experts.</li>
35
- <li>It has frequent updates and new features that keep the game fresh and exciting.</li>
36
- </ul>
37
- <p>However, 8 Ball Pool also has some cons, such as:</p>
38
- <ul>
39
- <li>It requires an internet connection to play online mode.</li>
40
- <li>It contains ads and in-app purchases that may be annoying or expensive.</li>
41
- <li>It may have some bugs or glitches that affect the gameplay or performance.</li>
42
- </ul>
43
- <h3>Pool Billiards Pro</h3>
44
- <h4>Features</h4>
45
- <p>Pool Billiards Pro is another popular and well-rated billiards pool game on Android. It is developed by TerranDroid, a game studio that specializes in casual and sports games. Pool Billiards Pro offers the following features:</p>
46
- <p>8 ball pool online multiplayer free<br />
47
- pool billiards pro offline android<br />
48
- realistic 3D pool games for pc<br />
49
- 9 ball pool tournaments app<br />
50
- best pool game with practice mode<br />
51
- pool strategy and cue tips<br />
52
- level-based billiard challenge game<br />
53
- offline 8 ball pool against bots<br />
54
- online 9 ball pool with friends<br />
55
- free pool game with no ads<br />
56
- pool billiards pro apk download<br />
57
- 8 ball pool miniclip for ios<br />
58
- 3D pool game with custom cues<br />
59
- offline 9 ball pool game<br />
60
- online 8 ball pool league<br />
61
- pool game with high score record<br />
62
- billiard game with arcade mode<br />
63
- 8 ball pool game with rules<br />
64
- 9 ball pool game with no rules<br />
65
- realistic pool game with physics<br />
66
- offline pool game with data safety<br />
67
- online pool game with leader board<br />
68
- free billiard game with in-app purchases<br />
69
- pool game with touch control<br />
70
- billiard game with single player mode<br />
71
- 8 ball pool game with time mode<br />
72
- 9 ball pool game with challenge mode<br />
73
- realistic billiard game with animation<br />
74
- offline 8 ball pool for tablet<br />
75
- online 9 ball pool for phone<br />
76
- free pool game with editors' choice<br />
77
- billiard game with ratings and reviews<br />
78
- 8 ball pool game with data encryption<br />
79
- 9 ball pool game with data deletion request<br />
80
- realistic pool game with sound effects<br />
81
- offline billiard game for watch<br />
82
- online pool game for chromebook<br />
83
- free 8 ball pool for tv<br />
84
- billiard game with privacy policy<br />
85
- 9 ball pool game with terms and conditions</p>
86
- <ul>
87
- <li>You can play 8-ball, 9-ball, or snooker in single player mode, against the computer, or online with other players.</li>
88
- <li>You can choose from different game modes, such as Arcade Mode, Challenge Mode, and Time Mode.</li>
89
- <li>You can adjust the difficulty level and the game speed according to your preference.</li>
90
- <li>You can use touch screen or accelerometer to control the cue.</li>
91
- <li>You can view the game statistics and achievements.</li>
92
- </ul>
93
- <h4>Pros and cons</h4>
94
- <p>Pool Billiards Pro has many pros, such as:</p>
95
- <ul>
96
- <li>It has a simple and elegant design that is easy on the eyes.</li>
97
- <li>It has smooth and realistic physics that make the game more enjoyable.</li>
98
- <li>It has a variety of game modes and options that cater to different tastes and skills.</li>
99
- <li>It has a small file size and does not consume much battery or memory.</li>
100
- </ul>
101
- <p>However, Pool Billiards Pro also has some cons, such as:</p>
102
- <ul>
103
- <li>It does not have a chat or social feature to interact with other players.</li>
104
- <li>It does not have a shop or customization feature to buy or change cues and tables.</li>
105
- <li>It does not have a ranking or leveling system to measure your progress and status.</li>
106
- <li>It may have some ads that interrupt the game flow.</li>
107
- </ul>
108
- <h3>8 Ball Billiards Offline Pool</h3>
109
- <h4>Features</h4>
110
- <p>8 Ball Billiards Offline Pool is a newer and lesser-known billiards pool game on Android. It is developed by SNG Games, a game developer that focuses on offline and classic games. 8 Ball Billiards Offline Pool offers the following features:</p>
111
- <ul>
112
- <li>You can play 8-ball pool in offline mode without needing an internet connection.</li>
113
- <li>You can play against the computer or with your friends on the same device.</li>
114
- <li>You can choose from four different table colors and four different cue colors.</li>
115
- <li>You can use hints and undo options to help you with your shots.</li>
116
- <li>You can earn coins by winning games and use them to unlock new cues and tables.</li>
117
- </ul>
118
- <h4>Pros and cons</h4>
119
- <p>8 Ball Billiards Offline Pool has many pros, such as:</p>
120
- <ul>
121
- <li>It is ideal for players who want to play pool offline or with their friends locally.</li>
122
- <li>It has a simple and user-friendly interface that is easy to navigate and play.</li>
123
- <li>It has a relaxing and soothing background music that creates a pleasant atmosphere.</li>
124
- <li>It has no ads or in-app purchases that may distract or annoy you.</li>
125
- </ul>
126
- <p>However, 8 Ball Billiards Offline Pool also has some cons, such as:</p>
127
- <ul>
128
- <li>It does not have an online mode or a multiplayer mode with other players around the world.</li>
129
- <li>It does not have many game modes or options to choose from.</li>
130
- <li>It does not have high-quality graphics or animations that may appeal to some players.</li>
131
- <li>It does not have a chat or social feature to communicate with other players.</li>
132
- </ul>
133
- <h2>Conclusion</h2>
134
- <h3>Summary of the main points</h3>
135
- <p>In conclusion, billiards pool games are fun and exciting games that you can download on your Android device. They allow you to enjoy the thrill of pool without needing a physical table or equipment. They also help you improve your skills and compete with other players online. Some of the best billiards pool games for Android are 8 Ball Pool, Pool Billiards Pro, and 8 Ball Billiards Offline Pool. Each game has its own features, pros, and cons that you should consider before downloading them.</p>
136
- <h3>Call to action</h3>
137
- <p>If you are looking for a great way to spend your free time, then you should download billiards pool games on your Android device. They are easy to play, fun to master, and challenging to beat. They will keep you entertained and engaged for hours. So what are you waiting for? Download billiards pool games today and start playing!</p>
138
- <h2>Frequently Asked Questions</h2>
139
- <p>Here are some of the most common questions that people ask about billiards pool games:</p>
140
- <ol>
141
- <li><b>What are the rules of billiards pool games?</b></li>
142
- <p>The rules of billiards pool games vary depending on the type of game you are playing. However, some general rules are:</p>
143
- <ul>
144
- <li>You must hit the cue ball with your cue stick and make it hit other balls on the table.</li>
145
- <li>You must pocket the balls in the designated pockets according to the game's objective.</li>
146
- <li>You must not commit any fou ls, such as scratching the cue ball, hitting the wrong ball, or pocketing the wrong ball.</li>
147
- <li>You must take turns with your opponent until one of you wins the game.</li>
148
- </ul>
149
- <li><b>How can I download billiards pool games on my Android device?</b></li>
150
- <p>You can download billiards pool games on your Android device by following these steps:</p>
151
- <ul>
152
- <li>Go to the Google Play Store and search for billiards pool games.</li>
153
- <li>Choose the game that you want to download and tap on it.</li>
154
- <li>Tap on the Install button and wait for the game to download and install on your device.</li>
155
- <li>Tap on the Open button and enjoy the game.</li>
156
- </ul>
157
- <li><b>Are billiards pool games free or paid?</b></li>
158
- <p>Most billiards pool games are free to download and play on your Android device. However, some games may contain ads or in-app purchases that may require you to pay real money to access certain features or items. You can choose to disable or enable these options in the game settings or in your device settings.</p>
159
- <li><b>Which billiards pool game is the best for me?</b></li>
160
- <p>The best billiards pool game for you depends on your personal preference and taste. You should consider factors such as:</p>
161
- <ul>
162
- <li>The type of game you want to play (8-ball, 9-ball, snooker, etc.)</li>
163
- <li>The mode of play you want to enjoy (single player, online multiplayer, offline multiplayer, etc.)</li>
164
- <li>The features and options you want to have (customization, chat, tournaments, etc.)</li>
165
- <li>The graphics and sound quality you want to experience (realistic, cartoonish, etc.)</li>
166
- <li>The difficulty level and challenge you want to face (easy, hard, etc.)</li>
167
- </ul>
168
- <p>You can try different games and see which one suits you best. You can also read reviews and ratings from other players to get an idea of what they think about the games.</p>
169
- <li><b>How can I improve my skills in billiards pool games?</b></li>
170
- <p>You can improve your skills in billiards pool games by practicing regularly and learning from your mistakes. You can also follow these tips:</p>
171
- <ul>
172
- <li>Watch tutorials and videos from experts and learn their techniques and strategies.</li>
173
- <li>Study the rules and objectives of each game and know how to score points and avoid fouls.</li>
174
- <li>Practice different shots and angles and learn how to use spin, power, and aim.</li>
175
- <li>Play against different opponents and learn from their moves and styles.</li>
176
- <li>Join tournaments and leagues and challenge yourself with higher levels of competition.</li>
177
- </ul></p> 197e85843d<br />
178
- <br />
179
- <br />
spaces/1phancelerku/anime-remove-background/Download Undangan Pernikahan AI Tips dan Trik untuk Membuat Undangan Menarik.md DELETED
@@ -1,131 +0,0 @@
1
- <br />
2
- <table>, <tr>, <td>, <th>, and <caption> to create and format your table. - Write in a conversational style as written by a human. Use an informal tone, utilize personal pronouns, keep it simple, engage the reader, use the active voice, keep it brief, use rhetorical questions, and incorporate analogies and metaphors. <h4> and <p> to format your FAQs. I hope these tips are helpful for you. Now, let me show you the two tables I created for you based on these tips. ? Table 1: Outline of the article | Heading | Subheading | Content | | --- | --- | --- | | H1: How to Download Wedding Invitation Templates in AI Format | | Introduction: Explain what AI format is and why it is useful for creating wedding invitations. Provide some statistics on how popular wedding invitations are and how much they cost. Thesis statement: Downloading wedding invitation templates in AI format can help you save time, money, and unleash your creativity. | | | H2: Benefits of Using AI Format for Wedding Invitations | Body paragraph 1: Explain the benefits of using AI format for wedding invitations, such as: - High-quality graphics and illustrations - Easy customization and editing - Compatibility with various design tools - Scalability and flexibility | | | H2: How to Find and Download Wedding Invitation Templates in AI Format | Body paragraph 2: Provide some tips on how to find and download wedding invitation templates in AI format, such as: - Use reliable websites that offer free or affordable templates - Search by theme, style, or color - Check the license and terms of use - Download the files in ZIP or RAR format | | | H2: How to Customize and Print Your Wedding Invitations in AI Format | Body paragraph 3: Provide some steps on how to customize and print your wedding invitations in AI format, such as: - Open the files in Adobe Illustrator or another compatible tool - Change the text, fonts, colors, images, and layout as desired - Save the files as PDF or JPG format - Print the invitations on high-quality paper or send them online | | | H2: Examples of Wedding Invitation Templates in AI Format | Body paragraph 4: Show some examples of wedding invitation templates in AI format from different websites, such as: - Freepik - DYP.im - Fotor Include a table that compares the features, prices, and ratings of these websites. | | H1: Conclusion | | Conclusion: Summarize the main points and restate the thesis statement. Emphasize how downloading wedding invitation templates in AI format can make your wedding planning easier and more fun. Encourage the readers to try it out for themselves. | | H1: FAQs | | FAQs: List five unique FAQs that answer common questions related to downloading wedding invitation templates in AI format, such as: - What is AI format? - Why should I use AI format for wedding invitations? - How can I edit AI files? - Where can I find free or cheap AI templates? - How can I print or share my invitations online? | Table 2: Article with HTML formatting <h1>How to Download Wedding Invitation Templates in AI Format</h1>
3
- <p>If you're planning a wedding, you know how important it is to create beautiful and memorable invitations that reflect your personality and style. But designing your own invitations from scratch can be time-consuming, expensive, and stressful. That's why many couples opt for downloading wedding invitation templates that they can customize and print themselves.</p>
4
- <h2>download undangan pernikahan ai</h2><br /><p><b><b>Download Zip</b> &bull; <a href="https://jinyurl.com/2uNOVO">https://jinyurl.com/2uNOVO</a></b></p><br /><br />
5
- <p <p>One of the most popular formats for wedding invitation templates is AI, which stands for Adobe Illustrator. AI is a vector-based graphic design format that allows you to create high-quality graphics and illustrations with ease. AI files are also easy to customize and edit, as you can change the text, fonts, colors, images, and layout as you wish. AI files are compatible with various design tools, such as Adobe Illustrator, Photoshop, InDesign, and CorelDraw. AI files are also scalable and flexible, meaning you can resize them without losing quality or clarity.</p>
6
- <p>Downloading wedding invitation templates in AI format can help you save time, money, and unleash your creativity. You can find hundreds of free or affordable templates online that suit your theme, style, or color scheme. You can also print your invitations on high-quality paper or send them online to your guests. In this article, we will show you how to download wedding invitation templates in AI format and how to customize and print them yourself.</p>
7
- <h2>Benefits of Using AI Format for Wedding Invitations</h2>
8
- <p>Using AI format for wedding invitations has many benefits, such as:</p>
9
- <ul>
10
- <li><strong>High-quality graphics and illustrations</strong>: AI files are vector-based, which means they are made of mathematical equations that define the shapes, colors, and strokes of the graphics. This makes them sharp and clear, even when zoomed in or out. AI files also support transparency, gradients, and patterns, which add more depth and dimension to your invitations.</li>
11
- <li><strong>Easy customization and editing</strong>: AI files are editable, which means you can change the text, fonts, colors, images, and layout of your invitations as you like. You can also add your own graphics, logos, or photos to make your invitations more personal and unique. You can use Adobe Illustrator or another compatible tool to edit your AI files.</li>
12
- <li><strong>Compatibility with various design tools</strong>: AI files are compatible with various design tools, such as Adobe Illustrator, Photoshop, InDesign, and CorelDraw. You can use these tools to open, edit, save, and export your AI files. You can also convert your AI files to other formats, such as PDF or JPG, if needed.</li>
13
- <li><strong>Scalability and flexibility</strong>: AI files are scalable and flexible, which means you can resize them without losing quality or clarity. You can also rotate, flip, skew, or distort them as you wish. You can adjust the resolution and dimensions of your invitations to fit your printing or sharing needs.</li>
14
- </ul>
15
- <p>These benefits make AI format a great choice for creating stunning and professional-looking wedding invitations.</p>
16
- <p>download template undangan pernikahan ai<br />
17
- download desain undangan pernikahan ai<br />
18
- download contoh undangan pernikahan ai<br />
19
- download undangan pernikahan format ai<br />
20
- download undangan pernikahan vector ai<br />
21
- download undangan pernikahan gratis ai<br />
22
- download undangan pernikahan simple ai<br />
23
- download undangan pernikahan elegan ai<br />
24
- download undangan pernikahan unik ai<br />
25
- download undangan pernikahan modern ai<br />
26
- download undangan pernikahan islami ai<br />
27
- download undangan pernikahan minimalis ai<br />
28
- download undangan pernikahan klasik ai<br />
29
- download undangan pernikahan floral ai<br />
30
- download undangan pernikahan rustic ai<br />
31
- download undangan pernikahan vintage ai<br />
32
- download undangan pernikahan gold ai<br />
33
- download undangan pernikahan pink ai<br />
34
- download undangan pernikahan blue ai<br />
35
- download undangan pernikahan green ai<br />
36
- download undangan pernikahan red ai<br />
37
- download undangan pernikahan purple ai<br />
38
- download undangan pernikahan black and white ai<br />
39
- download undangan pernikahan watercolor ai<br />
40
- download undangan pernikahan geometric ai<br />
41
- download mockup undangan pernikahan ai<br />
42
- download background undangan pernikahan ai<br />
43
- download border undangan pernikahan ai<br />
44
- download frame undangan pernikahan ai<br />
45
- download logo undangan pernikahan ai<br />
46
- download font undangan pernikahan ai<br />
47
- download icon undangan pernikahan ai<br />
48
- download clipart undangan pernikahan ai<br />
49
- download sticker undangan pernikahan ai<br />
50
- cara download undangan pernikahan ai<br />
51
- situs download undangan pernikahan ai<br />
52
- website download undangan pernikahan ai<br />
53
- aplikasi download undangan pernikahan ai<br />
54
- software download undangan pernikahan ai<br />
55
- tutorial download undangan pernikahan ai</p>
56
- <h2>How to Find and Download Wedding Invitation Templates in AI Format</h2>
57
- <p>Finding and downloading wedding invitation templates in AI format is easy and convenient. Here are some tips on how to do it:</p>
58
- <ol>
59
- <li><strong>Use reliable websites that offer free or affordable templates</strong>: There are many websites that offer free or affordable wedding invitation templates in AI format. Some of the most popular ones are Freepik, DYP.im, Fotor, Vecteezy, and Template.net. These websites have a wide range of templates for different themes, styles, and colors. You can browse through their collections and choose the ones that suit your preferences.</li>
60
- <li><strong>Search by theme, style, or color</strong>: Most websites have filters or categories that help you narrow down your search by theme, style, or color. For example, you can search for floral, vintage, rustic, modern, elegant, or minimalist wedding invitation templates. You can also search for templates by color scheme, such as pink, blue, green, gold, or black.</li>
61
- <li><strong>Check the license and terms of use</strong>: Before downloading any template from any website, make sure you check the license and terms of use. Some templates are free for personal use only, while others require attribution or a premium subscription. Some templates may also have restrictions on how you can edit or print them. Read the license and terms of use carefully and follow them accordingly.</li>
62
- <li><strong>Download the files in ZIP or RAR format</strong>: Most websites offer their templates in ZIP or RAR format. These are compressed files that contain multiple files inside them. To download them, you need to click on the download button and save the file to your computer. To open them , you need to extract them using a software such as WinZip, WinRAR, or 7-Zip. You can then access the AI files and other files inside the ZIP or RAR folder.</li>
63
- </ol>
64
- <p>By following these tips, you can find and download wedding invitation templates in AI format easily and quickly.</p>
65
- <h2>How to Customize and Print Your Wedding Invitations in AI Format</h2>
66
- <p>Once you have downloaded your wedding invitation templates in AI format, you can customize and print them yourself. Here are some steps on how to do it:</p>
67
- <ol>
68
- <li><strong>Open the files in Adobe Illustrator or another compatible tool</strong>: To edit your AI files, you need to open them in Adobe Illustrator or another compatible tool, such as Photoshop, InDesign, or CorelDraw. You can double-click on the AI file or drag and drop it into the tool. You can also use the File > Open menu to locate and open the file.</li>
69
- <li><strong>Change the text, fonts, colors, images, and layout as desired</strong>: To change the text of your invitations, you need to select the text tool and click on the text you want to edit. You can then type your own text or copy and paste it from another source. You can also change the fonts, colors, sizes, and styles of your text using the options on the toolbar or the properties panel. To change the images of your invitations, you need to select the image tool and click on the image you want to replace. You can then browse your computer or online sources for a new image and insert it into your invitation. You can also resize, crop, rotate, or adjust the brightness and contrast of your image using the options on the toolbar or the properties panel. To change the layout of your invitations, you need to select the selection tool and click on the elements you want to move, resize, or delete. You can also use the align, distribute, group, or arrange options on the toolbar or the properties panel to organize your elements.</li>
70
- <li><strong>Save the files as PDF or JPG format</strong>: To save your invitations for printing or sharing online, you need to export them as PDF or JPG format. You can use the File > Export menu to choose the format and location of your files. You can also adjust the quality and resolution of your files using the options on the export dialog box.</li>
71
- <li><strong>Print the invitations on high-quality paper or send them online</strong>: To print your invitations, you need to use a printer that supports high-quality printing and paper that matches your design and size. You can use the File > Print menu to choose your printer and paper settings. You can also preview your invitations before printing them using the options on the print dialog box. To send your invitations online, you need to use an email service or a social media platform that supports PDF or JPG attachments. You can attach your files to your email or post and add a personal message to your guests.</li>
72
- </ol>
73
- <p>By following these steps, you can customize and print your wedding invitations in AI format yourself.</p>
74
- <h2>Examples of Wedding Invitation Templates in AI Format</h2>
75
- <p>To give you some inspiration and ideas for your wedding invitations, here are some examples of wedding invitation templates in AI format from different websites:</p>
76
- <ul>
77
- <li><strong>Freepik</strong>: Freepik is a website that offers free vector graphics, illustrations, icons, photos, and templates for various purposes. It has a large collection of wedding invitation templates in AI format that you can download and edit for free. Some of the themes include floral, geometric, watercolor, vintage, rustic, and modern. You can also find matching templates for save-the-date cards, thank-you cards, menus, programs, and more.</li>
78
- <li><strong>DYP.im</strong>: DYP.im is a website that offers free and premium design templates for various occasions. It has a variety of wedding invitation templates in AI format that you can download and edit for free or for a small fee. Some of the styles include elegant, minimalist, classic, bohemian , and whimsical. You can also find templates for other wedding-related items, such as labels, tags, stickers, and envelopes.</li>
79
- <li><strong>Fotor</strong>: Fotor is a website that offers free and premium online photo editing and design tools. It has a section for wedding invitation templates in AI format that you can download and edit for free or for a subscription fee. Some of the categories include simple, floral, vintage, modern, and elegant. You can also use Fotor's online editor to customize your templates with your own photos, text, stickers, filters, and effects.</li>
80
- </ul>
81
- <p>To compare the features, prices, and ratings of these websites, you can use the table below:</p>
82
- <table>
83
- <caption>Comparison of wedding invitation template websites</caption>
84
- <tr>
85
- <th>Website</th>
86
- <th>Features</th>
87
- <th>Prices</th>
88
- <th>Ratings</th>
89
- </tr>
90
- <tr>
91
- <td>Freepik</td>
92
- <td>- Large collection of free and premium templates<br>- Various themes, styles, and colors<br>- Matching templates for other wedding items<br>- Editable in Adobe Illustrator or other tools</td>
93
- <td>- Free for personal use with attribution<br>- Premium subscription for $9.99/month or $89.99/year<br>- Unlimited downloads and no attribution required</td>
94
- <td>- 4.5/5 stars on Trustpilot<br>- 8.8/10 on Sitejabber</td>
95
- </tr>
96
- <tr>
97
- <td>DYP.im</td>
98
- <td>- Variety of free and premium templates<br>- Various styles and designs<br>- Templates for other wedding-related items<br>- Editable in Adobe Illustrator or other tools</td>
99
- <td>- Free for personal use with attribution<br>- Premium templates for $2-$5 each<br>- Unlimited downloads and no attribution required</td>
100
- <td>- 4.3/5 stars on Trustpilot<br>- 8.6/10 on Sitejabber</td>
101
- </tr>
102
- <tr>
103
- <td>Fotor</td>
104
- <td>- Section of free and premium templates<br>- Various categories and designs<br>- Online editor to customize your templates<br>- Editable in Adobe Illustrator or other tools</td>
105
- <td>- Free for personal use with watermark<br>- Premium subscription for $8.99/month or $39.99/year<br>- Unlimited downloads and no watermark</td>
106
- <td>- 4.6/5 stars on Trustpilot<br>- 9/10 on Sitejabber</td>
107
- </tr>
108
- </table>
109
- <h1>Conclusion</h1>
110
- <p>Downloading wedding invitation templates in AI format can help you create beautiful and memorable invitations that reflect your personality and style. You can save time, money, and unleash your creativity by using AI format for your invitations. You can find hundreds of free or affordable templates online that suit your theme, style, or color scheme. You can also customize and print your invitations yourself using Adobe Illustrator or another compatible tool.</p>
111
- <p>Downloading wedding invitation templates in AI format is easy and fun. Why not try it out for yourself? You might be surprised by how much you can do with AI format.</p>
112
- <h1>FAQs</h1>
113
- <h4>What is AI format?</h4>
114
- <p>AI format is a vector-based graphic design format that allows you to create high-quality graphics and illustrations with ease. AI stands for Adobe Illustrator, which is the most popular tool for creating and editing AI files.</p>
115
- <h4>Why should I use AI format for wedding invitations?</h4>
116
- <p>You should use AI format for wedding invitations because it has many benefits, such as:</p>
117
- <ul>
118
- <li>High-quality graphics and illustrations that are sharp and clear.</li>
119
- <li>Easy customization and editing that let you change the text, fonts, colors, images, and layout as you wish.</li>
120
- <li>Compatibility with various design tools that let you open, edit, save, and export your AI files.</li>
121
- <li>Scalability and flexibility that let you resize your invitations without losing quality or clarity.</li>
122
- </ul>
123
- <h4>How can I edit AI files?</h4>
124
- <p>You can edit AI files using Adobe Illustrator or another compatible tool, such as Photoshop, InDesign, or CorelDraw. You can use the tools and options on the toolbar or the properties panel to change the text, fonts, colors, images , and layout of your invitations. You can also add your own graphics, logos, or photos to make your invitations more personal and unique.</p>
125
- <h4>Where can I find free or cheap AI templates?</h4>
126
- <p>You can find free or cheap AI templates on various websites that offer free or affordable vector graphics, illustrations, icons, photos, and templates for various purposes. Some of the most popular ones are Freepik, DYP.im, Fotor, Vecteezy, and Template.net. You can browse through their collections and choose the ones that suit your preferences.</p>
127
- <h4>How can I print or share my invitations online?</h4>
128
- <p>You can print or share your invitations online by exporting them as PDF or JPG format. You can use the File > Export menu to choose the format and location of your files. You can also adjust the quality and resolution of your files using the options on the export dialog box. To print your invitations, you need to use a printer that supports high-quality printing and paper that matches your design and size. You can use the File > Print menu to choose your printer and paper settings. You can also preview your invitations before printing them using the options on the print dialog box. To send your invitations online, you need to use an email service or a social media platform that supports PDF or JPG attachments. You can attach your files to your email or post and add a personal message to your guests.</p>
129
- <p>I hope you enjoyed reading this article and learned something new. If you have any questions or feedback, please let me know in the comments below. Thank you for your time and attention.</p> 401be4b1e0<br />
130
- <br />
131
- <br />
spaces/1phancelerku/anime-remove-background/Free Download Candy Crush Saga for Windows 10 (64 bit) - The Most Popular Puzzle Game.md DELETED
@@ -1,115 +0,0 @@
1
- <br />
2
- <h1>Candy Crush Saga: How to Download and Play on Windows 10 64 Bit</h1>
3
- <p>If you are looking for a fun and addictive puzzle game that will keep you entertained for hours, you might want to try Candy Crush Saga. This popular game has millions of fans around the world who enjoy matching colorful candies and clearing various challenges. But did you know that you can also play Candy Crush Saga on your Windows 10 64 bit PC? In this article, we will show you how to download and play Candy Crush Saga on your computer, and give you some tips and tricks to master the game.</p>
4
- <h2>What is Candy Crush Saga?</h2>
5
- <p>Candy Crush Saga is a free-to-play tile-matching video game developed by King, a leading company in casual gaming. It was released in 2012 for Facebook, and later for iOS, Android, Windows Phone, and Windows 10. It is a variation of their browser game Candy Crush, which was inspired by the classic game Bejeweled.</p>
6
- <h2>candy crush saga free download for windows 10 64 bit</h2><br /><p><b><b>DOWNLOAD</b> &#10001; <a href="https://jinyurl.com/2uNJY3">https://jinyurl.com/2uNJY3</a></b></p><br /><br />
7
- <p>In Candy Crush Saga, you have to match three or more candies of the same color on a game board to make them disappear, and create special candies that have extra effects. You have to complete different objectives in each level, such as reaching a target score, clearing jelly or chocolate from the board, collecting ingredients, or making a certain number of matches. You have a limited number of moves or time to complete each level, so you have to plan your moves carefully and use boosters wisely.</p>
8
- <p>Candy Crush Saga has thousands of levels to play, each with different layouts, obstacles, and goals. The game also features various game modes, such as Moves, Time, Jelly, Ingredients, Order, Mixed Mode, Rainbow Rapids, Soda, Jam, Honey, Frosting, Chocolate Box, Dunk the Cookie, and more. Each mode has its own rules and challenges that require different strategies.</p>
9
- <p>Candy Crush Saga is not only a simple and fun game, but also a rewarding one. You can earn points, stars, gold bars, boosters, trophies, badges, and other prizes as you play. You can also connect with your Facebook friends or other players online and compare your scores, send or receive lives or boosters, or join teams and events.</p>
10
- <h2>Why play Candy Crush Saga on Windows 10 64 Bit?</h2>
11
- <p>While Candy Crush Saga is mainly designed for mobile devices, playing it on your Windows 10 64 bit PC has some advantages. Here are some of them:</p>
12
- <ul>
13
- <li>You can enjoy a bigger screen and better graphics. Playing on a PC allows you to see more details and colors of the candies and the backgrounds. You can also adjust the resolution and the quality settings according to your preferences.</li>
14
- <li>You can use a mouse or a keyboard instead of a touchscreen. Some players find it easier and more comfortable to use a mouse or a keyboard to make matches and activate boosters. You can also use shortcuts or hotkeys to access some functions quickly.</li>
15
- <li>You can save battery life and storage space on your mobile device. Playing Candy Crush Saga on your PC means you don't have to worry about draining your battery or filling up your memory with the game data. You can also avoid interruptions from phone calls or notifications while playing.</li>
16
- <li>You can sync your progress across devices. If you log in with your Facebook account or your King account, you can sync your progress and access all your data on any device. This means you can switch between playing on your PC or your mobile device anytime without losing anything.</li>
17
- </ul>
18
- <h2>How to download Candy Crush Saga for Windows 10 64 Bit?</h2>
19
- <p>Downloading Candy Crush Saga for your Windows 10 64 bit PC is very easy and fast. You just need to follow these steps:</p>
20
- <ol>
21
- <li>Open the Microsoft Store app on your PC. You can find it on your Start menu or taskbar, or you can search for it using Cortana or the search box.</li>
22
- <li>In the Microsoft Store app, type "Candy Crush Saga" in the search bar and press Enter. You will see the game icon and some information about it.</li>
23
- <li>Click on the "Get" button to start downloading the game. You may need to sign in with your Microsoft account if you haven't already.</li>
24
- <li>Wait for the download and installation to finish. You will see a notification when it is done.</li>
25
- <li>Click on the "Play" button to launch the game. You can also find it on your Start menu or taskbar, or you can pin it to your desktop for easy access.</li>
26
- </ol>
27
- <p>Congratulations, you have successfully downloaded and installed Candy Crush Saga on your Windows 10 64 bit PC. Now you can enjoy playing it anytime you want.</p>
28
- <h2>How to play Candy Crush Saga on Windows 10 64 Bit?</h2>
29
- <p>Playing Candy Crush Saga on your Windows 10 64 bit PC is very similar to playing it on your mobile device. However, there are some differences in the gameplay and the controls that you need to know. Here are some of them:</p>
79
- <h3>The basics of the gameplay and the controls</h3>
80
- <p>The gameplay of Candy Crush Saga is based on matching three or more candies of the same color on a game board to make them disappear and create special candies that have extra effects. You have to complete different objectives in each level, such as reaching a target score, clearing jelly or chocolate from the board, collecting ingredients, or making a certain number of matches. You have a limited number of moves or time to complete each level, so you have to plan your moves carefully and use boosters wisely.</p>
81
- <p>The controls of Candy Crush Saga on your Windows 10 64 bit PC are very simple and intuitive. You can use your mouse or your keyboard to make matches and activate boosters. Here are some of the basic controls:</p>
82
- <ul>
83
- <li>To make a match, click and drag a candy to swap it with an adjacent one. You can also use the arrow keys on your keyboard to move a candy in any direction.</li>
84
- <li>To activate a special candy, click on it or press the spacebar on your keyboard. You can also click and drag a special candy to swap it with another one and create a powerful combination.</li>
85
- <li>To use a booster, click on it at the bottom of the screen or press the corresponding number key on your keyboard. You can also drag a booster onto the game board to apply it to a specific candy or area.</li>
86
- <li>To pause the game, click on the menu button at the top left corner of the screen or press the Esc key on your keyboard. You can also access other options such as settings, help, sound, music, and more from this menu.</li>
87
- </ul>
88
- <h3>The tips and tricks to master the game and beat the levels</h3>
89
- <p>Candy Crush Saga is not only a fun game, but also a challenging one. Some levels can be very hard to beat, especially if you don't know what to do or how to do it. That's why we have prepared some tips and tricks for you that will help you master the game and beat any level. Here are some of them:</p>
90
- <ul>
91
- <li>Pay attention to the objective of each level and plan your moves accordingly. Don't just match candies randomly, but try to create matches that will help you achieve your goal.</li>
92
- <li>Look for opportunities to create special candies and combinations. Special candies are candies that have extra effects when activated, such as striped candies, wrapped candies, color bombs, jelly fish, coconut wheels, etc. Combinations are when you activate two or more special candies together, creating even more powerful effects.</li>
93
- <li>Use boosters wisely and sparingly. Boosters are items that can help you in various ways, such as extra moves, extra time, extra lives, lollipop hammers, free switches, etc. However, they are limited in number and some of them cost real money, so don't waste them unnecessarily.</li>
94
- <li>Learn from your mistakes and try again. If you fail a level, don't give up or get frustrated. Instead, analyze what went wrong and what you can do better next time. You can also watch videos of other players who have beaten the level and learn from their strategies.</li>
95
- <li>Have fun and enjoy the game. Candy Crush Saga is meant to be a relaxing and entertaining game, not a stressful or frustrating one. Don't let the difficulty or the pressure get to you, but rather focus on the positive aspects of the game, such as the colorful graphics, the catchy music, the cute characters, and the rewarding prizes.</li>
96
- </ul>
97
- <h2>Conclusion</h2>
98
- <p>Candy Crush Saga is one of the most popular and addictive puzzle games in the world. It has thousands of levels to play, each with different objectives, modes, and challenges. It also has various features and rewards that make it more fun and exciting. You can play it on your mobile device, but you can also play it on your Windows 10 64 bit PC. Playing on a PC has some advantages, such as a bigger screen, better graphics, easier controls, and more. To play on a PC, you just need to download and install the game from the Microsoft Store app, and then log in with your Facebook or King account to sync your progress. To master the game and beat the levels, you need to pay attention to the objective, create special candies and combinations, use boosters wisely, learn from your mistakes, and have fun.</p>
99
- <p>If you are ready to join the millions of fans who love Candy Crush Saga, download it now and start playing. You will be amazed by how much fun you will have.</p>
100
- <h2>FAQs</h2>
101
- <p>Here are some of the frequently asked questions about Candy Crush Saga and their answers:</p>
102
- <ol>
103
- <li>How do I get more lives in Candy Crush Saga?</li>
104
- <p>There are several ways to get more lives in Candy Crush Saga. You can wait for them to refill over time (one life every 30 minutes), ask your friends to send you some, buy them with gold bars, or use boosters that give you extra lives.</p>
105
- <li>How do I get more gold bars in Candy Crush Saga?</li>
106
- <p>Gold bars are the premium currency in Candy Crush Saga. You can use them to buy boosters, extra moves, extra time, extra lives, or unlock new episodes. You can get gold bars by completing certain achievements, participating in events or challenges, watching ads, or buying them with real money.</p>
107
- <li>How do I unlock new episodes in Candy Crush Saga?</li>
108
- <p>To unlock new episodes in Candy Crush Saga, you need to complete all the levels in the previous episode. Sometimes, you may also need to ask your friends for help or pay with gold bars to unlock them.</p>
109
- <li>How do I connect my Facebook or King account to Candy Crush Saga?</li>
110
- <p>To connect your Facebook or King account to Candy Crush Saga, you need to click on the "Connect" button on the main screen or the settings menu. You will be asked to log in with your email and password or create a new account if you don't have one. By connecting your account, you can sync your progress across devices, access all your data, and play with your friends online.</p>
111
- <li>How do I contact customer support for Candy Crush Saga?</li>
112
- <p>If you have any issues or questions about Candy Crush Saga, you can contact customer support by clicking on the "Help" button on the settings menu. You will be directed to a page where you can browse through various topics and FAQs, or submit a ticket with your query.</p>
113
- </ol>
spaces/1toTree/lora_test/ppdiffusers/modeling_paddle_pytorch_utils.py DELETED
@@ -1,106 +0,0 @@
1
- # Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
2
- # Copyright 2022 The HuggingFace Team. All rights reserved.
3
- #
4
- # Licensed under the Apache License, Version 2.0 (the "License");
5
- # you may not use this file except in compliance with the License.
6
- # You may obtain a copy of the License at
7
- #
8
- # http://www.apache.org/licenses/LICENSE-2.0
9
- #
10
- # Unless required by applicable law or agreed to in writing, software
11
- # distributed under the License is distributed on an "AS IS" BASIS,
12
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- # See the License for the specific language governing permissions and
14
- # limitations under the License.
15
- """ PyTorch - Paddle general utilities."""
16
- import re
17
-
18
- from .utils import logging
19
-
20
- logger = logging.get_logger(__name__)
21
-
22
-
23
- def rename_key(key):
24
- regex = r"\w+[.]\d+"
25
- pats = re.findall(regex, key)
26
- for pat in pats:
27
- key = key.replace(pat, "_".join(pat.split(".")))
28
- return key
29
-
30
-
31
- #####################
32
- # PyTorch => Paddle #
33
- #####################
34
-
35
-
36
- def rename_key_and_reshape_tensor(pt_tuple_key, pt_tensor, random_paddle_state_dict):
37
- """Rename PT weight names to corresponding Paddle weight names and reshape tensor if necessary"""
38
-
39
- # conv norm or layer norm
40
- renamed_pt_tuple_key = pt_tuple_key[:-1] + ("bias",)
41
- if (
42
- any("norm" in str_ for str_ in pt_tuple_key)
43
- and (pt_tuple_key[-1] in ["bias", "beta"])
44
- and (pt_tuple_key[:-1] + ("bias",) in random_paddle_state_dict)
45
- ):
46
- renamed_pt_tuple_key = pt_tuple_key[:-1] + ("bias",)
47
- return renamed_pt_tuple_key, pt_tensor
48
- elif pt_tuple_key[-1] in ["weight", "gamma"] and pt_tuple_key[:-1] + ("bias",) in random_paddle_state_dict:
49
- renamed_pt_tuple_key = pt_tuple_key[:-1] + ("bias",)
50
- return renamed_pt_tuple_key, pt_tensor
51
-
52
- # embedding
53
- if pt_tuple_key[-1] == "weight" and pt_tuple_key[:-1] + ("weight",) in random_paddle_state_dict:
54
- pt_tuple_key = pt_tuple_key[:-1] + ("weight",)
55
- return pt_tuple_key, pt_tensor # return the embedding key computed on the line above, not the earlier bias rename
56
-
57
- # conv layer
58
- renamed_pt_tuple_key = pt_tuple_key[:-1] + ("weight",)
59
- if pt_tuple_key[-1] == "weight" and pt_tensor.ndim == 4:
60
- return renamed_pt_tuple_key, pt_tensor
61
-
62
- # linear layer
63
- renamed_pt_tuple_key = pt_tuple_key[:-1] + ("weight",)
64
- if pt_tuple_key[-1] == "weight":
65
- pt_tensor = pt_tensor.T # numpy transpose; the tensors were already converted to numpy arrays in step 1
66
- return renamed_pt_tuple_key, pt_tensor
67
-
68
- # old PyTorch layer norm weight
69
- renamed_pt_tuple_key = pt_tuple_key[:-1] + ("weight",)
70
- if pt_tuple_key[-1] == "gamma":
71
- return renamed_pt_tuple_key, pt_tensor
72
-
73
- # old PyTorch layer norm bias
74
- renamed_pt_tuple_key = pt_tuple_key[:-1] + ("bias",)
75
- if pt_tuple_key[-1] == "beta":
76
- return renamed_pt_tuple_key, pt_tensor
77
-
78
- return pt_tuple_key, pt_tensor
79
-
80
-
81
- def convert_pytorch_state_dict_to_paddle(pt_state_dict, paddle_model):
82
- # Step 1: Convert pytorch tensor to numpy
83
- pt_state_dict = {k: v.numpy() for k, v in pt_state_dict.items()}
84
-
85
- random_paddle_state_dict = paddle_model.state_dict() # call the method to get the actual parameter dict
86
- paddle_state_dict = {}
87
-
88
- # Need to change some parameters name to match Paddle names
89
- for pt_key, pt_tensor in pt_state_dict.items():
90
- renamed_pt_key = rename_key(pt_key)
91
- pt_tuple_key = tuple(renamed_pt_key.split("."))
92
-
93
- # Correctly rename weight parameters
94
- paddle_key, paddle_tensor = rename_key_and_reshape_tensor(pt_tuple_key, pt_tensor, random_paddle_state_dict)
95
-
96
- if paddle_key in random_paddle_state_dict:
97
- if list(paddle_tensor.shape) != list(random_paddle_state_dict[paddle_key].shape):
98
- raise ValueError(
99
- f"Paddle checkpoint seems to be incorrect. Weight {pt_key} was expected to be of shape "
100
- f"{random_paddle_state_dict[paddle_key].shape}, but is {paddle_tensor.shape}."
101
- )
102
-
103
- # also add unexpected weight so that warning is thrown
104
- paddle_state_dict[paddle_key] = paddle_tensor # already a numpy array after the conversion in step 1
105
-
106
- return paddle_state_dict
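For orientation, a minimal sketch of what the rename_key helper above does, assuming the function is in scope (the example key is hypothetical and the output simply follows the regex in the function):

print(rename_key("down_blocks.0.attentions.1.proj.weight"))
# every "<name>.<digit>" segment becomes "<name>_<digit>"
# -> down_blocks_0.attentions_1.proj.weight

The full converter is then driven with a PyTorch state dict and the target Paddle module, i.e. convert_pytorch_state_dict_to_paddle(torch_model.state_dict(), paddle_model), matching the signature defined above.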
spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_euler_discrete.py DELETED
@@ -1,244 +0,0 @@
1
- # Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
2
- # Copyright 2022 Katherine Crowson and The HuggingFace Team. All rights reserved.
3
- #
4
- # Licensed under the Apache License, Version 2.0 (the "License");
5
- # you may not use this file except in compliance with the License.
6
- # You may obtain a copy of the License at
7
- #
8
- # http://www.apache.org/licenses/LICENSE-2.0
9
- #
10
- # Unless required by applicable law or agreed to in writing, software
11
- # distributed under the License is distributed on an "AS IS" BASIS,
12
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- # See the License for the specific language governing permissions and
14
- # limitations under the License.
15
-
16
- from dataclasses import dataclass
17
- from typing import List, Optional, Tuple, Union
18
-
19
- import numpy as np
20
- import paddle
21
-
22
- from ..configuration_utils import ConfigMixin, register_to_config
23
- from ..utils import _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS, BaseOutput, logging
24
- from .scheduling_utils import SchedulerMixin
25
-
26
- logger = logging.get_logger(__name__) # pylint: disable=invalid-name
27
-
28
-
29
- @dataclass
30
- # Copied from diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput with DDPM->EulerDiscrete
31
- class EulerDiscreteSchedulerOutput(BaseOutput):
32
- """
33
- Output class for the scheduler's step function output.
34
-
35
- Args:
36
- prev_sample (`paddle.Tensor` of shape `(batch_size, num_channels, height, width)` for images):
37
- Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the
38
- denoising loop.
39
- pred_original_sample (`paddle.Tensor` of shape `(batch_size, num_channels, height, width)` for images):
40
- The predicted denoised sample (x_{0}) based on the model output from the current timestep.
41
- `pred_original_sample` can be used to preview progress or for guidance.
42
- """
43
-
44
- prev_sample: paddle.Tensor
45
- pred_original_sample: Optional[paddle.Tensor] = None
46
-
47
-
48
- class EulerDiscreteScheduler(SchedulerMixin, ConfigMixin):
49
- """
50
- Euler scheduler (Algorithm 2) from Karras et al. (2022) https://arxiv.org/abs/2206.00364. . Based on the original
51
- k-diffusion implementation by Katherine Crowson:
52
- https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L51
53
-
54
- [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__`
55
- function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
56
- [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and
57
- [`~SchedulerMixin.from_pretrained`] functions.
58
-
59
- Args:
60
- num_train_timesteps (`int`): number of diffusion steps used to train the model.
61
- beta_start (`float`): the starting `beta` value of inference.
62
- beta_end (`float`): the final `beta` value.
63
- beta_schedule (`str`):
64
- the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
65
- `linear` or `scaled_linear`.
66
- trained_betas (`np.ndarray`, optional):
67
- option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc.
68
- prediction_type (`str`, default `epsilon`, optional):
69
- prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion
70
- process), `sample` (directly predicting the noisy sample`) or `v_prediction` (see section 2.4
71
- https://imagen.research.google/video/paper.pdf)
72
- """
73
-
74
- _compatibles = _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS.copy()
75
- order = 1
76
-
77
- @register_to_config
78
- def __init__(
79
- self,
80
- num_train_timesteps: int = 1000,
81
- beta_start: float = 0.0001,
82
- beta_end: float = 0.02,
83
- beta_schedule: str = "linear",
84
- trained_betas: Optional[Union[np.ndarray, List[float]]] = None,
85
- prediction_type: str = "epsilon",
86
- ):
87
- if trained_betas is not None:
88
- self.betas = paddle.to_tensor(trained_betas, dtype="float32")
89
- elif beta_schedule == "linear":
90
- self.betas = paddle.linspace(beta_start, beta_end, num_train_timesteps, dtype="float32")
91
- elif beta_schedule == "scaled_linear":
92
- # this schedule is very specific to the latent diffusion model.
93
- self.betas = paddle.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype="float32") ** 2
94
- else:
95
- raise NotImplementedError(f"{beta_schedule} does is not implemented for {self.__class__}")
96
-
97
- self.alphas = 1.0 - self.betas
98
- self.alphas_cumprod = paddle.cumprod(self.alphas, 0)
99
-
100
- sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5)
101
- sigmas = np.concatenate([sigmas[::-1], [0.0]]).astype(np.float32)
102
- self.sigmas = paddle.to_tensor(sigmas)
103
-
104
- # standard deviation of the initial noise distribution
105
- self.init_noise_sigma = self.sigmas.max()
106
-
107
- # setable values
108
- self.num_inference_steps = None
109
- timesteps = np.linspace(0, num_train_timesteps - 1, num_train_timesteps, dtype=float)[::-1].copy()
110
- self.timesteps = paddle.to_tensor(timesteps, dtype="float32")
111
- self.is_scale_input_called = False
112
-
113
- def scale_model_input(self, sample: paddle.Tensor, timestep: Union[float, paddle.Tensor]) -> paddle.Tensor:
114
- """
115
- Scales the denoising model input by `(sigma**2 + 1) ** 0.5` to match the Euler algorithm.
116
-
117
- Args:
118
- sample (`paddle.Tensor`): input sample
119
- timestep (`float` or `paddle.Tensor`): the current timestep in the diffusion chain
120
-
121
- Returns:
122
- `paddle.Tensor`: scaled input sample
123
- """
124
- step_index = (self.timesteps == timestep).nonzero().item()
125
- sigma = self.sigmas[step_index]
126
- sample = sample / ((sigma**2 + 1) ** 0.5)
127
- self.is_scale_input_called = True
128
- return sample
129
-
130
- def set_timesteps(self, num_inference_steps: int):
131
- """
132
- Sets the timesteps used for the diffusion chain. Supporting function to be run before inference.
133
-
134
- Args:
135
- num_inference_steps (`int`):
136
- the number of diffusion steps used when generating samples with a pre-trained model.
137
- """
138
- self.num_inference_steps = num_inference_steps
139
-
140
- timesteps = np.linspace(0, self.config.num_train_timesteps - 1, num_inference_steps, dtype=float)[::-1].copy()
141
- sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5)
142
- sigmas = np.interp(timesteps, np.arange(0, len(sigmas)), sigmas)
143
- sigmas = np.concatenate([sigmas, [0.0]]).astype(np.float32)
144
- self.sigmas = paddle.to_tensor(sigmas)
145
- self.timesteps = paddle.to_tensor(timesteps, dtype="float32")
146
-
147
- def step(
148
- self,
149
- model_output: paddle.Tensor,
150
- timestep: Union[float, paddle.Tensor],
151
- sample: paddle.Tensor,
152
- s_churn: float = 0.0,
153
- s_tmin: float = 0.0,
154
- s_tmax: float = float("inf"),
155
- s_noise: float = 1.0,
156
- generator: Optional[Union[paddle.Generator, List[paddle.Generator]]] = None,
157
- return_dict: bool = True,
158
- ) -> Union[EulerDiscreteSchedulerOutput, Tuple]:
159
- """
160
- Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
161
- process from the learned model outputs (most often the predicted noise).
162
-
163
- Args:
164
- model_output (`paddle.Tensor`): direct output from learned diffusion model.
165
- timestep (`float`): current timestep in the diffusion chain.
166
- sample (`paddle.Tensor`):
167
- current instance of sample being created by diffusion process.
168
- s_churn (`float`)
169
- s_tmin (`float`)
170
- s_tmax (`float`)
171
- s_noise (`float`)
172
- generator (`paddle.Generator`, optional): Random number generator.
173
- return_dict (`bool`): option for returning tuple rather than EulerDiscreteSchedulerOutput class
174
-
175
- Returns:
176
- [`~schedulers.scheduling_utils.EulerDiscreteSchedulerOutput`] or `tuple`:
177
- [`~schedulers.scheduling_utils.EulerDiscreteSchedulerOutput`] if `return_dict` is True, otherwise a
178
- `tuple`. When returning a tuple, the first element is the sample tensor.
179
-
180
- """
181
-
182
- if not self.is_scale_input_called:
183
- logger.warning(
184
- "The `scale_model_input` function should be called before `step` to ensure correct denoising. "
185
- "See `StableDiffusionPipeline` for a usage example."
186
- )
187
-
188
- step_index = (self.timesteps == timestep).nonzero().item()
189
- sigma = self.sigmas[step_index]
190
-
191
- gamma = min(s_churn / (len(self.sigmas) - 1), 2**0.5 - 1) if s_tmin <= sigma <= s_tmax else 0.0
192
-
193
- noise = paddle.randn(model_output.shape, dtype=model_output.dtype, generator=generator)
194
-
195
- eps = noise * s_noise
196
- sigma_hat = sigma * (gamma + 1)
197
-
198
- if gamma > 0:
199
- sample = sample + eps * (sigma_hat**2 - sigma**2) ** 0.5
200
-
201
- # 1. compute predicted original sample (x_0) from sigma-scaled predicted noise
202
- if self.config.prediction_type == "epsilon":
203
- pred_original_sample = sample - sigma_hat * model_output
204
- elif self.config.prediction_type == "v_prediction":
205
- # * c_out + input * c_skip
206
- pred_original_sample = model_output * (-sigma / (sigma**2 + 1) ** 0.5) + (sample / (sigma**2 + 1))
207
- else:
208
- raise ValueError(
209
- f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, or `v_prediction`"
210
- )
211
-
212
- # 2. Convert to an ODE derivative
213
- derivative = (sample - pred_original_sample) / sigma_hat
214
-
215
- dt = self.sigmas[step_index + 1] - sigma_hat
216
-
217
- prev_sample = sample + derivative * dt
218
-
219
- if not return_dict:
220
- return (prev_sample,)
221
-
222
- return EulerDiscreteSchedulerOutput(prev_sample=prev_sample, pred_original_sample=pred_original_sample)
223
-
224
- def add_noise(
225
- self,
226
- original_samples: paddle.Tensor,
227
- noise: paddle.Tensor,
228
- timesteps: paddle.Tensor,
229
- ) -> paddle.Tensor:
230
- # Make sure sigmas and timesteps have the same dtype as original_samples
231
- self.sigmas = self.sigmas.cast(original_samples.dtype)
232
-
233
- schedule_timesteps = self.timesteps
234
- step_indices = [(schedule_timesteps == t).nonzero().item() for t in timesteps]
235
-
236
- sigma = self.sigmas[step_indices].flatten()
237
- while len(sigma.shape) < len(original_samples.shape):
238
- sigma = sigma.unsqueeze(-1)
239
-
240
- noisy_samples = original_samples + noise * sigma
241
- return noisy_samples
242
-
243
- def __len__(self):
244
- return self.config.num_train_timesteps
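For orientation, a minimal, hypothetical sampling loop built around the scheduler above. The import path and the zero tensor standing in for a UNet noise prediction are assumptions for illustration only:

import paddle
from ppdiffusers.schedulers import EulerDiscreteScheduler  # import path assumed from the file location

scheduler = EulerDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear")
scheduler.set_timesteps(30)

# start from pure noise scaled by the scheduler's initial sigma
sample = paddle.randn([1, 4, 64, 64]) * scheduler.init_noise_sigma

for t in scheduler.timesteps:
    model_input = scheduler.scale_model_input(sample, t)  # scale by 1 / (sigma**2 + 1) ** 0.5
    model_output = paddle.zeros_like(model_input)          # a real pipeline would call its UNet here
    sample = scheduler.step(model_output, t, sample).prev_sample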
spaces/A00001/bingothoo/src/components/ui/voice/index.tsx DELETED
@@ -1,28 +0,0 @@
1
- import type { HTMLAttributes } from 'react'
- import './index.scss'
2
-
3
- export interface VoiceProps extends HTMLAttributes<HTMLDivElement> {
4
- num?: number;
5
- duration?: number;
6
- }
7
- export default function Voice({ duration = 400, num = 7, ...others }: VoiceProps) {
8
- return (
9
- <div className="voice-button" { ...others }>
10
- {Array.from({ length: num }).map((_, index) => {
11
- const randomDuration = Math.random() * 100 + duration
12
- const initialDelay = Math.random() * 2 * duration
13
- const initialScale = Math.sin((index + 1) * Math.PI / num)
14
- return (
15
- <div
16
- className="voice-button-item"
17
- key={index}
18
- style={{
19
- animationDelay: initialDelay + 'ms',
20
- animationDuration: randomDuration + 'ms',
21
- transform: `scale(${initialScale})`
22
- }}
23
- />
24
- )
25
- })}
26
- </div>
27
- )
28
- }
spaces/AI-DHD/Youtube-Whisperer/README.md DELETED
@@ -1,13 +0,0 @@
1
- ---
2
- title: Youtube Whisperer
3
- emoji: ⚡
4
- colorFrom: purple
5
- colorTo: yellow
6
- sdk: gradio
7
- sdk_version: 3.3.1
8
- app_file: app.py
9
- pinned: false
10
- duplicated_from: jeffistyping/Youtube-Whisperer
11
- ---
12
-
13
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/AIGC-Audio/AudioGPT/NeuralSeq/egs/datasets/audio/libritts/pre_align.py DELETED
@@ -1,27 +0,0 @@
1
- import os
2
-
3
- from data_gen.tts.base_preprocess import BasePreprocessor
4
- import glob
5
-
6
-
7
- class LibrittsPreAlign(BasePreprocessor):
8
- def meta_data(self):
9
- wav_fns = sorted(glob.glob(f'{self.raw_data_dir}/*/*/*.wav'))
10
- for wav_fn in wav_fns:
11
- item_name = os.path.basename(wav_fn)[:-4]
12
- txt_fn = f'{wav_fn[:-4]}.normalized.txt'
13
- with open(txt_fn, 'r') as f:
14
- txt = f.readlines()
15
- f.close()
16
- spk = item_name.split("_")[0]
17
- # Example:
18
- #
19
- # 'item_name': '103_1241_000000_000001'
20
- # 'wav_fn': 'LibriTTS/train-clean-100/103/1241/103_1241_000000_000001.wav'
21
- # 'txt': 'matthew Cuthbert is surprised'
22
- # 'spk_name': '103'
23
- yield {'item_name': item_name, 'wav_fn': wav_fn, 'txt': txt[0], 'spk_name': spk}
24
-
25
-
26
- if __name__ == "__main__":
27
- LibrittsPreAlign().process()
spaces/AIGC-Audio/Make_An_Audio/ldm/modules/encoders/CLAP/audio.py DELETED
@@ -1,179 +0,0 @@
1
- import torch
2
- import torch.nn as nn
3
- import torch.nn.functional as F
4
- from torchlibrosa.stft import Spectrogram, LogmelFilterBank
5
-
6
- def get_audio_encoder(name: str):
7
- if name == "Cnn14":
8
- return Cnn14
9
- else:
10
- raise Exception('The audio encoder name {} is incorrect or not supported'.format(name))
11
-
12
-
13
- class ConvBlock(nn.Module):
14
- def __init__(self, in_channels, out_channels):
15
-
16
- super(ConvBlock, self).__init__()
17
-
18
- self.conv1 = nn.Conv2d(in_channels=in_channels,
19
- out_channels=out_channels,
20
- kernel_size=(3, 3), stride=(1, 1),
21
- padding=(1, 1), bias=False)
22
-
23
- self.conv2 = nn.Conv2d(in_channels=out_channels,
24
- out_channels=out_channels,
25
- kernel_size=(3, 3), stride=(1, 1),
26
- padding=(1, 1), bias=False)
27
-
28
- self.bn1 = nn.BatchNorm2d(out_channels)
29
- self.bn2 = nn.BatchNorm2d(out_channels)
30
-
31
-
32
- def forward(self, input, pool_size=(2, 2), pool_type='avg'):
33
-
34
- x = input
35
- x = F.relu_(self.bn1(self.conv1(x)))
36
- x = F.relu_(self.bn2(self.conv2(x)))
37
- if pool_type == 'max':
38
- x = F.max_pool2d(x, kernel_size=pool_size)
39
- elif pool_type == 'avg':
40
- x = F.avg_pool2d(x, kernel_size=pool_size)
41
- elif pool_type == 'avg+max':
42
- x1 = F.avg_pool2d(x, kernel_size=pool_size)
43
- x2 = F.max_pool2d(x, kernel_size=pool_size)
44
- x = x1 + x2
45
- else:
46
- raise Exception('Incorrect argument!')
47
-
48
- return x
49
-
50
-
51
- class ConvBlock5x5(nn.Module):
52
- def __init__(self, in_channels, out_channels):
53
-
54
- super(ConvBlock5x5, self).__init__()
55
-
56
- self.conv1 = nn.Conv2d(in_channels=in_channels,
57
- out_channels=out_channels,
58
- kernel_size=(5, 5), stride=(1, 1),
59
- padding=(2, 2), bias=False)
60
-
61
- self.bn1 = nn.BatchNorm2d(out_channels)
62
-
63
-
64
- def forward(self, input, pool_size=(2, 2), pool_type='avg'):
65
-
66
- x = input
67
- x = F.relu_(self.bn1(self.conv1(x)))
68
- if pool_type == 'max':
69
- x = F.max_pool2d(x, kernel_size=pool_size)
70
- elif pool_type == 'avg':
71
- x = F.avg_pool2d(x, kernel_size=pool_size)
72
- elif pool_type == 'avg+max':
73
- x1 = F.avg_pool2d(x, kernel_size=pool_size)
74
- x2 = F.max_pool2d(x, kernel_size=pool_size)
75
- x = x1 + x2
76
- else:
77
- raise Exception('Incorrect argument!')
78
-
79
- return x
80
-
81
-
82
- class AttBlock(nn.Module):
83
- def __init__(self, n_in, n_out, activation='linear', temperature=1.):
84
- super(AttBlock, self).__init__()
85
-
86
- self.activation = activation
87
- self.temperature = temperature
88
- self.att = nn.Conv1d(in_channels=n_in, out_channels=n_out, kernel_size=1, stride=1, padding=0, bias=True)
89
- self.cla = nn.Conv1d(in_channels=n_in, out_channels=n_out, kernel_size=1, stride=1, padding=0, bias=True)
90
-
91
- self.bn_att = nn.BatchNorm1d(n_out)
92
-
93
- def forward(self, x):
94
- # x: (n_samples, n_in, n_time)
95
- norm_att = torch.softmax(torch.clamp(self.att(x), -10, 10), dim=-1)
96
- cla = self.nonlinear_transform(self.cla(x))
97
- x = torch.sum(norm_att * cla, dim=2)
98
- return x, norm_att, cla
99
-
100
- def nonlinear_transform(self, x):
101
- if self.activation == 'linear':
102
- return x
103
- elif self.activation == 'sigmoid':
104
- return torch.sigmoid(x)
105
-
106
-
107
- class Cnn14(nn.Module):
108
- def __init__(self, sample_rate, window_size, hop_size, mel_bins, fmin,
109
- fmax, classes_num, out_emb):
110
-
111
- super(Cnn14, self).__init__()
112
-
113
- window = 'hann'
114
- center = True
115
- pad_mode = 'reflect'
116
- ref = 1.0
117
- amin = 1e-10
118
- top_db = None
119
-
120
- # Spectrogram extractor
121
- self.spectrogram_extractor = Spectrogram(n_fft=window_size, hop_length=hop_size,
122
- win_length=window_size, window=window, center=center, pad_mode=pad_mode,
123
- freeze_parameters=True)
124
-
125
- # Logmel feature extractor
126
- self.logmel_extractor = LogmelFilterBank(sr=sample_rate, n_fft=window_size,
127
- n_mels=mel_bins, fmin=fmin, fmax=fmax, ref=ref, amin=amin, top_db=top_db,
128
- freeze_parameters=True)
129
-
130
- self.bn0 = nn.BatchNorm2d(64)
131
-
132
- self.conv_block1 = ConvBlock(in_channels=1, out_channels=64)
133
- self.conv_block2 = ConvBlock(in_channels=64, out_channels=128)
134
- self.conv_block3 = ConvBlock(in_channels=128, out_channels=256)
135
- self.conv_block4 = ConvBlock(in_channels=256, out_channels=512)
136
- self.conv_block5 = ConvBlock(in_channels=512, out_channels=1024)
137
- self.conv_block6 = ConvBlock(in_channels=1024, out_channels=2048)
138
-
139
- # out_emb is 2048 for best Cnn14
140
- self.fc1 = nn.Linear(2048, out_emb, bias=True)
141
- self.fc_audioset = nn.Linear(out_emb, classes_num, bias=True)
142
-
143
- def forward(self, input, mixup_lambda=None):
144
- """
145
- Input: (batch_size, data_length)
146
- """
147
-
148
- x = self.spectrogram_extractor(input) # (batch_size, 1, time_steps, freq_bins)
149
- x = self.logmel_extractor(x) # (batch_size, 1, time_steps, mel_bins)
150
-
151
- x = x.transpose(1, 3)
152
- x = self.bn0(x)
153
- x = x.transpose(1, 3)
154
-
155
- x = self.conv_block1(x, pool_size=(2, 2), pool_type='avg')
156
- x = F.dropout(x, p=0.2, training=self.training)
157
- x = self.conv_block2(x, pool_size=(2, 2), pool_type='avg')
158
- x = F.dropout(x, p=0.2, training=self.training)
159
- x = self.conv_block3(x, pool_size=(2, 2), pool_type='avg')
160
- x = F.dropout(x, p=0.2, training=self.training)
161
- x = self.conv_block4(x, pool_size=(2, 2), pool_type='avg')
162
- x = F.dropout(x, p=0.2, training=self.training)
163
- x = self.conv_block5(x, pool_size=(2, 2), pool_type='avg')
164
- x = F.dropout(x, p=0.2, training=self.training)
165
- x = self.conv_block6(x, pool_size=(1, 1), pool_type='avg')
166
- x = F.dropout(x, p=0.2, training=self.training)
167
- x = torch.mean(x, dim=3)
168
-
169
- (x1, _) = torch.max(x, dim=2)
170
- x2 = torch.mean(x, dim=2)
171
- x = x1 + x2
172
- x = F.dropout(x, p=0.5, training=self.training)
173
- x = F.relu_(self.fc1(x))
174
- embedding = F.dropout(x, p=0.5, training=self.training)
175
- clipwise_output = torch.sigmoid(self.fc_audioset(x))
176
-
177
- output_dict = {'clipwise_output': clipwise_output, 'embedding': embedding}
178
-
179
- return output_dict
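A minimal sketch of how the Cnn14 class defined above might be instantiated, assuming torchlibrosa is installed as imported at the top of the file; the hyperparameters are the commonly used AudioSet settings and are an assumption, not taken from this repository:

import torch

model = Cnn14(sample_rate=32000, window_size=1024, hop_size=320, mel_bins=64,
              fmin=50, fmax=14000, classes_num=527, out_emb=2048)
waveform = torch.randn(2, 32000)                 # a batch of two one-second clips
out = model(waveform)
print(out["embedding"].shape)                    # (2, 2048)
print(out["clipwise_output"].shape)              # (2, 527)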
spaces/AIGC-Audio/Make_An_Audio/ldm/modules/image_degradation/bsrgan.py DELETED
@@ -1,730 +0,0 @@
1
- # -*- coding: utf-8 -*-
2
- """
3
- # --------------------------------------------
4
- # Super-Resolution
5
- # --------------------------------------------
6
- #
7
- # Kai Zhang (cskaizhang@gmail.com)
8
- # https://github.com/cszn
9
- # From 2019/03--2021/08
10
- # --------------------------------------------
11
- """
12
-
13
- import numpy as np
14
- import cv2
15
- import torch
16
-
17
- from functools import partial
18
- import random
19
- from scipy import ndimage
20
- import scipy
21
- import scipy.stats as ss
22
- from scipy.interpolate import interp2d
23
- from scipy.linalg import orth
24
- import albumentations
25
-
26
- import ldm.modules.image_degradation.utils_image as util
27
-
28
-
29
- def modcrop_np(img, sf):
30
- '''
31
- Args:
32
- img: numpy image, WxH or WxHxC
33
- sf: scale factor
34
- Return:
35
- cropped image
36
- '''
37
- w, h = img.shape[:2]
38
- im = np.copy(img)
39
- return im[:w - w % sf, :h - h % sf, ...]
40
-
41
-
42
- """
43
- # --------------------------------------------
44
- # anisotropic Gaussian kernels
45
- # --------------------------------------------
46
- """
47
-
48
-
49
- def analytic_kernel(k):
50
- """Calculate the X4 kernel from the X2 kernel (for proof see appendix in paper)"""
51
- k_size = k.shape[0]
52
- # Calculate the big kernels size
53
- big_k = np.zeros((3 * k_size - 2, 3 * k_size - 2))
54
- # Loop over the small kernel to fill the big one
55
- for r in range(k_size):
56
- for c in range(k_size):
57
- big_k[2 * r:2 * r + k_size, 2 * c:2 * c + k_size] += k[r, c] * k
58
- # Crop the edges of the big kernel to ignore very small values and increase run time of SR
59
- crop = k_size // 2
60
- cropped_big_k = big_k[crop:-crop, crop:-crop]
61
- # Normalize to 1
62
- return cropped_big_k / cropped_big_k.sum()
63
-
64
-
65
- def anisotropic_Gaussian(ksize=15, theta=np.pi, l1=6, l2=6):
66
- """ generate an anisotropic Gaussian kernel
67
- Args:
68
- ksize : e.g., 15, kernel size
69
- theta : [0, pi], rotation angle range
70
- l1 : [0.1,50], scaling of eigenvalues
71
- l2 : [0.1,l1], scaling of eigenvalues
72
- If l1 = l2, will get an isotropic Gaussian kernel.
73
- Returns:
74
- k : kernel
75
- """
76
-
77
- v = np.dot(np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]), np.array([1., 0.]))
78
- V = np.array([[v[0], v[1]], [v[1], -v[0]]])
79
- D = np.array([[l1, 0], [0, l2]])
80
- Sigma = np.dot(np.dot(V, D), np.linalg.inv(V))
81
- k = gm_blur_kernel(mean=[0, 0], cov=Sigma, size=ksize)
82
-
83
- return k
84
-
85
-
86
- def gm_blur_kernel(mean, cov, size=15):
87
- center = size / 2.0 + 0.5
88
- k = np.zeros([size, size])
89
- for y in range(size):
90
- for x in range(size):
91
- cy = y - center + 1
92
- cx = x - center + 1
93
- k[y, x] = ss.multivariate_normal.pdf([cx, cy], mean=mean, cov=cov)
94
-
95
- k = k / np.sum(k)
96
- return k
97
-
98
-
99
- def shift_pixel(x, sf, upper_left=True):
100
- """shift pixel for super-resolution with different scale factors
101
- Args:
102
- x: WxHxC or WxH
103
- sf: scale factor
104
- upper_left: shift direction
105
- """
106
- h, w = x.shape[:2]
107
- shift = (sf - 1) * 0.5
108
- xv, yv = np.arange(0, w, 1.0), np.arange(0, h, 1.0)
109
- if upper_left:
110
- x1 = xv + shift
111
- y1 = yv + shift
112
- else:
113
- x1 = xv - shift
114
- y1 = yv - shift
115
-
116
- x1 = np.clip(x1, 0, w - 1)
117
- y1 = np.clip(y1, 0, h - 1)
118
-
119
- if x.ndim == 2:
120
- x = interp2d(xv, yv, x)(x1, y1)
121
- if x.ndim == 3:
122
- for i in range(x.shape[-1]):
123
- x[:, :, i] = interp2d(xv, yv, x[:, :, i])(x1, y1)
124
-
125
- return x
126
-
127
-
128
- def blur(x, k):
129
- '''
130
- x: image, NxcxHxW
131
- k: kernel, Nx1xhxw
132
- '''
133
- n, c = x.shape[:2]
134
- p1, p2 = (k.shape[-2] - 1) // 2, (k.shape[-1] - 1) // 2
135
- x = torch.nn.functional.pad(x, pad=(p1, p2, p1, p2), mode='replicate')
136
- k = k.repeat(1, c, 1, 1)
137
- k = k.view(-1, 1, k.shape[2], k.shape[3])
138
- x = x.view(1, -1, x.shape[2], x.shape[3])
139
- x = torch.nn.functional.conv2d(x, k, bias=None, stride=1, padding=0, groups=n * c)
140
- x = x.view(n, c, x.shape[2], x.shape[3])
141
-
142
- return x
143
-
144
-
145
- def gen_kernel(k_size=np.array([15, 15]), scale_factor=np.array([4, 4]), min_var=0.6, max_var=10., noise_level=0):
146
- """"
147
- # modified version of https://github.com/assafshocher/BlindSR_dataset_generator
148
- # Kai Zhang
149
- # min_var = 0.175 * sf # variance of the gaussian kernel will be sampled between min_var and max_var
150
- # max_var = 2.5 * sf
151
- """
152
- # Set random eigen-vals (lambdas) and angle (theta) for COV matrix
153
- lambda_1 = min_var + np.random.rand() * (max_var - min_var)
154
- lambda_2 = min_var + np.random.rand() * (max_var - min_var)
155
- theta = np.random.rand() * np.pi # random theta
156
- noise = -noise_level + np.random.rand(*k_size) * noise_level * 2
157
-
158
- # Set COV matrix using Lambdas and Theta
159
- LAMBDA = np.diag([lambda_1, lambda_2])
160
- Q = np.array([[np.cos(theta), -np.sin(theta)],
161
- [np.sin(theta), np.cos(theta)]])
162
- SIGMA = Q @ LAMBDA @ Q.T
163
- INV_SIGMA = np.linalg.inv(SIGMA)[None, None, :, :]
164
-
165
- # Set expectation position (shifting kernel for aligned image)
166
- MU = k_size // 2 - 0.5 * (scale_factor - 1) # - 0.5 * (scale_factor - k_size % 2)
167
- MU = MU[None, None, :, None]
168
-
169
- # Create meshgrid for Gaussian
170
- [X, Y] = np.meshgrid(range(k_size[0]), range(k_size[1]))
171
- Z = np.stack([X, Y], 2)[:, :, :, None]
172
-
173
- # Calculate Gaussian for every pixel of the kernel
174
- ZZ = Z - MU
175
- ZZ_t = ZZ.transpose(0, 1, 3, 2)
176
- raw_kernel = np.exp(-0.5 * np.squeeze(ZZ_t @ INV_SIGMA @ ZZ)) * (1 + noise)
177
-
178
- # shift the kernel so it will be centered
179
- # raw_kernel_centered = kernel_shift(raw_kernel, scale_factor)
180
-
181
- # Normalize the kernel and return
182
- # kernel = raw_kernel_centered / np.sum(raw_kernel_centered)
183
- kernel = raw_kernel / np.sum(raw_kernel)
184
- return kernel
185
-
186
-
187
- def fspecial_gaussian(hsize, sigma):
188
- hsize = [hsize, hsize]
189
- siz = [(hsize[0] - 1.0) / 2.0, (hsize[1] - 1.0) / 2.0]
190
- std = sigma
191
- [x, y] = np.meshgrid(np.arange(-siz[1], siz[1] + 1), np.arange(-siz[0], siz[0] + 1))
192
- arg = -(x * x + y * y) / (2 * std * std)
193
- h = np.exp(arg)
194
- h[h < scipy.finfo(float).eps * h.max()] = 0
195
- sumh = h.sum()
196
- if sumh != 0:
197
- h = h / sumh
198
- return h
199
-
200
-
201
- def fspecial_laplacian(alpha):
202
- alpha = max([0, min([alpha, 1])])
203
- h1 = alpha / (alpha + 1)
204
- h2 = (1 - alpha) / (alpha + 1)
205
- h = [[h1, h2, h1], [h2, -4 / (alpha + 1), h2], [h1, h2, h1]]
206
- h = np.array(h)
207
- return h
208
-
209
-
210
- def fspecial(filter_type, *args, **kwargs):
211
- '''
212
- python code from:
213
- https://github.com/ronaldosena/imagens-medicas-2/blob/40171a6c259edec7827a6693a93955de2bd39e76/Aulas/aula_2_-_uniform_filter/matlab_fspecial.py
214
- '''
215
- if filter_type == 'gaussian':
216
- return fspecial_gaussian(*args, **kwargs)
217
- if filter_type == 'laplacian':
218
- return fspecial_laplacian(*args, **kwargs)
219
-
220
-
221
- """
222
- # --------------------------------------------
223
- # degradation models
224
- # --------------------------------------------
225
- """
226
-
227
-
228
- def bicubic_degradation(x, sf=3):
229
- '''
230
- Args:
231
- x: HxWxC image, [0, 1]
232
- sf: down-scale factor
233
- Return:
234
- bicubicly downsampled LR image
235
- '''
236
- x = util.imresize_np(x, scale=1 / sf)
237
- return x
238
-
239
-
240
- def srmd_degradation(x, k, sf=3):
241
- ''' blur + bicubic downsampling
242
- Args:
243
- x: HxWxC image, [0, 1]
244
- k: hxw, double
245
- sf: down-scale factor
246
- Return:
247
- downsampled LR image
248
- Reference:
249
- @inproceedings{zhang2018learning,
250
- title={Learning a single convolutional super-resolution network for multiple degradations},
251
- author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei},
252
- booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
253
- pages={3262--3271},
254
- year={2018}
255
- }
256
- '''
257
- x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') # 'nearest' | 'mirror'
258
- x = bicubic_degradation(x, sf=sf)
259
- return x
260
-
261
-
262
- def dpsr_degradation(x, k, sf=3):
263
- ''' bicubic downsampling + blur
264
- Args:
265
- x: HxWxC image, [0, 1]
266
- k: hxw, double
267
- sf: down-scale factor
268
- Return:
269
- downsampled LR image
270
- Reference:
271
- @inproceedings{zhang2019deep,
272
- title={Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels},
273
- author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei},
274
- booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
275
- pages={1671--1681},
276
- year={2019}
277
- }
278
- '''
279
- x = bicubic_degradation(x, sf=sf)
280
- x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap')
281
- return x
282
-
283
-
284
- def classical_degradation(x, k, sf=3):
285
- ''' blur + downsampling
286
- Args:
287
- x: HxWxC image, [0, 1]/[0, 255]
288
- k: hxw, double
289
- sf: down-scale factor
290
- Return:
291
- downsampled LR image
292
- '''
293
- x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap')
294
- # x = filters.correlate(x, np.expand_dims(np.flip(k), axis=2))
295
- st = 0
296
- return x[st::sf, st::sf, ...]
297
-
298
-
299
- def add_sharpening(img, weight=0.5, radius=50, threshold=10):
300
- """USM sharpening. borrowed from real-ESRGAN
301
- Input image: I; Blurry image: B.
302
- 1. K = I + weight * (I - B)
303
- 2. Mask = 1 if abs(I - B) > threshold, else: 0
304
- 3. Blur mask:
305
- 4. Out = Mask * K + (1 - Mask) * I
306
- Args:
307
- img (Numpy array): Input image, HWC, BGR; float32, [0, 1].
308
- weight (float): Sharp weight. Default: 1.
309
- radius (float): Kernel size of Gaussian blur. Default: 50.
310
- threshold (int):
311
- """
312
- if radius % 2 == 0:
313
- radius += 1
314
- blur = cv2.GaussianBlur(img, (radius, radius), 0)
315
- residual = img - blur
316
- mask = np.abs(residual) * 255 > threshold
317
- mask = mask.astype('float32')
318
- soft_mask = cv2.GaussianBlur(mask, (radius, radius), 0)
319
-
320
- K = img + weight * residual
321
- K = np.clip(K, 0, 1)
322
- return soft_mask * K + (1 - soft_mask) * img
323
-
324
-
325
- def add_blur(img, sf=4):
326
- wd2 = 4.0 + sf
327
- wd = 2.0 + 0.2 * sf
328
- if random.random() < 0.5:
329
- l1 = wd2 * random.random()
330
- l2 = wd2 * random.random()
331
- k = anisotropic_Gaussian(ksize=2 * random.randint(2, 11) + 3, theta=random.random() * np.pi, l1=l1, l2=l2)
332
- else:
333
- k = fspecial('gaussian', 2 * random.randint(2, 11) + 3, wd * random.random())
334
- img = ndimage.filters.convolve(img, np.expand_dims(k, axis=2), mode='mirror')
335
-
336
- return img
337
-
338
-
339
- def add_resize(img, sf=4):
340
- rnum = np.random.rand()
341
- if rnum > 0.8: # up
342
- sf1 = random.uniform(1, 2)
343
- elif rnum < 0.7: # down
344
- sf1 = random.uniform(0.5 / sf, 1)
345
- else:
346
- sf1 = 1.0
347
- img = cv2.resize(img, (int(sf1 * img.shape[1]), int(sf1 * img.shape[0])), interpolation=random.choice([1, 2, 3]))
348
- img = np.clip(img, 0.0, 1.0)
349
-
350
- return img
351
-
352
-
353
- # def add_Gaussian_noise(img, noise_level1=2, noise_level2=25):
354
- # noise_level = random.randint(noise_level1, noise_level2)
355
- # rnum = np.random.rand()
356
- # if rnum > 0.6: # add color Gaussian noise
357
- # img += np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)
358
- # elif rnum < 0.4: # add grayscale Gaussian noise
359
- # img += np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)
360
- # else: # add noise
361
- # L = noise_level2 / 255.
362
- # D = np.diag(np.random.rand(3))
363
- # U = orth(np.random.rand(3, 3))
364
- # conv = np.dot(np.dot(np.transpose(U), D), U)
365
- # img += np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)
366
- # img = np.clip(img, 0.0, 1.0)
367
- # return img
368
-
369
- def add_Gaussian_noise(img, noise_level1=2, noise_level2=25):
370
- noise_level = random.randint(noise_level1, noise_level2)
371
- rnum = np.random.rand()
372
- if rnum > 0.6: # add color Gaussian noise
373
- img = img + np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)
374
- elif rnum < 0.4: # add grayscale Gaussian noise
375
- img = img + np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)
376
- else: # add noise
377
- L = noise_level2 / 255.
378
- D = np.diag(np.random.rand(3))
379
- U = orth(np.random.rand(3, 3))
380
- conv = np.dot(np.dot(np.transpose(U), D), U)
381
- img = img + np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)
382
- img = np.clip(img, 0.0, 1.0)
383
- return img
384
-
385
-
386
- def add_speckle_noise(img, noise_level1=2, noise_level2=25):
387
- noise_level = random.randint(noise_level1, noise_level2)
388
- img = np.clip(img, 0.0, 1.0)
389
- rnum = random.random()
390
- if rnum > 0.6:
391
- img += img * np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)
392
- elif rnum < 0.4:
393
- img += img * np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)
394
- else:
395
- L = noise_level2 / 255.
396
- D = np.diag(np.random.rand(3))
397
- U = orth(np.random.rand(3, 3))
398
- conv = np.dot(np.dot(np.transpose(U), D), U)
399
- img += img * np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)
400
- img = np.clip(img, 0.0, 1.0)
401
- return img
402
-
403
-
404
- def add_Poisson_noise(img):
405
- img = np.clip((img * 255.0).round(), 0, 255) / 255.
406
- vals = 10 ** (2 * random.random() + 2.0) # [2, 4]
407
- if random.random() < 0.5:
408
- img = np.random.poisson(img * vals).astype(np.float32) / vals
409
- else:
410
- img_gray = np.dot(img[..., :3], [0.299, 0.587, 0.114])
411
- img_gray = np.clip((img_gray * 255.0).round(), 0, 255) / 255.
412
- noise_gray = np.random.poisson(img_gray * vals).astype(np.float32) / vals - img_gray
413
- img += noise_gray[:, :, np.newaxis]
414
- img = np.clip(img, 0.0, 1.0)
415
- return img
416
-
417
-
418
- def add_JPEG_noise(img):
419
- quality_factor = random.randint(30, 95)
420
- img = cv2.cvtColor(util.single2uint(img), cv2.COLOR_RGB2BGR)
421
- result, encimg = cv2.imencode('.jpg', img, [int(cv2.IMWRITE_JPEG_QUALITY), quality_factor])
422
- img = cv2.imdecode(encimg, 1)
423
- img = cv2.cvtColor(util.uint2single(img), cv2.COLOR_BGR2RGB)
424
- return img
425
-
426
-
427
- def random_crop(lq, hq, sf=4, lq_patchsize=64):
428
- h, w = lq.shape[:2]
429
- rnd_h = random.randint(0, h - lq_patchsize)
430
- rnd_w = random.randint(0, w - lq_patchsize)
431
- lq = lq[rnd_h:rnd_h + lq_patchsize, rnd_w:rnd_w + lq_patchsize, :]
432
-
433
- rnd_h_H, rnd_w_H = int(rnd_h * sf), int(rnd_w * sf)
434
- hq = hq[rnd_h_H:rnd_h_H + lq_patchsize * sf, rnd_w_H:rnd_w_H + lq_patchsize * sf, :]
435
- return lq, hq
436
-
437
-
438
- def degradation_bsrgan(img, sf=4, lq_patchsize=72, isp_model=None):
439
- """
440
- This is the degradation model of BSRGAN from the paper
441
- "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution"
442
- ----------
443
- img: HXWXC, [0, 1], its size should be large than (lq_patchsizexsf)x(lq_patchsizexsf)
444
- sf: scale factor
445
- isp_model: camera ISP model
446
- Returns
447
- -------
448
- img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1]
449
- hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1]
450
- """
451
- isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25
452
- sf_ori = sf
453
-
454
- h1, w1 = img.shape[:2]
455
- img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop
456
- h, w = img.shape[:2]
457
-
458
- if h < lq_patchsize * sf or w < lq_patchsize * sf:
459
- raise ValueError(f'img size ({h1}X{w1}) is too small!')
460
-
461
- hq = img.copy()
462
-
463
- if sf == 4 and random.random() < scale2_prob: # downsample1
464
- if np.random.rand() < 0.5:
465
- img = cv2.resize(img, (int(1 / 2 * img.shape[1]), int(1 / 2 * img.shape[0])),
466
- interpolation=random.choice([1, 2, 3]))
467
- else:
468
- img = util.imresize_np(img, 1 / 2, True)
469
- img = np.clip(img, 0.0, 1.0)
470
- sf = 2
471
-
472
- shuffle_order = random.sample(range(7), 7)
473
- idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3)
474
- if idx1 > idx2: # keep downsample3 last
475
- shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1]
476
-
477
- for i in shuffle_order:
478
-
479
- if i == 0:
480
- img = add_blur(img, sf=sf)
481
-
482
- elif i == 1:
483
- img = add_blur(img, sf=sf)
484
-
485
- elif i == 2:
486
- a, b = img.shape[1], img.shape[0]
487
- # downsample2
488
- if random.random() < 0.75:
489
- sf1 = random.uniform(1, 2 * sf)
490
- img = cv2.resize(img, (int(1 / sf1 * img.shape[1]), int(1 / sf1 * img.shape[0])),
491
- interpolation=random.choice([1, 2, 3]))
492
- else:
493
- k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf))
494
- k_shifted = shift_pixel(k, sf)
495
- k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel
496
- img = ndimage.filters.convolve(img, np.expand_dims(k_shifted, axis=2), mode='mirror')
497
- img = img[0::sf, 0::sf, ...] # nearest downsampling
498
- img = np.clip(img, 0.0, 1.0)
499
-
500
- elif i == 3:
501
- # downsample3
502
- img = cv2.resize(img, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3]))
503
- img = np.clip(img, 0.0, 1.0)
504
-
505
- elif i == 4:
506
- # add Gaussian noise
507
- img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25)
508
-
509
- elif i == 5:
510
- # add JPEG noise
511
- if random.random() < jpeg_prob:
512
- img = add_JPEG_noise(img)
513
-
514
- elif i == 6:
515
- # add processed camera sensor noise
516
- if random.random() < isp_prob and isp_model is not None:
517
- with torch.no_grad():
518
- img, hq = isp_model.forward(img.copy(), hq)
519
-
520
- # add final JPEG compression noise
521
- img = add_JPEG_noise(img)
522
-
523
- # random crop
524
- img, hq = random_crop(img, hq, sf_ori, lq_patchsize)
525
-
526
- return img, hq
527
-
528
-
529
- # todo no isp_model?
530
- def degradation_bsrgan_variant(image, sf=4, isp_model=None):
531
- """
532
- This is the degradation model of BSRGAN from the paper
533
- "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution"
534
- ----------
535
- sf: scale factor
536
- isp_model: camera ISP model
537
- Returns
538
- -------
539
- example: a dict {"image": image}, where image is the degraded low-quality image (uint8, downscaled by the scale factor)
540
- (unlike degradation_bsrgan, this variant does not return the corresponding high-quality patch)
541
- """
542
- image = util.uint2single(image)
543
- isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25
544
- sf_ori = sf
545
-
546
- h1, w1 = image.shape[:2]
547
- image = image.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop
548
- h, w = image.shape[:2]
549
-
550
- hq = image.copy()
551
-
552
- if sf == 4 and random.random() < scale2_prob: # downsample1
553
- if np.random.rand() < 0.5:
554
- image = cv2.resize(image, (int(1 / 2 * image.shape[1]), int(1 / 2 * image.shape[0])),
555
- interpolation=random.choice([1, 2, 3]))
556
- else:
557
- image = util.imresize_np(image, 1 / 2, True)
558
- image = np.clip(image, 0.0, 1.0)
559
- sf = 2
560
-
561
- shuffle_order = random.sample(range(7), 7)
562
- idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3)
563
- if idx1 > idx2: # keep downsample3 last
564
- shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1]
565
-
566
- for i in shuffle_order:
567
-
568
- if i == 0:
569
- image = add_blur(image, sf=sf)
570
-
571
- elif i == 1:
572
- image = add_blur(image, sf=sf)
573
-
574
- elif i == 2:
575
- a, b = image.shape[1], image.shape[0]
576
- # downsample2
577
- if random.random() < 0.75:
578
- sf1 = random.uniform(1, 2 * sf)
579
- image = cv2.resize(image, (int(1 / sf1 * image.shape[1]), int(1 / sf1 * image.shape[0])),
580
- interpolation=random.choice([1, 2, 3]))
581
- else:
582
- k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf))
583
- k_shifted = shift_pixel(k, sf)
584
- k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel
585
- image = ndimage.filters.convolve(image, np.expand_dims(k_shifted, axis=2), mode='mirror')
586
- image = image[0::sf, 0::sf, ...] # nearest downsampling
587
- image = np.clip(image, 0.0, 1.0)
588
-
589
- elif i == 3:
590
- # downsample3
591
- image = cv2.resize(image, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3]))
592
- image = np.clip(image, 0.0, 1.0)
593
-
594
- elif i == 4:
595
- # add Gaussian noise
596
- image = add_Gaussian_noise(image, noise_level1=2, noise_level2=25)
597
-
598
- elif i == 5:
599
- # add JPEG noise
600
- if random.random() < jpeg_prob:
601
- image = add_JPEG_noise(image)
602
-
603
- # elif i == 6:
604
- # # add processed camera sensor noise
605
- # if random.random() < isp_prob and isp_model is not None:
606
- # with torch.no_grad():
607
- # img, hq = isp_model.forward(img.copy(), hq)
608
-
609
- # add final JPEG compression noise
610
- image = add_JPEG_noise(image)
611
- image = util.single2uint(image)
612
- example = {"image":image}
613
- return example
614
-
615
-
616
- # TODO: in case there is a pickle error, one needs to replace a += x with a = a + x in add_speckle_noise etc.
617
- def degradation_bsrgan_plus(img, sf=4, shuffle_prob=0.5, use_sharp=True, lq_patchsize=64, isp_model=None):
618
- """
619
- This is an extended degradation model by combining
620
- the degradation models of BSRGAN and Real-ESRGAN
621
- ----------
622
- img: HXWXC, [0, 1], its size should be larger than (lq_patchsizexsf)x(lq_patchsizexsf)
623
- sf: scale factor
624
- shuffle_prob: the probability of shuffling the degradation order
625
- use_sharp: whether to sharpen the img before degradation
626
- Returns
627
- -------
628
- img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1]
629
- hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1]
630
- """
631
-
632
- h1, w1 = img.shape[:2]
633
- img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop
634
- h, w = img.shape[:2]
635
-
636
- if h < lq_patchsize * sf or w < lq_patchsize * sf:
637
- raise ValueError(f'img size ({h1}X{w1}) is too small!')
638
-
639
- if use_sharp:
640
- img = add_sharpening(img)
641
- hq = img.copy()
642
-
643
- if random.random() < shuffle_prob:
644
- shuffle_order = random.sample(range(13), 13)
645
- else:
646
- shuffle_order = list(range(13))
647
- # local shuffle for noise, JPEG is always the last one
648
- shuffle_order[2:6] = random.sample(shuffle_order[2:6], len(range(2, 6)))
649
- shuffle_order[9:13] = random.sample(shuffle_order[9:13], len(range(9, 13)))
650
-
651
- poisson_prob, speckle_prob, isp_prob = 0.1, 0.1, 0.1
652
-
653
- for i in shuffle_order:
654
- if i == 0:
655
- img = add_blur(img, sf=sf)
656
- elif i == 1:
657
- img = add_resize(img, sf=sf)
658
- elif i == 2:
659
- img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25)
660
- elif i == 3:
661
- if random.random() < poisson_prob:
662
- img = add_Poisson_noise(img)
663
- elif i == 4:
664
- if random.random() < speckle_prob:
665
- img = add_speckle_noise(img)
666
- elif i == 5:
667
- if random.random() < isp_prob and isp_model is not None:
668
- with torch.no_grad():
669
- img, hq = isp_model.forward(img.copy(), hq)
670
- elif i == 6:
671
- img = add_JPEG_noise(img)
672
- elif i == 7:
673
- img = add_blur(img, sf=sf)
674
- elif i == 8:
675
- img = add_resize(img, sf=sf)
676
- elif i == 9:
677
- img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25)
678
- elif i == 10:
679
- if random.random() < poisson_prob:
680
- img = add_Poisson_noise(img)
681
- elif i == 11:
682
- if random.random() < speckle_prob:
683
- img = add_speckle_noise(img)
684
- elif i == 12:
685
- if random.random() < isp_prob and isp_model is not None:
686
- with torch.no_grad():
687
- img, hq = isp_model.forward(img.copy(), hq)
688
- else:
689
- print('check the shuffle!')
690
-
691
- # resize to desired size
692
- img = cv2.resize(img, (int(1 / sf * hq.shape[1]), int(1 / sf * hq.shape[0])),
693
- interpolation=random.choice([1, 2, 3]))
694
-
695
- # add final JPEG compression noise
696
- img = add_JPEG_noise(img)
697
-
698
- # random crop
699
- img, hq = random_crop(img, hq, sf, lq_patchsize)
700
-
701
- return img, hq
702
-
703
-
704
- if __name__ == '__main__':
705
- print("hey")
706
- img = util.imread_uint('utils/test.png', 3)
707
- print(img)
708
- img = util.uint2single(img)
709
- print(img)
710
- img = img[:448, :448]
711
- h = img.shape[0] // 4
712
- print("resizing to", h)
713
- sf = 4
714
- deg_fn = partial(degradation_bsrgan_variant, sf=sf)
715
- for i in range(20):
716
- print(i)
717
- img_lq = deg_fn(img)
718
- print(img_lq)
719
- img_lq_bicubic = albumentations.SmallestMaxSize(max_size=h, interpolation=cv2.INTER_CUBIC)(image=img)["image"]
720
- print(img_lq.shape)
721
- print("bicubic", img_lq_bicubic.shape)
722
- print(img_hq.shape)
723
- lq_nearest = cv2.resize(util.single2uint(img_lq), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])),
724
- interpolation=0)
725
- lq_bicubic_nearest = cv2.resize(util.single2uint(img_lq_bicubic), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])),
726
- interpolation=0)
727
- img_concat = np.concatenate([lq_bicubic_nearest, lq_nearest, util.single2uint(img_hq)], axis=1)
728
- util.imsave(img_concat, str(i) + '.png')
729
-
730
-
spaces/AIZero2HeroBootcamp/3DHuman/README.md DELETED
@@ -1,13 +0,0 @@
1
- ---
2
- title: 3DHuman
3
- emoji: 🐠
4
- colorFrom: purple
5
- colorTo: purple
6
- sdk: gradio
7
- sdk_version: 3.39.0
8
- app_file: app.py
9
- pinned: false
10
- license: mit
11
- ---
12
-
13
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet50_cifar_mixup.py DELETED
@@ -1,17 +0,0 @@
1
- # model settings
2
- model = dict(
3
- type='ImageClassifier',
4
- backbone=dict(
5
- type='ResNet_CIFAR',
6
- depth=50,
7
- num_stages=4,
8
- out_indices=(3, ),
9
- style='pytorch'),
10
- neck=dict(type='GlobalAveragePooling'),
11
- head=dict(
12
- type='MultiLabelLinearClsHead',
13
- num_classes=10,
14
- in_channels=2048,
15
- loss=dict(type='CrossEntropyLoss', loss_weight=1.0, use_soft=True)),
16
- train_cfg=dict(augments=dict(type='Mixup', alpha=1.)),
17
- )
spaces/Aanisha/Image_to_story/app.py DELETED
@@ -1,70 +0,0 @@
1
- from PIL import Image
2
- from transformers import VisionEncoderDecoderModel,ViTFeatureExtractor,PreTrainedTokenizerFast,GPT2Tokenizer,AutoModelForCausalLM,AutoTokenizer
3
- import requests
4
- import gradio as gr
5
- import torch
6
- from transformers import pipeline
7
- import re
8
-
9
-
10
-
11
- description = "Just upload an image, and generate a short story for the image.\n PS: GPT-2 is not perfect but it's fun to play with. May take a minute for the output to generate. Enjoy!!!"
12
- title = "Story generator from images using ViT and GPT2"
13
-
14
-
15
- model = VisionEncoderDecoderModel.from_pretrained("gagan3012/ViTGPT2_vizwiz").to('cpu')
16
- vit_feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
17
- tokenizer = PreTrainedTokenizerFast.from_pretrained("distilgpt2")
18
- story_gpt = AutoModelForCausalLM.from_pretrained("pranavpsv/gpt2-genre-story-generator")
19
- st_tokenizer = AutoTokenizer.from_pretrained("pranavpsv/gpt2-genre-story-generator")
20
-
21
- inputs = [
22
- gr.inputs.Image(type="pil", label="Original Image")
23
- ]
24
-
25
- outputs = [
26
- gr.outputs.Textbox(label = 'Story')
27
- ]
28
-
29
- examples = [['img_1.jpg'],['img_2.jpg']]
30
-
31
- def get_output_senten(img):
32
- pixel_values = vit_feature_extractor(images=img, return_tensors="pt").pixel_values.to('cpu')
33
- encoder_outputs = model.generate(pixel_values.to('cpu'),num_beams=7)
34
- generated_sentences = tokenizer.batch_decode(encoder_outputs)
35
- senten = generated_sentences[0][generated_sentences[0][2:].index('>')+1:]
36
-
37
- senten = senten.replace('>','')
38
- senten = senten.replace('|','')
39
- res = senten.split('.')[0][0:75]
40
- res = res[0:res.rindex(' ')]
41
-
42
- print(res)
43
-
44
- tokenized_text=st_tokenizer.encode(res)
45
- input_ids=torch.tensor(tokenized_text).view(-1,len(tokenized_text))
46
- outputs=story_gpt.generate(input_ids,max_length=100,num_beams=5,no_repeat_ngram_size=2,early_stopping=True)
47
-
48
- generated_story = st_tokenizer.batch_decode(outputs)
49
-
50
- print(len(generated_story))
51
- ans = generated_story[0]
52
-
53
-
54
-
55
- ans = str(ans)
56
- ind = ans.rindex('.')
57
- ans = ans[0:ind+1]
58
- return ans
59
-
60
-
61
-
62
- gr.Interface(
63
- get_output_senten,
64
- inputs,
65
- outputs,
66
- examples = examples,
67
- title=title,
68
- description=description,
69
- theme="huggingface",
70
- ).launch(enable_queue=True)
spaces/Ababababababbababa/Ashaar/poetry_diacritizer/util/constants.py DELETED
@@ -1,34 +0,0 @@
1
- """
2
- Constants that are used by the model
3
- """
4
- HARAQAT = ["ْ", "ّ", "ٌ", "ٍ", "ِ", "ً", "َ", "ُ"]
5
- ARAB_CHARS = "ىعظحرسيشضق ثلصطكآماإهزءأفؤغجئدةخوبذتن"
6
- PUNCTUATIONS = [".", "،", ":", "؛", "-", "؟"]
7
- VALID_ARABIC = HARAQAT + list(ARAB_CHARS)
8
- BASIC_HARAQAT = {
9
- "َ": "Fatha ",
10
- "ً": "Fathatah ",
11
- "ُ": "Damma ",
12
- "ٌ": "Dammatan ",
13
- "ِ": "Kasra ",
14
- "ٍ": "Kasratan ",
15
- "ْ": "Sukun ",
16
- "ّ": "Shaddah ",
17
- }
18
- ALL_POSSIBLE_HARAQAT = {
19
- "": "No Diacritic ",
20
- "َ": "Fatha ",
21
- "ً": "Fathatah ",
22
- "ُ": "Damma ",
23
- "ٌ": "Dammatan ",
24
- "ِ": "Kasra ",
25
- "ٍ": "Kasratan ",
26
- "ْ": "Sukun ",
27
- "ّ": "Shaddah ",
28
- "َّ": "Shaddah + Fatha ",
29
- "ًّ": "Shaddah + Fathatah ",
30
- "ُّ": "Shaddah + Damma ",
31
- "ٌّ": "Shaddah + Dammatan ",
32
- "ِّ": "Shaddah + Kasra ",
33
- "ٍّ": "Shaddah + Kasratan ",
34
- }
spaces/AchyuthGamer/OpenGPT/client/css/conversation.css DELETED
@@ -1,158 +0,0 @@
1
- .conversation {
2
- width: 60%;
3
- margin: 0px 16px;
4
- display: flex;
5
- flex-direction: column;
6
- }
7
-
8
- .conversation #messages {
9
- width: 100%;
10
- display: flex;
11
- flex-direction: column;
12
- overflow: auto;
13
- overflow-wrap: break-word;
14
- padding-bottom: 8px;
15
- }
16
-
17
- .conversation .user-input {
18
- max-height: 180px;
19
- margin: 16px 0px;
20
- }
21
-
22
- .conversation .user-input input {
23
- font-size: 1rem;
24
- background: none;
25
- border: none;
26
- outline: none;
27
- color: var(--colour-3);
28
- }
29
-
30
- .conversation .user-input input::placeholder {
31
- color: var(--user-input);
32
- }
33
-
34
- .conversation-title {
35
- color: var(--colour-3);
36
- font-size: 14px;
37
- }
38
-
39
- .conversation .user-input textarea {
40
- font-size: 1rem;
41
- width: 100%;
42
- height: 100%;
43
- padding: 12px;
44
- background: none;
45
- border: none;
46
- outline: none;
47
- color: var(--colour-3);
48
- resize: vertical;
49
- max-height: 150px;
50
- min-height: 80px;
51
- }
52
-
53
- .box {
54
- backdrop-filter: blur(20px);
55
- -webkit-backdrop-filter: blur(20px);
56
- background-color: var(--blur-bg);
57
- height: 100%;
58
- width: 100%;
59
- border-radius: var(--border-radius-1);
60
- border: 1px solid var(--blur-border);
61
- }
62
-
63
- .box.input-box {
64
- position: relative;
65
- align-items: center;
66
- padding: 8px;
67
- cursor: pointer;
68
- }
69
-
70
- #send-button {
71
- position: absolute;
72
- bottom: 25%;
73
- right: 10px;
74
- z-index: 1;
75
- padding: 16px;
76
- }
77
-
78
- #cursor {
79
- line-height: 17px;
80
- margin-left: 3px;
81
- -webkit-animation: blink 0.8s infinite;
82
- animation: blink 0.8s infinite;
83
- width: 7px;
84
- height: 15px;
85
- }
86
-
87
- @keyframes blink {
88
- 0% {
89
- background: #ffffff00;
90
- }
91
-
92
- 50% {
93
- background: white;
94
- }
95
-
96
- 100% {
97
- background: #ffffff00;
98
- }
99
- }
100
-
101
- @-webkit-keyframes blink {
102
- 0% {
103
- background: #ffffff00;
104
- }
105
-
106
- 50% {
107
- background: white;
108
- }
109
-
110
- 100% {
111
- background: #ffffff00;
112
- }
113
- }
114
-
115
- /* scrollbar */
116
- .conversation #messages::-webkit-scrollbar {
117
- width: 4px;
118
- padding: 8px 0px;
119
- }
120
-
121
- .conversation #messages::-webkit-scrollbar-track {
122
- background-color: #ffffff00;
123
- }
124
-
125
- .conversation #messages::-webkit-scrollbar-thumb {
126
- background-color: #555555;
127
- border-radius: 10px;
128
- }
129
-
130
- @media screen and (max-width: 990px) {
131
- .conversation {
132
- width: 100%;
133
- height: 90%;
134
- }
135
- }
136
-
137
- @media screen and (max-height: 720px) {
138
- .conversation.box {
139
- height: 70%;
140
- }
141
-
142
- .conversation .user-input textarea {
143
- font-size: 0.875rem;
144
- }
145
- }
146
-
147
- @media screen and (max-width: 360px) {
148
- .box {
149
- border-radius: 0;
150
- }
151
- .conversation {
152
- margin: 0;
153
- margin-top: 48px;
154
- }
155
- .conversation .user-input {
156
- margin: 2px 0 8px 0;
157
- }
158
- }
spaces/AgentVerse/agentVerse/agentverse/initialization.py DELETED
@@ -1,120 +0,0 @@
1
- from __future__ import annotations
2
-
3
- import os
4
- from typing import Dict, List, TYPE_CHECKING
5
-
6
- import yaml
7
-
8
- try:
9
- from bmtools.agent.singletool import import_all_apis, load_single_tools
10
- except ImportError:
11
- print(
12
- "BMTools is not installed, tools in the simulation environment cannot be used. To install BMTools, please follow the instruction in the README.md file."
13
- )
14
-
15
- from agentverse.llms import llm_registry
16
-
17
- from agentverse.agents import agent_registry
18
- from agentverse.environments import BaseEnvironment, env_registry
19
- from agentverse.memory import memory_registry
20
- from agentverse.memory_manipulator import memory_manipulator_registry
21
-
22
- from agentverse.output_parser import output_parser_registry
23
-
24
- if TYPE_CHECKING:
25
- from agentverse.agents import BaseAgent
26
-
27
-
28
- def load_llm(llm_config: Dict):
29
- llm_type = llm_config.pop("llm_type", "text-davinci-003")
30
-
31
- return llm_registry.build(llm_type, **llm_config)
32
-
33
-
34
- def load_memory(memory_config: Dict):
35
- memory_type = memory_config.pop("memory_type", "chat_history")
36
- return memory_registry.build(memory_type, **memory_config)
37
-
38
-
39
- def load_memory_manipulator(memory_manipulator_config: Dict):
40
- memory_manipulator_type = memory_manipulator_config.pop(
41
- "memory_manipulator_type", "basic"
42
- )
43
- return memory_manipulator_registry.build(
44
- memory_manipulator_type, **memory_manipulator_config
45
- )
46
-
47
-
48
- def load_tools(tool_config: List[Dict]):
49
- if len(tool_config) == 0:
50
- return []
51
- all_tools_list = []
52
- for tool in tool_config:
53
- _, config = load_single_tools(tool["tool_name"], tool["tool_url"])
54
- all_tools_list += import_all_apis(config)
55
- return all_tools_list
56
-
57
-
58
- def load_environment(env_config: Dict) -> BaseEnvironment:
59
- env_type = env_config.pop("env_type", "basic")
60
- return env_registry.build(env_type, **env_config)
61
-
62
-
63
- def load_agent(agent_config: Dict) -> BaseAgent:
64
- agent_type = agent_config.pop("agent_type", "conversation")
65
- agent = agent_registry.build(agent_type, **agent_config)
66
- return agent
67
-
68
-
69
- def prepare_task_config(task, tasks_dir):
70
- """Read the yaml config of the given task in `tasks` directory."""
71
- all_task_dir = tasks_dir
72
- task_path = os.path.join(all_task_dir, task)
73
- config_path = os.path.join(task_path, "config.yaml")
74
- if not os.path.exists(task_path):
75
- all_tasks = []
76
- for task in os.listdir(all_task_dir):
77
- if (
78
- os.path.isdir(os.path.join(all_task_dir, task))
79
- and task != "__pycache__"
80
- ):
81
- all_tasks.append(task)
82
- for subtask in os.listdir(os.path.join(all_task_dir, task)):
83
- if (
84
- os.path.isdir(os.path.join(all_task_dir, task, subtask))
85
- and subtask != "__pycache__"
86
- ):
87
- all_tasks.append(f"{task}/{subtask}")
88
- raise ValueError(f"Task {task} not found. Available tasks: {all_tasks}")
89
- if not os.path.exists(config_path):
90
- raise ValueError(
91
- "You should include the config.yaml file in the task directory"
92
- )
93
- task_config = yaml.safe_load(open(config_path))
94
-
95
- for i, agent_configs in enumerate(task_config["agents"]):
96
- agent_configs["memory"] = load_memory(agent_configs.get("memory", {}))
97
- if agent_configs.get("tool_memory", None) is not None:
98
- agent_configs["tool_memory"] = load_memory(agent_configs["tool_memory"])
99
- llm = load_llm(agent_configs.get("llm", "text-davinci-003"))
100
- agent_configs["llm"] = llm
101
-
102
- memory_manipulator = load_memory_manipulator(
103
- agent_configs.get("memory_manipulator", {})
104
- )
105
- agent_configs["memory_manipulator"] = memory_manipulator
106
-
107
- agent_configs["tools"] = load_tools(agent_configs.get("tools", []))
108
-
109
- # Build the output parser
110
- output_parser_config = agent_configs.get("output_parser", {"type": "dummy"})
111
- if output_parser_config.get("type", None) == "role_assigner":
112
- output_parser_config["cnt_critic_agents"] = task_config.get(
113
- "cnt_critic_agents", 0
114
- )
115
- output_parser_name = output_parser_config.pop("type", task)
116
- agent_configs["output_parser"] = output_parser_registry.build(
117
- output_parser_name, **output_parser_config
118
- )
119
-
120
- return task_config
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/runcommands.d.ts DELETED
@@ -1,2 +0,0 @@
1
- import RunCommands from './logic/runcommands/RunCommands';
2
- export default RunCommands;
spaces/Aki004/herta-so-vits/inference/infer_tool.py DELETED
@@ -1,354 +0,0 @@
1
- import hashlib
2
- import io
3
- import json
4
- import logging
5
- import os
6
- import time
7
- from pathlib import Path
8
- from inference import slicer
9
- import gc
10
-
11
- import librosa
12
- import numpy as np
13
- # import onnxruntime
14
- import parselmouth
15
- import soundfile
16
- import torch
17
- import torchaudio
18
-
19
- import cluster
20
- from hubert import hubert_model
21
- import utils
22
- from models import SynthesizerTrn
23
-
24
- logging.getLogger('matplotlib').setLevel(logging.WARNING)
25
-
26
-
27
- def read_temp(file_name):
28
- if not os.path.exists(file_name):
29
- with open(file_name, "w") as f:
30
- f.write(json.dumps({"info": "temp_dict"}))
31
- return {}
32
- else:
33
- try:
34
- with open(file_name, "r") as f:
35
- data = f.read()
36
- data_dict = json.loads(data)
37
- if os.path.getsize(file_name) > 50 * 1024 * 1024:
38
- f_name = file_name.replace("\\", "/").split("/")[-1]
39
- print(f"clean {f_name}")
40
- for wav_hash in list(data_dict.keys()):
41
- if int(time.time()) - int(data_dict[wav_hash]["time"]) > 14 * 24 * 3600:
42
- del data_dict[wav_hash]
43
- except Exception as e:
44
- print(e)
45
- print(f"{file_name} error,auto rebuild file")
46
- data_dict = {"info": "temp_dict"}
47
- return data_dict
48
-
49
-
50
- def write_temp(file_name, data):
51
- with open(file_name, "w") as f:
52
- f.write(json.dumps(data))
53
-
54
-
55
- def timeit(func):
56
- def run(*args, **kwargs):
57
- t = time.time()
58
- res = func(*args, **kwargs)
59
- print('executing \'%s\' cost %.3fs' % (func.__name__, time.time() - t))
60
- return res
61
-
62
- return run
63
-
64
-
65
- def format_wav(audio_path):
66
- if Path(audio_path).suffix == '.wav':
67
- return
68
- raw_audio, raw_sample_rate = librosa.load(audio_path, mono=True, sr=None)
69
- soundfile.write(Path(audio_path).with_suffix(".wav"), raw_audio, raw_sample_rate)
70
-
71
-
72
- def get_end_file(dir_path, end):
73
- file_lists = []
74
- for root, dirs, files in os.walk(dir_path):
75
- files = [f for f in files if f[0] != '.']
76
- dirs[:] = [d for d in dirs if d[0] != '.']
77
- for f_file in files:
78
- if f_file.endswith(end):
79
- file_lists.append(os.path.join(root, f_file).replace("\\", "/"))
80
- return file_lists
81
-
82
-
83
- def get_md5(content):
84
- return hashlib.new("md5", content).hexdigest()
85
-
86
- def fill_a_to_b(a, b):
87
- if len(a) < len(b):
88
- for _ in range(0, len(b) - len(a)):
89
- a.append(a[0])
90
-
91
- def mkdir(paths: list):
92
- for path in paths:
93
- if not os.path.exists(path):
94
- os.mkdir(path)
95
-
96
- def pad_array(arr, target_length):
97
- current_length = arr.shape[0]
98
- if current_length >= target_length:
99
- return arr
100
- else:
101
- pad_width = target_length - current_length
102
- pad_left = pad_width // 2
103
- pad_right = pad_width - pad_left
104
- padded_arr = np.pad(arr, (pad_left, pad_right), 'constant', constant_values=(0, 0))
105
- return padded_arr
106
-
107
- def split_list_by_n(list_collection, n, pre=0):
108
- for i in range(0, len(list_collection), n):
109
- yield list_collection[i-pre if i-pre>=0 else i: i + n]
110
-
111
-
112
- class F0FilterException(Exception):
113
- pass
114
-
115
- class Svc(object):
116
- def __init__(self, net_g_path, config_path,
117
- device=None,
118
- cluster_model_path="logs/44k/kmeans_10000.pt",
119
- nsf_hifigan_enhance = False
120
- ):
121
- self.net_g_path = net_g_path
122
- if device is None:
123
- self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu")
124
- else:
125
- self.dev = torch.device(device)
126
- self.net_g_ms = None
127
- self.hps_ms = utils.get_hparams_from_file(config_path)
128
- self.target_sample = self.hps_ms.data.sampling_rate
129
- self.hop_size = self.hps_ms.data.hop_length
130
- self.spk2id = self.hps_ms.spk
131
- self.nsf_hifigan_enhance = nsf_hifigan_enhance
132
- # load hubert
133
- self.hubert_model = utils.get_hubert_model().to(self.dev)
134
- self.load_model()
135
- if os.path.exists(cluster_model_path):
136
- self.cluster_model = cluster.get_cluster_model(cluster_model_path)
137
- if self.nsf_hifigan_enhance:
138
- from modules.enhancer import Enhancer
139
- self.enhancer = Enhancer('nsf-hifigan', 'pretrain/nsf_hifigan/model',device=self.dev)
140
-
141
- def load_model(self):
142
- # get model configuration
143
- self.net_g_ms = SynthesizerTrn(
144
- self.hps_ms.data.filter_length // 2 + 1,
145
- self.hps_ms.train.segment_size // self.hps_ms.data.hop_length,
146
- **self.hps_ms.model)
147
- _ = utils.load_checkpoint(self.net_g_path, self.net_g_ms, None)
148
- if "half" in self.net_g_path and torch.cuda.is_available():
149
- _ = self.net_g_ms.half().eval().to(self.dev)
150
- else:
151
- _ = self.net_g_ms.eval().to(self.dev)
152
-
153
-
154
-
155
- def get_unit_f0(self, in_path, tran, cluster_infer_ratio, speaker, f0_filter ,F0_mean_pooling,cr_threshold=0.05):
156
-
157
- wav, sr = librosa.load(in_path, sr=self.target_sample)
158
-
159
- if F0_mean_pooling == True:
160
- f0, uv = utils.compute_f0_uv_torchcrepe(torch.FloatTensor(wav), sampling_rate=self.target_sample, hop_length=self.hop_size,device=self.dev,cr_threshold = cr_threshold)
161
- if f0_filter and sum(f0) == 0:
162
- raise F0FilterException("No voice detected")
163
- f0 = torch.FloatTensor(list(f0))
164
- uv = torch.FloatTensor(list(uv))
165
- if F0_mean_pooling == False:
166
- f0 = utils.compute_f0_parselmouth(wav, sampling_rate=self.target_sample, hop_length=self.hop_size)
167
- if f0_filter and sum(f0) == 0:
168
- raise F0FilterException("No voice detected")
169
- f0, uv = utils.interpolate_f0(f0)
170
- f0 = torch.FloatTensor(f0)
171
- uv = torch.FloatTensor(uv)
172
-
173
- f0 = f0 * 2 ** (tran / 12)
174
- f0 = f0.unsqueeze(0).to(self.dev)
175
- uv = uv.unsqueeze(0).to(self.dev)
176
-
177
- wav16k = librosa.resample(wav, orig_sr=self.target_sample, target_sr=16000)
178
- wav16k = torch.from_numpy(wav16k).to(self.dev)
179
- c = utils.get_hubert_content(self.hubert_model, wav_16k_tensor=wav16k)
180
- c = utils.repeat_expand_2d(c.squeeze(0), f0.shape[1])
181
-
182
- if cluster_infer_ratio !=0:
183
- cluster_c = cluster.get_cluster_center_result(self.cluster_model, c.cpu().numpy().T, speaker).T
184
- cluster_c = torch.FloatTensor(cluster_c).to(self.dev)
185
- c = cluster_infer_ratio * cluster_c + (1 - cluster_infer_ratio) * c
186
-
187
- c = c.unsqueeze(0)
188
- return c, f0, uv
189
-
190
- def infer(self, speaker, tran, raw_path,
191
- cluster_infer_ratio=0,
192
- auto_predict_f0=False,
193
- noice_scale=0.4,
194
- f0_filter=False,
195
- F0_mean_pooling=False,
196
- enhancer_adaptive_key = 0,
197
- cr_threshold = 0.05
198
- ):
199
-
200
- speaker_id = self.spk2id.__dict__.get(speaker)
201
- if not speaker_id and type(speaker) is int:
202
- if len(self.spk2id.__dict__) >= speaker:
203
- speaker_id = speaker
204
- sid = torch.LongTensor([int(speaker_id)]).to(self.dev).unsqueeze(0)
205
- c, f0, uv = self.get_unit_f0(raw_path, tran, cluster_infer_ratio, speaker, f0_filter,F0_mean_pooling,cr_threshold=cr_threshold)
206
- if "half" in self.net_g_path and torch.cuda.is_available():
207
- c = c.half()
208
- with torch.no_grad():
209
- start = time.time()
210
- audio = self.net_g_ms.infer(c, f0=f0, g=sid, uv=uv, predict_f0=auto_predict_f0, noice_scale=noice_scale)[0,0].data.float()
211
- if self.nsf_hifigan_enhance:
212
- audio, _ = self.enhancer.enhance(
213
- audio[None,:],
214
- self.target_sample,
215
- f0[:,:,None],
216
- self.hps_ms.data.hop_length,
217
- adaptive_key = enhancer_adaptive_key)
218
- use_time = time.time() - start
219
- print("vits use time:{}".format(use_time))
220
- return audio, audio.shape[-1]
221
-
222
- def clear_empty(self):
223
- # clean up vram
224
- torch.cuda.empty_cache()
225
-
226
- def unload_model(self):
227
- # unload model
228
- self.net_g_ms = self.net_g_ms.to("cpu")
229
- del self.net_g_ms
230
- if hasattr(self,"enhancer"):
231
- self.enhancer.enhancer = self.enhancer.enhancer.to("cpu")
232
- del self.enhancer.enhancer
233
- del self.enhancer
234
- gc.collect()
235
-
236
- def slice_inference(self,
237
- raw_audio_path,
238
- spk,
239
- tran,
240
- slice_db,
241
- cluster_infer_ratio,
242
- auto_predict_f0,
243
- noice_scale,
244
- pad_seconds=0.5,
245
- clip_seconds=0,
246
- lg_num=0,
247
- lgr_num =0.75,
248
- F0_mean_pooling = False,
249
- enhancer_adaptive_key = 0,
250
- cr_threshold = 0.05
251
- ):
252
- wav_path = raw_audio_path
253
- chunks = slicer.cut(wav_path, db_thresh=slice_db)
254
- audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks)
255
- per_size = int(clip_seconds*audio_sr)
256
- lg_size = int(lg_num*audio_sr)
257
- lg_size_r = int(lg_size*lgr_num)
258
- lg_size_c_l = (lg_size-lg_size_r)//2
259
- lg_size_c_r = lg_size-lg_size_r-lg_size_c_l
260
- lg = np.linspace(0,1,lg_size_r) if lg_size!=0 else 0
261
-
262
- audio = []
263
- for (slice_tag, data) in audio_data:
264
- print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======')
265
- # pad
266
- length = int(np.ceil(len(data) / audio_sr * self.target_sample))
267
- if slice_tag:
268
- print('jump empty segment')
269
- _audio = np.zeros(length)
270
- audio.extend(list(pad_array(_audio, length)))
271
- continue
272
- if per_size != 0:
273
- datas = split_list_by_n(data, per_size,lg_size)
274
- else:
275
- datas = [data]
276
- for k,dat in enumerate(datas):
277
- per_length = int(np.ceil(len(dat) / audio_sr * self.target_sample)) if clip_seconds!=0 else length
278
- if clip_seconds!=0: print(f'###=====segment clip start, {round(len(dat) / audio_sr, 3)}s======')
279
- # pad
280
- pad_len = int(audio_sr * pad_seconds)
281
- dat = np.concatenate([np.zeros([pad_len]), dat, np.zeros([pad_len])])
282
- raw_path = io.BytesIO()
283
- soundfile.write(raw_path, dat, audio_sr, format="wav")
284
- raw_path.seek(0)
285
- out_audio, out_sr = self.infer(spk, tran, raw_path,
286
- cluster_infer_ratio=cluster_infer_ratio,
287
- auto_predict_f0=auto_predict_f0,
288
- noice_scale=noice_scale,
289
- F0_mean_pooling = F0_mean_pooling,
290
- enhancer_adaptive_key = enhancer_adaptive_key,
291
- cr_threshold = cr_threshold
292
- )
293
- _audio = out_audio.cpu().numpy()
294
- pad_len = int(self.target_sample * pad_seconds)
295
- _audio = _audio[pad_len:-pad_len]
296
- _audio = pad_array(_audio, per_length)
297
- if lg_size!=0 and k!=0:
298
- lg1 = audio[-(lg_size_r+lg_size_c_r):-lg_size_c_r] if lgr_num != 1 else audio[-lg_size:]
299
- lg2 = _audio[lg_size_c_l:lg_size_c_l+lg_size_r] if lgr_num != 1 else _audio[0:lg_size]
300
- lg_pre = lg1*(1-lg)+lg2*lg
301
- audio = audio[0:-(lg_size_r+lg_size_c_r)] if lgr_num != 1 else audio[0:-lg_size]
302
- audio.extend(lg_pre)
303
- _audio = _audio[lg_size_c_l+lg_size_r:] if lgr_num != 1 else _audio[lg_size:]
304
- audio.extend(list(_audio))
305
- return np.array(audio)
306
-
307
- class RealTimeVC:
308
- def __init__(self):
309
- self.last_chunk = None
310
- self.last_o = None
311
- self.chunk_len = 16000 # chunk length
312
- self.pre_len = 3840 # cross fade length, multiples of 640
313
-
314
- # Input and output are 1-dimensional numpy waveform arrays
315
-
316
- def process(self, svc_model, speaker_id, f_pitch_change, input_wav_path,
317
- cluster_infer_ratio=0,
318
- auto_predict_f0=False,
319
- noice_scale=0.4,
320
- f0_filter=False):
321
-
322
- import maad
323
- audio, sr = torchaudio.load(input_wav_path)
324
- audio = audio.cpu().numpy()[0]
325
- temp_wav = io.BytesIO()
326
- if self.last_chunk is None:
327
- input_wav_path.seek(0)
328
-
329
- audio, sr = svc_model.infer(speaker_id, f_pitch_change, input_wav_path,
330
- cluster_infer_ratio=cluster_infer_ratio,
331
- auto_predict_f0=auto_predict_f0,
332
- noice_scale=noice_scale,
333
- f0_filter=f0_filter)
334
-
335
- audio = audio.cpu().numpy()
336
- self.last_chunk = audio[-self.pre_len:]
337
- self.last_o = audio
338
- return audio[-self.chunk_len:]
339
- else:
340
- audio = np.concatenate([self.last_chunk, audio])
341
- soundfile.write(temp_wav, audio, sr, format="wav")
342
- temp_wav.seek(0)
343
-
344
- audio, sr = svc_model.infer(speaker_id, f_pitch_change, temp_wav,
345
- cluster_infer_ratio=cluster_infer_ratio,
346
- auto_predict_f0=auto_predict_f0,
347
- noice_scale=noice_scale,
348
- f0_filter=f0_filter)
349
-
350
- audio = audio.cpu().numpy()
351
- ret = maad.util.crossfade(self.last_o, audio, self.pre_len)
352
- self.last_chunk = audio[-self.pre_len:]
353
- self.last_o = audio
354
- return ret[self.chunk_len:2 * self.chunk_len]
spaces/Al-Chan/Vits_League_of_Legends_Yuumi_TTS/modules.py DELETED
@@ -1,390 +0,0 @@
1
- import copy
2
- import math
3
- import numpy as np
4
- import scipy
5
- import torch
6
- from torch import nn
7
- from torch.nn import functional as F
8
-
9
- from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
10
- from torch.nn.utils import weight_norm, remove_weight_norm
11
-
12
- import commons
13
- from commons import init_weights, get_padding
14
- from transforms import piecewise_rational_quadratic_transform
15
-
16
-
17
- LRELU_SLOPE = 0.1
18
-
19
-
20
- class LayerNorm(nn.Module):
21
- def __init__(self, channels, eps=1e-5):
22
- super().__init__()
23
- self.channels = channels
24
- self.eps = eps
25
-
26
- self.gamma = nn.Parameter(torch.ones(channels))
27
- self.beta = nn.Parameter(torch.zeros(channels))
28
-
29
- def forward(self, x):
30
- x = x.transpose(1, -1)
31
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
32
- return x.transpose(1, -1)
33
-
34
-
35
- class ConvReluNorm(nn.Module):
36
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
37
- super().__init__()
38
- self.in_channels = in_channels
39
- self.hidden_channels = hidden_channels
40
- self.out_channels = out_channels
41
- self.kernel_size = kernel_size
42
- self.n_layers = n_layers
43
- self.p_dropout = p_dropout
44
- assert n_layers > 1, "Number of layers should be larger than 1."
45
-
46
- self.conv_layers = nn.ModuleList()
47
- self.norm_layers = nn.ModuleList()
48
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
49
- self.norm_layers.append(LayerNorm(hidden_channels))
50
- self.relu_drop = nn.Sequential(
51
- nn.ReLU(),
52
- nn.Dropout(p_dropout))
53
- for _ in range(n_layers-1):
54
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
55
- self.norm_layers.append(LayerNorm(hidden_channels))
56
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
57
- self.proj.weight.data.zero_()
58
- self.proj.bias.data.zero_()
59
-
60
- def forward(self, x, x_mask):
61
- x_org = x
62
- for i in range(self.n_layers):
63
- x = self.conv_layers[i](x * x_mask)
64
- x = self.norm_layers[i](x)
65
- x = self.relu_drop(x)
66
- x = x_org + self.proj(x)
67
- return x * x_mask
68
-
69
-
70
- class DDSConv(nn.Module):
71
- """
72
- Dilated and Depth-Separable Convolution
73
- """
74
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
75
- super().__init__()
76
- self.channels = channels
77
- self.kernel_size = kernel_size
78
- self.n_layers = n_layers
79
- self.p_dropout = p_dropout
80
-
81
- self.drop = nn.Dropout(p_dropout)
82
- self.convs_sep = nn.ModuleList()
83
- self.convs_1x1 = nn.ModuleList()
84
- self.norms_1 = nn.ModuleList()
85
- self.norms_2 = nn.ModuleList()
86
- for i in range(n_layers):
87
- dilation = kernel_size ** i
88
- padding = (kernel_size * dilation - dilation) // 2
89
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
90
- groups=channels, dilation=dilation, padding=padding
91
- ))
92
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
93
- self.norms_1.append(LayerNorm(channels))
94
- self.norms_2.append(LayerNorm(channels))
95
-
96
- def forward(self, x, x_mask, g=None):
97
- if g is not None:
98
- x = x + g
99
- for i in range(self.n_layers):
100
- y = self.convs_sep[i](x * x_mask)
101
- y = self.norms_1[i](y)
102
- y = F.gelu(y)
103
- y = self.convs_1x1[i](y)
104
- y = self.norms_2[i](y)
105
- y = F.gelu(y)
106
- y = self.drop(y)
107
- x = x + y
108
- return x * x_mask
109
-
110
-
111
- class WN(torch.nn.Module):
112
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
113
- super(WN, self).__init__()
114
- assert(kernel_size % 2 == 1)
115
- self.hidden_channels = hidden_channels
116
- self.kernel_size = kernel_size
117
- self.dilation_rate = dilation_rate
118
- self.n_layers = n_layers
119
- self.gin_channels = gin_channels
120
- self.p_dropout = p_dropout
121
-
122
- self.in_layers = torch.nn.ModuleList()
123
- self.res_skip_layers = torch.nn.ModuleList()
124
- self.drop = nn.Dropout(p_dropout)
125
-
126
- if gin_channels != 0:
127
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
128
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
129
-
130
- for i in range(n_layers):
131
- dilation = dilation_rate ** i
132
- padding = int((kernel_size * dilation - dilation) / 2)
133
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
134
- dilation=dilation, padding=padding)
135
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
136
- self.in_layers.append(in_layer)
137
-
138
- # last one is not necessary
139
- if i < n_layers - 1:
140
- res_skip_channels = 2 * hidden_channels
141
- else:
142
- res_skip_channels = hidden_channels
143
-
144
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
145
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
146
- self.res_skip_layers.append(res_skip_layer)
147
-
148
- def forward(self, x, x_mask, g=None, **kwargs):
149
- output = torch.zeros_like(x)
150
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
151
-
152
- if g is not None:
153
- g = self.cond_layer(g)
154
-
155
- for i in range(self.n_layers):
156
- x_in = self.in_layers[i](x)
157
- if g is not None:
158
- cond_offset = i * 2 * self.hidden_channels
159
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
160
- else:
161
- g_l = torch.zeros_like(x_in)
162
-
163
- acts = commons.fused_add_tanh_sigmoid_multiply(
164
- x_in,
165
- g_l,
166
- n_channels_tensor)
167
- acts = self.drop(acts)
168
-
169
- res_skip_acts = self.res_skip_layers[i](acts)
170
- if i < self.n_layers - 1:
171
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
172
- x = (x + res_acts) * x_mask
173
- output = output + res_skip_acts[:,self.hidden_channels:,:]
174
- else:
175
- output = output + res_skip_acts
176
- return output * x_mask
177
-
178
- def remove_weight_norm(self):
179
- if self.gin_channels != 0:
180
- torch.nn.utils.remove_weight_norm(self.cond_layer)
181
- for l in self.in_layers:
182
- torch.nn.utils.remove_weight_norm(l)
183
- for l in self.res_skip_layers:
184
- torch.nn.utils.remove_weight_norm(l)
185
-
186
-
187
- class ResBlock1(torch.nn.Module):
188
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
189
- super(ResBlock1, self).__init__()
190
- self.convs1 = nn.ModuleList([
191
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
192
- padding=get_padding(kernel_size, dilation[0]))),
193
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
194
- padding=get_padding(kernel_size, dilation[1]))),
195
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
196
- padding=get_padding(kernel_size, dilation[2])))
197
- ])
198
- self.convs1.apply(init_weights)
199
-
200
- self.convs2 = nn.ModuleList([
201
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
202
- padding=get_padding(kernel_size, 1))),
203
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
204
- padding=get_padding(kernel_size, 1))),
205
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
206
- padding=get_padding(kernel_size, 1)))
207
- ])
208
- self.convs2.apply(init_weights)
209
-
210
- def forward(self, x, x_mask=None):
211
- for c1, c2 in zip(self.convs1, self.convs2):
212
- xt = F.leaky_relu(x, LRELU_SLOPE)
213
- if x_mask is not None:
214
- xt = xt * x_mask
215
- xt = c1(xt)
216
- xt = F.leaky_relu(xt, LRELU_SLOPE)
217
- if x_mask is not None:
218
- xt = xt * x_mask
219
- xt = c2(xt)
220
- x = xt + x
221
- if x_mask is not None:
222
- x = x * x_mask
223
- return x
224
-
225
- def remove_weight_norm(self):
226
- for l in self.convs1:
227
- remove_weight_norm(l)
228
- for l in self.convs2:
229
- remove_weight_norm(l)
230
-
231
-
232
- class ResBlock2(torch.nn.Module):
233
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
234
- super(ResBlock2, self).__init__()
235
- self.convs = nn.ModuleList([
236
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
237
- padding=get_padding(kernel_size, dilation[0]))),
238
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
239
- padding=get_padding(kernel_size, dilation[1])))
240
- ])
241
- self.convs.apply(init_weights)
242
-
243
- def forward(self, x, x_mask=None):
244
- for c in self.convs:
245
- xt = F.leaky_relu(x, LRELU_SLOPE)
246
- if x_mask is not None:
247
- xt = xt * x_mask
248
- xt = c(xt)
249
- x = xt + x
250
- if x_mask is not None:
251
- x = x * x_mask
252
- return x
253
-
254
- def remove_weight_norm(self):
255
- for l in self.convs:
256
- remove_weight_norm(l)
257
-
258
-
259
- class Log(nn.Module):
260
- def forward(self, x, x_mask, reverse=False, **kwargs):
261
- if not reverse:
262
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
263
- logdet = torch.sum(-y, [1, 2])
264
- return y, logdet
265
- else:
266
- x = torch.exp(x) * x_mask
267
- return x
268
-
269
-
270
- class Flip(nn.Module):
271
- def forward(self, x, *args, reverse=False, **kwargs):
272
- x = torch.flip(x, [1])
273
- if not reverse:
274
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
275
- return x, logdet
276
- else:
277
- return x
278
-
279
-
280
- class ElementwiseAffine(nn.Module):
281
- def __init__(self, channels):
282
- super().__init__()
283
- self.channels = channels
284
- self.m = nn.Parameter(torch.zeros(channels,1))
285
- self.logs = nn.Parameter(torch.zeros(channels,1))
286
-
287
- def forward(self, x, x_mask, reverse=False, **kwargs):
288
- if not reverse:
289
- y = self.m + torch.exp(self.logs) * x
290
- y = y * x_mask
291
- logdet = torch.sum(self.logs * x_mask, [1,2])
292
- return y, logdet
293
- else:
294
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
295
- return x
296
-
297
-
298
- class ResidualCouplingLayer(nn.Module):
299
- def __init__(self,
300
- channels,
301
- hidden_channels,
302
- kernel_size,
303
- dilation_rate,
304
- n_layers,
305
- p_dropout=0,
306
- gin_channels=0,
307
- mean_only=False):
308
- assert channels % 2 == 0, "channels should be divisible by 2"
309
- super().__init__()
310
- self.channels = channels
311
- self.hidden_channels = hidden_channels
312
- self.kernel_size = kernel_size
313
- self.dilation_rate = dilation_rate
314
- self.n_layers = n_layers
315
- self.half_channels = channels // 2
316
- self.mean_only = mean_only
317
-
318
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
319
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
320
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
321
- self.post.weight.data.zero_()
322
- self.post.bias.data.zero_()
323
-
324
- def forward(self, x, x_mask, g=None, reverse=False):
325
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
326
- h = self.pre(x0) * x_mask
327
- h = self.enc(h, x_mask, g=g)
328
- stats = self.post(h) * x_mask
329
- if not self.mean_only:
330
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
331
- else:
332
- m = stats
333
- logs = torch.zeros_like(m)
334
-
335
- if not reverse:
336
- x1 = m + x1 * torch.exp(logs) * x_mask
337
- x = torch.cat([x0, x1], 1)
338
- logdet = torch.sum(logs, [1,2])
339
- return x, logdet
340
- else:
341
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
342
- x = torch.cat([x0, x1], 1)
343
- return x
344
-
345
-
346
- class ConvFlow(nn.Module):
347
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
348
- super().__init__()
349
- self.in_channels = in_channels
350
- self.filter_channels = filter_channels
351
- self.kernel_size = kernel_size
352
- self.n_layers = n_layers
353
- self.num_bins = num_bins
354
- self.tail_bound = tail_bound
355
- self.half_channels = in_channels // 2
356
-
357
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
358
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
359
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
360
- self.proj.weight.data.zero_()
361
- self.proj.bias.data.zero_()
362
-
363
- def forward(self, x, x_mask, g=None, reverse=False):
364
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
365
- h = self.pre(x0)
366
- h = self.convs(h, x_mask, g=g)
367
- h = self.proj(h) * x_mask
368
-
369
- b, c, t = x0.shape
370
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
371
-
372
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
373
- unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
374
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
375
-
376
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
377
- unnormalized_widths,
378
- unnormalized_heights,
379
- unnormalized_derivatives,
380
- inverse=reverse,
381
- tails='linear',
382
- tail_bound=self.tail_bound
383
- )
384
-
385
- x = torch.cat([x0, x1], 1) * x_mask
386
- logdet = torch.sum(logabsdet * x_mask, [1,2])
387
- if not reverse:
388
- return x, logdet
389
- else:
390
- return x
spaces/AlhitawiMohammed22/E2E_OCR/README.md DELETED
@@ -1,12 +0,0 @@
1
- ---
2
- title: E2E OCR
3
- emoji: 📈
4
- colorFrom: green
5
- colorTo: pink
6
- sdk: gradio
7
- sdk_version: 3.43.2
8
- app_file: app.py
9
- pinned: false
10
- ---
11
-
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Amon1/ChatGPTForAcadamic/functional_crazy.py DELETED
@@ -1,108 +0,0 @@
1
- from toolbox import HotReload # HotReload 的意思是热更新,修改函数插件后,不需要重启程序,代码直接生效
2
-
3
- def get_crazy_functionals():
4
- ###################### 第一组插件 ###########################
5
- # [第一组插件]: 最早期编写的项目插件和一些demo
6
- from crazy_functions.读文章写摘要 import 读文章写摘要
7
- from crazy_functions.生成函数注释 import 批量生成函数注释
8
- from crazy_functions.解析项目源代码 import 解析项目本身
9
- from crazy_functions.解析项目源代码 import 解析一个Python项目
10
- from crazy_functions.解析项目源代码 import 解析一个C项目的头文件
11
- from crazy_functions.解析项目源代码 import 解析一个C项目
12
- from crazy_functions.解析项目源代码 import 解析一个Golang项目
13
- from crazy_functions.解析项目源代码 import 解析一个Java项目
14
- from crazy_functions.解析项目源代码 import 解析一个Rect项目
15
- from crazy_functions.高级功能函数模板 import 高阶功能模板函数
16
- from crazy_functions.代码重写为全英文_多线程 import 全项目切换英文
17
-
18
- function_plugins = {
19
- "请解析并解构此项目本身(源码自译解)": {
20
- "AsButton": False, # 加入下拉菜单中
21
- "Function": 解析项目本身
22
- },
23
- "解析整个Py项目": {
24
- "Color": "stop", # 按钮颜色
25
- "Function": 解析一个Python项目
26
- },
27
- "解析整个C++项目头文件": {
28
- "Color": "stop", # 按钮颜色
29
- "Function": 解析一个C项目的头文件
30
- },
31
- "解析整个C++项目(.cpp/.h)": {
32
- "Color": "stop", # 按钮颜色
33
- "AsButton": False, # 加入下拉菜单中
34
- "Function": 解析一个C项目
35
- },
36
- "解析整个Go项目": {
37
- "Color": "stop", # 按钮颜色
38
- "AsButton": False, # 加入下拉菜单中
39
- "Function": 解析一个Golang项目
40
- },
41
- "解析整个Java项目": {
42
- "Color": "stop", # 按钮颜色
43
- "AsButton": False, # 加入下拉菜单中
44
- "Function": 解析一个Java项目
45
- },
46
- "解析整个Java项目": {
47
- "Color": "stop", # 按钮颜色
48
- "AsButton": False, # 加入下拉菜单中
49
- "Function": 解析一个Rect项目
50
- },
51
- "读Tex论文写摘要": {
52
- "Color": "stop", # 按钮颜色
53
- "Function": 读文章写摘要
54
- },
55
- "批量生成函数注释": {
56
- "Color": "stop", # 按钮颜色
57
- "Function": 批量生成函数注释
58
- },
59
- "[多线程demo] 把本项目源代码切换成全英文": {
60
- # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
61
- "Function": HotReload(全项目切换英文)
62
- },
63
- "[函数插件模板demo] 历史上的今天": {
64
- # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
65
- "Function": HotReload(高阶功能模板函数)
66
- },
67
- }
68
- ###################### 第二组插件 ###########################
69
- # [第二组插件]: 经过充分测试,但功能上距离达到完美状态还差一点点
70
- from crazy_functions.批量总结PDF文档 import 批量总结PDF文档
71
- from crazy_functions.批量总结PDF文档pdfminer import 批量总结PDF文档pdfminer
72
- from crazy_functions.总结word文档 import 总结word文档
73
- function_plugins.update({
74
- "[仅供开发调试] 批量总结PDF文档": {
75
- "Color": "stop",
76
- "Function": HotReload(批量总结PDF文档) # HotReload 的意思是热更新,修改函数插件代码后,不需要重启程序,代码直接生效
77
- },
78
- "[仅供开发调试] 批量总结PDF文档pdfminer": {
79
- "Color": "stop",
80
- "AsButton": False, # 加入下拉菜单中
81
- "Function": HotReload(批量总结PDF文档pdfminer)
82
- },
83
- "[仅供开发调试] 批量总结Word文档": {
84
- "Color": "stop",
85
- "Function": HotReload(总结word文档)
86
- },
87
- })
88
-
89
- ###################### 第三组插件 ###########################
90
- # [第三组插件]: 尚未充分测试的函数插件,放在这里
91
- try:
92
- from crazy_functions.下载arxiv论文翻译摘要 import 下载arxiv论文并翻译摘要
93
- function_plugins.update({
94
- "一键下载arxiv论文并翻译摘要(先在input输入编号,如1812.10695)": {
95
- "Color": "stop",
96
- "AsButton": False, # 加入下拉菜单中
97
- "Function": HotReload(下载arxiv论文并翻译摘要)
98
- }
99
- })
100
- except Exception as err:
101
- print(f'[下载arxiv论文并翻译摘要] 插件导入失败 {str(err)}')
102
-
103
-
104
-
105
- ###################### 第n组插件 ###########################
106
- return function_plugins
107
-
108
-
spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/__init__.py DELETED
File without changes
spaces/Anandhju-jayan/image-captioning-cloned/model.py DELETED
@@ -1,149 +0,0 @@
1
- from transformers import AutoProcessor, AutoModelForCausalLM, BlipForConditionalGeneration
2
-
3
- class ImageCaptionModel:
4
- def __init__(
5
- self,
6
- device,
7
- processor,
8
- model,
9
- ) -> None:
10
- """
11
- Initializes the model for generating captions for images.
12
-
13
- -----
14
- Parameters:
15
- device: str
16
- The device to use for the model. Must be either "cpu" or "cuda".
17
- processor: transformers.AutoProcessor
18
- The preprocessor to use for the model.
19
- model: transformers.AutoModelForCausalLM or transformers.BlipForConditionalGeneration
20
- The model to use for generating captions.
21
-
22
- -----
23
- Returns:
24
- None
25
- """
26
- self.device = device
27
- self.processor = processor
28
- self.model = model
29
- self.model.to(self.device)
30
-
31
- def generate(
32
- self,
33
- image,
34
- num_captions: int = 1,
35
- max_length: int = 50,
36
- temperature: float = 1.0,
37
- top_k: int = 50,
38
- top_p: float = 1.0,
39
- repetition_penalty: float = 1.0,
40
- diversity_penalty: float = 0.0,
41
- ):
42
- """
43
- Generates captions for the given image.
44
-
45
- -----
46
- Parameters:
47
- preprocessor: transformers.PreTrainedTokenizerFast
48
- The preprocessor to use for the model.
49
- model: transformers.PreTrainedModel
50
- The model to use for generating captions.
51
- image: PIL.Image
52
- The image to generate captions for.
53
- num_captions: int
54
- The number of captions to generate.
55
- temperature: float
56
- The temperature to use for sampling. The value used to module the next token probabilities that will be used by default in the generate method of the model. Must be strictly positive. Defaults to 1.0.
57
- top_k: int
58
- The number of highest probability vocabulary tokens to keep for top-k-filtering. A large value of top_k will keep more probabilities for each token leading to a better but slower generation. Defaults to 50.
59
- top_p: float
60
- The value that will be used by default in the generate method of the model for top_p. If set to float < 1, only the most probable tokens with probabilities that add up to top_p or higher are kept for generation.
61
- repetition_penalty: float
62
- The parameter for repetition penalty. 1.0 means no penalty. Defaults to 1.0.
63
- diversity_penalty: float
64
- The parameter for diversity penalty. 0.0 means no penalty. Defaults to 0.0.
65
-
66
- """
67
- # Type checking and making sure the values are valid.
68
- assert type(num_captions) == int and num_captions > 0, "num_captions must be a positive integer."
69
- assert type(max_length) == int and max_length > 0, "max_length must be a positive integer."
70
- assert type(temperature) == float and temperature > 0.0, "temperature must be a positive float."
71
- assert type(top_k) == int and top_k > 0, "top_k must be a positive integer."
72
- assert type(top_p) == float and top_p > 0.0, "top_p must be a positive float."
73
- assert type(repetition_penalty) == float and repetition_penalty >= 1.0, "repetition_penalty must be a positive float greater than or equal to 1."
74
- assert type(diversity_penalty) == float and diversity_penalty >= 0.0, "diversity_penalty must be a non negative float."
75
-
76
- pixel_values = self.processor(images=image, return_tensors="pt").pixel_values.to(self.device) # Convert the image to pixel values.
77
-
78
- # Generate captions ids.
79
- if num_captions == 1:
80
- generated_ids = self.model.generate(
81
- pixel_values=pixel_values,
82
- max_length=max_length,
83
- num_return_sequences=1,
84
- temperature=temperature,
85
- top_k=top_k,
86
- top_p=top_p,
87
- )
88
- else:
89
- generated_ids = self.model.generate(
90
- pixel_values=pixel_values,
91
- max_length=max_length,
92
- num_beams=num_captions, # num_beams must be greater than or equal to num_captions and must be divisible by num_beam_groups.
93
- num_beam_groups=num_captions, # num_beam_groups is set to equal to num_captions so that all the captions are diverse
94
- num_return_sequences=num_captions, # generate multiple captions which are very similar to each other due to the grouping effect of beam search.
95
- temperature=temperature,
96
- top_k=top_k,
97
- top_p=top_p,
98
- repetition_penalty=repetition_penalty,
99
- diversity_penalty=diversity_penalty,
100
- )
101
-
102
- # Decode the generated ids to get the captions.
103
- generated_caption = self.processor.batch_decode(generated_ids, skip_special_tokens=True)
104
-
105
- return generated_caption
106
-
107
-
108
- class GitBaseCocoModel(ImageCaptionModel):
109
- def __init__(self, device):
110
- """
111
- A wrapper class for the Git-Base-COCO model. It is a pretrained model for image captioning.
112
-
113
- -----
114
- Parameters:
115
- device: str
116
- The device to run the model on, either "cpu" or "cuda".
117
- checkpoint: str
118
- The checkpoint to load the model from.
119
-
120
- -----
121
- Returns:
122
- None
123
- """
124
- checkpoint = "microsoft/git-base-coco"
125
- processor = AutoProcessor.from_pretrained(checkpoint)
126
- model = AutoModelForCausalLM.from_pretrained(checkpoint)
127
- super().__init__(device, processor, model)
128
-
129
-
130
- class BlipBaseModel(ImageCaptionModel):
131
- def __init__(self, device):
132
- """
133
- A wrapper class for the Blip-Base model. It is a pretrained model for image captioning.
134
-
135
- -----
136
- Parameters:
137
- device: str
138
- The device to run the model on, either "cpu" or "cuda".
139
- checkpoint: str
140
- The checkpoint to load the model from.
141
-
142
- -----
143
- Returns:
144
- None
145
- """
146
- self.checkpoint = "Salesforce/blip-image-captioning-base"
147
- processor = AutoProcessor.from_pretrained(self.checkpoint)
148
- model = BlipForConditionalGeneration.from_pretrained(self.checkpoint)
149
- super().__init__(device, processor, model)
 
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/ddim.md DELETED
@@ -1,29 +0,0 @@
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
-
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
- the License. You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
- specific language governing permissions and limitations under the License.
- -->
-
- # DDIM
-
- [Denoising Diffusion Implicit Models](https://huggingface.co/papers/2010.02502) (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon.
-
- The abstract from the paper is:
-
- *Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.*
-
- The original codebase can be found at [ermongroup/ddim](https://github.com/ermongroup/ddim).
-
- ## DDIMPipeline
- [[autodoc]] DDIMPipeline
- - all
- - __call__
-
- ## ImagePipelineOutput
- [[autodoc]] pipelines.ImagePipelineOutput
 
spaces/Andy1621/uniformer_image_detection/configs/_base_/models/rpn_r50_fpn.py DELETED
@@ -1,59 +0,0 @@
- # model settings
-
- model = dict(
- type='RPN',
- pretrained='torchvision://resnet50',
- backbone=dict(
- type='ResNet',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- norm_eval=True,
- style='pytorch'),
- neck=dict(
- type='FPN',
- in_channels=[256, 512, 1024, 2048],
- out_channels=256,
- num_outs=5),
- rpn_head=dict(
- type='RPNHead',
- in_channels=256,
- feat_channels=256,
- anchor_generator=dict(
- type='AnchorGenerator',
- scales=[8],
- ratios=[0.5, 1.0, 2.0],
- strides=[4, 8, 16, 32, 64]),
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- target_means=[.0, .0, .0, .0],
- target_stds=[1.0, 1.0, 1.0, 1.0]),
- loss_cls=dict(
- type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
- loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
- # model training and testing settings
- train_cfg=dict(
- rpn=dict(
- assigner=dict(
- type='MaxIoUAssigner',
- pos_iou_thr=0.7,
- neg_iou_thr=0.3,
- min_pos_iou=0.3,
- ignore_iof_thr=-1),
- sampler=dict(
- type='RandomSampler',
- num=256,
- pos_fraction=0.5,
- neg_pos_ub=-1,
- add_gt_as_proposals=False),
- allowed_border=0,
- pos_weight=-1,
- debug=False)),
- test_cfg=dict(
- rpn=dict(
- nms_pre=2000,
- max_per_img=1000,
- nms=dict(type='nms', iou_threshold=0.7),
- min_bbox_size=0)))
 
spaces/AriaMei/TTSdemo/data_utils.py DELETED
@@ -1,261 +0,0 @@
1
- import time
2
- import os
3
- import random
4
- import numpy as np
5
- import torch
6
- import torch.utils.data
7
-
8
- import commons
9
- from mel_processing import spectrogram_torch
10
- from utils import load_wav_to_torch, load_filepaths_and_text
11
- from text import text_to_sequence, cleaned_text_to_sequence
12
-
13
- """Multi speaker version"""
14
- class TextAudioSpeakerLoader(torch.utils.data.Dataset):
15
- """
16
- 1) loads audio, speaker_id, text pairs
17
- 2) normalizes text and converts them to sequences of integers
18
- 3) computes spectrograms from audio files.
19
- """
20
- def __init__(self, audiopaths_sid_text, hparams):
21
- self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text)
22
- self.text_cleaners = hparams.text_cleaners
23
- self.max_wav_value = hparams.max_wav_value
24
- self.sampling_rate = hparams.sampling_rate
25
- self.filter_length = hparams.filter_length
26
- self.hop_length = hparams.hop_length
27
- self.win_length = hparams.win_length
28
- self.sampling_rate = hparams.sampling_rate
29
-
30
- self.cleaned_text = getattr(hparams, "cleaned_text", False)
31
-
32
- self.add_blank = hparams.add_blank
33
- self.min_text_len = getattr(hparams, "min_text_len", 1)
34
- self.max_text_len = getattr(hparams, "max_text_len", 190)
35
-
36
- random.seed(1234)
37
- random.shuffle(self.audiopaths_sid_text)
38
- self._filter()
39
-
40
- def _filter(self):
41
- """
42
- Filter text & store spec lengths
43
- """
44
- # Store spectrogram lengths for Bucketing
45
- # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2)
46
- # spec_length = wav_length // hop_length
47
-
48
- audiopaths_sid_text_new = []
49
- lengths = []
50
- for audiopath, sid, text in self.audiopaths_sid_text:
51
- if self.min_text_len <= len(text) and len(text) <= self.max_text_len:
52
- audiopaths_sid_text_new.append([audiopath, sid, text])
53
- lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length))
54
- self.audiopaths_sid_text = audiopaths_sid_text_new
55
- self.lengths = lengths
56
-
57
- def get_audio_text_speaker_pair(self, audiopath_sid_text):
58
- # separate filename, speaker_id and text
59
- audiopath, sid, text = audiopath_sid_text[0], audiopath_sid_text[1], audiopath_sid_text[2]
60
- text = self.get_text(text)
61
- spec, wav = self.get_audio(audiopath)
62
- sid = self.get_sid(sid)
63
- emo = torch.FloatTensor(np.load(audiopath+".emo.npy"))
64
- return (text, spec, wav, sid, emo)
65
-
66
- def get_audio(self, filename):
67
- audio, sampling_rate = load_wav_to_torch(filename)
68
- if sampling_rate != self.sampling_rate:
69
- raise ValueError("{} {} SR doesn't match target {} SR".format(
70
- sampling_rate, self.sampling_rate))
71
- audio_norm = audio / self.max_wav_value
72
- audio_norm = audio_norm.unsqueeze(0)
73
- spec_filename = filename.replace(".wav", ".spec.pt")
74
- if os.path.exists(spec_filename):
75
- spec = torch.load(spec_filename)
76
- else:
77
- spec = spectrogram_torch(audio_norm, self.filter_length,
78
- self.sampling_rate, self.hop_length, self.win_length,
79
- center=False)
80
- spec = torch.squeeze(spec, 0)
81
- torch.save(spec, spec_filename)
82
- return spec, audio_norm
83
-
84
- def get_text(self, text):
85
- if self.cleaned_text:
86
- text_norm = cleaned_text_to_sequence(text)
87
- else:
88
- text_norm = text_to_sequence(text, self.text_cleaners)
89
- if self.add_blank:
90
- text_norm = commons.intersperse(text_norm, 0)
91
- text_norm = torch.LongTensor(text_norm)
92
- return text_norm
93
-
94
- def get_sid(self, sid):
95
- sid = torch.LongTensor([int(sid)])
96
- return sid
97
-
98
- def __getitem__(self, index):
99
- return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index])
100
-
101
- def __len__(self):
102
- return len(self.audiopaths_sid_text)
103
-
104
-
105
- class TextAudioSpeakerCollate():
106
- """ Zero-pads model inputs and targets
107
- """
108
- def __init__(self, return_ids=False):
109
- self.return_ids = return_ids
110
-
111
- def __call__(self, batch):
112
- """Collate's training batch from normalized text, audio and speaker identities
113
- PARAMS
114
- ------
115
- batch: [text_normalized, spec_normalized, wav_normalized, sid]
116
- """
117
- # Right zero-pad all one-hot text sequences to max input length
118
- _, ids_sorted_decreasing = torch.sort(
119
- torch.LongTensor([x[1].size(1) for x in batch]),
120
- dim=0, descending=True)
121
-
122
- max_text_len = max([len(x[0]) for x in batch])
123
- max_spec_len = max([x[1].size(1) for x in batch])
124
- max_wav_len = max([x[2].size(1) for x in batch])
125
-
126
- text_lengths = torch.LongTensor(len(batch))
127
- spec_lengths = torch.LongTensor(len(batch))
128
- wav_lengths = torch.LongTensor(len(batch))
129
- sid = torch.LongTensor(len(batch))
130
-
131
- text_padded = torch.LongTensor(len(batch), max_text_len)
132
- spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len)
133
- wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len)
134
- emo = torch.FloatTensor(len(batch), 1024)
135
-
136
- text_padded.zero_()
137
- spec_padded.zero_()
138
- wav_padded.zero_()
139
- emo.zero_()
140
-
141
- for i in range(len(ids_sorted_decreasing)):
142
- row = batch[ids_sorted_decreasing[i]]
143
-
144
- text = row[0]
145
- text_padded[i, :text.size(0)] = text
146
- text_lengths[i] = text.size(0)
147
-
148
- spec = row[1]
149
- spec_padded[i, :, :spec.size(1)] = spec
150
- spec_lengths[i] = spec.size(1)
151
-
152
- wav = row[2]
153
- wav_padded[i, :, :wav.size(1)] = wav
154
- wav_lengths[i] = wav.size(1)
155
-
156
- sid[i] = row[3]
157
-
158
- emo[i, :] = row[4]
159
-
160
- if self.return_ids:
161
- return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, ids_sorted_decreasing
162
- return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid,emo
163
-
164
-
165
- class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler):
166
- """
167
- Maintain similar input lengths in a batch.
168
- Length groups are specified by boundaries.
169
- Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}.
170
-
171
- It removes samples which are not included in the boundaries.
172
- Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded.
173
- """
174
- def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True):
175
- super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle)
176
- self.lengths = dataset.lengths
177
- self.batch_size = batch_size
178
- self.boundaries = boundaries
179
-
180
- self.buckets, self.num_samples_per_bucket = self._create_buckets()
181
- self.total_size = sum(self.num_samples_per_bucket)
182
- self.num_samples = self.total_size // self.num_replicas
183
-
184
- def _create_buckets(self):
185
- buckets = [[] for _ in range(len(self.boundaries) - 1)]
186
- for i in range(len(self.lengths)):
187
- length = self.lengths[i]
188
- idx_bucket = self._bisect(length)
189
- if idx_bucket != -1:
190
- buckets[idx_bucket].append(i)
191
-
192
- for i in range(len(buckets) - 1, 0, -1):
193
- if len(buckets[i]) == 0:
194
- buckets.pop(i)
195
- self.boundaries.pop(i+1)
196
-
197
- num_samples_per_bucket = []
198
- for i in range(len(buckets)):
199
- len_bucket = len(buckets[i])
200
- total_batch_size = self.num_replicas * self.batch_size
201
- rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size
202
- num_samples_per_bucket.append(len_bucket + rem)
203
- return buckets, num_samples_per_bucket
204
-
205
- def __iter__(self):
206
- # deterministically shuffle based on epoch
207
- g = torch.Generator()
208
- g.manual_seed(self.epoch)
209
-
210
- indices = []
211
- if self.shuffle:
212
- for bucket in self.buckets:
213
- indices.append(torch.randperm(len(bucket), generator=g).tolist())
214
- else:
215
- for bucket in self.buckets:
216
- indices.append(list(range(len(bucket))))
217
-
218
- batches = []
219
- for i in range(len(self.buckets)):
220
- bucket = self.buckets[i]
221
- len_bucket = len(bucket)
222
- ids_bucket = indices[i]
223
- num_samples_bucket = self.num_samples_per_bucket[i]
224
-
225
- # add extra samples to make it evenly divisible
226
- rem = num_samples_bucket - len_bucket
227
- ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)]
228
-
229
- # subsample
230
- ids_bucket = ids_bucket[self.rank::self.num_replicas]
231
-
232
- # batching
233
- for j in range(len(ids_bucket) // self.batch_size):
234
- batch = [bucket[idx] for idx in ids_bucket[j*self.batch_size:(j+1)*self.batch_size]]
235
- batches.append(batch)
236
-
237
- if self.shuffle:
238
- batch_ids = torch.randperm(len(batches), generator=g).tolist()
239
- batches = [batches[i] for i in batch_ids]
240
- self.batches = batches
241
-
242
- assert len(self.batches) * self.batch_size == self.num_samples
243
- return iter(self.batches)
244
-
245
- def _bisect(self, x, lo=0, hi=None):
246
- if hi is None:
247
- hi = len(self.boundaries) - 1
248
-
249
- if hi > lo:
250
- mid = (hi + lo) // 2
251
- if self.boundaries[mid] < x and x <= self.boundaries[mid+1]:
252
- return mid
253
- elif x <= self.boundaries[mid]:
254
- return self._bisect(x, lo, mid)
255
- else:
256
- return self._bisect(x, mid + 1, hi)
257
- else:
258
- return -1
259
-
260
- def __len__(self):
261
- return self.num_samples // self.batch_size
 
spaces/Ash58947/Jan/README.md DELETED
@@ -1,10 +0,0 @@
- ---
- title: Jan
- emoji: 📈
- colorFrom: purple
- colorTo: green
- sdk: docker
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/models/target_python.py DELETED
@@ -1,110 +0,0 @@
1
- import sys
2
- from typing import List, Optional, Tuple
3
-
4
- from pip._vendor.packaging.tags import Tag
5
-
6
- from pip._internal.utils.compatibility_tags import get_supported, version_info_to_nodot
7
- from pip._internal.utils.misc import normalize_version_info
8
-
9
-
10
- class TargetPython:
11
-
12
- """
13
- Encapsulates the properties of a Python interpreter one is targeting
14
- for a package install, download, etc.
15
- """
16
-
17
- __slots__ = [
18
- "_given_py_version_info",
19
- "abis",
20
- "implementation",
21
- "platforms",
22
- "py_version",
23
- "py_version_info",
24
- "_valid_tags",
25
- ]
26
-
27
- def __init__(
28
- self,
29
- platforms: Optional[List[str]] = None,
30
- py_version_info: Optional[Tuple[int, ...]] = None,
31
- abis: Optional[List[str]] = None,
32
- implementation: Optional[str] = None,
33
- ) -> None:
34
- """
35
- :param platforms: A list of strings or None. If None, searches for
36
- packages that are supported by the current system. Otherwise, will
37
- find packages that can be built on the platforms passed in. These
38
- packages will only be downloaded for distribution: they will
39
- not be built locally.
40
- :param py_version_info: An optional tuple of ints representing the
41
- Python version information to use (e.g. `sys.version_info[:3]`).
42
- This can have length 1, 2, or 3 when provided.
43
- :param abis: A list of strings or None. This is passed to
44
- compatibility_tags.py's get_supported() function as is.
45
- :param implementation: A string or None. This is passed to
46
- compatibility_tags.py's get_supported() function as is.
47
- """
48
- # Store the given py_version_info for when we call get_supported().
49
- self._given_py_version_info = py_version_info
50
-
51
- if py_version_info is None:
52
- py_version_info = sys.version_info[:3]
53
- else:
54
- py_version_info = normalize_version_info(py_version_info)
55
-
56
- py_version = ".".join(map(str, py_version_info[:2]))
57
-
58
- self.abis = abis
59
- self.implementation = implementation
60
- self.platforms = platforms
61
- self.py_version = py_version
62
- self.py_version_info = py_version_info
63
-
64
- # This is used to cache the return value of get_tags().
65
- self._valid_tags: Optional[List[Tag]] = None
66
-
67
- def format_given(self) -> str:
68
- """
69
- Format the given, non-None attributes for display.
70
- """
71
- display_version = None
72
- if self._given_py_version_info is not None:
73
- display_version = ".".join(
74
- str(part) for part in self._given_py_version_info
75
- )
76
-
77
- key_values = [
78
- ("platforms", self.platforms),
79
- ("version_info", display_version),
80
- ("abis", self.abis),
81
- ("implementation", self.implementation),
82
- ]
83
- return " ".join(
84
- f"{key}={value!r}" for key, value in key_values if value is not None
85
- )
86
-
87
- def get_tags(self) -> List[Tag]:
88
- """
89
- Return the supported PEP 425 tags to check wheel candidates against.
90
-
91
- The tags are returned in order of preference (most preferred first).
92
- """
93
- if self._valid_tags is None:
94
- # Pass versions=None if no py_version_info was given since
95
- # versions=None uses special default logic.
96
- py_version_info = self._given_py_version_info
97
- if py_version_info is None:
98
- version = None
99
- else:
100
- version = version_info_to_nodot(py_version_info)
101
-
102
- tags = get_supported(
103
- version=version,
104
- platforms=self.platforms,
105
- abis=self.abis,
106
- impl=self.implementation,
107
- )
108
- self._valid_tags = tags
109
-
110
- return self._valid_tags
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/deprecation.py DELETED
@@ -1,120 +0,0 @@
1
- """
2
- A module that implements tooling to enable easy warnings about deprecations.
3
- """
4
-
5
- import logging
6
- import warnings
7
- from typing import Any, Optional, TextIO, Type, Union
8
-
9
- from pip._vendor.packaging.version import parse
10
-
11
- from pip import __version__ as current_version # NOTE: tests patch this name.
12
-
13
- DEPRECATION_MSG_PREFIX = "DEPRECATION: "
14
-
15
-
16
- class PipDeprecationWarning(Warning):
17
- pass
18
-
19
-
20
- _original_showwarning: Any = None
21
-
22
-
23
- # Warnings <-> Logging Integration
24
- def _showwarning(
25
- message: Union[Warning, str],
26
- category: Type[Warning],
27
- filename: str,
28
- lineno: int,
29
- file: Optional[TextIO] = None,
30
- line: Optional[str] = None,
31
- ) -> None:
32
- if file is not None:
33
- if _original_showwarning is not None:
34
- _original_showwarning(message, category, filename, lineno, file, line)
35
- elif issubclass(category, PipDeprecationWarning):
36
- # We use a specially named logger which will handle all of the
37
- # deprecation messages for pip.
38
- logger = logging.getLogger("pip._internal.deprecations")
39
- logger.warning(message)
40
- else:
41
- _original_showwarning(message, category, filename, lineno, file, line)
42
-
43
-
44
- def install_warning_logger() -> None:
45
- # Enable our Deprecation Warnings
46
- warnings.simplefilter("default", PipDeprecationWarning, append=True)
47
-
48
- global _original_showwarning
49
-
50
- if _original_showwarning is None:
51
- _original_showwarning = warnings.showwarning
52
- warnings.showwarning = _showwarning
53
-
54
-
55
- def deprecated(
56
- *,
57
- reason: str,
58
- replacement: Optional[str],
59
- gone_in: Optional[str],
60
- feature_flag: Optional[str] = None,
61
- issue: Optional[int] = None,
62
- ) -> None:
63
- """Helper to deprecate existing functionality.
64
-
65
- reason:
66
- Textual reason shown to the user about why this functionality has
67
- been deprecated. Should be a complete sentence.
68
- replacement:
69
- Textual suggestion shown to the user about what alternative
70
- functionality they can use.
71
- gone_in:
72
- The version of pip does this functionality should get removed in.
73
- Raises an error if pip's current version is greater than or equal to
74
- this.
75
- feature_flag:
76
- Command-line flag of the form --use-feature={feature_flag} for testing
77
- upcoming functionality.
78
- issue:
79
- Issue number on the tracker that would serve as a useful place for
80
- users to find related discussion and provide feedback.
81
- """
82
-
83
- # Determine whether or not the feature is already gone in this version.
84
- is_gone = gone_in is not None and parse(current_version) >= parse(gone_in)
85
-
86
- message_parts = [
87
- (reason, f"{DEPRECATION_MSG_PREFIX}{{}}"),
88
- (
89
- gone_in,
90
- "pip {} will enforce this behaviour change."
91
- if not is_gone
92
- else "Since pip {}, this is no longer supported.",
93
- ),
94
- (
95
- replacement,
96
- "A possible replacement is {}.",
97
- ),
98
- (
99
- feature_flag,
100
- "You can use the flag --use-feature={} to test the upcoming behaviour."
101
- if not is_gone
102
- else None,
103
- ),
104
- (
105
- issue,
106
- "Discussion can be found at https://github.com/pypa/pip/issues/{}",
107
- ),
108
- ]
109
-
110
- message = " ".join(
111
- format_str.format(value)
112
- for value, format_str in message_parts
113
- if format_str is not None and value is not None
114
- )
115
-
116
- # Raise as an error if this behaviour is deprecated.
117
- if is_gone:
118
- raise PipDeprecationWarning(message)
119
-
120
- warnings.warn(message, category=PipDeprecationWarning, stacklevel=2)
 
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/utils/comm.py DELETED
@@ -1,199 +0,0 @@
1
- # Copyright (c) Facebook, Inc. and its affiliates.
2
- """
3
- This file contains primitives for multi-gpu communication.
4
- This is useful when doing distributed training.
5
- """
6
-
7
- import functools
8
- import numpy as np
9
- import torch
10
- import torch.distributed as dist
11
-
12
- _LOCAL_PROCESS_GROUP = None
13
- """
14
- A torch process group which only includes processes that on the same machine as the current process.
15
- This variable is set when processes are spawned by `launch()` in "engine/launch.py".
16
- """
17
-
18
-
19
- def get_world_size() -> int:
20
- if not dist.is_available():
21
- return 1
22
- if not dist.is_initialized():
23
- return 1
24
- return dist.get_world_size()
25
-
26
-
27
- def get_rank() -> int:
28
- if not dist.is_available():
29
- return 0
30
- if not dist.is_initialized():
31
- return 0
32
- return dist.get_rank()
33
-
34
-
35
- def get_local_rank() -> int:
36
- """
37
- Returns:
38
- The rank of the current process within the local (per-machine) process group.
39
- """
40
- if not dist.is_available():
41
- return 0
42
- if not dist.is_initialized():
43
- return 0
44
- assert (
45
- _LOCAL_PROCESS_GROUP is not None
46
- ), "Local process group is not created! Please use launch() to spawn processes!"
47
- return dist.get_rank(group=_LOCAL_PROCESS_GROUP)
48
-
49
-
50
- def get_local_size() -> int:
51
- """
52
- Returns:
53
- The size of the per-machine process group,
54
- i.e. the number of processes per machine.
55
- """
56
- if not dist.is_available():
57
- return 1
58
- if not dist.is_initialized():
59
- return 1
60
- return dist.get_world_size(group=_LOCAL_PROCESS_GROUP)
61
-
62
-
63
- def is_main_process() -> bool:
64
- return get_rank() == 0
65
-
66
-
67
- def synchronize():
68
- """
69
- Helper function to synchronize (barrier) among all processes when
70
- using distributed training
71
- """
72
- if not dist.is_available():
73
- return
74
- if not dist.is_initialized():
75
- return
76
- world_size = dist.get_world_size()
77
- if world_size == 1:
78
- return
79
- if dist.get_backend() == dist.Backend.NCCL:
80
- # This argument is needed to avoid warnings.
81
- # It's valid only for NCCL backend.
82
- dist.barrier(device_ids=[torch.cuda.current_device()])
83
- else:
84
- dist.barrier()
85
-
86
-
87
- @functools.lru_cache()
88
- def _get_global_gloo_group():
89
- """
90
- Return a process group based on gloo backend, containing all the ranks
91
- The result is cached.
92
- """
93
- if dist.get_backend() == "nccl":
94
- return dist.new_group(backend="gloo")
95
- else:
96
- return dist.group.WORLD
97
-
98
-
99
- def all_gather(data, group=None):
100
- """
101
- Run all_gather on arbitrary picklable data (not necessarily tensors).
102
-
103
- Args:
104
- data: any picklable object
105
- group: a torch process group. By default, will use a group which
106
- contains all ranks on gloo backend.
107
-
108
- Returns:
109
- list[data]: list of data gathered from each rank
110
- """
111
- if get_world_size() == 1:
112
- return [data]
113
- if group is None:
114
- group = _get_global_gloo_group() # use CPU group by default, to reduce GPU RAM usage.
115
- world_size = dist.get_world_size(group)
116
- if world_size == 1:
117
- return [data]
118
-
119
- output = [None for _ in range(world_size)]
120
- dist.all_gather_object(output, data, group=group)
121
- return output
122
-
123
-
124
- def gather(data, dst=0, group=None):
125
- """
126
- Run gather on arbitrary picklable data (not necessarily tensors).
127
-
128
- Args:
129
- data: any picklable object
130
- dst (int): destination rank
131
- group: a torch process group. By default, will use a group which
132
- contains all ranks on gloo backend.
133
-
134
- Returns:
135
- list[data]: on dst, a list of data gathered from each rank. Otherwise,
136
- an empty list.
137
- """
138
- if get_world_size() == 1:
139
- return [data]
140
- if group is None:
141
- group = _get_global_gloo_group()
142
- world_size = dist.get_world_size(group=group)
143
- if world_size == 1:
144
- return [data]
145
- rank = dist.get_rank(group=group)
146
-
147
- if rank == dst:
148
- output = [None for _ in range(world_size)]
149
- dist.gather_object(data, output, dst=dst, group=group)
150
- return output
151
- else:
152
- dist.gather_object(data, None, dst=dst, group=group)
153
- return []
154
-
155
-
156
- def shared_random_seed():
157
- """
158
- Returns:
159
- int: a random number that is the same across all workers.
160
- If workers need a shared RNG, they can use this shared seed to
161
- create one.
162
-
163
- All workers must call this function, otherwise it will deadlock.
164
- """
165
- ints = np.random.randint(2 ** 31)
166
- all_ints = all_gather(ints)
167
- return all_ints[0]
168
-
169
-
170
- def reduce_dict(input_dict, average=True):
171
- """
172
- Reduce the values in the dictionary from all processes so that process with rank
173
- 0 has the reduced results.
174
-
175
- Args:
176
- input_dict (dict): inputs to be reduced. All the values must be scalar CUDA Tensor.
177
- average (bool): whether to do average or sum
178
-
179
- Returns:
180
- a dict with the same keys as input_dict, after reduction.
181
- """
182
- world_size = get_world_size()
183
- if world_size < 2:
184
- return input_dict
185
- with torch.no_grad():
186
- names = []
187
- values = []
188
- # sort the keys so that they are consistent across processes
189
- for k in sorted(input_dict.keys()):
190
- names.append(k)
191
- values.append(input_dict[k])
192
- values = torch.stack(values, dim=0)
193
- dist.reduce(values, dst=0)
194
- if dist.get_rank() == 0 and average:
195
- # only main process gets accumulated, so only divide by
196
- # world_size in this case
197
- values /= world_size
198
- reduced_dict = {k: v for k, v in zip(names, values)}
199
- return reduced_dict
 
spaces/BG5/midjourney/README.md DELETED
@@ -1,11 +0,0 @@
- ---
- title: BING
- colorFrom: purple
- colorTo: blue
- sdk: docker
- pinned: false
- license: mit
- app_port: 8080
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/Banbri/zcvzcv/src/components/ui/menubar.tsx DELETED
@@ -1,236 +0,0 @@
1
- "use client"
2
-
3
- import * as React from "react"
4
- import * as MenubarPrimitive from "@radix-ui/react-menubar"
5
- import { Check, ChevronRight, Circle } from "lucide-react"
6
-
7
- import { cn } from "@/lib/utils"
8
-
9
- const MenubarMenu = MenubarPrimitive.Menu
10
-
11
- const MenubarGroup = MenubarPrimitive.Group
12
-
13
- const MenubarPortal = MenubarPrimitive.Portal
14
-
15
- const MenubarSub = MenubarPrimitive.Sub
16
-
17
- const MenubarRadioGroup = MenubarPrimitive.RadioGroup
18
-
19
- const Menubar = React.forwardRef<
20
- React.ElementRef<typeof MenubarPrimitive.Root>,
21
- React.ComponentPropsWithoutRef<typeof MenubarPrimitive.Root>
22
- >(({ className, ...props }, ref) => (
23
- <MenubarPrimitive.Root
24
- ref={ref}
25
- className={cn(
26
- "flex h-10 items-center space-x-1 rounded-md border border-stone-200 bg-white p-1 dark:border-stone-800 dark:bg-stone-950",
27
- className
28
- )}
29
- {...props}
30
- />
31
- ))
32
- Menubar.displayName = MenubarPrimitive.Root.displayName
33
-
34
- const MenubarTrigger = React.forwardRef<
35
- React.ElementRef<typeof MenubarPrimitive.Trigger>,
36
- React.ComponentPropsWithoutRef<typeof MenubarPrimitive.Trigger>
37
- >(({ className, ...props }, ref) => (
38
- <MenubarPrimitive.Trigger
39
- ref={ref}
40
- className={cn(
41
- "flex cursor-default select-none items-center rounded-sm px-3 py-1.5 text-sm font-medium outline-none focus:bg-stone-100 focus:text-stone-900 data-[state=open]:bg-stone-100 data-[state=open]:text-stone-900 dark:focus:bg-stone-800 dark:focus:text-stone-50 dark:data-[state=open]:bg-stone-800 dark:data-[state=open]:text-stone-50",
42
- className
43
- )}
44
- {...props}
45
- />
46
- ))
47
- MenubarTrigger.displayName = MenubarPrimitive.Trigger.displayName
48
-
49
- const MenubarSubTrigger = React.forwardRef<
50
- React.ElementRef<typeof MenubarPrimitive.SubTrigger>,
51
- React.ComponentPropsWithoutRef<typeof MenubarPrimitive.SubTrigger> & {
52
- inset?: boolean
53
- }
54
- >(({ className, inset, children, ...props }, ref) => (
55
- <MenubarPrimitive.SubTrigger
56
- ref={ref}
57
- className={cn(
58
- "flex cursor-default select-none items-center rounded-sm px-2 py-1.5 text-sm outline-none focus:bg-stone-100 focus:text-stone-900 data-[state=open]:bg-stone-100 data-[state=open]:text-stone-900 dark:focus:bg-stone-800 dark:focus:text-stone-50 dark:data-[state=open]:bg-stone-800 dark:data-[state=open]:text-stone-50",
59
- inset && "pl-8",
60
- className
61
- )}
62
- {...props}
63
- >
64
- {children}
65
- <ChevronRight className="ml-auto h-4 w-4" />
66
- </MenubarPrimitive.SubTrigger>
67
- ))
68
- MenubarSubTrigger.displayName = MenubarPrimitive.SubTrigger.displayName
69
-
70
- const MenubarSubContent = React.forwardRef<
71
- React.ElementRef<typeof MenubarPrimitive.SubContent>,
72
- React.ComponentPropsWithoutRef<typeof MenubarPrimitive.SubContent>
73
- >(({ className, ...props }, ref) => (
74
- <MenubarPrimitive.SubContent
75
- ref={ref}
76
- className={cn(
77
- "z-50 min-w-[8rem] overflow-hidden rounded-md border border-stone-200 bg-white p-1 text-stone-950 data-[state=open]:animate-in data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0 data-[state=closed]:zoom-out-95 data-[state=open]:zoom-in-95 data-[side=bottom]:slide-in-from-top-2 data-[side=left]:slide-in-from-right-2 data-[side=right]:slide-in-from-left-2 data-[side=top]:slide-in-from-bottom-2 dark:border-stone-800 dark:bg-stone-950 dark:text-stone-50",
78
- className
79
- )}
80
- {...props}
81
- />
82
- ))
83
- MenubarSubContent.displayName = MenubarPrimitive.SubContent.displayName
84
-
85
- const MenubarContent = React.forwardRef<
86
- React.ElementRef<typeof MenubarPrimitive.Content>,
87
- React.ComponentPropsWithoutRef<typeof MenubarPrimitive.Content>
88
- >(
89
- (
90
- { className, align = "start", alignOffset = -4, sideOffset = 8, ...props },
91
- ref
92
- ) => (
93
- <MenubarPrimitive.Portal>
94
- <MenubarPrimitive.Content
95
- ref={ref}
96
- align={align}
97
- alignOffset={alignOffset}
98
- sideOffset={sideOffset}
99
- className={cn(
100
- "z-50 min-w-[12rem] overflow-hidden rounded-md border border-stone-200 bg-white p-1 text-stone-950 shadow-md data-[state=open]:animate-in data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0 data-[state=closed]:zoom-out-95 data-[state=open]:zoom-in-95 data-[side=bottom]:slide-in-from-top-2 data-[side=left]:slide-in-from-right-2 data-[side=right]:slide-in-from-left-2 data-[side=top]:slide-in-from-bottom-2 dark:border-stone-800 dark:bg-stone-950 dark:text-stone-50",
101
- className
102
- )}
103
- {...props}
104
- />
105
- </MenubarPrimitive.Portal>
106
- )
107
- )
108
- MenubarContent.displayName = MenubarPrimitive.Content.displayName
109
-
110
- const MenubarItem = React.forwardRef<
111
- React.ElementRef<typeof MenubarPrimitive.Item>,
112
- React.ComponentPropsWithoutRef<typeof MenubarPrimitive.Item> & {
113
- inset?: boolean
114
- }
115
- >(({ className, inset, ...props }, ref) => (
116
- <MenubarPrimitive.Item
117
- ref={ref}
118
- className={cn(
119
- "relative flex cursor-default select-none items-center rounded-sm px-2 py-1.5 text-sm outline-none focus:bg-stone-100 focus:text-stone-900 data-[disabled]:pointer-events-none data-[disabled]:opacity-50 dark:focus:bg-stone-800 dark:focus:text-stone-50",
120
- inset && "pl-8",
121
- className
122
- )}
123
- {...props}
124
- />
125
- ))
126
- MenubarItem.displayName = MenubarPrimitive.Item.displayName
127
-
128
- const MenubarCheckboxItem = React.forwardRef<
129
- React.ElementRef<typeof MenubarPrimitive.CheckboxItem>,
130
- React.ComponentPropsWithoutRef<typeof MenubarPrimitive.CheckboxItem>
131
- >(({ className, children, checked, ...props }, ref) => (
132
- <MenubarPrimitive.CheckboxItem
133
- ref={ref}
134
- className={cn(
135
- "relative flex cursor-default select-none items-center rounded-sm py-1.5 pl-8 pr-2 text-sm outline-none focus:bg-stone-100 focus:text-stone-900 data-[disabled]:pointer-events-none data-[disabled]:opacity-50 dark:focus:bg-stone-800 dark:focus:text-stone-50",
136
- className
137
- )}
138
- checked={checked}
139
- {...props}
140
- >
141
- <span className="absolute left-2 flex h-3.5 w-3.5 items-center justify-center">
142
- <MenubarPrimitive.ItemIndicator>
143
- <Check className="h-4 w-4" />
144
- </MenubarPrimitive.ItemIndicator>
145
- </span>
146
- {children}
147
- </MenubarPrimitive.CheckboxItem>
148
- ))
149
- MenubarCheckboxItem.displayName = MenubarPrimitive.CheckboxItem.displayName
150
-
151
- const MenubarRadioItem = React.forwardRef<
152
- React.ElementRef<typeof MenubarPrimitive.RadioItem>,
153
- React.ComponentPropsWithoutRef<typeof MenubarPrimitive.RadioItem>
154
- >(({ className, children, ...props }, ref) => (
155
- <MenubarPrimitive.RadioItem
156
- ref={ref}
157
- className={cn(
158
- "relative flex cursor-default select-none items-center rounded-sm py-1.5 pl-8 pr-2 text-sm outline-none focus:bg-stone-100 focus:text-stone-900 data-[disabled]:pointer-events-none data-[disabled]:opacity-50 dark:focus:bg-stone-800 dark:focus:text-stone-50",
159
- className
160
- )}
161
- {...props}
162
- >
163
- <span className="absolute left-2 flex h-3.5 w-3.5 items-center justify-center">
164
- <MenubarPrimitive.ItemIndicator>
165
- <Circle className="h-2 w-2 fill-current" />
166
- </MenubarPrimitive.ItemIndicator>
167
- </span>
168
- {children}
169
- </MenubarPrimitive.RadioItem>
170
- ))
171
- MenubarRadioItem.displayName = MenubarPrimitive.RadioItem.displayName
172
-
173
- const MenubarLabel = React.forwardRef<
174
- React.ElementRef<typeof MenubarPrimitive.Label>,
175
- React.ComponentPropsWithoutRef<typeof MenubarPrimitive.Label> & {
176
- inset?: boolean
177
- }
178
- >(({ className, inset, ...props }, ref) => (
179
- <MenubarPrimitive.Label
180
- ref={ref}
181
- className={cn(
182
- "px-2 py-1.5 text-sm font-semibold",
183
- inset && "pl-8",
184
- className
185
- )}
186
- {...props}
187
- />
188
- ))
189
- MenubarLabel.displayName = MenubarPrimitive.Label.displayName
190
-
191
- const MenubarSeparator = React.forwardRef<
192
- React.ElementRef<typeof MenubarPrimitive.Separator>,
193
- React.ComponentPropsWithoutRef<typeof MenubarPrimitive.Separator>
194
- >(({ className, ...props }, ref) => (
195
- <MenubarPrimitive.Separator
196
- ref={ref}
197
- className={cn("-mx-1 my-1 h-px bg-stone-100 dark:bg-stone-800", className)}
198
- {...props}
199
- />
200
- ))
201
- MenubarSeparator.displayName = MenubarPrimitive.Separator.displayName
202
-
203
- const MenubarShortcut = ({
204
- className,
205
- ...props
206
- }: React.HTMLAttributes<HTMLSpanElement>) => {
207
- return (
208
- <span
209
- className={cn(
210
- "ml-auto text-xs tracking-widest text-stone-500 dark:text-stone-400",
211
- className
212
- )}
213
- {...props}
214
- />
215
- )
216
- }
217
- MenubarShortcut.displayname = "MenubarShortcut"
218
-
219
- export {
220
- Menubar,
221
- MenubarMenu,
222
- MenubarTrigger,
223
- MenubarContent,
224
- MenubarItem,
225
- MenubarSeparator,
226
- MenubarLabel,
227
- MenubarCheckboxItem,
228
- MenubarRadioGroup,
229
- MenubarRadioItem,
230
- MenubarPortal,
231
- MenubarSubContent,
232
- MenubarSubTrigger,
233
- MenubarGroup,
234
- MenubarSub,
235
- MenubarShortcut,
236
- }
 
spaces/Benson/text-generation/Examples/Descargar Archivo Zip De Facebook.md DELETED
@@ -1,121 +0,0 @@
1
-
2
- <h1>Cómo descargar Instagram 4.1.2 en tu dispositivo Android</h1>
3
- <p>Instagram es una de las plataformas de redes sociales más populares del mundo, con más de mil millones de usuarios y millones de fotos y videos compartidos todos los días. Si usted es un usuario de Instagram, es posible que desee mantener su aplicación actualizada para disfrutar de las últimas características y mejoras. En este artículo, le mostraremos cómo descargar e instalar Instagram 4.1.2 en su dispositivo Android, que es la última versión a partir de junio de 2023. </p>
4
- <h2>descargar archivo zip de facebook</h2><br /><p><b><b>Download</b> &#10042;&#10042;&#10042; <a href="https://bltlly.com/2v6MTB">https://bltlly.com/2v6MTB</a></b></p><br /><br />
5
- <h2>¿Qué es Instagram y por qué debe usarlo</h2>
6
- <p>Instagram es una aplicación gratuita que te permite crear y compartir tus fotos, historias, carretes y videos con los amigos y seguidores que te importan. También puede conectarse con personas de todo el mundo que comparten sus intereses y pasiones. </p>
7
- <h3>Características de Instagram</h3>
8
- <p>Instagram tiene muchas características que lo hacen divertido y fácil de usar, como:</p>
9
- <ul>
10
- <li><b>Fotos y videos:</b> Puedes capturar y editar tus fotos y videos con filtros, pegatinas, emojis, texto y más. También puedes subir varias fotos y videos en una publicación o crear un collage con Layout.</li>
11
- <li><b>Historias:</b> Puedes compartir momentos de tu día con tus amigos y seguidores que desaparecen después de 24 horas. También puede agregar música, encuestas, cuestionarios, GIF y otras herramientas creativas para hacer sus historias más interactivas. </li>
12
- <li><b>Carretes:</b> Puedes crear y descubrir videos cortos de hasta 30 segundos de duración con música, efectos y herramientas de edición. Puede ver, como, comentar y compartir carretes en un espacio dedicado en la aplicación o en la pestaña Explorar. </li>
13
- <li><b>IGTV:</b> Puedes ver y subir vídeos más largos de tus creadores favoritos o crear tu propio canal. También puedes buscar vídeos por categorías, como entretenimiento, belleza, deportes, etc.</li>
14
-
15
- <li><b>Mensajería:</b> Puedes enviar mensajes, fotos, videos, notas de voz y más a tus amigos o grupos en Directo. También puedes chatear por video con hasta cuatro personas a la vez o unirte a chats grupales con hasta 32 personas. </li>
16
- <li><b>Explorar:</b> Puedes descubrir nuevos contenidos y cuentas que coincidan con tus intereses y preferencias. También puede ver lo que está en tendencia en su área o en todo el mundo. </li>
17
- <li><b>Compras:</b> Puedes comprar productos de tus marcas favoritas o negocios locales en Instagram. También puede crear su propia tienda o colección para mostrar sus productos o servicios. </li>
18
- </ul>
19
- <h3>Beneficios de usar Instagram</h3>
20
- <p>Instagram no es solo una aplicación divertida de usar, sino también una aplicación útil para muchos propósitos, como:</p>
21
- <p></p>
22
- <ul>
23
- <li><b>Socializar:</b> Puedes mantenerte en contacto con tus amigos y familiares, conocer gente nueva, unirte a comunidades y expresarte. </li>
24
- <li><b>Aprender:</b> Puedes aprender nuevas habilidades, aficiones, idiomas, culturas y más de expertos o entusiastas en Instagram.</li>
25
- <li><b>Inspirador:</b> Puedes inspirarte por las historias, logros, creatividad y positividad de otros usuarios en Instagram.</li>
26
- <li><b>Ent taining:</b> Puedes disfrutar viendo y creando contenido entretenido, como comedia, música, danza, arte, etc. en Instagram.</li>
27
- <li><b>Apoyo:</b> Puedes apoyar causas, movimientos, organizaciones benéficas o individuos que te importan en Instagram.</li>
28
- <li><b>Creciendo:</b> Puedes hacer crecer tu marca personal o profesional, llegar a nuevas audiencias y monetizar tu contenido en Instagram.</li>
29
- </ul>
30
- <h2>¿Qué es Instagram 4.1.2 y por qué debe descargarlo</h2>
31
- <p>Instagram 4.1.2 es la última versión de la aplicación que fue lanzada el 21 de junio de 2023. Es compatible con dispositivos Android con Android 4.0 o superior. Tiene un tamaño de archivo de 18,6 MB y requiere una conexión a Internet para su uso. </p>
32
- <h3>Nuevas características y mejoras en Instagram 4.1.2</h3>
33
-
34
- <ul>
35
- <li><b>Reels Remix:</b> Ahora puedes remezclar los tambores de otros usuarios añadiendo tu propio video junto al de ellos. También puede controlar el volumen del audio original y su audio por separado. </li>
36
- <li><b>Pegatinas Buscar:</b> Ahora puede buscar pegatinas por palabras clave o categorías en la cámara Historias. También puede ver las pegatinas más populares y guardar sus favoritos para su uso posterior. </li>
37
- <li><b>Subtítulos automáticos:</b> Ahora puede agregar subtítulos generados automáticamente a sus historias y carretes con un solo toque. También puede editar los subtítulos o cambiar la fuente y el color. </li>
38
- <li><b>Comprobación de seguridad:</b> Ahora puede acceder a una función de verificación de seguridad en el menú Configuración que le ayuda a mantener su cuenta segura. Le guiará a través de pasos como verificar su correo electrónico y número de teléfono, cambiar su contraseña y habilitar la autenticación de dos factores. </li>
39
- <li><b>Correcciones de errores y mejoras de rendimiento:</b> Instagram 4.1.2 también corrige algunos errores y mejora el rendimiento de la aplicación para una experiencia más fluida y rápida. </li>
40
- </ul>
41
- <h3>Cómo comprobar la versión actual de Instagram</h3>
42
- <p>Si no estás seguro de si tienes la última versión de Instagram o no, puedes comprobarlo siguiendo estos pasos:</p>
43
- <ol>
44
- <li>Abra la aplicación de Instagram en su dispositivo Android. </li>
45
- <li>Toque en el icono de perfil en la esquina inferior derecha de la pantalla. </li>
46
- <li>Toque en las tres líneas horizontales en la esquina superior derecha de la pantalla. </li>
47
- <li>Toque en Configuración en la parte inferior del menú. </li>
48
- <li>Desplácese hacia abajo hasta la parte inferior de la página Configuración y toque en Acerca de.</li>
49
- <li> Verá su número de versión actual de Instagram en la versión de la aplicación.</li>
50
- </ol>
51
- <h2>Cómo descargar e instalar Instagram 4.1.2 en su dispositivo Android</h2>
52
- <p>Si quieres descargar e instalar Instagram 4.1.2 en tu dispositivo Android, puedes seguir estos pasos:</p>
53
- <h3>Paso 1: Habilitar fuentes desconocidas en su dispositivo</h3>
54
-
55
- <ol>
56
- <li>Vaya al menú Configuración de su dispositivo y toque en Seguridad o Privacidad.</li>
57
- <li>Encontrar la opción que dice Fuentes desconocidas o Instalar aplicaciones desconocidas y alternar en. </li>
58
- <li> Aparecerá un mensaje de advertencia pidiéndole que confirme su acción. Toque en OK o Permitir que proceda. </li>
59
- </ol>
60
- <h3>Paso 2: Descargar el archivo APK de Instagram 4.1.2</h3>
61
- <p>El siguiente paso es descargar el archivo APK de Instagram 4.1.2, que es el formato de archivo para aplicaciones de Android. Para descargarlo, siga estos pasos:</p>
62
- <ol>
63
- <li>Abra su navegador web en su dispositivo y vaya a este enlace: (https://www.apkmirror.com/apk/instagram/instagram-instagram-instagram/instagram-instagram-4-2-release/instagram-4-1-android-apk-download/). </li>
64
- <li>Verá una página con información sobre Instagram 4.1.2 y un botón de descarga en la parte inferior. Toque en el botón de descarga para comenzar a descargar el archivo. </li>
65
- <li>Puede ver un mensaje de advertencia pidiéndole que confirme su descarga o permita el acceso a sus archivos. Toque en OK o Permitir continuar. </li>
66
- <li> El archivo se descargará en la carpeta de descargas de su dispositivo o en cualquier otra carpeta que haya elegido como su ubicación de descarga predeterminada. </li>
67
- </ol>
68
- <h3>Paso 3: Instalar el archivo APK de Instagram 4.1.2</h3>
69
- <p>Una vez que haya descargado el archivo APK de Instagram 4.1.2, puede instalarlo en su dispositivo siguiendo estos pasos:</p>
70
- <ol>
71
- <li>Ir al administrador de archivos de su dispositivo o aplicación de descargas y localizar el archivo APK Instagram 4.1.2 que ha descargado. </li>
72
- <li>Toque en el archivo para abrirlo e iniciar el proceso de instalación. </li>
73
- <li>Es posible que vea un mensaje de advertencia pidiéndole que confirme su instalación o permita el acceso a las características de su dispositivo. Toque en Instalar o Permitir continuar. </li>
74
- <li> La instalación tomará unos segundos y verá un mensaje diciendo que la aplicación se ha instalado correctamente. </li>
75
- </ol>
76
- <h3>Paso 4: Iniciar y disfrutar de Instagram 4.1.2</h3>
77
- <p>El paso final es iniciar y disfrutar de Instagram 4.1.2 en su dispositivo siguiendo estos pasos:</p>
78
-
79
- <li>Ir al cajón de aplicaciones de su dispositivo o pantalla de inicio y encontrar el icono de Instagram. </li>
80
- <li>Toque en el icono para abrir la aplicación e iniciar sesión con su nombre de usuario y contraseña o crear una nueva cuenta si no tiene una. </li>
81
- <li>Verás la pantalla de inicio de Instagram con tu feed, historias, carretes y más. También puede acceder a otras funciones pulsando en los iconos de la parte inferior de la pantalla. </li>
82
- <li>Ahora puedes disfrutar usando Instagram 4.1.2 con sus nuevas características y mejoras. </li>
83
- </ol>
84
- <h2>Conclusión</h2>
85
- <h3>Resumen del artículo</h3>
86
- <p>En este artículo, le hemos mostrado cómo descargar e instalar Instagram 4.1.2 en su dispositivo Android, que es la última versión de la aplicación a partir de junio de 2023. También hemos explicado qué es Instagram y por qué deberías usarlo, así como cuáles son las nuevas características y mejoras en Instagram 4.1.2. Esperamos que este artículo haya sido útil e informativo para usted. </p>
87
- <h3>Llamada a la acción</h3>
88
- <p>Si te gustó este artículo, por favor compártelo con tus amigos y seguidores en las redes sociales. También puedes dejarnos un comentario a continuación y hacernos saber lo que piensas sobre Instagram 4.1.2 o cualquier otra pregunta que tengas sobre Instagram. Nos encantaría saber de ti y responder a tus preguntas. ¡Gracias por leer y feliz Instagramming! </p>
89
- <h2>Preguntas frecuentes</h2>
90
- <p>Estas son algunas de las preguntas más frecuentes sobre Instagram 4.1.2:</p>
91
- <h4>Q: ¿Es seguro descargar e instalar Instagram 4.1.2? </h4>
92
- <p>A: Sí, Instagram 4.1.2 es seguro para descargar e instalar siempre y cuando lo obtenga de una fuente de confianza, como el enlace que hemos proporcionado en este artículo. Sin embargo, siempre debes tener cuidado al descargar aplicaciones de fuentes desconocidas y escanearlas en busca de virus o malware antes de instalarlas. </p>
93
- <h4>Q: ¿Instagram 4.1.2 está disponible en dispositivos iOS? </h4>
94
-
95
- <h4>Q: ¿Cómo puedo actualizar mi aplicación de Instagram a la última versión? </h4>
96
- <p>A: Si desea actualizar su aplicación de Instagram a la última versión disponible en la Google Play Store o la App Store, puede seguir estos pasos:</p>
97
- <ol>
98
- <li>Abra la Google Play Store o la App Store en su dispositivo y toque en el icono del menú en la esquina superior izquierda de la pantalla. </li>
99
- <li>Toque en Mis aplicaciones y juegos o actualizaciones y encontrar la aplicación de Instagram en la lista. </li>
100
- <li>Toque en Actualizar o Instalar para iniciar la actualización o instalación de la aplicación. </li>
101
- <li>Espere a que finalice la actualización o instalación y luego inicie la aplicación. </li>
102
- </ol>
103
- <h4>Q: ¿Cómo puedo desinstalar Instagram 4.1.2 desde mi dispositivo? </h4>
104
- <p>A: Si quieres desinstalar Instagram 4.1.2 desde tu dispositivo, puedes seguir estos pasos:</p>
105
- <ol>
106
- <li>Vaya al menú Configuración de su dispositivo y toque en Aplicaciones o Aplicaciones.</li>
107
- <li>Encuentra y toca la aplicación de Instagram en la lista de aplicaciones instaladas en tu dispositivo. </li>
108
- <li>Toque en Desinstalar o Quitar y confirme su acción. </li>
109
- <li> La aplicación se desinstalará de su dispositivo y verá un mensaje diciendo que se ha eliminado con éxito. </li>
110
- </ol>
111
- <h4> P: ¿Cuáles son algunos consejos y trucos para usar Instagram 4.1.2 mejor? </h4>
112
- <p>A: Aquí hay algunos consejos y trucos para usar Instagram 4.1.2 mejor:</p>
113
- <ul>
114
- <li><b>Usa hashtags y palabras clave:</b> Puedes usar hashtags y palabras clave para que tus publicaciones sean más visibles y relevantes para tu audiencia. También puedes seguir hashtags y palabras clave que te interesan y ver contenido relacionado en tu feed o explorar la pestaña. </li>
115
- <li><b>Usa filtros y efectos:</b> Puedes usar filtros y efectos para mejorar tus fotos y videos y hacerlos más atractivos y creativos. También puedes crear tus propios filtros y efectos con Spark AR Studio y compartirlos con otros usuarios. </li>
116
-
117
- <li><b>Use reels remix:</b> Puede utilizar reels remix para colaborar con otros usuarios y crear vídeos únicos y atractivos. También puede descubrir nuevos carretes remixes de otros usuarios y unirse a la tendencia. </li>
118
- <li><b>Usa subtítulos automáticos:</b> Puedes usar subtítulos automáticos para hacer tus historias y carretes más accesibles e inclusivos para personas sordas o con problemas de audición. También puede editar los subtítulos o cambiar el idioma si es necesario. </li>
119
- </ul></p> 64aa2da5cf<br />
120
- <br />
121
- <br />
 
spaces/Benson/text-generation/Examples/Descargar Cara Negra Vida Dura.md DELETED
@@ -1,57 +0,0 @@
1
- <br />
2
- <h1>Descargar Black Face Hard Life: Una canción que enfrenta el racismo y la injusticia</h1>
3
- <p>Blackface es una forma de maquillaje teatral utilizado predominantemente por personas no negras para retratar una caricatura de una persona negra. Es una práctica racista y ofensiva que tiene una larga y dolorosa historia. Blackface fue utilizado para burlarse y deshumanizar a los afroamericanos en espectáculos de juglares y otras formas de entretenimiento, así como para difundir estereotipos raciales y discriminación. Aunque la cara negra disminuyó en popularidad después del movimiento de derechos civiles, todavía persiste en algunos contextos y culturas, causando indignación y controversia. </p>
4
- <h2>descargar cara negra vida dura</h2><br /><p><b><b>Download Zip</b> &harr; <a href="https://bltlly.com/2v6JxL">https://bltlly.com/2v6JxL</a></b></p><br /><br />
5
- <p>Un ejemplo de un producto cultural que desafía el legado de blackface es la canción "Hard Life" de Blackface Naija, también conocida como Blackface, un dancehall nigeriano, ragga, cantante de reggae, compositor, productor, actor, activista, filántropo, político, empresario, empresario, inversor, inventor, innovador, visionario, líder, leyenda, icono, héroe, modelo a seguir, mentor, inspiración, influencer, pionero, pionero, trendsetter, cambiador de juego, mover-and-shaker. Es conocido por ser miembro fundador de la banda nigeriana Plantashun Boyz que formó en 2000 con Tuface (también conocido como 2face Idibia) y el músico Chibuzor Oji (más conocido como Faze). Después de que los Plantashun Boyz se separaran en 2004, Blackface lideró una carrera musical en solitario. Lanzó su álbum debut Ghetto Child en mayo de 2004 colaborando con varios artistas. El álbum contiene "Hard Life" con Alabai como uno de sus singles. </p>
6
-
7
- <h2>Los orígenes y la evolución de Blackface en los Estados Unidos y otros países</h2>
8
- <p>Blackface se originó en Europa en producciones teatrales centenarias como Otelo de Shakespeare. Luego comenzó en los Estados Unidos en el siglo XVIII cuando los inmigrantes europeos trajeron sus espectáculos de juglar. Estas eran actuaciones musicales que presentaban actores blancos con la piel oscurecida que retrataban personajes exagerados que degradaban y deshumanizaban a los afroamericanos.</p>
9
- <p>Los primeros espectáculos de trovadores imitan a africanos esclavizados en las plantaciones del sur que los representan como perezosos, ignorantes, supersticiosos, hipersexuales, criminales o cobardes. Algunos de los personajes más famosos fueron Jim Crow, un tonto bailarín rural con ropas andrajosas; la Mammy, una sirvienta con sobrepeso <p>leal y materna; y el Zip Coon, una urbanita dandy que hablaba en malapropismos y actuaba tontamente. Los espectáculos de trovadores también presentaban canciones, chistes, bailes y parodias que ridiculizaban la cultura negra, la religión, el idioma y la apariencia. </p>
10
- <p></p>
11
- <p>La popularidad de los espectáculos de trovadores alcanzó su punto máximo a mediados del siglo XIX, cuando se convirtieron en un fenómeno de entretenimiento nacional. Influyeron en otras formas de medios como la literatura, el cine, la radio y la televisión. También moldearon la opinión pública y la política sobre las relaciones raciales, reforzando las nociones de supremacía blanca e inferioridad negra. Justificaron la esclavitud, la segregación, el linchamiento y otras formas de violencia y opresión contra los afroamericanos.</p>
12
-
13
- <p>A principios del siglo XX, los espectáculos de trovadores comenzaron a declinar en popularidad debido a los cambios sociales y culturales. El auge del movimiento de derechos civiles, el Renacimiento de Harlem, la Gran Migración y otros factores contribuyeron a la aparición de nuevas formas de expresión y representación negra que desafiaron el legado de la trovadora. Sin embargo, la cara negra no desapareció completamente. Continuó apareciendo en algunas películas, dibujos animados, anuncios, juguetes, disfraces y otros productos. También se extendió a otros países como Gran Bretaña, Australia, Sudáfrica y Japón, donde se utilizó para retratar no solo a los afroamericanos, sino también a otras personas de color. </p>
14
- <h2>La letra y el significado de "Hard Life" por Blackface y Alabai</h2>
15
- <p>"Hard Life" es una canción que fue lanzada en 2004 por Blackface Naija con Alabai. Es uno de los sencillos del álbum debut de Blackface Ghetto Child. La canción es una fusión de dancehall, ragga y reggae que combina ritmos africanos con influencias jamaicanas. La canción tiene un estribillo pegadizo y un mensaje poderoso. </p>
16
- <p>Las letras de "Hard Life" describen las duras realidades y desafíos de vivir en Nigeria. La canción menciona varios problemas como pobreza, corrupción, violencia, enfermedad, hambre, sed, ignorancia, analfabetismo, desempleo, subdesarrollo, degradación ambiental, violaciones de derechos humanos, etc. La canción también critica al gobierno y a la sociedad por no abordar estos temas y por explotar y oprimir al pueblo. La canción acusa a los líderes de ser egoístas, codiciosos, deshonestos, incompetentes, insensibles, irresponsables, irresponsables, etc. La canción también denuncia a las potencias extranjeras que interfieren con los asuntos y recursos de Nigeria. </p>
17
-
18
- <p>"Hard Life" es una canción que tiene mucha relevancia e impacto para muchos nigerianos y africanos. La canción refleja las experiencias vividas y los sentimientos de millones de personas que enfrentan desafíos y luchas similares. La canción también resuena con la audiencia global que puede relacionarse con los temas de la canción. </p>
19
- <p>La canción desafía el legado de la cara negra y sus efectos negativos en la percepción y representación de los negros. La canción contrarresta los estereotipos e imágenes de los negros como perezosos, ignorantes, supersticiosos, hipersexuales, criminales o cobardes que fueron creados y propagados por blackface. La canción retrata a la gente negra como trabajadora, inteligente, espiritual, digna, valiente y heroica. La canción también muestra la rica y diversa cultura y patrimonio de Nigeria y África.</p>
20
- <p>La canción inspira y empodera a los oyentes para superar sus dificultades y luchar por sus derechos. La canción motiva a los oyentes a ser fuertes, valientes, decididos, optimistas, fieles y unidos. La canción también apela a Dios para la guía y protección. La canción también llama a la acción y el cambio del gobierno y la sociedad para abordar los problemas y mejorar las condiciones de la gente. La canción también aboga por la paz y la armonía entre los pueblos y las naciones. </p>
21
- <h1>Conclusión</h1>
22
- <p>Blackface es una práctica racista y ofensiva que tiene una larga y dolorosa historia. Se utilizó para burlarse y deshumanizar a los afroamericanos en espectáculos de juglares y otras formas de entretenimiento, así como para difundir los estereotipos raciales y la discriminación. También influyó en otros países y culturas donde se utilizó para retratar no solo a afroamericanos sino también a otras personas de color. </p>
23
-
24
- <p>La canción refleja la realidad y las experiencias de muchos nigerianos y africanos que sufren de pobreza, violencia, desigualdad, inestabilidad, inseguridad, enfermedad, hambre, sed, ignorancia, analfabetismo, desempleo, subdesarrollo, degradación ambiental, violaciones de los derechos humanos, etc. La canción desafía los estereotipos negativos y las imágenes de los negros creados por blackface. También inspira y empodera a los oyentes para superar sus dificultades y luchar por sus derechos. </p>
25
- <p>La canción es una pieza de arte poderosa y significativa que merece ser escuchada y apreciada por todos. Es una canción que enfrenta el racismo y la injusticia con valor y dignidad. Es una canción que celebra la cultura y el patrimonio con orgullo y alegría. Es una canción que ofrece esperanza y resistencia con fe y unidad. </p>
26
- <p>Si quieres escuchar "Hard Life" de Blackface Naija con Alabai, puedes descargarlo de varias plataformas en línea como YouTube, Spotify, Apple Music, etc. También puedes encontrar más información sobre Blackface Naija en su sitio web oficial, página de Facebook, cuenta de Twitter, Cuenta de Instagram, etc.</p>
27
- <h2>Preguntas frecuentes</h2>
28
- <p>Aquí hay algunas preguntas frecuentes sobre "Hard Life" por Blackface Naija con Alabai:</p>
29
- <table>
30
- <tr>
31
- <th>Pregunta</th>
32
- <th>Respuesta</th>
33
- </tr>
34
- <tr>
35
- <td>¿Cuándo se lanzó "Hard Life"? </td>
36
- <td>"Hard Life" fue lanzado en 2004 como uno de los sencillos del álbum debut en solitario de Blackface Ghetto Child.</td>
37
- </tr>
38
- <tr>
39
- <td>¿Quién es Alabai? </td>
40
- <td>Alabai es un cantante, compositor, rapero, productor y actor nigeriano que colaboró con Blackface en "Hard Life". También es conocido por sus canciones como "Ogbanje", "Voice Of God", "Mr Money", etc.</td>
41
- </tr>
42
- <tr>
43
- <td>¿Qué género es "Hard Life"? </td>
44
- <td>"Hard Life" es una fusión de dancehall, ragga y reggae que combina ritmos africanos con influencias jamaicanas. </td>
45
- </tr>
46
- <tr>
47
- <td>¿Cuáles son algunos de los problemas mencionados en "Hard Life"? </td>
48
-
49
- </tr>
50
- <tr>
51
- <td>¿Cuáles son algunos de los valores expresados en "Hard Life"? </td>
52
- <td>Algunos de los valores expresados en "Hard Life" son fuerza, valentía, determinación, optimismo, fe, unidad, cultura, herencia, dignidad, coraje, heroísmo, etc.</td></tr>
53
- </table>
54
- <p>Espero que hayas disfrutado leyendo este artículo y hayas aprendido algo nuevo sobre "Hard Life" de Blackface Naija con Alabai. Si tiene alguna pregunta, comentario o comentario, por favor siéntase libre de compartirlos conmigo. Me encantaría saber de usted. </p>
55
- <p>Gracias por tu tiempo y atención. ¡Que tengas un gran día! </p> 64aa2da5cf<br />
56
- <br />
57
- <br />
 
spaces/Billyosoro/ESRGAN/realesrgan/data/realesrgan_dataset.py DELETED
@@ -1,192 +0,0 @@
1
- import cv2
2
- import math
3
- import numpy as np
4
- import os
5
- import os.path as osp
6
- import random
7
- import time
8
- import torch
9
- from basicsr.data.degradations import circular_lowpass_kernel, random_mixed_kernels
10
- from basicsr.data.transforms import augment
11
- from basicsr.utils import FileClient, get_root_logger, imfrombytes, img2tensor
12
- from basicsr.utils.registry import DATASET_REGISTRY
13
- from torch.utils import data as data
14
-
15
-
16
- @DATASET_REGISTRY.register()
17
- class RealESRGANDataset(data.Dataset):
18
- """Dataset used for Real-ESRGAN model:
19
- Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data.
20
-
21
- It loads gt (Ground-Truth) images, and augments them.
22
- It also generates blur kernels and sinc kernels for generating low-quality images.
23
- Note that the low-quality images are processed in tensors on GPUS for faster processing.
24
-
25
- Args:
26
- opt (dict): Config for train datasets. It contains the following keys:
27
- dataroot_gt (str): Data root path for gt.
28
- meta_info (str): Path for meta information file.
29
- io_backend (dict): IO backend type and other kwarg.
30
- use_hflip (bool): Use horizontal flips.
31
- use_rot (bool): Use rotation (use vertical flip and transposing h and w for implementation).
32
- Please see more options in the codes.
33
- """
34
-
35
- def __init__(self, opt):
36
- super(RealESRGANDataset, self).__init__()
37
- self.opt = opt
38
- self.file_client = None
39
- self.io_backend_opt = opt['io_backend']
40
- self.gt_folder = opt['dataroot_gt']
41
-
42
- # file client (lmdb io backend)
43
- if self.io_backend_opt['type'] == 'lmdb':
44
- self.io_backend_opt['db_paths'] = [self.gt_folder]
45
- self.io_backend_opt['client_keys'] = ['gt']
46
- if not self.gt_folder.endswith('.lmdb'):
47
- raise ValueError(f"'dataroot_gt' should end with '.lmdb', but received {self.gt_folder}")
48
- with open(osp.join(self.gt_folder, 'meta_info.txt')) as fin:
49
- self.paths = [line.split('.')[0] for line in fin]
50
- else:
51
- # disk backend with meta_info
52
- # Each line in the meta_info describes the relative path to an image
53
- with open(self.opt['meta_info']) as fin:
54
- paths = [line.strip().split(' ')[0] for line in fin]
55
- self.paths = [os.path.join(self.gt_folder, v) for v in paths]
56
-
57
- # blur settings for the first degradation
58
- self.blur_kernel_size = opt['blur_kernel_size']
59
- self.kernel_list = opt['kernel_list']
60
- self.kernel_prob = opt['kernel_prob'] # a list for each kernel probability
61
- self.blur_sigma = opt['blur_sigma']
62
- self.betag_range = opt['betag_range'] # betag used in generalized Gaussian blur kernels
63
- self.betap_range = opt['betap_range'] # betap used in plateau blur kernels
64
- self.sinc_prob = opt['sinc_prob'] # the probability for sinc filters
65
-
66
- # blur settings for the second degradation
67
- self.blur_kernel_size2 = opt['blur_kernel_size2']
68
- self.kernel_list2 = opt['kernel_list2']
69
- self.kernel_prob2 = opt['kernel_prob2']
70
- self.blur_sigma2 = opt['blur_sigma2']
71
- self.betag_range2 = opt['betag_range2']
72
- self.betap_range2 = opt['betap_range2']
73
- self.sinc_prob2 = opt['sinc_prob2']
74
-
75
- # a final sinc filter
76
- self.final_sinc_prob = opt['final_sinc_prob']
77
-
78
- self.kernel_range = [2 * v + 1 for v in range(3, 11)] # kernel size ranges from 7 to 21
79
- # TODO: kernel range is now hard-coded, should be in the configure file
80
- self.pulse_tensor = torch.zeros(21, 21).float() # convolving with pulse tensor brings no blurry effect
81
- self.pulse_tensor[10, 10] = 1
82
-
83
- def __getitem__(self, index):
84
- if self.file_client is None:
85
- self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt)
86
-
87
- # -------------------------------- Load gt images -------------------------------- #
88
- # Shape: (h, w, c); channel order: BGR; image range: [0, 1], float32.
89
- gt_path = self.paths[index]
90
- # avoid errors caused by high latency in reading files
91
- retry = 3
92
- while retry > 0:
93
- try:
94
- img_bytes = self.file_client.get(gt_path, 'gt')
95
- except (IOError, OSError) as e:
96
- logger = get_root_logger()
97
- logger.warn(f'File client error: {e}, remaining retry times: {retry - 1}')
98
- # change another file to read
99
- index = random.randint(0, self.__len__())
100
- gt_path = self.paths[index]
101
- time.sleep(1) # sleep 1s for occasional server congestion
102
- else:
103
- break
104
- finally:
105
- retry -= 1
106
- img_gt = imfrombytes(img_bytes, float32=True)
107
-
108
- # -------------------- Do augmentation for training: flip, rotation -------------------- #
109
- img_gt = augment(img_gt, self.opt['use_hflip'], self.opt['use_rot'])
110
-
111
- # crop or pad to 400
112
- # TODO: 400 is hard-coded. You may change it accordingly
113
- h, w = img_gt.shape[0:2]
114
- crop_pad_size = 400
115
- # pad
116
- if h < crop_pad_size or w < crop_pad_size:
117
- pad_h = max(0, crop_pad_size - h)
118
- pad_w = max(0, crop_pad_size - w)
119
- img_gt = cv2.copyMakeBorder(img_gt, 0, pad_h, 0, pad_w, cv2.BORDER_REFLECT_101)
120
- # crop
121
- if img_gt.shape[0] > crop_pad_size or img_gt.shape[1] > crop_pad_size:
122
- h, w = img_gt.shape[0:2]
123
- # randomly choose top and left coordinates
124
- top = random.randint(0, h - crop_pad_size)
125
- left = random.randint(0, w - crop_pad_size)
126
- img_gt = img_gt[top:top + crop_pad_size, left:left + crop_pad_size, ...]
127
-
128
- # ------------------------ Generate kernels (used in the first degradation) ------------------------ #
129
- kernel_size = random.choice(self.kernel_range)
130
- if np.random.uniform() < self.opt['sinc_prob']:
131
- # this sinc filter setting is for kernels ranging from [7, 21]
132
- if kernel_size < 13:
133
- omega_c = np.random.uniform(np.pi / 3, np.pi)
134
- else:
135
- omega_c = np.random.uniform(np.pi / 5, np.pi)
136
- kernel = circular_lowpass_kernel(omega_c, kernel_size, pad_to=False)
137
- else:
138
- kernel = random_mixed_kernels(
139
- self.kernel_list,
140
- self.kernel_prob,
141
- kernel_size,
142
- self.blur_sigma,
143
- self.blur_sigma, [-math.pi, math.pi],
144
- self.betag_range,
145
- self.betap_range,
146
- noise_range=None)
147
- # pad kernel
148
- pad_size = (21 - kernel_size) // 2
149
- kernel = np.pad(kernel, ((pad_size, pad_size), (pad_size, pad_size)))
150
-
151
- # ------------------------ Generate kernels (used in the second degradation) ------------------------ #
152
- kernel_size = random.choice(self.kernel_range)
153
- if np.random.uniform() < self.opt['sinc_prob2']:
154
- if kernel_size < 13:
155
- omega_c = np.random.uniform(np.pi / 3, np.pi)
156
- else:
157
- omega_c = np.random.uniform(np.pi / 5, np.pi)
158
- kernel2 = circular_lowpass_kernel(omega_c, kernel_size, pad_to=False)
159
- else:
160
- kernel2 = random_mixed_kernels(
161
- self.kernel_list2,
162
- self.kernel_prob2,
163
- kernel_size,
164
- self.blur_sigma2,
165
- self.blur_sigma2, [-math.pi, math.pi],
166
- self.betag_range2,
167
- self.betap_range2,
168
- noise_range=None)
169
-
170
- # pad kernel
171
- pad_size = (21 - kernel_size) // 2
172
- kernel2 = np.pad(kernel2, ((pad_size, pad_size), (pad_size, pad_size)))
173
-
174
- # ------------------------------------- the final sinc kernel ------------------------------------- #
175
- if np.random.uniform() < self.opt['final_sinc_prob']:
176
- kernel_size = random.choice(self.kernel_range)
177
- omega_c = np.random.uniform(np.pi / 3, np.pi)
178
- sinc_kernel = circular_lowpass_kernel(omega_c, kernel_size, pad_to=21)
179
- sinc_kernel = torch.FloatTensor(sinc_kernel)
180
- else:
181
- sinc_kernel = self.pulse_tensor
182
-
183
- # BGR to RGB, HWC to CHW, numpy to tensor
184
- img_gt = img2tensor([img_gt], bgr2rgb=True, float32=True)[0]
185
- kernel = torch.FloatTensor(kernel)
186
- kernel2 = torch.FloatTensor(kernel2)
187
-
188
- return_d = {'gt': img_gt, 'kernel1': kernel, 'kernel2': kernel2, 'sinc_kernel': sinc_kernel, 'gt_path': gt_path}
189
- return return_d
190
-
191
- def __len__(self):
192
- return len(self.paths)
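For reference, a minimal sketch of how the dataset class above can be instantiated. Every key in the dict is one the class actually reads; the values and paths are illustrative assumptions rather than the repository's real training configuration, and the imports at the top of the file (cv2, torch, basicsr) are assumed to be installed.

# Illustrative sketch only -- placeholder paths and assumed option values.
opt = {
    'dataroot_gt': 'datasets/HR',                # folder of ground-truth images (placeholder)
    'meta_info': 'datasets/meta_info.txt',       # text file listing relative image paths (placeholder)
    'io_backend': {'type': 'disk'},
    'use_hflip': True,
    'use_rot': False,
    # first-stage blur/sinc settings
    'blur_kernel_size': 21,
    'kernel_list': ['iso', 'aniso'],
    'kernel_prob': [0.5, 0.5],
    'blur_sigma': [0.2, 3.0],
    'betag_range': [0.5, 4.0],
    'betap_range': [1.0, 2.0],
    'sinc_prob': 0.1,
    # second-stage blur/sinc settings
    'blur_kernel_size2': 21,
    'kernel_list2': ['iso', 'aniso'],
    'kernel_prob2': [0.5, 0.5],
    'blur_sigma2': [0.2, 1.5],
    'betag_range2': [0.5, 4.0],
    'betap_range2': [1.0, 2.0],
    'sinc_prob2': 0.1,
    'final_sinc_prob': 0.8,
}
dataset = RealESRGANDataset(opt)
sample = dataset[0]   # dict with 'gt', 'kernel1', 'kernel2', 'sinc_kernel', 'gt_path'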
 
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/evaluator.py DELETED
@@ -1,156 +0,0 @@
1
- # -*- coding: utf-8 -*-
2
- # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
3
-
4
- import contextlib
5
- import copy
6
- import io
7
- import itertools
8
- import json
9
- import logging
10
- import os
11
- from collections import OrderedDict
12
- import torch
13
- from pycocotools.coco import COCO
14
-
15
- from detectron2.data import MetadataCatalog
16
- from detectron2.evaluation import DatasetEvaluator
17
- from detectron2.structures import BoxMode
18
- from detectron2.utils.comm import all_gather, is_main_process, synchronize
19
- from detectron2.utils.logger import create_small_table
20
-
21
- from .densepose_coco_evaluation import DensePoseCocoEval, DensePoseEvalMode
22
-
23
-
24
- class DensePoseCOCOEvaluator(DatasetEvaluator):
25
- def __init__(self, dataset_name, distributed, output_dir=None):
26
- self._distributed = distributed
27
- self._output_dir = output_dir
28
-
29
- self._cpu_device = torch.device("cpu")
30
- self._logger = logging.getLogger(__name__)
31
-
32
- self._metadata = MetadataCatalog.get(dataset_name)
33
- with contextlib.redirect_stdout(io.StringIO()):
34
- self._coco_api = COCO(self._metadata.json_file)
35
-
36
- def reset(self):
37
- self._predictions = []
38
-
39
- def process(self, inputs, outputs):
40
- """
41
- Args:
42
- inputs: the inputs to a COCO model (e.g., GeneralizedRCNN).
43
- It is a list of dict. Each dict corresponds to an image and
44
- contains keys like "height", "width", "file_name", "image_id".
45
- outputs: the outputs of a COCO model. It is a list of dicts with key
46
- "instances" that contains :class:`Instances`.
47
- The :class:`Instances` object needs to have `densepose` field.
48
- """
49
- for input, output in zip(inputs, outputs):
50
- instances = output["instances"].to(self._cpu_device)
51
-
52
- boxes = instances.pred_boxes.tensor.clone()
53
- boxes = BoxMode.convert(boxes, BoxMode.XYXY_ABS, BoxMode.XYWH_ABS)
54
- instances.pred_densepose = instances.pred_densepose.to_result(boxes)
55
-
56
- json_results = prediction_to_json(instances, input["image_id"])
57
- self._predictions.extend(json_results)
58
-
59
- def evaluate(self):
60
- if self._distributed:
61
- synchronize()
62
- predictions = all_gather(self._predictions)
63
- predictions = list(itertools.chain(*predictions))
64
- if not is_main_process():
65
- return
66
- else:
67
- predictions = self._predictions
68
-
69
- return copy.deepcopy(self._eval_predictions(predictions))
70
-
71
- def _eval_predictions(self, predictions):
72
- """
73
- Evaluate predictions on densepose.
74
- Return results with the metrics of the tasks.
75
- """
76
- self._logger.info("Preparing results for COCO format ...")
77
-
78
- if self._output_dir:
79
- file_path = os.path.join(self._output_dir, "coco_densepose_results.json")
80
- with open(file_path, "w") as f:
81
- json.dump(predictions, f)
82
- f.flush()
83
- os.fsync(f.fileno())
84
-
85
- self._logger.info("Evaluating predictions ...")
86
- res = OrderedDict()
87
- results_gps, results_gpsm = _evaluate_predictions_on_coco(self._coco_api, predictions)
88
- res["densepose_gps"] = results_gps
89
- res["densepose_gpsm"] = results_gpsm
90
- return res
91
-
92
-
93
- def prediction_to_json(instances, img_id):
94
- """
95
- Args:
96
- instances (Instances): the output of the model
97
- img_id (str): the image id in COCO
98
-
99
- Returns:
100
- list[dict]: the results in densepose evaluation format
101
- """
102
- scores = instances.scores.tolist()
103
-
104
- results = []
105
- for k in range(len(instances)):
106
- densepose = instances.pred_densepose[k]
107
- result = {
108
- "image_id": img_id,
109
- "category_id": 1, # densepose only has one class
110
- "bbox": densepose[1],
111
- "score": scores[k],
112
- "densepose": densepose,
113
- }
114
- results.append(result)
115
- return results
116
-
117
-
118
- def _evaluate_predictions_on_coco(coco_gt, coco_results):
119
- metrics = ["AP", "AP50", "AP75", "APm", "APl"]
120
-
121
- logger = logging.getLogger(__name__)
122
-
123
- if len(coco_results) == 0: # cocoapi does not handle empty results very well
124
- logger.warning("No predictions from the model! Set scores to -1")
125
- results_gps = {metric: -1 for metric in metrics}
126
- results_gpsm = {metric: -1 for metric in metrics}
127
- return results_gps, results_gpsm
128
-
129
- coco_dt = coco_gt.loadRes(coco_results)
130
- results_gps = _evaluate_predictions_on_coco_gps(coco_gt, coco_dt, metrics)
131
- logger.info(
132
- "Evaluation results for densepose, GPS metric: \n" + create_small_table(results_gps)
133
- )
134
- results_gpsm = _evaluate_predictions_on_coco_gpsm(coco_gt, coco_dt, metrics)
135
- logger.info(
136
- "Evaluation results for densepose, GPSm metric: \n" + create_small_table(results_gpsm)
137
- )
138
- return results_gps, results_gpsm
139
-
140
-
141
- def _evaluate_predictions_on_coco_gps(coco_gt, coco_dt, metrics):
142
- coco_eval = DensePoseCocoEval(coco_gt, coco_dt, "densepose", dpEvalMode=DensePoseEvalMode.GPS)
143
- coco_eval.evaluate()
144
- coco_eval.accumulate()
145
- coco_eval.summarize()
146
- results = {metric: float(coco_eval.stats[idx] * 100) for idx, metric in enumerate(metrics)}
147
- return results
148
-
149
-
150
- def _evaluate_predictions_on_coco_gpsm(coco_gt, coco_dt, metrics):
151
- coco_eval = DensePoseCocoEval(coco_gt, coco_dt, "densepose", dpEvalMode=DensePoseEvalMode.GPSM)
152
- coco_eval.evaluate()
153
- coco_eval.accumulate()
154
- coco_eval.summarize()
155
- results = {metric: float(coco_eval.stats[idx] * 100) for idx, metric in enumerate(metrics)}
156
- return results
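A minimal sketch of the loop this evaluator is designed to plug into, following detectron2's DatasetEvaluator protocol (reset, process per batch, evaluate at the end). The config object, dataset name and checkpoint are assumptions; only the reset/process/evaluate calls come from the class above.

import torch
from detectron2.data import build_detection_test_loader
from detectron2.modeling import build_model

model = build_model(cfg)   # cfg: assumed detectron2 CfgNode for a DensePose R-CNN model
model.eval()
data_loader = build_detection_test_loader(cfg, "densepose_coco_2014_minival")  # dataset name is an assumption

evaluator = DensePoseCOCOEvaluator("densepose_coco_2014_minival", distributed=False, output_dir="./output")
evaluator.reset()
with torch.no_grad():
    for inputs in data_loader:              # list[dict] with "image", "image_id", "height", "width"
        outputs = model(inputs)             # list[dict], each holding an "instances" field
        evaluator.process(inputs, outputs)
results = evaluator.evaluate()              # OrderedDict with "densepose_gps" and "densepose_gpsm"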
 
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/PointRend/point_rend/coarse_mask_head.py DELETED
@@ -1,92 +0,0 @@
1
- # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2
- import fvcore.nn.weight_init as weight_init
3
- import torch
4
- from torch import nn
5
- from torch.nn import functional as F
6
-
7
- from detectron2.layers import Conv2d, ShapeSpec
8
- from detectron2.modeling import ROI_MASK_HEAD_REGISTRY
9
-
10
-
11
- @ROI_MASK_HEAD_REGISTRY.register()
12
- class CoarseMaskHead(nn.Module):
13
- """
14
- A mask head with fully connected layers. Given pooled features it first reduces channels and
15
- spatial dimensions with conv layers and then uses FC layers to predict coarse masks analogously
16
- to the standard box head.
17
- """
18
-
19
- def __init__(self, cfg, input_shape: ShapeSpec):
20
- """
21
- The following attributes are parsed from config:
22
- conv_dim: the output dimension of the conv layers
23
- fc_dim: the feature dimension of the FC layers
24
- num_fc: the number of FC layers
25
- output_side_resolution: side resolution of the output square mask prediction
26
- """
27
- super(CoarseMaskHead, self).__init__()
28
-
29
- # fmt: off
30
- self.num_classes = cfg.MODEL.ROI_HEADS.NUM_CLASSES
31
- conv_dim = cfg.MODEL.ROI_MASK_HEAD.CONV_DIM
32
- self.fc_dim = cfg.MODEL.ROI_MASK_HEAD.FC_DIM
33
- num_fc = cfg.MODEL.ROI_MASK_HEAD.NUM_FC
34
- self.output_side_resolution = cfg.MODEL.ROI_MASK_HEAD.OUTPUT_SIDE_RESOLUTION
35
- self.input_channels = input_shape.channels
36
- self.input_h = input_shape.height
37
- self.input_w = input_shape.width
38
- # fmt: on
39
-
40
- self.conv_layers = []
41
- if self.input_channels > conv_dim:
42
- self.reduce_channel_dim_conv = Conv2d(
43
- self.input_channels,
44
- conv_dim,
45
- kernel_size=1,
46
- stride=1,
47
- padding=0,
48
- bias=True,
49
- activation=F.relu,
50
- )
51
- self.conv_layers.append(self.reduce_channel_dim_conv)
52
-
53
- self.reduce_spatial_dim_conv = Conv2d(
54
- conv_dim, conv_dim, kernel_size=2, stride=2, padding=0, bias=True, activation=F.relu
55
- )
56
- self.conv_layers.append(self.reduce_spatial_dim_conv)
57
-
58
- input_dim = conv_dim * self.input_h * self.input_w
59
- input_dim //= 4
60
-
61
- self.fcs = []
62
- for k in range(num_fc):
63
- fc = nn.Linear(input_dim, self.fc_dim)
64
- self.add_module("coarse_mask_fc{}".format(k + 1), fc)
65
- self.fcs.append(fc)
66
- input_dim = self.fc_dim
67
-
68
- output_dim = self.num_classes * self.output_side_resolution * self.output_side_resolution
69
-
70
- self.prediction = nn.Linear(self.fc_dim, output_dim)
71
- # use normal distribution initialization for mask prediction layer
72
- nn.init.normal_(self.prediction.weight, std=0.001)
73
- nn.init.constant_(self.prediction.bias, 0)
74
-
75
- for layer in self.conv_layers:
76
- weight_init.c2_msra_fill(layer)
77
- for layer in self.fcs:
78
- weight_init.c2_xavier_fill(layer)
79
-
80
- def forward(self, x):
81
- # unlike BaseMaskRCNNHead, this head only outputs intermediate
82
- # features, because the features will be used later by PointHead.
83
- N = x.shape[0]
84
- x = x.view(N, self.input_channels, self.input_h, self.input_w)
85
- for layer in self.conv_layers:
86
- x = layer(x)
87
- x = torch.flatten(x, start_dim=1)
88
- for layer in self.fcs:
89
- x = F.relu(layer(x))
90
- return self.prediction(x).view(
91
- N, self.num_classes, self.output_side_resolution, self.output_side_resolution
92
- )
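The head above only reduces, flattens and projects pooled ROI features; the standalone snippet below reproduces the same shape flow with plain torch layers so the tensor sizes are easy to follow. All sizes are illustrative assumptions (the real values come from the config), and the 1x1 channel-reduction conv is only created by the class when input_channels > conv_dim.

import torch
from torch import nn
from torch.nn import functional as F

N, in_ch, side = 2, 256, 14                    # pooled ROI features: (N, 256, 14, 14) -- assumed sizes
conv_dim, fc_dim, num_classes, out_res = 256, 1024, 80, 7

x = torch.randn(N, in_ch, side, side)
x = F.relu(nn.Conv2d(in_ch, conv_dim, kernel_size=1)(x))               # channel reduction (used when in_ch > conv_dim)
x = F.relu(nn.Conv2d(conv_dim, conv_dim, kernel_size=2, stride=2)(x))  # spatial reduction -> (N, conv_dim, 7, 7)
x = torch.flatten(x, start_dim=1)                                      # (N, conv_dim * 7 * 7)
x = F.relu(nn.Linear(conv_dim * (side // 2) ** 2, fc_dim)(x))          # one of the num_fc hidden layers
logits = nn.Linear(fc_dim, num_classes * out_res * out_res)(x)         # coarse mask logits
print(logits.view(N, num_classes, out_res, out_res).shape)             # torch.Size([2, 80, 7, 7])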
 
spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/execution_policy.h DELETED
@@ -1,76 +0,0 @@
1
- /*
2
- * Copyright 2008-2013 NVIDIA Corporation
3
- *
4
- * Licensed under the Apache License, Version 2.0 (the "License");
5
- * you may not use this file except in compliance with the License.
6
- * You may obtain a copy of the License at
7
- *
8
- * http://www.apache.org/licenses/LICENSE-2.0
9
- *
10
- * Unless required by applicable law or agreed to in writing, software
11
- * distributed under the License is distributed on an "AS IS" BASIS,
12
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- * See the License for the specific language governing permissions and
14
- * limitations under the License.
15
- */
16
-
17
- #pragma once
18
-
19
- #include <thrust/detail/config.h>
20
- #include <thrust/detail/execution_policy.h>
21
-
22
- namespace thrust
23
- {
24
- namespace system
25
- {
26
- namespace detail
27
- {
28
- namespace sequential
29
- {
30
-
31
-
32
- // this awkward sequence of definitions arises
33
- // from the desire both for tag to derive
34
- // from execution_policy and for execution_policy
35
- // to convert to tag (when execution_policy is not
36
- // an ancestor of tag)
37
-
38
- // forward declaration of tag
39
- struct tag;
40
-
41
- // forward declaration of execution_policy
42
- template<typename> struct execution_policy;
43
-
44
- // specialize execution_policy for tag
45
- template<>
46
- struct execution_policy<tag>
47
- : thrust::execution_policy<tag>
48
- {};
49
-
50
- // tag's definition comes before the generic definition of execution_policy
51
- struct tag : execution_policy<tag>
52
- {
53
- __host__ __device__ THRUST_CONSTEXPR tag() {}
54
- };
55
-
56
- // allow conversion to tag when it is not a successor
57
- template<typename Derived>
58
- struct execution_policy
59
- : thrust::execution_policy<Derived>
60
- {
61
- // allow conversion to tag
62
- inline operator tag () const
63
- {
64
- return tag();
65
- }
66
- };
67
-
68
-
69
- THRUST_INLINE_CONSTANT tag seq;
70
-
71
-
72
- } // end sequential
73
- } // end detail
74
- } // end system
75
- } // end thrust
76
-
 
spaces/CVPR/WALT/mmdet/datasets/samplers/__init__.py DELETED
@@ -1,4 +0,0 @@
1
- from .distributed_sampler import DistributedSampler
2
- from .group_sampler import DistributedGroupSampler, GroupSampler
3
-
4
- __all__ = ['DistributedSampler', 'DistributedGroupSampler', 'GroupSampler']
 
spaces/CVPR/WALT/mmdet/models/detectors/grid_rcnn.py DELETED
@@ -1,29 +0,0 @@
1
- from ..builder import DETECTORS
2
- from .two_stage import TwoStageDetector
3
-
4
-
5
- @DETECTORS.register_module()
6
- class GridRCNN(TwoStageDetector):
7
- """Grid R-CNN.
8
-
9
- This detector is the implementation of:
10
- - Grid R-CNN (https://arxiv.org/abs/1811.12030)
11
- - Grid R-CNN Plus: Faster and Better (https://arxiv.org/abs/1906.05688)
12
- """
13
-
14
- def __init__(self,
15
- backbone,
16
- rpn_head,
17
- roi_head,
18
- train_cfg,
19
- test_cfg,
20
- neck=None,
21
- pretrained=None):
22
- super(GridRCNN, self).__init__(
23
- backbone=backbone,
24
- neck=neck,
25
- rpn_head=rpn_head,
26
- roi_head=roi_head,
27
- train_cfg=train_cfg,
28
- test_cfg=test_cfg,
29
- pretrained=pretrained)
 
spaces/CVPR/regionclip-demo/detectron2/modeling/meta_arch/rcnn.py DELETED
@@ -1,373 +0,0 @@
1
- # Copyright (c) Facebook, Inc. and its affiliates.
2
- import logging
3
- import numpy as np
4
- from typing import Dict, List, Optional, Tuple
5
- from numpy.lib import pad
6
- import torch
7
- from torch import nn
8
- from torch.nn import functional as F
9
- from random import randint
10
-
11
- from detectron2.config import configurable
12
- from detectron2.data.detection_utils import convert_image_to_rgb
13
- from detectron2.structures import ImageList, Instances, Boxes
14
- from detectron2.utils.events import get_event_storage
15
- from detectron2.utils.logger import log_first_n
16
-
17
- from ..backbone import Backbone, build_backbone
18
- from ..postprocessing import detector_postprocess
19
- from ..proposal_generator import build_proposal_generator
20
- from ..roi_heads import build_roi_heads
21
- from .build import META_ARCH_REGISTRY
22
-
23
- __all__ = ["GeneralizedRCNN", "ProposalNetwork"]
24
-
25
- @META_ARCH_REGISTRY.register()
26
- class GeneralizedRCNN(nn.Module):
27
- """
28
- Generalized R-CNN. Any models that contains the following three components:
29
- 1. Per-image feature extraction (aka backbone)
30
- 2. Region proposal generation
31
- 3. Per-region feature extraction and prediction
32
- """
33
-
34
- @configurable
35
- def __init__(
36
- self,
37
- *,
38
- backbone: Backbone,
39
- proposal_generator: nn.Module,
40
- roi_heads: nn.Module,
41
- pixel_mean: Tuple[float],
42
- pixel_std: Tuple[float],
43
- input_format: Optional[str] = None,
44
- vis_period: int = 0,
45
- use_clip_c4: False,
46
- use_clip_attpool: False,
47
- ):
48
- """
49
- Args:
50
- backbone: a backbone module, must follow detectron2's backbone interface
51
- proposal_generator: a module that generates proposals using backbone features
52
- roi_heads: a ROI head that performs per-region computation
53
- pixel_mean, pixel_std: list or tuple with #channels element, representing
54
- the per-channel mean and std to be used to normalize the input image
55
- input_format: describe the meaning of channels of input. Needed by visualization
56
- vis_period: the period to run visualization. Set to 0 to disable.
57
- """
58
- super().__init__()
59
- self.backbone = backbone
60
- self.proposal_generator = proposal_generator
61
- self.roi_heads = roi_heads
62
-
63
- self.input_format = input_format
64
- self.vis_period = vis_period
65
- if vis_period > 0:
66
- assert input_format is not None, "input_format is required for visualization!"
67
-
68
- self.register_buffer("pixel_mean", torch.tensor(pixel_mean).view(-1, 1, 1), False)
69
- self.register_buffer("pixel_std", torch.tensor(pixel_std).view(-1, 1, 1), False)
70
- assert (
71
- self.pixel_mean.shape == self.pixel_std.shape
72
- ), f"{self.pixel_mean} and {self.pixel_std} have different shapes!"
73
- if np.sum(pixel_mean) < 3.0: # convert pixel values to range [0.0, 1.0] by dividing by 255.0
74
- assert input_format == 'RGB'
75
- self.div_pixel = True
76
- else: # default setting
77
- self.div_pixel = False
78
- self.use_clip_c4 = use_clip_c4 # if True, use C4 mode where roi_head uses the last resnet layer from backbone
79
- self.use_clip_attpool = use_clip_attpool # if True (C4+text_emb_as_classifier), use att_pool to replace default mean pool
80
-
81
- @classmethod
82
- def from_config(cls, cfg):
83
- backbone = build_backbone(cfg)
84
- return {
85
- "backbone": backbone,
86
- "proposal_generator": build_proposal_generator(cfg, backbone.output_shape()),
87
- "roi_heads": build_roi_heads(cfg, backbone.output_shape()),
88
- "input_format": cfg.INPUT.FORMAT,
89
- "vis_period": cfg.VIS_PERIOD,
90
- "pixel_mean": cfg.MODEL.PIXEL_MEAN,
91
- "pixel_std": cfg.MODEL.PIXEL_STD,
92
- "use_clip_c4": cfg.MODEL.BACKBONE.NAME == "build_clip_resnet_backbone",
93
- "use_clip_attpool": cfg.MODEL.ROI_HEADS.NAME == 'CLIPRes5ROIHeads' and cfg.MODEL.CLIP.USE_TEXT_EMB_CLASSIFIER,
94
- }
95
-
96
- @property
97
- def device(self):
98
- return self.pixel_mean.device
99
-
100
- def visualize_training(self, batched_inputs, proposals):
101
- """
102
- A function used to visualize images and proposals. It shows ground truth
103
- bounding boxes on the original image and up to 20 top-scoring predicted
104
- object proposals on the original image. Users can implement different
105
- visualization functions for different models.
106
-
107
- Args:
108
- batched_inputs (list): a list that contains input to the model.
109
- proposals (list): a list that contains predicted proposals. Both
110
- batched_inputs and proposals should have the same length.
111
- """
112
- from detectron2.utils.visualizer import Visualizer
113
-
114
- storage = get_event_storage()
115
- max_vis_prop = 20
116
-
117
- for input, prop in zip(batched_inputs, proposals):
118
- img = input["image"]
119
- img = convert_image_to_rgb(img.permute(1, 2, 0), self.input_format)
120
- v_gt = Visualizer(img, None)
121
- v_gt = v_gt.overlay_instances(boxes=input["instances"].gt_boxes)
122
- anno_img = v_gt.get_image()
123
- box_size = min(len(prop.proposal_boxes), max_vis_prop)
124
- v_pred = Visualizer(img, None)
125
- v_pred = v_pred.overlay_instances(
126
- boxes=prop.proposal_boxes[0:box_size].tensor.cpu().numpy()
127
- )
128
- prop_img = v_pred.get_image()
129
- vis_img = np.concatenate((anno_img, prop_img), axis=1)
130
- vis_img = vis_img.transpose(2, 0, 1)
131
- vis_name = "Left: GT bounding boxes; Right: Predicted proposals"
132
- storage.put_image(vis_name, vis_img)
133
- break # only visualize one image in a batch
134
-
135
- def forward(self, batched_inputs: List[Dict[str, torch.Tensor]]):
136
- """
137
- Args:
138
- batched_inputs: a list, batched outputs of :class:`DatasetMapper` .
139
- Each item in the list contains the inputs for one image.
140
- For now, each item in the list is a dict that contains:
141
-
142
- * image: Tensor, image in (C, H, W) format.
143
- * instances (optional): groundtruth :class:`Instances`
144
- * proposals (optional): :class:`Instances`, precomputed proposals.
145
-
146
- Other information that's included in the original dicts, such as:
147
-
148
- * "height", "width" (int): the output resolution of the model, used in inference.
149
- See :meth:`postprocess` for details.
150
-
151
- Returns:
152
- list[dict]:
153
- Each dict is the output for one input image.
154
- The dict contains one key "instances" whose value is a :class:`Instances`.
155
- The :class:`Instances` object has the following keys:
156
- "pred_boxes", "pred_classes", "scores", "pred_masks", "pred_keypoints"
157
- """
158
- if not self.training:
159
- return self.inference(batched_inputs)
160
-
161
- images = self.preprocess_image(batched_inputs)
162
- if "instances" in batched_inputs[0]:
163
- gt_instances = [x["instances"].to(self.device) for x in batched_inputs]
164
- else:
165
- gt_instances = None
166
- # eg: {'p2': torch.Size([b, c, 200, 304]), 'p3': torch.Size([b, c, 100, 152]), 'p4': torch.Size([b, c, 50, 76]), 'p5': torch.Size([b, c, 25, 38]), 'p6': torch.Size([b, c, 13, 19])}
167
- features = self.backbone(images.tensor)
168
-
169
- if self.proposal_generator is not None:
170
- proposals, proposal_losses = self.proposal_generator(images, features, gt_instances)
171
- else:
172
- assert "proposals" in batched_inputs[0]
173
- proposals = [x["proposals"].to(self.device) for x in batched_inputs]
174
- proposal_losses = {}
175
-
176
- if self.use_clip_c4: # use C4 + resnet weights from CLIP
177
- if self.use_clip_attpool: # use att_pool from CLIP to match dimension
178
- _, detector_losses = self.roi_heads(images, features, proposals, gt_instances, res5=self.backbone.layer4, attnpool=self.backbone.attnpool)
179
- else: # use default mean pool
180
- _, detector_losses = self.roi_heads(images, features, proposals, gt_instances, res5=self.backbone.layer4)
181
- else: # default setting
182
- _, detector_losses = self.roi_heads(images, features, proposals, gt_instances)
183
- if self.vis_period > 0:
184
- storage = get_event_storage()
185
- if storage.iter % self.vis_period == 0:
186
- self.visualize_training(batched_inputs, proposals)
187
-
188
- losses = {}
189
- losses.update(detector_losses)
190
- losses.update(proposal_losses)
191
- return losses
192
-
193
- def inference(
194
- self,
195
- batched_inputs: List[Dict[str, torch.Tensor]],
196
- detected_instances: Optional[List[Instances]] = None,
197
- do_postprocess: bool = True,
198
- ):
199
- """
200
- Run inference on the given inputs.
201
-
202
- Args:
203
- batched_inputs (list[dict]): same as in :meth:`forward`
204
- detected_instances (None or list[Instances]): if not None, it
205
- contains an `Instances` object per image. The `Instances`
206
- object contains "pred_boxes" and "pred_classes" which are
207
- known boxes in the image.
208
- The inference will then skip the detection of bounding boxes,
209
- and only predict other per-ROI outputs.
210
- do_postprocess (bool): whether to apply post-processing on the outputs.
211
-
212
- Returns:
213
- When do_postprocess=True, same as in :meth:`forward`.
214
- Otherwise, a list[Instances] containing raw network outputs.
215
- """
216
- assert not self.training
217
-
218
- images = self.preprocess_image(batched_inputs)
219
- features = self.backbone(images.tensor)
220
-
221
- if detected_instances is None:
222
- if self.proposal_generator is not None:
223
- proposals, _ = self.proposal_generator(images, features, None)
224
- else:
225
- assert "proposals" in batched_inputs[0]
226
- proposals = [x["proposals"].to(self.device) for x in batched_inputs]
227
-
228
- if self.use_clip_c4: # use C4 + resnet weights from CLIP
229
- if self.use_clip_attpool: # use att_pool from CLIP to match dimension
230
- results, _ = self.roi_heads(images, features, proposals, None, res5=self.backbone.layer4, attnpool=self.backbone.attnpool)
231
- else: # use default mean pool
232
- results, _ = self.roi_heads(images, features, proposals, None, res5=self.backbone.layer4)
233
- else: # default setting
234
- results, _ = self.roi_heads(images, features, proposals, None)
235
- else:
236
- detected_instances = [x.to(self.device) for x in detected_instances]
237
-
238
- if self.use_clip_c4: # use C4 + resnet weights from CLIP
239
- if self.use_clip_attpool: # use att_pool from CLIP to match dimension
240
- results = self.roi_heads.forward_with_given_boxes(features, detected_instances, res5=self.backbone.layer4, attnpool=self.backbone.attnpool)
241
- else: # use default mean pool
242
- results = self.roi_heads.forward_with_given_boxes(features, detected_instances, res5=self.backbone.layer4)
243
- else: # default setting
244
- results = self.roi_heads.forward_with_given_boxes(features, detected_instances)
245
-
246
- #visualize_proposals(batched_inputs, proposals, self.input_format)
247
- if do_postprocess:
248
- assert not torch.jit.is_scripting(), "Scripting is not supported for postprocess."
249
- return GeneralizedRCNN._postprocess(results, batched_inputs, images.image_sizes)
250
- else:
251
- return results
252
-
253
- def preprocess_image(self, batched_inputs: List[Dict[str, torch.Tensor]]):
254
- """
255
- Normalize, pad and batch the input images.
256
- """
257
- images = [x["image"].to(self.device) for x in batched_inputs]
258
- if self.div_pixel:
259
- images = [((x / 255.0) - self.pixel_mean) / self.pixel_std for x in images]
260
- else:
261
- images = [(x - self.pixel_mean) / self.pixel_std for x in images]
262
- images = ImageList.from_tensors(images, self.backbone.size_divisibility)
263
- return images
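# Illustrative example (sizes assumed): ImageList.from_tensors pads each image in the
# batch up to a common size divisible by `size_divisibility`, while image_sizes keeps
# the true per-image sizes for later postprocessing, e.g.
#   imgs = [torch.randn(3, 480, 640), torch.randn(3, 512, 600)]
#   batched = ImageList.from_tensors(imgs, 32)
#   batched.tensor.shape  -> torch.Size([2, 3, 512, 640])
#   batched.image_sizes   -> [(480, 640), (512, 600)]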
264
-
265
- @staticmethod
266
- def _postprocess(instances, batched_inputs: List[Dict[str, torch.Tensor]], image_sizes):
267
- """
268
- Rescale the output instances to the target size.
269
- """
270
- # note: private function; subject to changes
271
- processed_results = []
272
- for results_per_image, input_per_image, image_size in zip(
273
- instances, batched_inputs, image_sizes
274
- ):
275
- height = input_per_image.get("height", image_size[0])
276
- width = input_per_image.get("width", image_size[1])
277
- r = detector_postprocess(results_per_image, height, width)
278
- processed_results.append({"instances": r})
279
- return processed_results
280
-
281
-
282
- @META_ARCH_REGISTRY.register()
283
- class ProposalNetwork(nn.Module):
284
- """
285
- A meta architecture that only predicts object proposals.
286
- """
287
-
288
- @configurable
289
- def __init__(
290
- self,
291
- *,
292
- backbone: Backbone,
293
- proposal_generator: nn.Module,
294
- pixel_mean: Tuple[float],
295
- pixel_std: Tuple[float],
296
- input_format: Optional[str] = None,
297
- ):
298
- """
299
- Args:
300
- backbone: a backbone module, must follow detectron2's backbone interface
301
- proposal_generator: a module that generates proposals using backbone features
302
- pixel_mean, pixel_std: list or tuple with #channels element, representing
303
- the per-channel mean and std to be used to normalize the input image
304
- """
305
- super().__init__()
306
- self.backbone = backbone
307
- self.proposal_generator = proposal_generator
308
- self.register_buffer("pixel_mean", torch.tensor(pixel_mean).view(-1, 1, 1), False)
309
- self.register_buffer("pixel_std", torch.tensor(pixel_std).view(-1, 1, 1), False)
310
- if np.sum(pixel_mean) < 3.0: # convert pixel values to range [0.0, 1.0] by dividing by 255.0
311
- assert input_format == 'RGB'
312
- self.div_pixel = True
313
- else: # default setting
314
- self.div_pixel = False
315
-
316
- @classmethod
317
- def from_config(cls, cfg):
318
- backbone = build_backbone(cfg)
319
- return {
320
- "backbone": backbone,
321
- "proposal_generator": build_proposal_generator(cfg, backbone.output_shape()),
322
- "input_format": cfg.INPUT.FORMAT,
323
- "pixel_mean": cfg.MODEL.PIXEL_MEAN,
324
- "pixel_std": cfg.MODEL.PIXEL_STD,
325
- }
326
-
327
- @property
328
- def device(self):
329
- return self.pixel_mean.device
330
-
331
- def forward(self, batched_inputs):
332
- """
333
- Args:
334
- Same as in :class:`GeneralizedRCNN.forward`
335
-
336
- Returns:
337
- list[dict]:
338
- Each dict is the output for one input image.
339
- The dict contains one key "proposals" whose value is a
340
- :class:`Instances` with keys "proposal_boxes" and "objectness_logits".
341
- """
342
- images = [x["image"].to(self.device) for x in batched_inputs]
343
- if self.div_pixel:
344
- images = [((x / 255.0) - self.pixel_mean) / self.pixel_std for x in images]
345
- else:
346
- images = [(x - self.pixel_mean) / self.pixel_std for x in images]
347
- images = ImageList.from_tensors(images, self.backbone.size_divisibility)
348
- features = self.backbone(images.tensor)
349
-
350
- if "instances" in batched_inputs[0]:
351
- gt_instances = [x["instances"].to(self.device) for x in batched_inputs]
352
- elif "targets" in batched_inputs[0]:
353
- log_first_n(
354
- logging.WARN, "'targets' in the model inputs is now renamed to 'instances'!", n=10
355
- )
356
- gt_instances = [x["targets"].to(self.device) for x in batched_inputs]
357
- else:
358
- gt_instances = None
359
- proposals, proposal_losses = self.proposal_generator(images, features, gt_instances)
360
- # In training, the proposals are not useful at all but we generate them anyway.
361
- # This makes RPN-only models about 5% slower.
362
- if self.training:
363
- return proposal_losses
364
-
365
- processed_results = []
366
- for results_per_image, input_per_image, image_size in zip(
367
- proposals, batched_inputs, images.image_sizes
368
- ):
369
- height = input_per_image.get("height", image_size[0])
370
- width = input_per_image.get("width", image_size[1])
371
- r = detector_postprocess(results_per_image, height, width)
372
- processed_results.append({"proposals": r})
373
- return processed_results
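A minimal sketch of the inference-time contract described in GeneralizedRCNN.inference above. Model construction and weights are assumptions (a detectron2 CfgNode `cfg` and a placeholder checkpoint path); the input and output dict structure follows the docstrings in the class.

import torch
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.modeling import build_model

model = build_model(cfg)                                    # cfg: assumed CfgNode for this detector
DetectionCheckpointer(model).load("path/to/weights.pth")    # placeholder checkpoint path
model.eval()

image = torch.randint(0, 256, (3, 480, 640), dtype=torch.uint8)   # (C, H, W), channel order per cfg.INPUT.FORMAT
inputs = [{"image": image, "height": 480, "width": 640}]

with torch.no_grad():
    outputs = model(inputs)                 # eval mode, so forward() dispatches to inference()

instances = outputs[0]["instances"]
boxes = instances.pred_boxes.tensor         # (num_detections, 4), XYXY in the original image resolution
scores = instances.scores                   # (num_detections,)
classes = instances.pred_classes            # (num_detections,)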
 
spaces/CarlDennis/HYTTS/attentions.py DELETED
@@ -1,300 +0,0 @@
1
- import math
2
- import torch
3
- from torch import nn
4
- from torch.nn import functional as F
5
-
6
- import commons
7
- from modules import LayerNorm
8
-
9
-
10
- class Encoder(nn.Module):
11
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
12
- super().__init__()
13
- self.hidden_channels = hidden_channels
14
- self.filter_channels = filter_channels
15
- self.n_heads = n_heads
16
- self.n_layers = n_layers
17
- self.kernel_size = kernel_size
18
- self.p_dropout = p_dropout
19
- self.window_size = window_size
20
-
21
- self.drop = nn.Dropout(p_dropout)
22
- self.attn_layers = nn.ModuleList()
23
- self.norm_layers_1 = nn.ModuleList()
24
- self.ffn_layers = nn.ModuleList()
25
- self.norm_layers_2 = nn.ModuleList()
26
- for i in range(self.n_layers):
27
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
28
- self.norm_layers_1.append(LayerNorm(hidden_channels))
29
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
30
- self.norm_layers_2.append(LayerNorm(hidden_channels))
31
-
32
- def forward(self, x, x_mask):
33
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
34
- x = x * x_mask
35
- for i in range(self.n_layers):
36
- y = self.attn_layers[i](x, x, attn_mask)
37
- y = self.drop(y)
38
- x = self.norm_layers_1[i](x + y)
39
-
40
- y = self.ffn_layers[i](x, x_mask)
41
- y = self.drop(y)
42
- x = self.norm_layers_2[i](x + y)
43
- x = x * x_mask
44
- return x
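# Illustrative usage of the encoder above (hyperparameters are typical VITS-style
# values, assumed rather than taken from a config in this repository):
#   enc = Encoder(hidden_channels=192, filter_channels=768, n_heads=2,
#                 n_layers=6, kernel_size=3, p_dropout=0.1)
#   x = torch.randn(1, 192, 50)       # (batch, hidden_channels, time)
#   x_mask = torch.ones(1, 1, 50)     # (batch, 1, time) non-padding mask
#   y = enc(x, x_mask)                # -> (1, 192, 50), same shape as x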
45
-
46
-
47
- class Decoder(nn.Module):
48
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
49
- super().__init__()
50
- self.hidden_channels = hidden_channels
51
- self.filter_channels = filter_channels
52
- self.n_heads = n_heads
53
- self.n_layers = n_layers
54
- self.kernel_size = kernel_size
55
- self.p_dropout = p_dropout
56
- self.proximal_bias = proximal_bias
57
- self.proximal_init = proximal_init
58
-
59
- self.drop = nn.Dropout(p_dropout)
60
- self.self_attn_layers = nn.ModuleList()
61
- self.norm_layers_0 = nn.ModuleList()
62
- self.encdec_attn_layers = nn.ModuleList()
63
- self.norm_layers_1 = nn.ModuleList()
64
- self.ffn_layers = nn.ModuleList()
65
- self.norm_layers_2 = nn.ModuleList()
66
- for i in range(self.n_layers):
67
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
68
- self.norm_layers_0.append(LayerNorm(hidden_channels))
69
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
70
- self.norm_layers_1.append(LayerNorm(hidden_channels))
71
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
72
- self.norm_layers_2.append(LayerNorm(hidden_channels))
73
-
74
- def forward(self, x, x_mask, h, h_mask):
75
- """
76
- x: decoder input
77
- h: encoder output
78
- """
79
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
80
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
81
- x = x * x_mask
82
- for i in range(self.n_layers):
83
- y = self.self_attn_layers[i](x, x, self_attn_mask)
84
- y = self.drop(y)
85
- x = self.norm_layers_0[i](x + y)
86
-
87
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
88
- y = self.drop(y)
89
- x = self.norm_layers_1[i](x + y)
90
-
91
- y = self.ffn_layers[i](x, x_mask)
92
- y = self.drop(y)
93
- x = self.norm_layers_2[i](x + y)
94
- x = x * x_mask
95
- return x
96
-
97
-
98
- class MultiHeadAttention(nn.Module):
99
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
100
- super().__init__()
101
- assert channels % n_heads == 0
102
-
103
- self.channels = channels
104
- self.out_channels = out_channels
105
- self.n_heads = n_heads
106
- self.p_dropout = p_dropout
107
-     self.window_size = window_size
-     self.heads_share = heads_share
-     self.block_length = block_length
-     self.proximal_bias = proximal_bias
-     self.proximal_init = proximal_init
-     self.attn = None
-
-     self.k_channels = channels // n_heads
-     self.conv_q = nn.Conv1d(channels, channels, 1)
-     self.conv_k = nn.Conv1d(channels, channels, 1)
-     self.conv_v = nn.Conv1d(channels, channels, 1)
-     self.conv_o = nn.Conv1d(channels, out_channels, 1)
-     self.drop = nn.Dropout(p_dropout)
-
-     if window_size is not None:
-       n_heads_rel = 1 if heads_share else n_heads
-       rel_stddev = self.k_channels**-0.5
-       self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-       self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
-     nn.init.xavier_uniform_(self.conv_q.weight)
-     nn.init.xavier_uniform_(self.conv_k.weight)
-     nn.init.xavier_uniform_(self.conv_v.weight)
-     if proximal_init:
-       with torch.no_grad():
-         self.conv_k.weight.copy_(self.conv_q.weight)
-         self.conv_k.bias.copy_(self.conv_q.bias)
-
-   def forward(self, x, c, attn_mask=None):
-     q = self.conv_q(x)
-     k = self.conv_k(c)
-     v = self.conv_v(c)
-
-     x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
-     x = self.conv_o(x)
-     return x
-
-   def attention(self, query, key, value, mask=None):
-     # reshape [b, d, t] -> [b, n_h, t, d_k]
-     b, d, t_s, t_t = (*key.size(), query.size(2))
-     query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
-     key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-     value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
-     scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
-     if self.window_size is not None:
-       assert t_s == t_t, "Relative attention is only available for self-attention."
-       key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
-       rel_logits = self._matmul_with_relative_keys(query / math.sqrt(self.k_channels), key_relative_embeddings)
-       scores_local = self._relative_position_to_absolute_position(rel_logits)
-       scores = scores + scores_local
-     if self.proximal_bias:
-       assert t_s == t_t, "Proximal bias is only available for self-attention."
-       scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
-     if mask is not None:
-       scores = scores.masked_fill(mask == 0, -1e4)
-       if self.block_length is not None:
-         assert t_s == t_t, "Local attention is only available for self-attention."
-         block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
-         scores = scores.masked_fill(block_mask == 0, -1e4)
-     p_attn = F.softmax(scores, dim=-1)  # [b, n_h, t_t, t_s]
-     p_attn = self.drop(p_attn)
-     output = torch.matmul(p_attn, value)
-     if self.window_size is not None:
-       relative_weights = self._absolute_position_to_relative_position(p_attn)
-       value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
-       output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
-     output = output.transpose(2, 3).contiguous().view(b, d, t_t)  # [b, n_h, t_t, d_k] -> [b, d, t_t]
-     return output, p_attn
-
-   def _matmul_with_relative_values(self, x, y):
-     """
-     x: [b, h, l, m]
-     y: [h or 1, m, d]
-     ret: [b, h, l, d]
-     """
-     ret = torch.matmul(x, y.unsqueeze(0))
-     return ret
-
-   def _matmul_with_relative_keys(self, x, y):
-     """
-     x: [b, h, l, d]
-     y: [h or 1, m, d]
-     ret: [b, h, l, m]
-     """
-     ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
-     return ret
-
-   def _get_relative_embeddings(self, relative_embeddings, length):
-     max_relative_position = 2 * self.window_size + 1
-     # Pad first before slicing to avoid using cond ops.
-     pad_length = max(length - (self.window_size + 1), 0)
-     slice_start_position = max((self.window_size + 1) - length, 0)
-     slice_end_position = slice_start_position + 2 * length - 1
-     if pad_length > 0:
-       padded_relative_embeddings = F.pad(
-           relative_embeddings,
-           commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
-     else:
-       padded_relative_embeddings = relative_embeddings
-     used_relative_embeddings = padded_relative_embeddings[:, slice_start_position:slice_end_position]
-     return used_relative_embeddings
-
-   def _relative_position_to_absolute_position(self, x):
-     """
-     x: [b, h, l, 2*l-1]
-     ret: [b, h, l, l]
-     """
-     batch, heads, length, _ = x.size()
-     # Concat columns of pad to shift from relative to absolute indexing.
-     x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]]))
-
-     # Concat extra elements so that the result adds up to shape (len+1, 2*len-1).
-     x_flat = x.view([batch, heads, length * 2 * length])
-     x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]))
-
-     # Reshape and slice out the padded elements.
-     x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[:, :, :length, length - 1:]
-     return x_final
-
-   def _absolute_position_to_relative_position(self, x):
-     """
-     x: [b, h, l, l]
-     ret: [b, h, l, 2*l-1]
-     """
-     batch, heads, length, _ = x.size()
-     # Pad along the column dimension.
-     x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]))
-     x_flat = x.view([batch, heads, length**2 + length * (length - 1)])
-     # Add zeros at the beginning that will skew the elements after the reshape.
-     x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
-     x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:]
-     return x_final
-
-   def _attention_bias_proximal(self, length):
-     """Bias for self-attention to encourage attention to close positions.
-     Args:
-       length: an integer scalar.
-     Returns:
-       a Tensor with shape [1, 1, length, length]
-     """
-     r = torch.arange(length, dtype=torch.float32)
-     diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
-     return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
- class FFN(nn.Module):
-   def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
-     super().__init__()
-     self.in_channels = in_channels
-     self.out_channels = out_channels
-     self.filter_channels = filter_channels
-     self.kernel_size = kernel_size
-     self.p_dropout = p_dropout
-     self.activation = activation
-     self.causal = causal
-
-     if causal:
-       self.padding = self._causal_padding
-     else:
-       self.padding = self._same_padding
-
-     self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
-     self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
-     self.drop = nn.Dropout(p_dropout)
-
-   def forward(self, x, x_mask):
-     x = self.conv_1(self.padding(x * x_mask))
-     if self.activation == "gelu":
-       x = x * torch.sigmoid(1.702 * x)
-     else:
-       x = torch.relu(x)
-     x = self.drop(x)
-     x = self.conv_2(self.padding(x * x_mask))
-     return x * x_mask
-
-   def _causal_padding(self, x):
-     if self.kernel_size == 1:
-       return x
-     pad_l = self.kernel_size - 1
-     pad_r = 0
-     padding = [[0, 0], [0, 0], [pad_l, pad_r]]
-     x = F.pad(x, commons.convert_pad_shape(padding))
-     return x
-
-   def _same_padding(self, x):
-     if self.kernel_size == 1:
-       return x
-     pad_l = (self.kernel_size - 1) // 2
-     pad_r = self.kernel_size // 2
-     padding = [[0, 0], [0, 0], [pad_l, pad_r]]
-     x = F.pad(x, commons.convert_pad_shape(padding))
-     return x
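
A minimal smoke test for the attention and feed-forward blocks above (a sketch only: the full constructor signatures of MultiHeadAttention and FFN are defined earlier in this file and are assumed here; shapes follow the [batch, channels, time] convention used throughout):

import torch

b, d, t = 2, 192, 50
x = torch.randn(b, d, t)                       # [batch, channels, time]
attn_mask = torch.ones(b, 1, t, t)             # allow full self-attention

attn = MultiHeadAttention(d, d, n_heads=2, p_dropout=0.1, window_size=4)
y = attn(x, x, attn_mask=attn_mask)            # query and context are the same tensor
ffn = FFN(d, d, filter_channels=768, kernel_size=3, p_dropout=0.1)
x_mask = torch.ones(b, 1, t)
out = ffn(y, x_mask)
print(y.shape, out.shape)                      # both expected to be torch.Size([2, 192, 50])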
 
spaces/ChandraMohanNayal/AutoGPT/autogpt/commands/improve_code.py DELETED
@@ -1,29 +0,0 @@
- from __future__ import annotations
-
- import json
-
- from autogpt.llm_utils import call_ai_function
-
-
- def improve_code(suggestions: list[str], code: str) -> str:
-     """
-     A function that takes in code and suggestions and returns a response from the
-     create chat completion API call.
-
-     Parameters:
-         suggestions (list): A list of suggestions around what needs to be improved.
-         code (str): Code to be improved.
-     Returns:
-         A result string from the create chat completion call, with the improved code in the response.
-     """
-
-     function_string = (
-         "def generate_improved_code(suggestions: List[str], code: str) -> str:"
-     )
-     args = [json.dumps(suggestions), code]
-     description_string = (
-         "Improves the provided code based on the suggestions"
-         " provided, making no other changes."
-     )
-
-     return call_ai_function(function_string, args, description_string)
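
For context, improve_code is only a thin wrapper: it serializes the suggestions and hands everything to call_ai_function, which asks the configured LLM to behave as generate_improved_code. A hypothetical call might look like the sketch below (the suggestion strings and the snippet are made up for illustration, and a configured AutoGPT LLM backend is assumed):

suggestions = [
    "Add type hints to the signature",
    "Guard against an empty input list",
]
code = "def mean(xs):\n    return sum(xs) / len(xs)\n"
improved = improve_code(suggestions, code)   # issues a chat-completion request under the hood
print(improved)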
 
spaces/ChrisCaviar/ControlNet-v1-1/app_canny.py DELETED
@@ -1,106 +0,0 @@
- #!/usr/bin/env python
-
- import gradio as gr
-
- from utils import randomize_seed_fn
-
-
- def create_demo(process, max_images=12, default_num_images=3):
-     with gr.Blocks() as demo:
-         with gr.Row():
-             with gr.Column():
-                 image = gr.Image()
-                 prompt = gr.Textbox(label='Prompt')
-                 run_button = gr.Button('Run')
-                 with gr.Accordion('Advanced options', open=False):
-                     num_samples = gr.Slider(label='Number of images',
-                                             minimum=1,
-                                             maximum=max_images,
-                                             value=default_num_images,
-                                             step=1)
-                     image_resolution = gr.Slider(label='Image resolution',
-                                                  minimum=256,
-                                                  maximum=512,
-                                                  value=512,
-                                                  step=256)
-                     canny_low_threshold = gr.Slider(
-                         label='Canny low threshold',
-                         minimum=1,
-                         maximum=255,
-                         value=100,
-                         step=1)
-                     canny_high_threshold = gr.Slider(
-                         label='Canny high threshold',
-                         minimum=1,
-                         maximum=255,
-                         value=200,
-                         step=1)
-                     num_steps = gr.Slider(label='Number of steps',
-                                           minimum=1,
-                                           maximum=100,
-                                           value=20,
-                                           step=1)
-                     guidance_scale = gr.Slider(label='Guidance scale',
-                                                minimum=0.1,
-                                                maximum=30.0,
-                                                value=9.0,
-                                                step=0.1)
-                     seed = gr.Slider(label='Seed',
-                                      minimum=0,
-                                      maximum=1000000,
-                                      step=1,
-                                      value=0,
-                                      randomize=True)
-                     randomize_seed = gr.Checkbox(label='Randomize seed',
-                                                  value=True)
-                     a_prompt = gr.Textbox(
-                         label='Additional prompt',
-                         value='best quality, extremely detailed')
-                     n_prompt = gr.Textbox(
-                         label='Negative prompt',
-                         value='longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality'
-                     )
-             with gr.Column():
-                 result = gr.Gallery(label='Output', show_label=False).style(
-                     columns=2, object_fit='scale-down')
-         inputs = [
-             image,
-             prompt,
-             a_prompt,
-             n_prompt,
-             num_samples,
-             image_resolution,
-             num_steps,
-             guidance_scale,
-             seed,
-             canny_low_threshold,
-             canny_high_threshold,
-         ]
-         prompt.submit(
-             fn=randomize_seed_fn,
-             inputs=[seed, randomize_seed],
-             outputs=seed,
-         ).then(
-             fn=process,
-             inputs=inputs,
-             outputs=result,
-         )
-         run_button.click(
-             fn=randomize_seed_fn,
-             inputs=[seed, randomize_seed],
-             outputs=seed,
-         ).then(
-             fn=process,
-             inputs=inputs,
-             outputs=result,
-             api_name='canny',
-         )
-     return demo
-
-
- if __name__ == '__main__':
-     from model import Model
-     model = Model(task_name='Canny')
-     demo = create_demo(model.process_canny)
-     demo.queue().launch()
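
A sketch of how this tab is typically wired into the Space's top-level app. The Model class and its process_canny method live in the repository's model.py; the exact layout of the real app.py is not shown in this diff, so the tab arrangement below is an assumption:

import gradio as gr
from model import Model

model = Model(task_name='Canny')
with gr.Blocks() as app:
    with gr.Tabs():
        with gr.TabItem('Canny'):
            # create_demo builds its components inside the active Blocks/Tab context
            create_demo(model.process_canny, max_images=12, default_num_images=3)
app.queue().launch()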
 
spaces/CikeyQI/Yunzai/Yunzai/plugins/system/quit.js DELETED
@@ -1,36 +0,0 @@
- import cfg from '../../lib/config/config.js'
-
- export class quit extends plugin {
-   constructor () {
-     super({
-       name: 'notice',
-       dsc: '自动退群',
-       event: 'notice.group.increase'
-     })
-   }
-
-   async accept () {
-     if (this.e.user_id != this.e.self_id) return
-
-     let other = cfg.other
-     if (other.autoQuit <= 0) return
-
-     /** Check the bot owners: never leave a group that an owner added the bot to */
-     let gl = await this.e.group.getMemberMap()
-     for (let qq of cfg.masterQQ) {
-       if (gl.has(Number(qq) || String(qq))) {
-         logger.mark(`[主人拉群] ${this.e.group_id}`)
-         return
-       }
-     }
-
-     /** Leave the group automatically when it is at or below the configured size */
-     if (Array.from(gl).length <= other.autoQuit && !this.e.group.is_owner) {
-       await this.e.reply('禁止拉群,已自动退出')
-       logger.mark(`[自动退群] ${this.e.group_id}`)
-       setTimeout(() => {
-         this.e.group.quit()
-       }, 2000)
-     }
-   }
- }
 
spaces/ClueAI/ChatYuan-large-v2/app.py DELETED
@@ -1,310 +0,0 @@
- import os
- import gradio as gr
- import clueai
- import torch
- from transformers import T5Tokenizer, T5ForConditionalGeneration
-
- tokenizer = T5Tokenizer.from_pretrained("ClueAI/ChatYuan-large-v2")
- model = T5ForConditionalGeneration.from_pretrained("ClueAI/ChatYuan-large-v2")
- # Use the GPU if available and run the model in half precision
- device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
- model.to(device)
- model.half()
-
- base_info = ""
-
-
- def preprocess(text):
-     text = f"{base_info}{text}"
-     text = text.replace("\n", "\\n").replace("\t", "\\t")
-     return text
-
-
- def postprocess(text):
-     return text.replace("\\n", "\n").replace("\\t", "\t").replace(
-         '%20', ' ')  #.replace(" ", "&nbsp;")
-
-
- generate_config = {
-     'do_sample': True,
-     'top_p': 0.9,
-     'top_k': 50,
-     'temperature': 0.7,
-     'num_beams': 1,
-     'max_length': 1024,
-     'min_length': 3,
-     'no_repeat_ngram_size': 5,
-     'length_penalty': 0.6,
-     'return_dict_in_generate': True,
-     'output_scores': True
- }
-
-
- def answer(
-     text,
-     top_p,
-     temperature,
-     sample=True,
- ):
-     '''sample: whether to sample; for generation tasks it can be set to True.
-     top_p: between 0 and 1; higher values give more diverse generations.'''
-     text = preprocess(text)
-     encoding = tokenizer(text=[text],
-                          truncation=True,
-                          padding=True,
-                          max_length=1024,
-                          return_tensors="pt").to(device)
-     if not sample:
-         out = model.generate(**encoding,
-                              return_dict_in_generate=True,
-                              output_scores=False,
-                              max_new_tokens=1024,
-                              num_beams=1,
-                              length_penalty=0.6)
-     else:
-         out = model.generate(**encoding,
-                              return_dict_in_generate=True,
-                              output_scores=False,
-                              max_new_tokens=1024,
-                              do_sample=True,
-                              top_p=top_p,
-                              temperature=temperature,
-                              no_repeat_ngram_size=12)
-     #out=model.generate(**encoding, **generate_config)
-     out_text = tokenizer.batch_decode(out["sequences"],
-                                       skip_special_tokens=True)
-     return postprocess(out_text[0])
-
-
- def clear_session():
-     return '', None
-
-
- def chatyuan_bot(input, history, top_p, temperature, num):
-     history = history or []
-     if len(history) > num:
-         history = history[-num:]
-
-     context = "\n".join([
-         f"用户:{input_text}\n小元:{answer_text}"
-         for input_text, answer_text in history
-     ])
-     #print(context)
-
-     input_text = context + "\n用户:" + input + "\n小元:"
-     input_text = input_text.strip()
-     output_text = answer(input_text, top_p, temperature)
-     print("open_model".center(20, "="))
-     print(f"{input_text}\n{output_text}")
-     #print("="*20)
-     history.append((input, output_text))
-     #print(history)
-     return '', history, history
-
-
- def chatyuan_bot_regenerate(input, history, top_p, temperature, num):
-
-     history = history or []
-
-     if history:
-         input = history[-1][0]
-         history = history[:-1]
-
-     if len(history) > num:
-         history = history[-num:]
-
-     context = "\n".join([
-         f"用户:{input_text}\n小元:{answer_text}"
-         for input_text, answer_text in history
-     ])
-     #print(context)
-
-     input_text = context + "\n用户:" + input + "\n小元:"
-     input_text = input_text.strip()
-     output_text = answer(input_text, top_p, temperature)
-     print("open_model".center(20, "="))
-     print(f"{input_text}\n{output_text}")
-     history.append((input, output_text))
-     #print(history)
-     return '', history, history
-
-
- block = gr.Blocks()
-
- with block as demo:
-     gr.Markdown("""<h1><center>元语智能——ChatYuan</center></h1>
- <font size=4>回答来自ChatYuan, 是模型生成的结果, 请谨慎辨别和参考, 不代表任何人观点 | Answer generated by ChatYuan model</font>
- <font size=4>注意:gradio对markdown代码格式展示有限</font>
- """)
-     with gr.Row():
-         with gr.Column(scale=3):
-             chatbot = gr.Chatbot(label='ChatYuan').style(height=400)
-
-         with gr.Column(scale=1):
-
-             num = gr.Slider(minimum=4,
-                             maximum=10,
-                             label="最大的对话轮数",
-                             value=5,
-                             step=1)
-             top_p = gr.Slider(minimum=0,
-                               maximum=1,
-                               label="top_p",
-                               value=1,
-                               step=0.1)
-             temperature = gr.Slider(minimum=0,
-                                     maximum=1,
-                                     label="temperature",
-                                     value=0.7,
-                                     step=0.1)
-             clear_history = gr.Button("👋 清除历史对话 | Clear History")
-             send = gr.Button("🚀 发送 | Send")
-             regenerate = gr.Button("🚀 重新生成本次结果 | regenerate")
-     message = gr.Textbox()
-     state = gr.State()
-     message.submit(chatyuan_bot,
-                    inputs=[message, state, top_p, temperature, num],
-                    outputs=[message, chatbot, state])
-     regenerate.click(chatyuan_bot_regenerate,
-                      inputs=[message, state, top_p, temperature, num],
-                      outputs=[message, chatbot, state])
-     send.click(chatyuan_bot,
-                inputs=[message, state, top_p, temperature, num],
-                outputs=[message, chatbot, state])
-
-     clear_history.click(fn=clear_session,
-                         inputs=[],
-                         outputs=[chatbot, state],
-                         queue=False)
-
-
- def ChatYuan(api_key, text_prompt, top_p):
-     generate_config = {
-         "do_sample": True,
-         "top_p": top_p,
-         "max_length": 128,
-         "min_length": 10,
-         "length_penalty": 1.0,
-         "num_beams": 1
-     }
-     cl = clueai.Client(api_key, check_api_key=True)
-     # generate a prediction for a prompt
-     # to also get likelihood scores back, pass return_likelihoods="GENERATION"
-     prediction = cl.generate(model_name='ChatYuan-large', prompt=text_prompt)
-     # print the predicted text
-     #print('prediction: {}'.format(prediction.generations[0].text))
-     response = prediction.generations[0].text
-     if response == '':
-         response = "很抱歉,我无法回答这个问题"
-
-     return response
-
-
- def chatyuan_bot_api(api_key, input, history, top_p, num):
-     history = history or []
-
-     if len(history) > num:
-         history = history[-num:]
-
-     context = "\n".join([
-         f"用户:{input_text}\n小元:{answer_text}"
-         for input_text, answer_text in history
-     ])
-
-     input_text = context + "\n用户:" + input + "\n小元:"
-     input_text = input_text.strip()
-     output_text = ChatYuan(api_key, input_text, top_p)
-     print("api".center(20, "="))
-     print(f"api_key:{api_key}\n{input_text}\n{output_text}")
-
-     history.append((input, output_text))
-
-     return '', history, history
-
-
- block = gr.Blocks()
-
- with block as demo_1:
-     gr.Markdown("""<h1><center>元语智能——ChatYuan</center></h1>
- <font size=4>回答来自ChatYuan, 以上是模型生成的结果, 请谨慎辨别和参考, 不代表任何人观点 | Answer generated by ChatYuan model</font>
- <font size=4>注意:gradio对markdown代码格式展示有限</font>
- <font size=4>在使用此功能前,你需要有个API key. API key 可以通过这个<a href='https://www.clueai.cn/' target="_blank">平台</a>获取</font>
- """)
-     with gr.Row():
-         with gr.Column(scale=3):
-             chatbot = gr.Chatbot(label='ChatYuan').style(height=400)
-
-         with gr.Column(scale=1):
-             api_key = gr.inputs.Textbox(label="请输入你的api-key(必填)",
-                                         default="",
-                                         type='password')
-             num = gr.Slider(minimum=4,
-                             maximum=10,
-                             label="最大的对话轮数",
-                             value=5,
-                             step=1)
-             top_p = gr.Slider(minimum=0,
-                               maximum=1,
-                               label="top_p",
-                               value=1,
-                               step=0.1)
-             clear_history = gr.Button("👋 清除历史对话 | Clear History")
-             send = gr.Button("🚀 发送 | Send")
-
-     message = gr.Textbox()
-     state = gr.State()
-     message.submit(chatyuan_bot_api,
-                    inputs=[api_key, message, state, top_p, num],
-                    outputs=[message, chatbot, state])
-
-     send.click(chatyuan_bot_api,
-                inputs=[api_key, message, state, top_p, num],
-                outputs=[message, chatbot, state])
-     clear_history.click(fn=clear_session,
-                         inputs=[],
-                         outputs=[chatbot, state],
-                         queue=False)
-
- block = gr.Blocks()
- with block as introduction:
-     gr.Markdown("""<h1><center>元语智能——ChatYuan</center></h1>
-
- <font size=4>😉ChatYuan: 元语功能型对话大模型 | General Model for Dialogue with ChatYuan
- <br>
- 👏ChatYuan-large-v2是一个支持中英双语的功能型对话语言大模型,是继ChatYuan系列中ChatYuan-large-v1开源后的又一个开源模型。ChatYuan-large-v2使用了和 v1版本相同的技术方案,在微调数据、人类反馈强化学习、思维链等方面进行了优化。
- <br>
- ChatYuan large v2 is an open-source large language model for dialogue, supports both Chinese and English languages, and in ChatGPT style.
- <br>
- ChatYuan-large-v2是ChatYuan系列中以轻量化实现高质量效果的模型之一,用户可以在消费级显卡、 PC甚至手机上进行推理(INT4 最低只需 400M )。
- <br>
- 在Chatyuan-large-v1的原有功能的基础上,我们给模型进行了如下优化:
- - 新增了中英双语对话能力。
- - 新增了拒答能力。对于一些危险、有害的问题,学会了拒答处理。
- - 新增了代码生成功能。对于基础代码生成进行了一定程度优化。
- - 增强了基础能力。原有上下文问答、创意性写作能力明显提升。
- - 新增了表格生成功能。使生成的表格内容和格式更适配。
- - 增强了基础数学运算能力。
- - 最大长度token数扩展到4096。
- - 增强了模拟情景能力。.<br>
- <br>
- Based on the original functions of Chatyuan-large-v1, we optimized the model as follows:
- -Added the ability to speak in both Chinese and English.
- -Added the ability to refuse to answer. Learn to refuse to answer some dangerous and harmful questions.
- -Added code generation functionality. Basic code generation has been optimized to a certain extent.
- -Enhanced basic capabilities. The original contextual Q&A and creative writing skills have significantly improved.
- -Added a table generation function. Make the generated table content and format more appropriate.
- -Enhanced basic mathematical computing capabilities.
- -The maximum number of length tokens has been expanded to 4096.
- -Enhanced ability to simulate scenarios< br>
- <br>
- 👀<a href='https://www.cluebenchmarks.com/clueai.html'>PromptCLUE-large</a>在1000亿token中文语料上预训练, 累计学习1.5万亿中文token, 并且在数百种任务上进行Prompt任务式训练. 针对理解类任务, 如分类、情感分析、抽取等, 可以自定义标签体系; 针对多种生成任务, 可以进行采样自由生成. <br>
- <br>
- &nbsp; <a href='https://modelscope.cn/models/ClueAI/ChatYuan-large/summary' target="_blank">ModelScope</a> &nbsp; | &nbsp; <a href='https://huggingface.co/ClueAI/ChatYuan-large-v1' target="_blank">Huggingface</a> &nbsp; | &nbsp; <a href='https://www.clueai.cn' target="_blank">官网体验场</a> &nbsp; | &nbsp; <a href='https://github.com/clue-ai/clueai-python#ChatYuan%E5%8A%9F%E8%83%BD%E5%AF%B9%E8%AF%9D' target="_blank">ChatYuan-API</a> &nbsp; | &nbsp; <a href='https://github.com/clue-ai/ChatYuan' target="_blank">Github项目地址</a> &nbsp; | &nbsp; <a href='https://openi.pcl.ac.cn/ChatYuan/ChatYuan/src/branch/main/Fine_tuning_ChatYuan_large_with_pCLUE.ipynb' target="_blank">OpenI免费试用</a> &nbsp;
- </font>
- <center><a href="https://clustrmaps.com/site/1bts0" title="Visit tracker"><img src="//www.clustrmaps.com/map_v2.png?d=ycVCe17noTYFDs30w7AmkFaE-TwabMBukDP1802_Lts&cl=ffffff" /></a></center>
- """)
-
- gui = gr.TabbedInterface(
-     interface_list=[introduction, demo, demo_1],
-     tab_names=["相关介绍 | Introduction", "开源模型 | Online Demo", "API调用"])
- gui.launch(quiet=True, show_api=False, share=False)
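
For reference, the dialogue prompt that chatyuan_bot feeds into answer() is just the running history joined with 用户/小元 role prefixes. A standalone sketch (with a made-up one-turn history) shows the format:

history = [("你好", "你好!有什么我可以帮你的吗?")]
user_input = "帮我写一首关于春天的诗"
context = "\n".join(f"用户:{q}\n小元:{a}" for q, a in history)
prompt = (context + "\n用户:" + user_input + "\n小元:").strip()
print(preprocess(prompt))                           # newlines/tabs are escaped before tokenization
reply = answer(prompt, top_p=0.9, temperature=0.7)  # requires the model loaded at the top of the file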
 
spaces/Cpp4App/Cpp4App/CDM/detect_compo/lib_ip/Bbox.py DELETED
@@ -1,122 +0,0 @@
- import numpy as np
- import CDM.detect_compo.lib_ip.ip_draw as draw
-
-
- class Bbox:
-     def __init__(self, col_min, row_min, col_max, row_max):
-         self.col_min = col_min
-         self.row_min = row_min
-         self.col_max = col_max
-         self.row_max = row_max
-
-         self.width = col_max - col_min
-         self.height = row_max - row_min
-         self.box_area = self.width * self.height
-
-     def put_bbox(self):
-         return self.col_min, self.row_min, self.col_max, self.row_max
-
-     def bbox_cal_area(self):
-         self.box_area = self.width * self.height
-         return self.box_area
-
-     def bbox_relation(self, bbox_b):
-         """
-         :return: -1 : a in b
-                   0 : a, b are not intersected
-                   1 : b in a
-                   2 : a, b are identical or intersected
-         """
-         col_min_a, row_min_a, col_max_a, row_max_a = self.put_bbox()
-         col_min_b, row_min_b, col_max_b, row_max_b = bbox_b.put_bbox()
-
-         # if a is in b
-         if col_min_a > col_min_b and row_min_a > row_min_b and col_max_a < col_max_b and row_max_a < row_max_b:
-             return -1
-         # if b is in a
-         elif col_min_a < col_min_b and row_min_a < row_min_b and col_max_a > col_max_b and row_max_a > row_max_b:
-             return 1
-         # a and b do not intersect
-         elif (col_min_a > col_max_b or row_min_a > row_max_b) or (col_min_b > col_max_a or row_min_b > row_max_a):
-             return 0
-         # intersection
-         else:
-             return 2
-
-     def bbox_relation_nms(self, bbox_b, bias=(0, 0)):
-         '''
-         Calculate the relation between two rectangles by nms
-         :return: -1 : a in b
-                   0 : a, b are not intersected
-                   1 : b in a
-                   2 : a, b are intersected
-         '''
-         col_min_a, row_min_a, col_max_a, row_max_a = self.put_bbox()
-         col_min_b, row_min_b, col_max_b, row_max_b = bbox_b.put_bbox()
-
-         bias_col, bias_row = bias
-         # get the intersected area
-         col_min_s = max(col_min_a - bias_col, col_min_b - bias_col)
-         row_min_s = max(row_min_a - bias_row, row_min_b - bias_row)
-         col_max_s = min(col_max_a + bias_col, col_max_b + bias_col)
-         row_max_s = min(row_max_a + bias_row, row_max_b + bias_row)
-         w = np.maximum(0, col_max_s - col_min_s)
-         h = np.maximum(0, row_max_s - row_min_s)
-         inter = w * h
-         area_a = (col_max_a - col_min_a) * (row_max_a - row_min_a)
-         area_b = (col_max_b - col_min_b) * (row_max_b - row_min_b)
-         iou = inter / (area_a + area_b - inter)
-         ioa = inter / self.box_area
-         iob = inter / bbox_b.box_area
-
-         if iou == 0 and ioa == 0 and iob == 0:
-             return 0
-
-         # import lib_ip.ip_preprocessing as pre
-         # org_iou, _ = pre.read_img('uied/data/input/7.jpg', 800)
-         # print(iou, ioa, iob)
-         # board = draw.draw_bounding_box(org_iou, [self], color=(255,0,0))
-         # draw.draw_bounding_box(board, [bbox_b], color=(0,255,0), show=True)
-
-         # contained by b
-         if ioa >= 1:
-             return -1
-         # contains b
-         if iob >= 1:
-             return 1
-         # intersected with each other
-         if iou >= 0.02 or iob > 0.2 or ioa > 0.2:
-             return 2
-         # if iou == 0:
-         #     print('ioa:%.5f; iob:%.5f; iou:%.5f' % (ioa, iob, iou))
-         # otherwise treat as not intersected
-         return 0
-
-     def bbox_cvt_relative_position(self, col_min_base, row_min_base):
-         '''
-         Convert to a relative position based on the base coordinates
-         '''
-         self.col_min += col_min_base
-         self.col_max += col_min_base
-         self.row_min += row_min_base
-         self.row_max += row_min_base
-
-     def bbox_merge(self, bbox_b):
-         '''
-         Merge two intersected bboxes
-         '''
-         col_min_a, row_min_a, col_max_a, row_max_a = self.put_bbox()
-         col_min_b, row_min_b, col_max_b, row_max_b = bbox_b.put_bbox()
-         col_min = min(col_min_a, col_min_b)
-         col_max = max(col_max_a, col_max_b)
-         row_min = min(row_min_a, row_min_b)
-         row_max = max(row_max_a, row_max_b)
-         new_bbox = Bbox(col_min, row_min, col_max, row_max)
-         return new_bbox
-
-     def bbox_padding(self, image_shape, pad):
-         row, col = image_shape[:2]
-         self.col_min = max(self.col_min - pad, 0)
-         self.col_max = min(self.col_max + pad, col)
-         self.row_min = max(self.row_min - pad, 0)
-         self.row_max = min(self.row_max + pad, row)
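
A quick hand-check of the relations defined above (values chosen so the expected return codes are easy to verify by eye):

a = Bbox(10, 10, 60, 60)
b = Bbox(20, 20, 40, 40)        # fully inside a
c = Bbox(100, 100, 120, 120)    # disjoint from a

print(a.bbox_relation(b))       # 1  -> b is contained in a
print(a.bbox_relation(c))       # 0  -> no intersection
print(a.bbox_relation_nms(b))   # 1  -> containment detected via the IoU/IoA/IoB checks (iob == 1)
merged = a.bbox_merge(c)
print(merged.put_bbox())        # (10, 10, 120, 120)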
 
spaces/DHEIVER/Classificacao.de.Imagens.de.Cardiomiopatia/app.py DELETED
@@ -1,40 +0,0 @@
- import gradio as gr
- import numpy as np
- from tensorflow.keras.preprocessing import image
- from tensorflow.keras.models import load_model
- from PIL import Image as PILImage
- import io
-
- # Load the trained model
- model = load_model('model_1.0000.h5')
-
- def predict_and_invert(input_image):
-     input_image = input_image.resize((224, 224))
-     img = image.img_to_array(input_image) / 255.0
-     img = np.expand_dims(img, axis=0)
-     img = img[:, :224, :224, :]
-
-     prediction = model.predict(img)
-
-     if prediction[0][0] > 0.5:
-         result = "Anomalia cardíaca (Doente)"
-     else:
-         result = "Normal (Sem anomalia)"
-
-     img_inverted = 1 - img[0]  # Invert the image
-
-     img_inverted_pil = PILImage.fromarray(np.uint8(img_inverted * 255))
-     img_inverted_bytes = io.BytesIO()
-     img_inverted_pil.save(img_inverted_bytes, format='PNG')
-
-     return result, img_inverted_pil
-
- # Create the Gradio interface
- iface = gr.Interface(
-     fn=predict_and_invert,
-     inputs=gr.inputs.Image(type="pil", label="Carregar uma imagem"),
-     outputs=["text", "image"]
- )
-
- # Launch the Gradio interface
- iface.launch()
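
A sketch of calling predict_and_invert outside the Gradio UI. It assumes model_1.0000.h5 is present next to the script; 'exam.png' is a placeholder path for any RGB image:

from PIL import Image

img = Image.open('exam.png').convert('RGB')
label, inverted = predict_and_invert(img)
print(label)                        # "Normal (Sem anomalia)" or "Anomalia cardíaca (Doente)"
inverted.save('exam_inverted.png')  # the inverted 224x224 image returned alongside the label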