How to Use Inventor Nesting 2016 Portable for Efficient Material Optimization
-
Inventor Nesting 2016 Portable is a software tool that helps you optimize the yield from flat raw material by generating multiple sheet nests in a single study. It is integrated with Inventor Professional and lets you compare the efficiency and costs associated with different nesting studies to maximize job profitability. You can also export 3D models or DXF files of the completed nest for cutting-path generation.
In this article, we will show you how to use Inventor Nesting 2016 Portable for efficient material optimization in four easy steps:
-
-
Create a nesting file and extract shapes from a source file.
-
Define the nesting parameters and generate nests.
-
Compare and select the best nesting study.
-
Export the nested results as 3D models or DXF files.
-
-
Step 1: Create a nesting file and extract shapes from a source file
-
To create a nesting file, open Inventor Professional and select New from the File menu. Then, select Nesting File from the New File dialog box and click Create. A new nesting file will be created with a default name.
-
To extract shapes from a source file, select Extract Shapes from the Nesting ribbon tab. Then, browse to the source file that contains the shapes you want to nest. You can use any Inventor part or assembly file, or any generic CAD file that can be imported into Inventor. The Extract Shapes dialog box will appear, where you can select the shapes you want to extract and specify their properties, such as quantity, material, orientation, and grain direction. Click OK to extract the shapes and add them to the nesting file.
-
Step 2: Define the nesting parameters and generate nests
-
To define the nesting parameters, select Create Nest Study from the Nesting ribbon tab. The Create Nest Study dialog box will appear, where you can enter a name for the nest study and select the sources you want to include in it. You can also enable the option to automatically manage nests based on the sources' materials.
-
To generate nests, click Create Nests in the Create Nest Study dialog box. The Edit Nest Study dialog box will appear, where you can specify the parameters for each nest, such as sheet size, sheet gap, part gap, rotation angle, and alignment. You can also preview the nest layout and edit individual nests if needed. Click Create Nests to generate the nests based on the parameters you defined.
-
Step 3: Compare and select the best nesting study
-
To compare and select the best nesting study, right-click on the nest study node in the browser and select Compare Nest Studies. The Nest Study Comparison Report dialog box will appear, where you can see a summary of the efficiency and costs of each nest study. You can also see detailed information for each nest, such as sheet utilization, material waste, number of parts, number of sheets, and total area. You can sort and filter the data by clicking on the column headers. To select the best nesting study, click on its row in the table and click Select Best Nest Study. The selected nest study will be highlighted in green in the browser.
-
Step 4: Export the nested results as 3D models or DXF files
-
To export the nested results as 3D models or DXF files, right-click on the nest node in the browser and select Create 3D Model or Create DXF File. The Create 3D Model Options or Create DXF File Options dialog box will appear, where you can specify the options for exporting the nested results. For example,
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/DXCPL Download for PES 2016 Crack How to Make Your Game Look Amazing.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/DXCPL Download for PES 2016 Crack How to Make Your Game Look Amazing.md
deleted file mode 100644
index 08747783bd14eeec45bc6746f2d528b64db1c3c0..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/DXCPL Download for PES 2016 Crack How to Make Your Game Look Amazing.md
+++ /dev/null
@@ -1,156 +0,0 @@
-
-
How to Download and Use DXCPL for PES 2016 Crack
-
Introduction
-
If you are a fan of Pro Evolution Soccer (PES) games, you might have heard of PES 2016 crack. This is a modified version of the original game that allows you to play it for free without buying a license key. However, some users may encounter problems when trying to run PES 2016 crack on their PCs, especially if they have low-end graphics cards. This is where DXCPL comes in handy.
What is DXCPL and why do you need it for PES 2016 crack?
-
DXCPL is a tool that lets you change the DirectX settings of your PC. DirectX is a set of Windows APIs that your PC uses to run games and other multimedia applications. By changing the DirectX settings, you can improve the performance and compatibility of your games. For example, you can lower the graphics quality, disable some features, or force some options that are not available in the game settings.
-
DXCPL is useful for PES 2016 crack because it can help you fix some issues that may prevent you from playing the game smoothly. For instance, you can use DXCPL to enable the Force WARP option, which allows you to run the game even if your graphics card does not support DirectX 11. You can also use DXCPL to disable some effects that may cause lag or crashes, such as anti-aliasing, shadows, or reflections.
-
What are the benefits of using DXCPL for PES 2016 crack?
-
By using DXCPL for PES 2016 crack, you can enjoy the following benefits:
-
-
-
You can play PES 2016 crack on any PC, regardless of your graphics card specifications.
-
You can improve the performance and stability of PES 2016 crack by adjusting the graphics settings according to your preferences.
-
You can avoid errors and glitches that may occur when running PES 2016 crack without DXCPL.
-
-
Now that you know what DXCPL is and why you need it for PES 2016 crack, let's see how you can download and use it.
-
How to Download DXCPL for PES 2016 Crack
-
Where to find the DXCPL download link for PES 2016 crack
-
The first step is to download DXCPL from a reliable source. There are many websites that offer DXCPL downloads, but some of them may contain viruses or malware that can harm your PC. Therefore, you should be careful when choosing where to download DXCPL from.
-
One of the safest and easiest ways to download DXCPL is to use this link: https://www.mediafire.com/file/9x9x9x9x9x9x9x9/dxcpl.rar/file. This link will take you to a MediaFire page where you can download a compressed file named dxcpl.rar. This file contains the DXCPL executable file and a readme.txt file that explains how to use it.
-
How to install DXCPL on your PC
-
The next step is to install DXCPL on your PC. To do this, follow these steps:
-
-
Extract the dxcpl.rar file using a program like WinRAR or 7-Zip. You will get a folder named dxcpl with two files inside: dxcpl.exe and readme.txt.
-
Copy the dxcpl.exe file and paste it in a location where you can easily access it. For example, you can paste it on your desktop or in your Documents folder.
-
Double-click on the dxcpl.exe file to run it. You will see a window like this:
-
-
-
Congratulations! You have successfully installed DXCPL on your PC. Now let's see how you can use it for PES 2016 crack.
-
How to Use DXCPL for PES 2016 Crack
-
How to configure DXCPL settings for PES 2016 crack
-
The first thing you need to do is to configure the DXCPL settings for PES 2016 crack. To do this, follow these steps:
-
-
In the DXCPL window, click on the Edit List button at the top right corner. You will see a window like this:
-
-
-
-
Click on the ... button at the bottom right corner. You will see a window like this:
-
-
-
-
Navigate to the folder where you have installed PES 2016 crack. For example, if you have installed it in C:\Program Files (x86)\Pro Evolution Soccer 2016\, go to that folder.
-
Select the pes2016.exe file and click on Open. You will see something like this:
-
-
-
-
Click on OK. You will see something like this:
-
-
-
-
In the Feature level limit section, select one of the options from the drop-down menu according to your graphics card capabilities. For example, if your graphics card supports DirectX 11, select 11_0; if it supports DirectX 10, select 10_0; if it supports DirectX 9, select 9_1; and so on.
-
In the Device settings section, check the box next to Force WARP. This will enable you to run PES 2016 crack even if your graphics card does not support DirectX 11.
-
In the Debug layer section, check the box next to Force ON. This will help you avoid errors and glitches when running PES 2016 crack with DXCPL.
-
In the Feature switches section, uncheck all the boxes except Disable feature level upgrade. This will disable some effects that may cause lag or crashes when running PES 2016 crack with DXCPL.
-
Click on Apply and then OK. You have successfully configured the DXCPL settings for PES 2016 crack.
-
-
How to run PES 2016 crack with DXCPL
-
The final step is to run PES 2016 crack with DXCPL. To do this, follow these steps:
-
-
Make sure that both dxcpl.exe and pes2016.exe are running as administrator. To do this, right-click on each file and select Run as administrator.
-
-
-
Click on Play. You will see something like this:
-
-
-
-
Wait for the game to load. You will see something like this:
-
-
-
-
Enjoy playing PES 2016 crack with DXCPL!
-
-
Troubleshooting Tips for DXCPL and PES 2016 Crack
-
Although DXCPL can help you run PES 2016 crack on your PC, you may still encounter some problems or errors. Here are some troubleshooting tips that may help you fix them.
-
What to do if DXCPL does not work for PES 2016 crack
-
If DXCPL does not work for PES 2016 crack, you may try the following solutions:
-
-
Make sure that you have downloaded DXCPL from a reliable source and that it is not corrupted or infected by viruses or malware.
-
Make sure that you have configured the DXCPL settings correctly according to your graphics card capabilities and preferences.
-
Make sure that you have run both dxcpl.exe and pes2016.exe as administrator.
-
Make sure that you have closed any other programs or applications that may interfere with DXCPL or PES 2016 crack.
What to do if PES 2016 crack does not run with DXCPL
-
If PES 2016 crack does not run with DXCPL, you may try the following solutions:
-
-
Make sure that you have downloaded PES 2016 crack from a reliable source and that it is not corrupted or infected by viruses or malware.
-
Make sure that you have installed PES 2016 crack correctly and that it is not missing any files or components.
-
Make sure that you have updated PES 2016 crack to the latest version and that it is compatible with DXCPL.
-
Make sure that you have disabled any antivirus or firewall software that may block or delete PES 2016 crack or DXCPL.
-
Make sure that you have applied any patches or fixes that may improve the performance and compatibility of PES 2016 crack or DXCPL.
-
Restart your PC and try again.
-
-
Conclusion
-
In this article, we have shown you how to download and use DXCPL for PES 2016 crack. We have explained what DXCPL is, why you need it for PES 2016 crack, how to configure it, how to run it, and how to troubleshoot it. We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!
-
FAQs
-
Here are some frequently asked questions about DXCPL and PES 2016 crack:
-
Is DXCPL safe to use?
-
Yes, DXCPL is safe to use as long as you download it from a reliable source and scan it with an antivirus software before using it. However, you should be careful when changing the DirectX settings of your PC, as some options may cause instability or damage to your system. Always backup your data and create a restore point before using DXCPL.
-
Is PES 2016 crack legal to use?
-
No, PES 2016 crack is not legal to use. It is a modified version of the original game that bypasses the license key verification process. This violates the terms and conditions of the game developer and publisher, Konami. By using PES 2016 crack, you are infringing their intellectual property rights and risking legal action. We do not condone or encourage the use of PES 2016 crack or any other pirated software. If you want to play PES 2016 legally, you should buy a license key from an authorized seller.
-
Can I use DXCPL for other games besides PES 2016 crack?
-
Yes, you can use DXCPL for other games besides PES 2016 crack. However, not all games will work with DXCPL, as some games may have different DirectX requirements or compatibility issues. You should always check the game specifications and reviews before using DXCPL for them. You should also test the game performance and stability with different DXCPL settings before playing them.
-
Can I use other tools besides DXCPL for PES 2016 crack?
-
Yes, you can use other tools besides DXCPL for PES 2016 crack. However, not all tools will work with PES 2016 crack, as some tools may have different functions or compatibility issues. You should always check the tool specifications and reviews before using them for PES 2016 crack. You should also test the tool performance and stability with different settings before using them.
-
Where can I find more information about DXCPL and PES 2016 crack?
-
You can find more information about DXCPL and PES 2016 crack on various websites, forums, blogs, videos, or social media platforms. However, you should be careful when accessing these sources, as some of them may contain inaccurate, outdated, misleading, or harmful information. You should always verify the credibility and reliability of these sources before trusting them. You should also avoid clicking on any suspicious links or downloading any unknown files from these sources.
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/EaseUS Data Recovery Wizard Pro 11 Serial Key 2018.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/EaseUS Data Recovery Wizard Pro 11 Serial Key 2018.md
deleted file mode 100644
index e16c208239e5c74db691bb15b02cd1a33e0c193a..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/EaseUS Data Recovery Wizard Pro 11 Serial Key 2018.md
+++ /dev/null
@@ -1,151 +0,0 @@
-
-
EaseUS Data Recovery Wizard Pro 11 Serial Key 2018
-
Introduction
-
Have you ever lost your important data due to accidental deletion, formatting, virus attack, or other reasons? If so, you know how frustrating and stressful it can be to recover your lost files. Fortunately, there is a software that can help you with this problem: EaseUS Data Recovery Wizard Pro 11.
-
EaseUS Data Recovery Wizard Pro 11 is a powerful, professional data recovery tool that can recover deleted, formatted, or inaccessible data from various devices, such as PCs, laptops, hard drives, SSDs, USB drives, memory cards, digital cameras, and mobile phones. It supports more than 1000 file types, including photos, videos, documents, emails, audio, and archives. It also has advanced features like partition recovery, raw recovery, and bootable media recovery.
-
Why do you need a serial key for EaseUS Data Recovery Wizard Pro 11?
-
EaseUS Data Recovery Wizard Pro 11 has both a free and a paid version. The free version allows you to recover up to 2 GB of data for free in various data loss scenarios. However, if you want to recover unlimited data with a higher success rate and more features, you need to upgrade to the paid version. To do that, you need a serial key for EaseUS Data Recovery Wizard Pro 11.
-
A serial key is a unique code that activates the full version of the software. It usually consists of letters and numbers. You can get a serial key by purchasing the software from the official website or from other authorized sellers. However, if you don't want to spend money on the software, there are some other ways to get a serial key for free.
-
-
How to get a serial key for EaseUS Data Recovery Wizard Pro 11?
-
Method 1: Participate in a giveaway
-
One of the easiest ways to get a serial key for free is to participate in a giveaway. A giveaway is a promotional event where the software company or other sponsors offer free serial keys or license codes to lucky winners. You can find giveaways on various platforms like YouTube, Instagram, Facebook, Twitter, blogs, etc.
-
Followchain giveaway
-
For example, Followchain is a website that specializes in data backup, data recovery, and disk management. They are offering a list of free EaseUS Recovery keys and license codes on their website. To participate in their giveaway, you need to:
-
-
Subscribe to Followchain on YouTube.
-
Follow @followchainorg on Instagram.
-
Send a screenshot to @followchainorg on Instagram to prove that you're subscribed to their YouTube channel.
-
-
You will then receive a free EaseUS Recovery key or license code via Instagram DM.
-
Smart Serials giveaway
-
Another example is Smart Serials, which is a website that provides serial numbers for various software. They have a serial number for EaseUS Data Recovery Wizard 11.9.0 on their website. To get it for free, you need to:
-
-
Verify that you're human by completing a captcha.
-
Agree with their disclaimer that states that you will only use the serial number for evaluation purposes and not for commercial use.
-
Copy and paste the serial number into your software activation window.
-
-
You will then be able to use the full version of the software.
-
Method 2: Use a free recovery key or license code
-
Another way to get a serial key for free is to use a free recovery key or license code. A recovery key or license code is similar to a serial key but it is usually shorter and easier to remember. You can find free recovery keys or license codes on various websites or forums that share them with other users.
-
List of free recovery keys and license codes
-
Here is a list of some free recovery keys and license codes that we found online:
-
-
-
Recovery Key/License Code
C8XIP-2YHL2-39UMI-QVR56-4CI6L
JGFT5-YRUHJ-FYT45-TRUGH-GJRTU-YFH45
ZCQW8-ERZHF-IOVNU-WEJDF-KSDHT-UIOHN
CYNT7-GQKOL-UJYHT-BGFRV-CESDW-AZSXD
FUIERUI-REUIE83UW-ERIOE93-TRIOE93
E89237472-20W0W0-2929W-ERIE93I
DFFUR-FGJKDIE-DFJKDIEE-DFJKDIEEJ-ZBDYR-FGJKDIE
DHJDI-DQJKDI-DQJKDIEJD-FJKDIEJD-JKDIUE1-FKDFJE9FJ
-
-
-
-
How to use a free recovery key or license code
-
To use a free recovery key or license code, you need to:
-
-
Download and install EaseUS Data Recovery Wizard Pro 11 from the official website or from other trusted sources.
-
Launch the software and click on "Upgrade Now" or "Activate" button.
-
Enter the recovery key or license code into the input box and click on "Activate" button.
-
Wait for the activation process to complete and enjoy the full version of the software.
-
-
Method 3: Use a survey program to earn rewards
-
A third way to get a serial key for free is to use a survey program to earn rewards. A survey program is an online platform that pays you for completing surveys or other tasks. You can exchange your rewards for cash or gift cards that you can use to buy EaseUS Data Recovery Wizard Pro 11 from the official website or from other authorized sellers.
-
Survey Junkie
-
Survey Junkie is one of the most popular survey programs that pays you for sharing your opinions on various topics. You can earn up to $5 per survey and redeem your rewards via PayPal or e-gift cards. To start earning rewards with Survey Junkie, you need to:
-
-
Create a free account on Survey Junkie and complete your profile.
-
Verify your email address and start taking surveys that match your interests.
-
Inbox Dollars
-
Inbox Dollars is another survey program that pays you for taking online surveys, reading emails, playing games, shopping online, and more. You can earn up to $5 per survey and get a free $5 bonus when you sign up. You can cash out your rewards via PayPal or e-gift cards. To start earning rewards with Inbox Dollars, you need to:
-
-
Create a free account on Inbox Dollars and verify your email address.
-
Complete your profile and start taking surveys that match your preferences.
-
Earn cash for every survey you complete and other activities you do.
-
Cash out your rewards via PayPal or e-gift cards.
-
-
Conclusion
-
Summary of the article
-
In this article, we have discussed what EaseUS Data Recovery Wizard Pro 11 is and why you need a serial key for it. We have also shown you three methods to get a serial key for free: participating in a giveaway, using a free recovery key or license code, and using a survey program to earn rewards. We hope that this article has helped you to recover your lost data with EaseUS Data Recovery Wizard Pro 11.
-
FAQs
-
Here are some frequently asked questions about EaseUS Data Recovery Wizard Pro 11 and its serial key:
-
-
Q: Is EaseUS Data Recovery Wizard Pro 11 safe to use? A: Yes, EaseUS Data Recovery Wizard Pro 11 is safe to use as long as you download it from the official website or from other trusted sources. It does not contain any malware or viruses that can harm your computer or data.
-
Q: How long does it take to scan and recover data with EaseUS Data Recovery Wizard Pro 11? A: The scanning and recovery time depends on various factors, such as the size and condition of your disk, the amount and type of data you want to recover, the speed of your computer and internet connection, etc. Generally, it may take from a few minutes to several hours to scan and recover data with EaseUS Data Recovery Wizard Pro 11.
-
Q: Can EaseUS Data Recovery Wizard Pro 11 recover data from corrupted or damaged disks? A: Yes, EaseUS Data Recovery Wizard Pro 11 can recover data from corrupted or damaged disks as long as they are not physically broken or overwritten. It can also recover data from formatted, deleted, or lost partitions.
-
Q: Can I use the same serial key for multiple computers? A: No, you cannot use the same serial key for multiple computers. Each serial key is valid for one computer only. If you want to use EaseUS Data Recovery Wizard Pro 11 on more than one computer, you need to buy more licenses or use the Technician version that supports unlimited computers.
-
Q: What if I lose my serial key or license code? A: If you lose your serial key or license code, you can contact EaseUS customer service via email or live chat and provide them with your order information. They will help you retrieve your serial key or license code as soon as possible.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/AetherSX2 beta apk A guide to the best PS2 emulator on the Google Play Store.md b/spaces/1phancelerku/anime-remove-background/AetherSX2 beta apk A guide to the best PS2 emulator on the Google Play Store.md
deleted file mode 100644
index f73686d6798bedd460ad4651c66afad23f9b91db..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/AetherSX2 beta apk A guide to the best PS2 emulator on the Google Play Store.md
+++ /dev/null
@@ -1,117 +0,0 @@
-
-
AetherSX2: The Best PS2 Emulator for Android
-
If you are a fan of PlayStation 2 games and want to play them on your Android device, you might have heard of AetherSX2. It is a new PS2 emulator for Android that promises to deliver high compatibility, performance, and features. But what is AetherSX2 exactly, and how can you download and install it? In this article, we will answer these questions and more. We will also show you how to play PS2 games on AetherSX2 and what the pros and cons of this emulator are. Let's get started!

What is AetherSX2?

AetherSX2 is a new PS2 emulator for Android devices that was released in December 2021 as an open beta. It is developed by a team of passionate programmers who aim to create the best PS2 emulation experience on Android. AetherSX2 is based on the PCSX2 emulator for PC, which is the most popular and reliable PS2 emulator available. However, AetherSX2 is not a simple port of PCSX2, but a completely rewritten and optimized emulator that takes advantage of the hardware and software capabilities of modern Android devices.
AetherSX2 is a new PS2 emulator for Android devices that was released in December 2021 as an open beta. It is developed by a team of passionate programmers who aim to create the best PS2 emulation experience on Android. AetherSX2 is based on the PCSX2 emulator for PC, which is the most popular and reliable PS2 emulator available. However, AetherSX2 is not a simple port of PCSX2, but a completely rewritten and optimized emulator that takes advantage of the hardware and software capabilities of modern Android devices.
-
AetherSX2 is a new PS2 emulator for Android devices
-
A PS2 emulator is a program that allows you to run PS2 games on a different platform, such as Android. By emulating the PS2 hardware and software, the emulator can recreate the PS2 gaming experience on your device. However, emulating a complex system like the PS2 is not an easy task and requires a lot of technical skill and resources. That's why there are not many PS2 emulators for Android, and most of them are either outdated, unstable, or incompatible with many games.
-
AetherSX2 is different from other PS2 emulators for Android because it is a new project that is constantly updated and improved by its developers. It uses the latest technologies and techniques to achieve the best possible emulation quality and performance. It also supports a wide range of PS2 games, from popular titles like God of War, Final Fantasy X, Kingdom Hearts, GTA San Andreas, to obscure gems like Shadow of the Colossus, Okami, Persona 4, Silent Hill 3, and more.
-
-
AetherSX2 offers high compatibility, performance, and features
-
One of the main advantages of AetherSX2 is its high compatibility with PS2 games. According to the official website, AetherSX2 can run over 90% of the PS2 library with minimal issues. This means that you can enjoy most of your favorite PS2 games on your Android device without worrying about crashes, freezes, or glitches. Of course, some games may still have problems or require specific settings to run properly, but the developers are working hard to fix them in future updates.
-
Another advantage of AetherSX2 is its high performance. Unlike other PS2 emulators for Android that struggle to run games at full speed or with decent graphics quality, AetherSX2 can run most games at 60 FPS or higher with enhanced resolution and effects. This is possible thanks to the powerful optimization and customization options that AetherSX2 offers to the user. You can adjust the resolution, frame rate, aspect ratio, anti-aliasing, texture filtering, and other settings to suit your device and preference. You can also enable cheats, save states, fast forward, and other features to enhance your gaming experience.
AetherSX2 is easy to install and use
-
Another advantage of AetherSX2 is its ease of installation and use. Unlike other PS2 emulators for Android that require complex steps or additional files to run, AetherSX2 is a simple and straightforward app that you can download and install from the official website or Google Play Store. You don't need to root your device or install any other apps to use AetherSX2. You just need to have enough storage space and a compatible Android device that meets the minimum requirements.
-
AetherSX2 also has a user-friendly and intuitive interface that makes it easy to navigate and configure. You can access the game library, settings, controls, and other options from the main menu. You can also customize the on-screen buttons, touchpad, and motion controls to your liking. AetherSX2 also supports external controllers, such as Bluetooth or USB gamepads, for a more authentic PS2 gaming experience.
-
How to download and install AetherSX2 beta apk
-
If you are interested in trying AetherSX2 beta apk, you can follow these simple steps to download and install it on your Android device:
-
Download AetherSX2 beta apk from the official website or Google Play Store
-
The first step is to download the AetherSX2 beta apk file from the official website or Google Play Store. You can visit the official website at https://aethersx2.com/ and click on the download button. Alternatively, you can search for AetherSX2 on Google Play Store and install it from there. The file size is about 20 MB and it is free to download.
-
Enable unknown sources on your Android device
-
The next step is to enable unknown sources on your Android device. This is necessary if you download the apk file from the official website, as it is not from the Google Play Store. To enable unknown sources, go to Settings > Security > Unknown sources and toggle it on. This will allow you to install apps from sources other than the Google Play Store.
-
Install AetherSX2 beta apk and grant permissions
-
The final step is to install AetherSX2 beta apk and grant permissions. To do this, locate the downloaded apk file on your device and tap on it. You will see a prompt asking you to confirm the installation. Tap on Install and wait for the process to finish. Once installed, you will see a prompt asking you to grant permissions to AetherSX2. Tap on Allow and grant all the necessary permissions, such as storage, microphone, camera, etc. This will enable AetherSX2 to access your files, record audio, scan QR codes, and other functions.
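If you prefer doing this from a computer, the APK can also be sideloaded with Android's adb tool rather than tapped on the device. This is only a convenience sketch: it assumes a PC with adb installed, USB debugging enabled on the phone, and an illustrative file name for the downloaded APK.

    adb install aethersx2-beta.apk

adb installs the APK directly over the USB connection; the permission prompts described above will still appear the first time you launch the game.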
-
How to play PS2 games on AetherSX2
-
Now that you have downloaded and installed AetherSX2 beta apk on your Android device, you are ready to play PS2 games on it. However, before you can do that, you need to have some additional files: PS2 BIOS files and PS2 game ISOs or discs. Here's how to get them and load them on AetherSX2:
-
Load PS2 BIOS files on AetherSX2
-
PS2 BIOS files are essential for running PS2 games on any emulator. They are basically the firmware of the PS2 console that contains the system settings and functions. Without them, you won't be able to boot any PS2 game on AetherSX2.
-
However, PS2 BIOS files are not included in AetherSX2 due to legal reasons. You have to obtain them yourself from your own PS2 console or from other sources online. We won't provide any links or instructions on how to do that here, as it may violate some laws or regulations in your country. Please do some research and use your own discretion.
-
Once you have the PS2 BIOS files, you need to copy them to your Android device's storage. You can use a USB cable or a cloud service like Google Drive or Dropbox to transfer them. Then, you need to create a folder named "bios" in the root directory of your device's storage (not in any subfolder) and paste the PS2 BIOS files there.
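If you transfer the files over a USB cable, adb can create the folder and copy the files in one go. A minimal sketch, assuming USB debugging is enabled and using an illustrative BIOS file name (your own dump's name will differ):

    adb shell mkdir -p /sdcard/bios
    adb push SCPH-39001.bin /sdcard/bios/

Here /sdcard/ is the usual mount point for the root of the device's internal storage, which matches the "bios" folder location described above.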
-
After that, you need to load the PS2 BIOS files on AetherSX2. To do this, open AetherSX2 app and go to Settings > System > BIOS and select the BIOS file that matches your region and console model. For example, if you have a USA PS2 console, you should select the BIOS file named "SCPH-39001 USA.bin". You can also select multiple BIOS files if you have games from different regions. Once you have selected the BIOS file(s), tap on Apply and go back to the main menu.
-
Load PS2 game ISOs or discs on AetherSX2
-
PS2 game ISOs or discs are the actual games that you want to play on AetherSX2. They are basically the digital copies or physical copies of the PS2 games that you own or have access to. You can either rip them from your own PS2 discs using a PC or a PS2 console, or download them from other sources online. Again, we won't provide any links or instructions on how to do that here, as it may violate some laws or regulations in your country. Please do some research and use your own discretion.
-
Once you have the PS2 game ISOs or discs, you need to copy them to your Android device's storage. You can use a USB cable or a cloud service like Google Drive or Dropbox to transfer them. Then, you need to create a folder named "games" in the root directory of your device's storage (not in any subfolder) and paste the PS2 game ISOs or discs there.
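The same adb approach sketched in the BIOS section works here as well, again with an illustrative file name:

    adb shell mkdir -p /sdcard/games
    adb push MyGame.iso /sdcard/games/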
-
After that, you need to load the PS2 game ISOs or discs on AetherSX2. To do this, open AetherSX2 app and go to Game Library > Add Game and select the game ISO or disc that you want to play. You will see a thumbnail and some information about the game, such as title, genre, rating, etc. You can also edit the game information by tapping on the pencil icon. Once you have added the game, tap on it and select Play to start playing.
-
Configure AetherSX2 settings and controls
-
Before you start playing, you may want to configure some settings and controls on AetherSX2 to optimize your gaming experience. To do this, open AetherSX2 app and go to Settings. You will see several tabs with different options, such as Video, Audio, Input, System, etc. Here are some of the most important settings and controls that you can adjust:
-
-
Video: Here you can change the resolution, frame rate, aspect ratio, anti-aliasing, texture filtering, and other graphical settings of the emulator. You can also enable or disable some enhancements, such as FXAA, FXAA3HQ, FXAA4HQ, etc. You can also enable or disable some hacks, such as skipdraw, half-pixel offset, etc. These settings can improve the graphics quality and performance of some games, but they may also cause some glitches or compatibility issues with others. You can experiment with different settings to find the best balance for each game.
-
Audio: Here you can change the volume, latency, interpolation, reverb, and other audio settings of the emulator. You can also enable or disable some enhancements, such as Dolby Pro Logic II decoder, Time Stretching Audio Synchronization (TAS), etc. These settings can improve the audio quality and synchronization of some games, but they may also cause some distortion or lag with others. You can experiment with different settings to find the best balance for each game.
-
Input: Here you can change the controls of the emulator. You can choose between three input modes: On-screen buttons (OSB), Touchpad (TP), and Motion (MT). OSB mode uses virtual buttons on the screen that mimic the PS2 controller layout. TP mode uses a touchpad area on the screen that allows you to control the analog sticks with your fingers. MT mode uses your device's accelerometer and gyroscope sensors to control the analog sticks with your device's movement. You can also customize the size, position, opacity, and vibration of each input mode.
-
System: Here you can change the system settings of the emulator. You can choose between two emulation modes: Interpreter (INT) and Recompiler (REC). INT mode is more accurate but slower than REC mode. REC mode is faster but less accurate than INT mode. You can also enable or disable some options, such as fast boot, fast memory access (FMA), multithreaded VU1 (MTVU), etc. These options can improve the performance and compatibility of some games, but they may also cause some instability or errors with others. You can experiment with different options to find the best balance for each game.
-
-
You can also save and load different settings profiles for each game by tapping on the profile icon at the top right corner of the Settings screen. You can also reset the settings to default by tapping on the reset icon at the top left corner of the Settings screen.
-
How to launch and play a game on AetherSX2
-
Now that you have configured the settings and controls of AetherSX2, you are ready to play PS2 games on it. To do this, open AetherSX2 app and go to Game Library. You will see a list of games that you have added to the emulator. Tap on the game that you want to play and select Play. The game will start loading and you will see the PS2 logo and the game intro. You can use the input mode that you have chosen to control the game. You can also access some options by tapping on the menu icon at the top right corner of the screen. You can save and load states, enable or disable cheats, fast forward or rewind, take screenshots, scan QR codes, and more.
-
Pros and cons of AetherSX2 beta apk
-
AetherSX2 beta apk is a great PS2 emulator for Android that offers many advantages, but it also has some drawbacks. Here are some of the pros and cons of AetherSX2 beta apk:
-
Pros: High compatibility, performance, features, and support
-
The main pros of AetherSX2 beta apk are its high compatibility, performance, features, and support. As we have mentioned before, AetherSX2 beta apk can run over 90% of the PS2 library with minimal issues. It can also run most games at 60 FPS or higher with enhanced resolution and effects. It also offers many features and options to customize and improve your gaming experience. It also has a dedicated website and a Discord server where you can get updates, news, guides, tips, support, and feedback from the developers and the community.
-
Cons: Beta version, bugs, glitches, and compatibility issues
-
The main cons of AetherSX2 beta apk are its beta version, bugs, glitches, and compatibility issues. As we have mentioned before, AetherSX2 beta apk is still in development and not a final product. This means that it may have some bugs, glitches, and compatibility issues with some games or devices. Some games may not run at all or run with errors or poor performance. Some devices may not be compatible or have problems with installation or permissions. Some settings or features may not work properly or cause crashes or freezes. These issues are expected in a beta version and the developers are working hard to fix them in future updates.
-
Conclusion and FAQs
-
AetherSX2 beta apk is a new PS2 emulator for Android that promises to deliver high compatibility, performance, and features. It is based on the PCSX2 emulator for PC, but it is a completely rewritten and optimized emulator that takes advantage of the hardware and software capabilities of modern Android devices. It is easy to install and use, and it supports a wide range of PS2 games. However, it is still in development and not a final product. It may have some bugs, glitches, and compatibility issues with some games or devices. These issues are expected in a beta version and the developers are working hard to fix them in future updates.
-
If you are interested in trying AetherSX2 beta apk, you can download it from the official website or Google Play Store. You will also need to have PS2 BIOS files and PS2 game ISOs or discs to play PS2 games on it. You can also configure some settings and controls to optimize your gaming experience. You can also access some options to enhance your gaming experience.
-
We hope this article has helped you learn more about AetherSX2 beta apk and how to use it. If you have any questions or feedback about AetherSX2 beta apk, you can visit the official website or join the Discord server. You can also check out some of the FAQs below:
-
FAQs
-
-
Q: Is AetherSX2 beta apk legal?
-
A: AetherSX2 beta apk itself is legal, as it is a software that emulates the PS2 hardware and software. However, downloading or distributing PS2 BIOS files or PS2 game ISOs or discs may be illegal in some countries or regions, depending on their laws or regulations. Please do some research and use your own discretion before obtaining these files.
-
Q: Is AetherSX2 beta apk safe?
-
A: AetherSX2 beta apk is safe if you download it from the official website or Google Play Store. It does not contain any viruses, malware, spyware, or other harmful components. However, if you download it from other sources online, you may risk getting infected by some malicious software or fake apps. Please be careful and only download AetherSX2 beta apk from trusted sources.
-
Q: Is AetherSX2 beta apk free?
-
A: AetherSX2 beta apk is free to download and use. You don't need to pay any fees or subscriptions to use AetherSX2 beta apk. However, you may need to pay for some PS2 games or discs if you don't own them already.
-
Q: What are the minimum requirements for AetherSX2 beta apk?
-
A: The minimum requirements for AetherSX2 beta apk are as follows:
-
-
An Android device running Android 7.0 or higher
-
A quad-core CPU with at least 2.0 GHz clock speed
-
At least 2 GB of RAM
-
At least 4 GB of free storage space
-
A GPU that supports OpenGL ES 3.0 or higher
-
-
Q: How can I update AetherSX2 beta apk?
-
A: You can update AetherSX2 beta apk by visiting the official website or Google Play Store and downloading the latest version. You can also enable automatic updates on your device's settings to get notified when a new update is available. You can also check the official website or Discord server for news and announcements about new updates.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Disfruta del juego de moto traffic rider apk un juego de conduccin increble con grficos espectaculares.md b/spaces/1phancelerku/anime-remove-background/Disfruta del juego de moto traffic rider apk un juego de conduccin increble con grficos espectaculares.md
deleted file mode 100644
index e56bc4ab81e227179059f847efd4d7205a9ae2b2..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Disfruta del juego de moto traffic rider apk un juego de conduccin increble con grficos espectaculares.md
+++ /dev/null
@@ -1,17 +0,0 @@
-
-
Moto Traffic Rider APK: An Addictive and Fun Motorcycle Racing Game

Do you like motorcycle racing games? Do you want to feel the adrenaline of riding at full speed along the most dangerous roads? Are you looking for a game that offers fun, challenge, and excitement? Then Moto Traffic Rider APK is for you. Moto Traffic Rider is a 3D motorcycle racing game that will push you to the limit. You will have to dodge traffic, overtake other vehicles, perform stunts, and reach the finish line as fast as possible. The game features realistic graphics, immersive sound, and smooth, intuitive gameplay. You can also choose between different types of motorcycles, customize them, and upgrade them with the money you earn in races.

What is the Moto Traffic Rider APK game?

Moto Traffic Rider is a game developed by Launchship Studios, the creators of Pastry Jam & Birds Pop Mania. The game was released in March 2021 and has since received thousands of downloads and positive ratings from users. The game is available for Android devices and can be downloaded for free from the Aptoide store.

Main features of the game

Moto Traffic Rider has the following main features:
More than 20 different motorcycles to choose from and unlock.
4 game modes: free ride, time trial, obstacle race, and traffic race.
Several settings and environments to ride through: city, desert, mountain, highway, and more.
A simple, adaptable control system: you can use the accelerometer, the buttons, or a joystick to steer your bike.
Realistic visual and sound effects: you can see the smoke, sparks, shadows, and reflections of your bike and hear the roar of the engine, the horn, and the screech of the brakes.
A global leaderboard and achievements so you can compete with other players and show off your skill.

How to download and install the game

To download and install Moto Traffic Rider APK on your Android device, just follow these steps:
Open the Aptoide store from your browser or download the app from [here](^1^).
Search for Moto Traffic Rider 3D in the search bar or in the games category.
Tap the "Install" button and wait for the download to complete.
Open the downloaded APK file and follow the instructions to install the game on your device.
Enjoy the game.

How do you play the Moto Traffic Rider APK game?

Moto Traffic Rider is very easy to play, but also very challenging.

Available game modes

Moto Traffic Rider has four different game modes so you can pick the one you like best:
Free ride: in this mode, you can ride your bike with no goal and no time limit. Just enjoy the scenery and the feeling of speed.
Time trial: in this mode, you have to complete a lap of the circuit in the shortest possible time. Every time you beat your record, you earn more money and points.
Obstacle race: in this mode, you have to avoid crashing into the obstacles that appear on the road, such as barrels, cones, fences, and more. The farther you get, the harder the game becomes.
Traffic race: in this mode, you have to weave your bike through road traffic, overtaking the cars, trucks, and buses that cross your path. The closer you pass to them, the more money and points you earn.

Tips and tricks to improve your performance

To play Moto Traffic Rider like a pro, we recommend you follow these tips and tricks:
Choose the bike that best suits your riding style. Each bike has its own speed, acceleration, braking, and handling characteristics. You can see each bike's stats on the selection menu.
Customize and upgrade your bike with the money you earn in races. You can change your bike's color, wheels, exhaust, and engine to make it faster and better looking.
Use the nitro to give your speed an extra boost. The nitro recharges automatically when you are not using it, but you can also pick up nitro bottles that appear on the road.
Perform stunts to earn more money and points. You can do wheelies, jumps, spins, and more. But be careful not to lose your balance or fall off the bike.
Use the ramps and bridges to jump over traffic and avoid obstacles. But be careful not to go off the road or crash into anything.
Keep a safe distance from the vehicles ahead of you. If you follow them too closely, you could crash into them if they brake or change lanes suddenly.
Do not leave your lane or ride into oncoming traffic. If you do, you could get a fine or cause an accident.

Why play the Moto Traffic Rider APK game?

Moto Traffic Rider is a game that gives you plenty of reasons to play and enjoy it. Here are some of them:

The benefits of playing motorcycle racing games

Playing motorcycle racing games has many benefits for your physical and mental health, for example:
It improves your hand-eye coordination and your reaction time. When riding a bike at high speed, you have to pay attention to everything happening around you and react quickly to any situation.
It stimulates your brain and your memory. When playing motorcycle racing games, you have to remember the circuits, the obstacles, the shortcuts, and the strategies for winning races.
It reduces stress and anxiety. Playing motorcycle racing games lets you release built-up tension and relax while having fun.
It boosts your self-esteem and confidence. Playing motorcycle racing games lets you push past your own limits and challenges, which makes you feel proud and satisfied.

The advantages of playing the Moto Traffic Rider APK game

Beyond the general benefits of motorcycle racing games, Moto Traffic Rider has some specific advantages that make it unique and special:
It is a free game with no ads. You do not have to pay anything to download or play it, and you do not have to watch annoying ads or sit through loading times.
It is compatible with all Android devices. Whether you have an old or a new phone, the game adapts to your screen and your device's performance.
It is constantly updated. The game's developers are always adding new bikes, new settings, new game modes, and new features to improve the player experience.
It lets you play online and offline. You can play without an internet connection, or go online to compete with players from all over the world.

Conclusion

Moto Traffic Rider APK is a motorcycle racing game that will take you on an incredible adventure. You can ride your bike at full speed through different settings, dodge traffic, perform stunts, and compete with other players. The game has realistic graphics, immersive sound, and smooth, intuitive gameplay. It is also free, ad-free, compatible with all Android devices, and constantly updated. If you like motorcycle racing games, do not hesitate to download Moto Traffic Rider APK and enjoy the thrill of speed.

Frequently asked questions about the Moto Traffic Rider APK game

Here are answers to some of the questions users most often ask about the game:
What does my device need to run the game? To play Moto Traffic Rider APK, your device must have at least Android 4.4 or higher and 100 MB of free space.
How can I change the game's language? To change the language, go to the settings menu and select the "Language" option. There you can choose among several available languages, such as Spanish, English, French, German, Italian, and more.
How can I contact the game's technical support? To contact technical support, go to the settings menu and select the "Support" option. There you can send an email with your question or problem to the Launchship Studios team.
How can I earn more money and points in the game? To earn more money and points, complete races successfully, perform stunts, pass close to other vehicles, collect the nitro bottles and bills that appear on the road, and beat your personal records.
How can I unlock more bikes in the game? To unlock more bikes, earn money in races and use it to buy the bikes you want. You can also unlock some special bikes by completing certain achievements or taking part in special events.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy Car Parking Multiplayer with Friends - Get the Latest APK Here.md b/spaces/1phancelerku/anime-remove-background/Enjoy Car Parking Multiplayer with Friends - Get the Latest APK Here.md
deleted file mode 100644
index 93e8ad68db9378c24cc1067cf23dc2951b28af42..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Enjoy Car Parking Multiplayer with Friends - Get the Latest APK Here.md
+++ /dev/null
@@ -1,112 +0,0 @@
-
-
Car Parking Multiplayer: A Review of the Latest Version APK
-
If you are looking for a realistic and immersive driving simulator, you might want to check out Car Parking Multiplayer. This game is about more than just parking your car: it is an open-world experience where you can explore different areas, customize your vehicle, interact with other players, and even walk around. In this article, we will review the latest version APK of Car Parking Multiplayer, which offers new features, improvements, and bug fixes. We will also tell you how to download and install it on your Android device, as well as why you should play this game and what tips and tricks you can use to enhance your gameplay.
Car Parking Multiplayer is a game developed by olzhass, a Turkish studio that specializes in simulation games. It was released in 2017 and has since gained over 100 million downloads on Google Play Store. It is one of the most popular car parking games on the market, with a rating of 4.4 out of 5 stars from over 2 million reviews. The game is available for free, but it contains ads and in-app purchases.
-
Features of the game
-
Car Parking Multiplayer has many features that make it stand out from other parking games. Here are some of them:
-
-
Multiplayer open world mode: You can join online servers and play with thousands of real players from around the world. You can chat with them, exchange cars, race against them, or cooperate with them in police mode. You can also create your own server and invite your friends to join you.
-
Car customization: You can choose from over 100 cars with real interiors and adjust various aspects of them, such as suspension, wheel angle, engine, turbo, gearbox, exhaust, and more. You can also change the appearance of your car with dynamic vinyls, car body parts, and plate types.
-
High-quality open world: You can explore different environments with high-detailed graphics, such as city, airport, desert, port, mountain, snow, and more. You can also enter buildings with interiors and interact with various objects.
-
Interesting gameplay: You can complete 82 real-life parking and driving challenges with different vehicles, such as tow truck, pickup, trucks, sport and classic cars. You can also enjoy free walking mode, where you can get out of your car and walk around the world.
-
-
How to download and install the latest version APK
-
If you want to play the latest version of Car Parking Multiplayer on your Android device, you will need to download and install the APK file from a trusted source. Here are the steps to do so:
-
-
Go to the download page and click on the green button that says "Download". This will start downloading the APK file to your device.
-
Once the download is complete, locate the file in your device's file manager and tap on it to install it. You may need to enable "Unknown sources" in your device's settings to allow the installation.
-
After the installation is done, you can launch the game from your app drawer or home screen and enjoy playing Car Parking Multiplayer.
-
-
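
If you prefer to sideload from a computer instead of using the device's file manager, you can also install the downloaded APK over ADB. The snippet below is a minimal sketch, not part of the official instructions: it assumes adb is installed and on your PATH, USB debugging is enabled on the device, and the APK filename is a placeholder for whatever you downloaded.

```python
# Hedged example: install a downloaded APK over ADB (assumed setup, see above).
import subprocess

apk_path = "car-parking-multiplayer.apk"  # placeholder for your downloaded file
# "-r" replaces an existing install while keeping the app's data.
subprocess.run(["adb", "install", "-r", apk_path], check=True)
```
-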
Why play Car Parking Multiplayer?
-
Car Parking Multiplayer is a game that offers a lot of fun and entertainment for car enthusiasts and casual gamers alike. Here are some reasons why you should play this game:
-
Pros and cons of the game
-
Like any other game, Car Parking Multiplayer has its pros and cons. Here are some of them:
-
-
-
| Pros | Cons |
| --- | --- |
| Realistic and immersive gameplay | Some bugs and glitches |
| Wide variety of cars and customization options | Some cars and features require in-app purchases |
| Large and diverse open world to explore | Some areas are not fully detailed or accessible |
| Friendly and active online community | Some players may be rude or disruptive |
| Regular updates and improvements | Some updates may cause compatibility issues or errors |
-
Tips and tricks for beginners
-
If you are new to Car Parking Multiplayer, you may find it challenging to master the controls and the gameplay. Here are some tips and tricks that can help you get started:
-
-
Adjust the camera angle: You can switch between different camera views by tapping on the camera icon on the top right corner of the screen. You can also pinch the screen to zoom in or out. Try to find the best angle that suits your preference and gives you a clear view of your surroundings.
-
Use the brake and handbrake: You can use the brake pedal on the bottom right corner of the screen to slow down or stop your car. You can also use the handbrake button on the left side of the screen to make sharp turns or drifts. Be careful not to overuse them, as they may damage your car or cause accidents.
-
Follow the instructions and indicators: When you are playing a parking or driving challenge, you will see instructions and indicators on the screen that guide you to your destination. You will also see arrows, cones, and lines that mark your path. Pay attention to them and follow them carefully, as they will help you complete the challenge successfully.
-
Earn money and XP: You can earn money and XP by completing challenges, racing with other players, or selling your cars. You can use money to buy new cars, upgrade your existing ones, or unlock new features. You can use XP to level up your profile and access more servers and modes.
-
Have fun and be respectful: The most important tip is to have fun and enjoy the game. You can explore the open world, interact with other players, or create your own scenarios. However, be respectful of other players and do not ruin their experience by crashing into them, blocking their way, or spamming the chat. Remember, this is a game for everyone.
-
-
User reviews and ratings
-
Car Parking Multiplayer has received mostly positive feedback from its users. Here are some of their reviews and ratings from Google Play Store:
-
-
"This game is awesome! I love how realistic it is and how you can customize your car. The graphics are amazing and the multiplayer mode is fun. I recommend this game to anyone who likes driving games."
-- A user who gave 5 stars
-
-
-
"The game is good but it has some problems. Sometimes it crashes or freezes and I lose my progress. Also, some cars are too expensive and some features are locked behind paywalls. Please fix these issues."
-- A user who gave 3 stars
-
-
-
"This game is terrible! It is full of bugs and glitches and it lags a lot. The controls are hard to use and the physics are unrealistic. The online mode is boring and there are too many ads. Do not download this game."
-- A user who gave 1 star
-
-
Conclusion
-
Summary of the main points
-
In conclusion, Car Parking Multiplayer is a game that offers a realistic and immersive driving simulator with a wide variety of cars, customization options, environments, modes, and challenges. It also has a multiplayer open world mode where you can play with thousands of real players from around the world. The game is free to download and play, but it contains ads and in-app purchases. The game has some pros and cons, as well as some tips and tricks that can help you improve your gameplay. The game has received mostly positive reviews and ratings from its users.
-
Recommendations for potential players
-
If you are interested in playing Car Parking Multiplayer, here are some recommendations for you:
-
-
Download the latest version APK from a trusted source: To enjoy the new features, improvements, and bug fixes of the game, you should download the latest version APK from a reputable site. This will ensure that you have the best version of the game on your device.
-
Try different cars and modes: To make the most out of the game, you should try different cars and modes that suit your taste and skill level. You can experiment with different settings and features to customize your car and enhance your performance. You can also switch between different modes, such as parking, driving, racing, or police, to have different experiences and challenges.
-
Join the online community: To have more fun and interaction, you should join the online community of Car Parking Multiplayer. You can chat with other players, exchange cars, race with them, or cooperate with them in various scenarios. You can also create your own server and invite your friends to play with you. You can also follow the official social media accounts of the game to get updates, news, and tips.
-
-
FAQs
-
Here are some frequently asked questions about Car Parking Multiplayer:
-
-
Is Car Parking Multiplayer safe to download and play? Yes, Car Parking Multiplayer is safe to download and play, as long as you get it from a trusted source. However, you should be careful when playing online, as some players may try to scam you or hack your account. You should also avoid clicking on suspicious links or ads that may redirect you to malicious websites or apps.
-
How can I remove ads from Car Parking Multiplayer? You can remove ads from Car Parking Multiplayer by purchasing the premium version of the game for $2.99. This will also give you access to some exclusive cars and features. Alternatively, you can turn off your internet connection while playing the game, but this will disable the multiplayer mode and some online features.
-
How can I get more money and XP in Car Parking Multiplayer? You can get more money and XP in Car Parking Multiplayer by completing challenges, racing with other players, or selling your cars. You can also watch ads or complete offers to get free money and XP. However, you should avoid using any cheats or hacks that claim to give you unlimited money and XP, as they may harm your device or get you banned from the game.
-
How can I contact the developers of Car Parking Multiplayer? You can contact the developers of Car Parking Multiplayer by sending them an email at olzhass@yandex.com. You can also visit their website or follow them on Facebook. You can also leave a review or a comment on Google Play Store to share your feedback or suggestions.
-
What are the system requirements for Car Parking Multiplayer? The system requirements for Car Parking Multiplayer are as follows:
-
-
Android version: 4.4 or higher
-
RAM: 1 GB or higher
-
Storage: 300 MB or higher
-
Internet connection: Required for multiplayer mode and some online features
-
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/801artistry/RVC801/extract_locale.py b/spaces/801artistry/RVC801/extract_locale.py
deleted file mode 100644
index a4ff5ea3ddd7c612c640544099ab98a861b8fe35..0000000000000000000000000000000000000000
--- a/spaces/801artistry/RVC801/extract_locale.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import json
-import re
-
-# Define regular expression patterns
-pattern = r"""i18n\([\s\n\t]*(["'][^"']+["'])[\s\n\t]*\)"""
-
-# Initialize the dictionary to store key-value pairs
-data = {}
-
-
-def process(fn: str):
- global data
- with open(fn, "r", encoding="utf-8") as f:
- contents = f.read()
- matches = re.findall(pattern, contents)
- for key in matches:
-        key = eval(key)  # the regex captures a quoted literal; eval() strips the quotes
- print("extract:", key)
- data[key] = key
-
-
-print("processing infer-web.py")
-process("infer-web.py")
-
-print("processing gui_v0.py")
-process("gui_v0.py")
-
-print("processing gui_v1.py")
-process("gui_v1.py")
-
-# Save as a JSON file
-with open("./i18n/en_US.json", "w", encoding="utf-8") as f:
- json.dump(data, f, ensure_ascii=False, indent=4)
- f.write("\n")
diff --git a/spaces/AIConsultant/MusicGen/audiocraft/quantization/base.py b/spaces/AIConsultant/MusicGen/audiocraft/quantization/base.py
deleted file mode 100644
index a77fefb98e62a5bbc6385910261ffdde2ffa5a25..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/audiocraft/quantization/base.py
+++ /dev/null
@@ -1,99 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Base class for all quantizers.
-"""
-
-from dataclasses import dataclass, field
-import typing as tp
-
-import torch
-from torch import nn
-
-
-@dataclass
-class QuantizedResult:
- x: torch.Tensor
- codes: torch.Tensor
- bandwidth: torch.Tensor # bandwidth in kb/s used, per batch item.
- penalty: tp.Optional[torch.Tensor] = None
- metrics: dict = field(default_factory=dict)
-
-
-class BaseQuantizer(nn.Module):
- """Base class for quantizers.
- """
-
- def forward(self, x: torch.Tensor, frame_rate: int) -> QuantizedResult:
- """
-        Given an input tensor x, return the quantized (or approximately quantized)
-        representation along with the quantized codes, the bandwidth, and an optional
-        penalty term for the loss, plus a dict of metrics for logging.
-        The frame rate must be passed so that the bandwidth is properly computed.
- """
- raise NotImplementedError()
-
- def encode(self, x: torch.Tensor) -> torch.Tensor:
- """Encode a given input tensor with the specified sample rate at the given bandwidth."""
- raise NotImplementedError()
-
- def decode(self, codes: torch.Tensor) -> torch.Tensor:
- """Decode the given codes to the quantized representation."""
- raise NotImplementedError()
-
- @property
- def total_codebooks(self):
- """Total number of codebooks."""
- raise NotImplementedError()
-
- @property
- def num_codebooks(self):
- """Number of active codebooks."""
- raise NotImplementedError()
-
- def set_num_codebooks(self, n: int):
- """Set the number of active codebooks."""
- raise NotImplementedError()
-
-
-class DummyQuantizer(BaseQuantizer):
- """Fake quantizer that actually does not perform any quantization.
- """
- def __init__(self):
- super().__init__()
-
- def forward(self, x: torch.Tensor, frame_rate: int):
- q = x.unsqueeze(1)
- return QuantizedResult(x, q, torch.tensor(q.numel() * 32 * frame_rate / 1000 / len(x)).to(x))
-
- def encode(self, x: torch.Tensor) -> torch.Tensor:
- """Encode a given input tensor with the specified sample rate at the given bandwidth.
- In the case of the DummyQuantizer, the codes are actually identical
- to the input and resulting quantized representation as no quantization is done.
- """
- return x.unsqueeze(1)
-
- def decode(self, codes: torch.Tensor) -> torch.Tensor:
- """Decode the given codes to the quantized representation.
- In the case of the DummyQuantizer, the codes are actually identical
- to the input and resulting quantized representation as no quantization is done.
- """
- return codes.squeeze(1)
-
- @property
- def total_codebooks(self):
- """Total number of codebooks."""
- return 1
-
- @property
- def num_codebooks(self):
- """Total number of codebooks."""
- return self.total_codebooks
-
- def set_num_codebooks(self, n: int):
- """Set the number of active codebooks."""
- raise AttributeError("Cannot override the number of codebooks for the dummy quantizer")
diff --git a/spaces/AIFILMS/generate_human_motion/VQ-Trans/models/pos_encoding.py b/spaces/AIFILMS/generate_human_motion/VQ-Trans/models/pos_encoding.py
deleted file mode 100644
index 066be3e1f8a1636f7eaabd1c534b9c618ee3e9f8..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/generate_human_motion/VQ-Trans/models/pos_encoding.py
+++ /dev/null
@@ -1,43 +0,0 @@
-"""
-Various positional encodings for the transformer.
-"""
-import math
-import torch
-from torch import nn
-
-def PE1d_sincos(seq_length, dim):
- """
-    :param seq_length: length of positions
-    :param dim: dimension of the model
-    :return: (seq_length, 1, dim) positional encoding matrix
- """
- if dim % 2 != 0:
- raise ValueError("Cannot use sin/cos positional encoding with "
- "odd dim (got dim={:d})".format(dim))
- pe = torch.zeros(seq_length, dim)
- position = torch.arange(0, seq_length).unsqueeze(1)
- div_term = torch.exp((torch.arange(0, dim, 2, dtype=torch.float) *
- -(math.log(10000.0) / dim)))
- pe[:, 0::2] = torch.sin(position.float() * div_term)
- pe[:, 1::2] = torch.cos(position.float() * div_term)
-
- return pe.unsqueeze(1)
-
-
-class PositionEmbedding(nn.Module):
- """
- Absolute pos embedding (standard), learned.
- """
- def __init__(self, seq_length, dim, dropout, grad=False):
- super().__init__()
- self.embed = nn.Parameter(data=PE1d_sincos(seq_length, dim), requires_grad=grad)
- self.dropout = nn.Dropout(p=dropout)
-
- def forward(self, x):
- # x.shape: bs, seq_len, feat_dim
- l = x.shape[1]
- x = x.permute(1, 0, 2) + self.embed[:l].expand(x.permute(1, 0, 2).shape)
- x = self.dropout(x.permute(1, 0, 2))
- return x
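-
-
-if __name__ == "__main__":
-    # Illustrative sketch (not part of the original module): embed a batch.
-    pe = PositionEmbedding(seq_length=64, dim=32, dropout=0.1)
-    x = torch.randn(4, 10, 32)  # (batch, seq_len, feat_dim); seq_len <= seq_length
-    print(pe(x).shape)  # torch.Size([4, 10, 32])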
-
-
\ No newline at end of file
diff --git a/spaces/AIWaves/Software_Company/src/agents/Memory/__init__.py b/spaces/AIWaves/Software_Company/src/agents/Memory/__init__.py
deleted file mode 100644
index 56f3aa09d927077ebc7f1a925f956dee78cb1c26..0000000000000000000000000000000000000000
--- a/spaces/AIWaves/Software_Company/src/agents/Memory/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .base_Memory import Memory
\ No newline at end of file
diff --git a/spaces/AIWaves/Software_Company/src/agents/SOP.py b/spaces/AIWaves/Software_Company/src/agents/SOP.py
deleted file mode 100644
index 7fc3e2f5e0c496774d9967fb88593fa4c88347e2..0000000000000000000000000000000000000000
--- a/spaces/AIWaves/Software_Company/src/agents/SOP.py
+++ /dev/null
@@ -1,296 +0,0 @@
-# coding=utf-8
-# Copyright 2023 The AIWaves Inc. team.
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""standard operation procedure of an LLM Autonomous agent"""
-import random
-from LLM.base_LLM import *
-from State import State
-from utils import extract, get_relevant_history
-from Memory import Memory
-from Prompt import *
-import json
-import os
-
-class SOP:
- """
- Responsible for managing the operational processes of all agents
- """
-
- # SOP should have args : "states" "relations" "root"
-
- def __init__(self, **kwargs):
- self.controller_dict = {}
- self.LLM = init_LLM("logs/god",**kwargs)
-
- self.states = {}
- self.init_states(kwargs["states"])
- self.init_relation(kwargs["relations"])
- for state_name, states_dict in kwargs["states"].items():
- if state_name != "end_state" and "controller" in states_dict:
- self.controller_dict[state_name] = states_dict["controller"]
-
- self.user_names = kwargs["user_names"] if "user_names" in kwargs else []
- self.root = self.states[kwargs["root"]]
- self.current_state = self.root
- self.finish_state_name = (
- kwargs["finish_state_name"]
- if "finish_state_name" in kwargs
- else "end_state"
- )
- self.roles_to_names = None
- self.names_to_roles = None
- self.finished = False
-
- @classmethod
- def from_config(cls, config_path):
- with open(config_path) as f:
- config = json.load(f)
- os.environ.clear()
- for key,value in config["config"].items():
- if key == "API_BASE":
- if value == "":
- pass
- else:
- os.environ[key] = value
- # assert "API_KEY" in os.environ and os.environ["API_KEY"] != "API_KEY","Please go to config.json to set API_KEY"
-
- sop = SOP(**config)
- return sop
-
- def init_states(self, states_dict):
- for state_name, state_dict in states_dict.items():
- state_dict["name"] = state_name
- self.states[state_name] = State(**state_dict)
-
- def init_relation(self, relations):
- for state_name, state_relation in relations.items():
- for idx, next_state_name in state_relation.items():
- self.states[state_name].next_states[idx] = self.states[next_state_name]
-
- def transit(self, chat_history, **kwargs):
- """
- Determine the next state based on the current situation
- Return :
- next_state(State) : the next state
- """
-        # If it is a single loop node, just keep looping
- if len(self.current_state.next_states) == 1:
- next_state = "0"
-
-        # Otherwise, the controller needs to determine which node to enter.
- else:
- current_state = self.current_state
- controller_dict = self.controller_dict[current_state.name]
- relevant_history = kwargs["relevant_history"]
-
- max_chat_nums = controller_dict["max_chat_nums"] if "max_chat_nums" in controller_dict else 1000
- if current_state.chat_nums>=max_chat_nums:
- return self.current_state.next_states["1"]
-
-
-            # Otherwise, let the controller judge whether this state should end
- judge_system_prompt = controller_dict["judge_system_prompt"]
- environment_prompt = eval(Get_environment_prompt) if current_state.environment_prompt else ""
- transit_system_prompt = eval(Transit_system_prompt)
-
- judge_last_prompt = controller_dict["judge_last_prompt"]
- transit_last_prompt = eval(Transit_last_prompt)
-
-
-
- environment = kwargs["environment"]
- environment_summary = environment.shared_memory["short_term_memory"]
- chat_history_message = Memory.get_chat_history(chat_history)
- query = chat_history[-1].get_query()
-
- chat_messages = [
- {
- "role": "user",
- "content": eval(Transit_message)
- }
- ]
-
- extract_words = controller_dict["judge_extract_words"] if "judge_extract_words" in controller_dict else "end"
-
-
- response = self.LLM.get_response(
- chat_messages, transit_system_prompt, transit_last_prompt, stream=False, **kwargs
- )
- next_state = (
- response if response.isdigit() else extract(response, extract_words)
- )
-
-            # If no next state could be parsed, keep looping on the current state
- if not next_state.isdigit():
- next_state = "0"
-
- next_state = self.current_state.next_states[next_state]
- return next_state
-
-
- def route(self, chat_history, **kwargs):
- """
- Determine the role that needs action based on the current situation
- Return :
- current_agent(Agent) : the next act agent
- """
-
- agents = kwargs["agents"]
-
-        # Once the state is known, assign a role; if the state has only one role, assign it directly.
- if len(self.current_state.roles) == 1:
- next_role = self.current_state.roles[0]
-
-
-
-        # Otherwise the controller decides which role acts next
- else:
- relevant_history = kwargs["relevant_history"]
- controller_type = (
- self.controller_dict[self.current_state.name]["controller_type"]
- if "controller_type" in self.controller_dict[self.current_state.name]
- else "order"
- )
-
-
-            # If the controller type is "rule", role assignment is delegated to the LLM.
- if controller_type == "rule":
- controller_dict = self.controller_dict[self.current_state.name]
-
- call_last_prompt = controller_dict["call_last_prompt"] if "call_last_prompt" in controller_dict else ""
-
- allocate_prompt = ""
- roles = list(set(self.current_state.roles))
- for role in roles:
- allocate_prompt += eval(Allocate_component)
-
- call_system_prompt = controller_dict["call_system_prompt"] if "call_system_prompt" in controller_dict else ""
- environment_prompt = eval(Get_environment_prompt) if self.current_state.environment_prompt else ""
- # call_system_prompt + environment + allocate_prompt
- call_system_prompt = eval(Call_system_prompt)
-
- query = chat_history[-1].get_query()
- last_name = chat_history[-1].send_name
- # last_prompt: note + last_prompt + query
- call_last_prompt =eval(Call_last_prompt)
-
-
- chat_history_message = Memory.get_chat_history(chat_history)
- # Intermediate historical conversation records
- chat_messages = [
- {
- "role": "user",
- "content": eval(Call_message),
- }
- ]
-
- extract_words = controller_dict["call_extract_words"] if "call_extract_words" in controller_dict else "end"
-
- response = self.LLM.get_response(
- chat_messages, call_system_prompt, call_last_prompt, stream=False, **kwargs
- )
-
- # get next role
- next_role = extract(response, extract_words)
-
- # Speak in order
- elif controller_type == "order":
- # If there is no begin role, it will be given directly to the first person.
- if not self.current_state.current_role:
- next_role = self.current_state.roles[0]
-                # otherwise, advance to the next role in order
- else:
- self.current_state.index += 1
- self.current_state.index = (self.current_state.index) % len(self.current_state.roles)
- next_role = self.current_state.roles[self.current_state.index]
- # random speak
- elif controller_type == "random":
- next_role = random.choice(self.current_state.roles)
-
-            # If the chosen role is not in this state, pick one at random
- if next_role not in self.current_state.roles:
- next_role = random.choice(self.current_state.roles)
-
- self.current_state.current_role = next_role
-
- next_agent = agents[self.roles_to_names[self.current_state.name][next_role]]
-
- return next_agent
-
- def next(self, environment, agents):
- """
- Determine the next state and the agent that needs action based on the current situation
- """
-
-        # If this state is being entered for the first time
-
- if self.current_state.is_begin:
- agent_name = self.roles_to_names[self.current_state.name][self.current_state.begin_role]
- agent = agents[agent_name]
- return self.current_state,agent
-
-
- # get relevant history
- query = environment.shared_memory["long_term_memory"][-1].content
- relevant_history = get_relevant_history(
- query,
- environment.shared_memory["long_term_memory"][:-1],
- environment.shared_memory["chat_embeddings"][:-1],
- )
- relevant_history = Memory.get_chat_history(relevant_history)
-
-
-
- next_state = self.transit(
- chat_history=environment.shared_memory["long_term_memory"][
- environment.current_chat_history_idx :
- ],
- relevant_history=relevant_history,
- environment=environment,
- )
-        # If the termination node is reached, stop immediately
- if next_state.name == self.finish_state_name:
- self.finished = True
- return None, None
-
- self.current_state = next_state
-
-        # If entering this state for the first time with a begin query, assign it directly to the begin role.
- if self.current_state.is_begin and self.current_state.begin_role:
- agent_name = self.roles_to_names[self.current_state.name][self.current_state.begin_role]
- agent = agents[agent_name]
- return self.current_state,agent
-
-
- next_agent = self.route(
- chat_history=environment.shared_memory["long_term_memory"][
- environment.current_chat_history_idx :
- ],
- agents = agents,
- relevant_history=relevant_history,
- )
-
- return self.current_state, next_agent
diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Easychat.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Easychat.py
deleted file mode 100644
index eb740da991eb8f740489f6bc76a1ad55f006663b..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Easychat.py
+++ /dev/null
@@ -1,55 +0,0 @@
-import requests
-import os
-import json
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://free.easychat.work'
-model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k',
- 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0613']
-supports_stream = True
-needs_auth = False
-
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
- headers = {
- 'authority': 'free.easychat.work',
- 'accept': 'text/event-stream',
- 'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
- 'content-type': 'application/json',
- 'endpoint': '',
- 'origin': 'https://free.easychat.work',
- 'plugins': '0',
- 'referer': 'https://free.easychat.work/',
- 'sec-ch-ua': '"Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"',
- 'sec-ch-ua-mobile': '?0',
- 'sec-ch-ua-platform': '"macOS"',
- 'sec-fetch-dest': 'empty',
- 'sec-fetch-mode': 'cors',
- 'sec-fetch-site': 'same-origin',
- 'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36',
- 'usesearch': 'false',
- 'x-requested-with': 'XMLHttpRequest',
- }
-
- json_data = {
- 'messages': messages,
- 'stream': True,
- 'model': model,
- 'temperature': 0.5,
- 'presence_penalty': 0,
- 'frequency_penalty': 0,
- 'top_p': 1,
- }
-
- response = requests.post('https://free.easychat.work/api/openai/v1/chat/completions',
- headers=headers, json=json_data)
-
- for chunk in response.iter_lines():
- if b'content' in chunk:
- data = json.loads(chunk.decode().split('data: ')[1])
- yield (data['choices'][0]['delta']['content'])
-
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
- '(%s)' % ', '.join(
- [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/raycaster-plugin.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/raycaster-plugin.d.ts
deleted file mode 100644
index 31eff7f42e7a9127d1bf61897ae94c5a4c3841da..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/raycaster-plugin.d.ts
+++ /dev/null
@@ -1,8 +0,0 @@
-import Raycaster from './raycaster';
-
-export default class RaycasterPlugin extends Phaser.Plugins.BasePlugin {
- add(
- config?: Raycaster.IConfig
- ): Raycaster;
-
-}
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/cube/Cube.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/cube/Cube.js
deleted file mode 100644
index 5354e3aaa5b81d30f074768530c30ce3606b646b..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/cube/Cube.js
+++ /dev/null
@@ -1,57 +0,0 @@
-import Base from '../base/Base.js';
-import { Line } from '../utils/Geoms.js';
-import Yoyo from '../utils/Yoyo.js';
-
-const Linear = Phaser.Math.Linear;
-const ExpoIn = Phaser.Math.Easing.Expo.In;
-const RowNum = 2;
-const ColNum = 2;
-
-class Cube extends Base {
- constructor(scene, config) {
- super(scene, config);
- this.type = 'rexSpinnerCube';
- }
-
- buildShapes() {
- var cnt = RowNum * ColNum;
- for (var i = 0; i < cnt; i++) {
- var line = new Line();
- this.addShape(line);
- }
- }
-
- updateShapes() {
- var centerX = this.centerX;
- var centerY = this.centerY;
- var radius = this.radius;
- var leftBound = centerX - radius;
- var topBound = centerY - radius;
- var cellWidth = (radius * 2) / ColNum;
- var cellHeight = (radius * 2) / RowNum;
-
- var shapes = this.getShapes(),
- cnt = shapes.length;
- for (var i = 0; i < cnt; i++) {
- var colIdx = (i % ColNum);
-            var rowIdx = Math.floor(i / ColNum); // row index: divide by the column count (RowNum === ColNum === 2 here)
- var x = leftBound + (cellWidth * (colIdx + 0.5));
- var y = topBound + (cellHeight * (rowIdx + 0.5));
-
- var line = shapes[i];
- var t = (this.value + ((cnt - i) * 0.1)) % 1;
- t = ExpoIn(Yoyo(t));
-
- var lineAlpha = (cnt - i) / cnt;
- var lineHeight = Linear(0.7, 1, t) * cellHeight;
- var lineWidth = Linear(0.7, 1, t) * cellWidth;
-
- line
- .lineStyle(lineWidth, this.color, lineAlpha)
- .setP0(x - (lineHeight / 2), y)
- .setP1(x + (lineHeight / 2), y);
- }
- }
-}
-
-export default Cube;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/spinner-plugin.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/spinner-plugin.d.ts
deleted file mode 100644
index c8a8f3bc193c719ddc408628c1e1df077b8870a3..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/spinner-plugin.d.ts
+++ /dev/null
@@ -1,87 +0,0 @@
-import AudioFactory from './audio/Factory';
-import BallFactory from './ball/Factory';
-import BarsFactory from './bars/Factory';
-import BoxFactory from './box/Factory';
-import ClockFactory from './clock/Factory';
-import CubeFactory from './cube/Factory';
-import CustomFactory from './custom/Factory';
-import DotsFactory from './dots/Factory';
-import FacebookFactory from './facebook/Factory';
-import GridFactory from './grid/Factory';
-import LosFactory from './los/Factory';
-import OrbitFactory from './orbit/Factory';
-import OvalFactory from './oval/Factory';
-import PieFactory from './pie/Factory';
-import PuffFactory from './puff/Factory';
-import RadioFactory from './radio/Factory';
-import RingsFactory from './rings/Factory';
-import SpinnerFactory from './spinner/Factory';
-
-export default SpinnerPlugins;
-
-declare class Factories {
- audio: typeof AudioFactory;
- ball: typeof BallFactory;
- bars: typeof BarsFactory;
- box: typeof BoxFactory;
- clock: typeof ClockFactory;
- cube: typeof CubeFactory;
- custom: typeof CustomFactory;
- dots: typeof DotsFactory;
- facebook: typeof FacebookFactory;
- grid: typeof GridFactory;
- los: typeof LosFactory;
- orbit: typeof OrbitFactory;
- oval: typeof OvalFactory;
- pie: typeof PieFactory;
- puff: typeof PuffFactory;
- radio: typeof RadioFactory;
- rings: typeof RingsFactory;
- spinner: typeof SpinnerFactory;
-}
-
-declare class SpinnerPlugins {
- constructor(scene: Phaser.Scene);
-
- add: Factories;
-}
-
-import AudioClass from './audio/Audio';
-import BallClass from './ball/Ball';
-import BarsClass from './bars/Bars';
-import BoxClass from './box/Box';
-import ClockClass from './clock/Clock';
-import CubeClass from './cube/Cube';
-import CustomClass from './custom/Custom';
-import DotsClass from './dots/Dots';
-import FacebookClass from './facebook/Facebook';
-import GridClass from './grid/Grid';
-import LosClass from './los/Los';
-import OrbitClass from './orbit/Orbit';
-import OvalClass from './oval/Oval';
-import PieClass from './pie/Pie';
-import PuffClass from './puff/Puff';
-import RadioClass from './radio/Radio';
-import RingsClass from './rings/Rings';
-import SpinnerClass from './spinner/Spinner';
-
-declare namespace SpinnerPlugins {
- type Audio = AudioClass;
- type Ball = BallClass;
-    type Bars = BarsClass;
- type Box = BoxClass;
- type Clock = ClockClass;
- type Cube = CubeClass;
- type Custom = CustomClass;
- type Dots = DotsClass;
- type Facebook = FacebookClass;
- type Grid = GridClass;
- type Los = LosClass;
- type Orbit = OrbitClass;
- type Oval = OvalClass;
- type Pie = PieClass;
- type Puff = PuffClass;
- type Radio = RadioClass;
- type Rings = RingsClass;
- type Spinner = SpinnerClass;
-}
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorpicker/methods/HPaletteCanvas.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorpicker/methods/HPaletteCanvas.js
deleted file mode 100644
index d1029c05ca60fa1ffca4528c92485efe960e05e1..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorpicker/methods/HPaletteCanvas.js
+++ /dev/null
@@ -1,107 +0,0 @@
-import Canvas from '../../../canvas/Canvas.js';
-import GetOrientationMode from '../../../utils/GetOrientationMode.js';
-import { DrawHPalette } from '../../../../../plugins/utils/canvas/DrawHSVPalette.js';
-
-const Color = Phaser.Display.Color;
-const Percent = Phaser.Math.Percent;
-const ColorToRGBA = Phaser.Display.Color.ColorToRGBA;
-const HSVToRGB = Phaser.Display.Color.HSVToRGB;
-
-class HPaletteCanvas extends Canvas {
- constructor(scene, x, y, width, height, orientation) {
- if (x === undefined) { x = 0; }
- if (y === undefined) { y = 0; }
- if (width === undefined) { width = 2; }
- if (height === undefined) { height = 2; }
-
- super(scene, x, y, width, height);
- this.type = 'rexColorPicker.HPaletteCanvas';
-
- this.colorObject = new Color();
-
- this.setOrientation(orientation);
- this.setSize(width, height);
- }
-
- setOrientation(orientation) {
- this.orientation = GetOrientationMode(orientation);
- return this;
- }
-
- updateTexture() {
- DrawHPalette(this.canvas, this.context, this.orientation);
- super.updateTexture();
- return this;
- }
-
- get color() {
- return this.colorObject.color;
- }
-
- get hue() {
- return this._hue;
- }
-
- set hue(value) {
- this._hue = value;
- }
-
- getHue(localX, localY) {
- if (localX === undefined) {
- return this.hue;
- }
-
- if (this.orientation === 0) {
- this.hue = Percent(localX, 0, this.width);
- } else {
- this.hue = Percent(localY, 0, this.height);
- }
-
- return this.hue;
- }
-
- getColor(localX, localY) {
- if (localX === undefined) {
- return this.color;
- }
-
- var h = this.getHue(localX, localY);
- this.colorObject.setFromRGB(HSVToRGB(h, 1, 1));
- return this.colorObject.color;
- }
-
-    setColor(color) {
-        // The hue palette texture is static, so there is no cached state to
-        // update here; marker placement is derived via colorToLocalPosition().
-        if (this.color === color) {
-            return this;
-        }
-
-        return this;
-    }
-
- colorToLocalPosition(color, out) {
- if (out === undefined) {
- out = {};
- } else if (out === true) {
- if (LocalXY === undefined) {
- LocalXY = {};
- }
- out = LocalXY;
- }
-
- this.colorObject.setFromRGB(ColorToRGBA(color));
-
- if (this.orientation === 0) {
- out.x = this.width * this.colorObject.h;
- out.y = this.height / 2;
- } else {
- out.x = this.width / 2;
- out.y = this.height * this.colorObject.h;
- }
-
- return out;
- }
-}
-
-var LocalXY = undefined;
-
-export default HPaletteCanvas;
\ No newline at end of file
diff --git a/spaces/AlbertoFH98/CastenaApp/app.py b/spaces/AlbertoFH98/CastenaApp/app.py
deleted file mode 100644
index ad00c6d3cf2afc22e39dbe4d9f77e0229e9de7d5..0000000000000000000000000000000000000000
--- a/spaces/AlbertoFH98/CastenaApp/app.py
+++ /dev/null
@@ -1,97 +0,0 @@
-# -- Import libraries
-from langchain.prompts import PromptTemplate
-from streamlit.logger import get_logger
-import pandas as pd
-import streamlit as st
-import urllib.request
-import argparse
-import together
-import logging
-import utils
-import spacy
-import time
-import os
-
-def main():
- # -- 1. Setup arguments
- parser = argparse.ArgumentParser()
- parser.add_argument('--DEFAULT_SYSTEM_PROMPT_LINK', type=str, default="https://raw.githubusercontent.com/AlbertoUAH/Castena/main/prompts/default_system_prompt.txt", help='Valor para DEFAULT_SYSTEM_PROMPT_LINK')
- parser.add_argument('--PODCAST_URL_VIDEO_PATH', type=str, default="https://raw.githubusercontent.com/AlbertoUAH/Castena/main/data/podcast_youtube_video.csv", help='Valor para PODCAST_URL_VIDEO_PATH')
- parser.add_argument('--TRANSCRIPTION', type=str, default='worldcast_roberto_vaquero', help='Name of the trascription')
- parser.add_argument('--MODEL', type=str, default='togethercomputer/llama-2-7b-chat', help='Model name')
- parser.add_argument('--EMB_MODEL', type=str, default='BAAI/bge-base-en-v1.5', help='Embedding model name')
- os.system("python -m spacy download es_core_news_lg")
-
- # -- 2. Setup env and logger
- os.environ["TOGETHER_API_KEY"] = "6101599d6e33e3bda336b8d007ca22e35a64c72cfd52c2d8197f663389fc50c5"
- logger = get_logger(__name__)
-
- # -- 3. Setup constants
- B_INST, E_INST = "[INST]", "[/INST]"
- B_SYS, E_SYS = "<>\n", "\n<>\n\n"
- args = parser.parse_args()
- PODCAST_URL_VIDEO_PATH = args.PODCAST_URL_VIDEO_PATH
- DEFAULT_SYSTEM_PROMPT_LINK = args.DEFAULT_SYSTEM_PROMPT_LINK
- TRANSCRIPTION = args.TRANSCRIPTION
- TRANSCRIPTION_PATH = '{}_transcription.txt'.format(TRANSCRIPTION)
- MODEL = args.MODEL
- EMB_MODEL = args.EMB_MODEL
- SOCIAL_ICONS = {
- "LinkedIn": ["https://www.linkedin.com/in/alberto-fernandez-hernandez-3a3474136/", "https://icon.signature.email/social/linkedin-circle-medium-0077b5-FFFFFF.png"],
- "GitHub": ["https://github.com/AlbertoUAH", "https://icon.signature.email/social/github-circle-medium-24292e-FFFFFF.png"]
- }
- social_icons_html = [f"" for platform in SOCIAL_ICONS]
-
- together.api_key = os.environ["TOGETHER_API_KEY"]
- together.Models.start(MODEL)
- podcast_url_video_df = pd.read_csv(PODCAST_URL_VIDEO_PATH, sep=';')
- youtube_video_url = list(podcast_url_video_df[podcast_url_video_df['podcast_name'] == "\'" + TRANSCRIPTION + "\'"]['youtube_video_url'])[0].replace("\'", "")
-
- # -- 4. Setup request for system prompt
- f = urllib.request.urlopen(DEFAULT_SYSTEM_PROMPT_LINK)
- DEFAULT_SYSTEM_PROMPT = str(f.read(), 'UTF-8')
-
- # -- 5. Setup app
- translator, nlp, retriever = utils.setup_app(TRANSCRIPTION_PATH, EMB_MODEL, MODEL, logger)
-
-
- # -- 6. Setup prompt template + llm chain
- instruction = """CONTEXT:/n/n {context}/n
-
- Question: {question}"""
- prompt_template = utils.get_prompt(instruction, DEFAULT_SYSTEM_PROMPT, B_SYS, E_SYS, B_INST, E_INST, logger)
-
- llama_prompt = PromptTemplate(
- template=prompt_template, input_variables=["context", "question"]
- )
- chain_type_kwargs = {"prompt": llama_prompt}
-
- qa_chain = utils.create_llm_chain(MODEL, retriever, chain_type_kwargs, logger)
-
- # ---------------------------------------------------------------------
- # -- 7. Setup Streamlit app
- st.title("Podcast: {}".format(' '.join(x.capitalize() for x in TRANSCRIPTION.split('_'))))
- st.image("https://raw.githubusercontent.com/AlbertoUAH/autexTification/main/media/{}.jpeg".format(TRANSCRIPTION))
-
- original_input_text = st.text_input("Pregunta")
- if st.button("Consultar") or original_input_text:
- translated_input_text = utils.translate_text(original_input_text, nlp, target_lang='en')
- logger.info('A query has been launched. Query: {}'.format(original_input_text))
- logger.info('Waiting for response...')
- llm_response = qa_chain(translated_input_text)
- llm_response = utils.process_llm_response(llm_response, nlp).replace(': ', ': ').replace('. ', '. ').replace('" ', '" ')
-        logger.info('Response received successfully! {}'.format(llm_response))
- typewrited_llm_response = utils.typewrite(utils.add_hyperlink_and_convert_to_seconds(llm_response), youtube_video_url)
- st.components.v1.html(typewrited_llm_response, width=800, height=750, scrolling=True)
-
- st.write(f"""
Información de contacto
""", unsafe_allow_html=True)
- st.write(f"""
-
- {''.join(social_icons_html)}
-
""",
- unsafe_allow_html=True
- )
-
-# -- Sample: streamlit run app.py -- --DEFAULT_SYSTEM_PROMPT_LINK=https://raw.githubusercontent.com/AlbertoUAH/Castena/main/prompts/default_system_prompt.txt --PODCAST_URL_VIDEO_PATH=https://raw.githubusercontent.com/AlbertoUAH/Castena/main/data/podcast_youtube_video.csv --TRANSCRIPTION=worldcast_roberto_vaquero --MODEL=togethercomputer/llama-2-7b-chat --EMB_MODEL=BAAI/bge-base-en-v1.5
-if __name__ == '__main__':
- main()
\ No newline at end of file
diff --git a/spaces/Alisonbakers/Fml/Dockerfile b/spaces/Alisonbakers/Fml/Dockerfile
deleted file mode 100644
index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000
--- a/spaces/Alisonbakers/Fml/Dockerfile
+++ /dev/null
@@ -1,21 +0,0 @@
-FROM node:18-bullseye-slim
-
-RUN apt-get update && \
-    apt-get install -y git
-
-RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
-
-WORKDIR /app
-
-RUN npm install
-
-COPY Dockerfile greeting.md* .env* ./
-
-RUN npm run build
-
-EXPOSE 7860
-
-ENV NODE_ENV=production
-
-CMD [ "npm", "start" ]
\ No newline at end of file
diff --git a/spaces/Aloento/9Nine-PITS/text/frontend/tone_sandhi.py b/spaces/Aloento/9Nine-PITS/text/frontend/tone_sandhi.py
deleted file mode 100644
index f80deae5c7fb0bf8a0de6ae952e70bba0442b13b..0000000000000000000000000000000000000000
--- a/spaces/Aloento/9Nine-PITS/text/frontend/tone_sandhi.py
+++ /dev/null
@@ -1,348 +0,0 @@
-# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-from typing import List
-from typing import Tuple
-
-import jieba
-from pypinyin import Style
-from pypinyin import lazy_pinyin
-
-
-class ToneSandhi():
- def __init__(self):
- self.must_neural_tone_words = {
- '麻烦', '麻利', '鸳鸯', '高粱', '骨头', '骆驼', '马虎', '首饰', '馒头', '馄饨', '风筝',
- '难为', '队伍', '阔气', '闺女', '门道', '锄头', '铺盖', '铃铛', '铁匠', '钥匙', '里脊',
- '里头', '部分', '那么', '道士', '造化', '迷糊', '连累', '这么', '这个', '运气', '过去',
- '软和', '转悠', '踏实', '跳蚤', '跟头', '趔趄', '财主', '豆腐', '讲究', '记性', '记号',
- '认识', '规矩', '见识', '裁缝', '补丁', '衣裳', '衣服', '衙门', '街坊', '行李', '行当',
- '蛤蟆', '蘑菇', '薄荷', '葫芦', '葡萄', '萝卜', '荸荠', '苗条', '苗头', '苍蝇', '芝麻',
- '舒服', '舒坦', '舌头', '自在', '膏药', '脾气', '脑袋', '脊梁', '能耐', '胳膊', '胭脂',
- '胡萝', '胡琴', '胡同', '聪明', '耽误', '耽搁', '耷拉', '耳朵', '老爷', '老实', '老婆',
- '老头', '老太', '翻腾', '罗嗦', '罐头', '编辑', '结实', '红火', '累赘', '糨糊', '糊涂',
- '精神', '粮食', '簸箕', '篱笆', '算计', '算盘', '答应', '笤帚', '笑语', '笑话', '窟窿',
- '窝囊', '窗户', '稳当', '稀罕', '称呼', '秧歌', '秀气', '秀才', '福气', '祖宗', '砚台',
- '码头', '石榴', '石头', '石匠', '知识', '眼睛', '眯缝', '眨巴', '眉毛', '相声', '盘算',
- '白净', '痢疾', '痛快', '疟疾', '疙瘩', '疏忽', '畜生', '生意', '甘蔗', '琵琶', '琢磨',
- '琉璃', '玻璃', '玫瑰', '玄乎', '狐狸', '状元', '特务', '牲口', '牙碜', '牌楼', '爽快',
- '爱人', '热闹', '烧饼', '烟筒', '烂糊', '点心', '炊帚', '灯笼', '火候', '漂亮', '滑溜',
- '溜达', '温和', '清楚', '消息', '浪头', '活泼', '比方', '正经', '欺负', '模糊', '槟榔',
- '棺材', '棒槌', '棉花', '核桃', '栅栏', '柴火', '架势', '枕头', '枇杷', '机灵', '本事',
- '木头', '木匠', '朋友', '月饼', '月亮', '暖和', '明白', '时候', '新鲜', '故事', '收拾',
- '收成', '提防', '挖苦', '挑剔', '指甲', '指头', '拾掇', '拳头', '拨弄', '招牌', '招呼',
- '抬举', '护士', '折腾', '扫帚', '打量', '打算', '打点', '打扮', '打听', '打发', '扎实',
- '扁担', '戒指', '懒得', '意识', '意思', '情形', '悟性', '怪物', '思量', '怎么', '念头',
- '念叨', '快活', '忙活', '志气', '心思', '得罪', '张罗', '弟兄', '开通', '应酬', '庄稼',
- '干事', '帮手', '帐篷', '希罕', '师父', '师傅', '巴结', '巴掌', '差事', '工夫', '岁数',
- '屁股', '尾巴', '少爷', '小气', '小伙', '将就', '对头', '对付', '寡妇', '家伙', '客气',
- '实在', '官司', '学问', '学生', '字号', '嫁妆', '媳妇', '媒人', '婆家', '娘家', '委屈',
- '姑娘', '姐夫', '妯娌', '妥当', '妖精', '奴才', '女婿', '头发', '太阳', '大爷', '大方',
- '大意', '大夫', '多少', '多么', '外甥', '壮实', '地道', '地方', '在乎', '困难', '嘴巴',
- '嘱咐', '嘟囔', '嘀咕', '喜欢', '喇嘛', '喇叭', '商量', '唾沫', '哑巴', '哈欠', '哆嗦',
- '咳嗽', '和尚', '告诉', '告示', '含糊', '吓唬', '后头', '名字', '名堂', '合同', '吆喝',
- '叫唤', '口袋', '厚道', '厉害', '千斤', '包袱', '包涵', '匀称', '勤快', '动静', '动弹',
- '功夫', '力气', '前头', '刺猬', '刺激', '别扭', '利落', '利索', '利害', '分析', '出息',
- '凑合', '凉快', '冷战', '冤枉', '冒失', '养活', '关系', '先生', '兄弟', '便宜', '使唤',
- '佩服', '作坊', '体面', '位置', '似的', '伙计', '休息', '什么', '人家', '亲戚', '亲家',
- '交情', '云彩', '事情', '买卖', '主意', '丫头', '丧气', '两口', '东西', '东家', '世故',
- '不由', '不在', '下水', '下巴', '上头', '上司', '丈夫', '丈人', '一辈', '那个', '菩萨',
- '父亲', '母亲', '咕噜', '邋遢', '费用', '冤家', '甜头', '介绍', '荒唐', '大人', '泥鳅',
- '幸福', '熟悉', '计划', '扑腾', '蜡烛', '姥爷', '照顾', '喉咙', '吉他', '弄堂', '蚂蚱',
- '凤凰', '拖沓', '寒碜', '糟蹋', '倒腾', '报复', '逻辑', '盘缠', '喽啰', '牢骚', '咖喱',
- '扫把', '惦记'
- }
- self.must_not_neural_tone_words = {
- "男子", "女子", "分子", "原子", "量子", "莲子", "石子", "瓜子", "电子", "人人", "虎虎"
- }
- self.punc = ":,;。?!“”‘’':,;.?!"
-
- # the meaning of jieba pos tag: https://blog.csdn.net/weixin_44174352/article/details/113731041
- # e.g.
- # word: "家里"
- # pos: "s"
- # finals: ['ia1', 'i3']
- def _neural_sandhi(self, word: str, pos: str,
- finals: List[str]) -> List[str]:
-
- # reduplication words for n. and v. e.g. 奶奶, 试试, 旺旺
- for j, item in enumerate(word):
- if j - 1 >= 0 and item == word[j - 1] and pos[0] in {
- "n", "v", "a"
- } and word not in self.must_not_neural_tone_words:
- finals[j] = finals[j][:-1] + "5"
- ge_idx = word.find("个")
- if len(word) >= 1 and word[-1] in "吧呢哈啊呐噻嘛吖嗨呐哦哒额滴哩哟喽啰耶喔诶":
- finals[-1] = finals[-1][:-1] + "5"
- elif len(word) >= 1 and word[-1] in "的地得":
- finals[-1] = finals[-1][:-1] + "5"
- # e.g. 走了, 看着, 去过
- elif len(word) == 1 and word in "了着过" and pos in {"ul", "uz", "ug"}:
- finals[-1] = finals[-1][:-1] + "5"
- elif len(word) > 1 and word[-1] in "们子" and pos in {
- "r", "n"
- } and word not in self.must_not_neural_tone_words:
- finals[-1] = finals[-1][:-1] + "5"
- # e.g. 桌上, 地下, 家里
- elif len(word) > 1 and word[-1] in "上下里" and pos in {"s", "l", "f"}:
- finals[-1] = finals[-1][:-1] + "5"
- # e.g. 上来, 下去
- elif len(word) > 1 and word[-1] in "来去" and word[-2] in "上下进出回过起开":
- finals[-1] = finals[-1][:-1] + "5"
-        # "个" used as a measure word
- elif (ge_idx >= 1 and
- (word[ge_idx - 1].isnumeric() or
- word[ge_idx - 1] in "几有两半多各整每做是")) or word == '个':
- finals[ge_idx] = finals[ge_idx][:-1] + "5"
- else:
- if word in self.must_neural_tone_words or word[
- -2:] in self.must_neural_tone_words:
- finals[-1] = finals[-1][:-1] + "5"
-
- word_list = self._split_word(word)
- finals_list = [finals[:len(word_list[0])], finals[len(word_list[0]):]]
- for i, word in enumerate(word_list):
- # conventional neural in Chinese
- if word in self.must_neural_tone_words or word[
- -2:] in self.must_neural_tone_words:
- finals_list[i][-1] = finals_list[i][-1][:-1] + "5"
- finals = sum(finals_list, [])
- return finals
-
- def _bu_sandhi(self, word: str, finals: List[str]) -> List[str]:
- # e.g. 看不懂
- if len(word) == 3 and word[1] == "不":
- finals[1] = finals[1][:-1] + "5"
- else:
- for i, char in enumerate(word):
- # "不" before tone4 should be bu2, e.g. 不怕
- if char == "不" and i + 1 < len(word) and finals[i +
- 1][-1] == "4":
- finals[i] = finals[i][:-1] + "2"
- return finals
-
- def _yi_sandhi(self, word: str, finals: List[str]) -> List[str]:
- # "一" in number sequences, e.g. 一零零, 二一零
- if word.find("一") != -1 and all(
- [item.isnumeric() for item in word if item != "一"]):
- return finals
- # "一" between reduplication words shold be yi5, e.g. 看一看
- elif len(word) == 3 and word[1] == "一" and word[0] == word[-1]:
- finals[1] = finals[1][:-1] + "5"
- # when "一" is ordinal word, it should be yi1
- elif word.startswith("第一"):
- finals[1] = finals[1][:-1] + "1"
- else:
- for i, char in enumerate(word):
- if char == "一" and i + 1 < len(word):
- # "一" before tone4 should be yi2, e.g. 一段
- if finals[i + 1][-1] == "4":
- finals[i] = finals[i][:-1] + "2"
- # "一" before non-tone4 should be yi4, e.g. 一天
- else:
- # "一" 后面如果是标点,还读一声
- if word[i + 1] not in self.punc:
- finals[i] = finals[i][:-1] + "4"
- return finals
-
- def _split_word(self, word: str) -> List[str]:
- word_list = jieba.cut_for_search(word)
- word_list = sorted(word_list, key=lambda i: len(i), reverse=False)
- first_subword = word_list[0]
- first_begin_idx = word.find(first_subword)
- if first_begin_idx == 0:
- second_subword = word[len(first_subword):]
- new_word_list = [first_subword, second_subword]
- else:
- second_subword = word[:-len(first_subword)]
- new_word_list = [second_subword, first_subword]
- return new_word_list
-
- def _three_sandhi(self, word: str, finals: List[str]) -> List[str]:
- if len(word) == 2 and self._all_tone_three(finals):
- finals[0] = finals[0][:-1] + "2"
- elif len(word) == 3:
- word_list = self._split_word(word)
- if self._all_tone_three(finals):
- # disyllabic + monosyllabic, e.g. 蒙古/包
- if len(word_list[0]) == 2:
- finals[0] = finals[0][:-1] + "2"
- finals[1] = finals[1][:-1] + "2"
- # monosyllabic + disyllabic, e.g. 纸/老虎
- elif len(word_list[0]) == 1:
- finals[1] = finals[1][:-1] + "2"
- else:
- finals_list = [
- finals[:len(word_list[0])], finals[len(word_list[0]):]
- ]
- if len(finals_list) == 2:
- for i, sub in enumerate(finals_list):
- # e.g. 所有/人
- if self._all_tone_three(sub) and len(sub) == 2:
- finals_list[i][0] = finals_list[i][0][:-1] + "2"
- # e.g. 好/喜欢
- elif i == 1 and not self._all_tone_three(sub) and finals_list[i][0][-1] == "3" and \
- finals_list[0][-1][-1] == "3":
-
- finals_list[0][-1] = finals_list[0][-1][:-1] + "2"
- finals = sum(finals_list, [])
-        # split a four-character idiom into two two-character words
- elif len(word) == 4:
- finals_list = [finals[:2], finals[2:]]
- finals = []
- for sub in finals_list:
- if self._all_tone_three(sub):
- sub[0] = sub[0][:-1] + "2"
- finals += sub
-
- return finals
-
- def _all_tone_three(self, finals: List[str]) -> bool:
- return all(x[-1] == "3" for x in finals)
-
- # merge "不" and the word behind it
- # if don't merge, "不" sometimes appears alone according to jieba, which may occur sandhi error
- def _merge_bu(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- last_word = ""
- for word, pos in seg:
- if last_word == "不":
- word = last_word + word
- if word != "不":
- new_seg.append((word, pos))
- last_word = word[:]
- if last_word == "不":
- new_seg.append((last_word, 'd'))
- last_word = ""
- return new_seg
-
- # function 1: merge "一" and reduplication words in it's left and right, e.g. "听","一","听" ->"听一听"
- # function 2: merge single "一" and the word behind it
- # if don't merge, "一" sometimes appears alone according to jieba, which may occur sandhi error
- # e.g.
- # input seg: [('听', 'v'), ('一', 'm'), ('听', 'v')]
- # output seg: [['听一听', 'v']]
- def _merge_yi(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- # function 1
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and word == "一" and i + 1 < len(seg) and seg[i - 1][
- 0] == seg[i + 1][0] and seg[i - 1][1] == "v":
- new_seg[i - 1][0] = new_seg[i - 1][0] + "一" + new_seg[i - 1][0]
- else:
- if i - 2 >= 0 and seg[i - 1][0] == "一" and seg[i - 2][
- 0] == word and pos == "v":
- continue
- else:
- new_seg.append([word, pos])
- seg = new_seg
- new_seg = []
- # function 2
- for i, (word, pos) in enumerate(seg):
- if new_seg and new_seg[-1][0] == "一":
- new_seg[-1][0] = new_seg[-1][0] + word
- else:
- new_seg.append([word, pos])
- return new_seg
-
- # the first and the second words are all_tone_three
- def _merge_continuous_three_tones(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- sub_finals_list = [
- lazy_pinyin(
- word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
- for (word, pos) in seg
- ]
- assert len(sub_finals_list) == len(seg)
- merge_last = [False] * len(seg)
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and self._all_tone_three(
- sub_finals_list[i - 1]) and self._all_tone_three(
- sub_finals_list[i]) and not merge_last[i - 1]:
-                # if the previous word is a reduplication, do not merge, since reduplications must go through _neural_sandhi
- if not self._is_reduplication(seg[i - 1][0]) and len(
- seg[i - 1][0]) + len(seg[i][0]) <= 3:
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- merge_last[i] = True
- else:
- new_seg.append([word, pos])
- else:
- new_seg.append([word, pos])
-
- return new_seg
-
- def _is_reduplication(self, word: str) -> bool:
- return len(word) == 2 and word[0] == word[1]
-
-    # the last char of the first word and the first char of the second word are tone three
- def _merge_continuous_three_tones_2(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- sub_finals_list = [
- lazy_pinyin(
- word, neutral_tone_with_five=True, style=Style.FINALS_TONE3)
- for (word, pos) in seg
- ]
- assert len(sub_finals_list) == len(seg)
- merge_last = [False] * len(seg)
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and sub_finals_list[i - 1][-1][-1] == "3" and sub_finals_list[i][0][-1] == "3" and not \
- merge_last[i - 1]:
-                # if the previous word is a reduplication, do not merge, since reduplications must go through _neural_sandhi
- if not self._is_reduplication(seg[i - 1][0]) and len(
- seg[i - 1][0]) + len(seg[i][0]) <= 3:
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- merge_last[i] = True
- else:
- new_seg.append([word, pos])
- else:
- new_seg.append([word, pos])
- return new_seg
-
- def _merge_er(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- for i, (word, pos) in enumerate(seg):
- if i - 1 >= 0 and word == "儿" and seg[i - 1][0] != "#":
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- else:
- new_seg.append([word, pos])
- return new_seg
-
- def _merge_reduplication(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- new_seg = []
- for i, (word, pos) in enumerate(seg):
- if new_seg and word == new_seg[-1][0]:
- new_seg[-1][0] = new_seg[-1][0] + seg[i][0]
- else:
- new_seg.append([word, pos])
- return new_seg
-
- def pre_merge_for_modify(
- self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
- seg = self._merge_bu(seg)
- seg = self._merge_yi(seg)
- seg = self._merge_reduplication(seg)
- seg = self._merge_continuous_three_tones(seg)
- seg = self._merge_continuous_three_tones_2(seg)
- seg = self._merge_er(seg)
- return seg
-
- def modified_tone(self, word: str, pos: str,
- finals: List[str]) -> List[str]:
- finals = self._bu_sandhi(word, finals)
- finals = self._yi_sandhi(word, finals)
- finals = self._neural_sandhi(word, pos, finals)
- finals = self._three_sandhi(word, finals)
- return finals
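-
-# Illustration of the finals representation used by the merge helpers above (a
-# minimal sketch; pypinyin's lazy_pinyin and Style are already imported by this module):
-#   lazy_pinyin("老虎", neutral_tone_with_five=True, style=Style.FINALS_TONE3)
-#   -> ["ao3", "u3"]  # both finals end in "3", i.e. an all-third-tone word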
diff --git a/spaces/AlphonseBrandon/speecht5-tts-demo/README.md b/spaces/AlphonseBrandon/speecht5-tts-demo/README.md
deleted file mode 100644
index b00de1f0412a56568cc8b554a4ee8b880a8b7afb..0000000000000000000000000000000000000000
--- a/spaces/AlphonseBrandon/speecht5-tts-demo/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: SpeechT5 Speech Synthesis Demo
-emoji: 👩🎤
-colorFrom: yellow
-colorTo: blue
-sdk: gradio
-sdk_version: 3.17.0
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: Matthijs/speecht5-tts-demo
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/training/dreambooth.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/training/dreambooth.md
deleted file mode 100644
index 2ff5aab1f52395c18bcd24710c80bdefdb367184..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/training/dreambooth.md
+++ /dev/null
@@ -1,707 +0,0 @@
-
-
-# DreamBooth
-
-[DreamBooth](https://arxiv.org/abs/2208.12242) is a method to personalize text-to-image models like Stable Diffusion given just a few (3-5) images of a subject. It allows the model to generate contextualized images of the subject in different scenes, poses, and views.
-
-
-*Dreambooth examples from the project's blog.*
-
-This guide will show you how to finetune DreamBooth with the [`CompVis/stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) model for various GPU sizes, and with Flax. All the training scripts for DreamBooth used in this guide can be found [here](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth) if you're interested in digging deeper and seeing how things work.
-
-Before running the scripts, make sure you install the library's training dependencies. We also recommend installing 🧨 Diffusers from the `main` GitHub branch:
-
-```bash
-pip install git+https://github.com/huggingface/diffusers
-pip install -U -r diffusers/examples/dreambooth/requirements.txt
-```
-
-xFormers is not part of the training requirements, but we recommend you [install](../optimization/xformers) it if you can because it could make your training faster and less memory intensive.
-
-After all the dependencies have been set up, initialize a [🤗 Accelerate](https://github.com/huggingface/accelerate/) environment with:
-
-```bash
-accelerate config
-```
-
-To set up a default 🤗 Accelerate environment without choosing any configurations:
-
-```bash
-accelerate config default
-```
-
-Or if your environment doesn't support an interactive shell like a notebook, you can use:
-
-```py
-from accelerate.utils import write_basic_config
-
-write_basic_config()
-```
-
-Finally, download a [few images of a dog](https://huggingface.co/datasets/diffusers/dog-example) to DreamBooth with:
-
-```py
-from huggingface_hub import snapshot_download
-
-local_dir = "./dog"
-snapshot_download(
- "diffusers/dog-example",
- local_dir=local_dir,
- repo_type="dataset",
- ignore_patterns=".gitattributes",
-)
-```
-
-To use your own dataset, take a look at the [Create a dataset for training](create_dataset) guide.
-
-## Finetuning
-
-
-
-DreamBooth finetuning is very sensitive to hyperparameters and easy to overfit. We recommend you take a look at our [in-depth analysis](https://huggingface.co/blog/dreambooth) with recommended settings for different subjects to help you choose the appropriate hyperparameters.
-
-Set the `INSTANCE_DIR` environment variable to the path of the directory containing the dog images.
-
-Specify the `MODEL_NAME` environment variable (either a Hub model repository id or a path to the directory containing the model weights) and pass it to the [`pretrained_model_name_or_path`] argument. The `instance_prompt` argument is a text prompt that contains a unique identifier, such as `sks`, and the class the image belongs to, which in this example is `a photo of a sks dog`.
-
-```bash
-export MODEL_NAME="CompVis/stable-diffusion-v1-4"
-export INSTANCE_DIR="./dog"
-export OUTPUT_DIR="path_to_saved_model"
-```
-
-Then you can launch the training script (you can find the full training script [here](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth.py)) with the following command:
-
-```bash
-accelerate launch train_dreambooth.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --instance_data_dir=$INSTANCE_DIR \
- --output_dir=$OUTPUT_DIR \
- --instance_prompt="a photo of sks dog" \
- --resolution=512 \
- --train_batch_size=1 \
- --gradient_accumulation_steps=1 \
- --learning_rate=5e-6 \
- --lr_scheduler="constant" \
- --lr_warmup_steps=0 \
- --max_train_steps=400 \
- --push_to_hub
-```
-
-
-If you have access to TPUs or want to train even faster, you can try out the [Flax training script](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_flax.py). The Flax training script doesn't support gradient checkpointing or gradient accumulation, so you'll need a GPU with at least 30GB of memory.
-
-Before running the script, make sure you have the requirements installed:
-
-```bash
-pip install -U -r requirements.txt
-```
-
-Specify the `MODEL_NAME` environment variable (either a Hub model repository id or a path to the directory containing the model weights) and pass it to the [`pretrained_model_name_or_path`] argument. The `instance_prompt` argument is a text prompt that contains a unique identifier, such as `sks`, and the class the image belongs to, which in this example is `a photo of a sks dog`.
-
-Now you can launch the training script with the following command:
-
-```bash
-export MODEL_NAME="duongna/stable-diffusion-v1-4-flax"
-export INSTANCE_DIR="./dog"
-export OUTPUT_DIR="path-to-save-model"
-
-python train_dreambooth_flax.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --instance_data_dir=$INSTANCE_DIR \
- --output_dir=$OUTPUT_DIR \
- --instance_prompt="a photo of sks dog" \
- --resolution=512 \
- --train_batch_size=1 \
- --learning_rate=5e-6 \
- --max_train_steps=400 \
- --push_to_hub
-```
-
-## Finetuning with prior-preserving loss
-
-Prior preservation is used to avoid overfitting and language-drift (check out the [paper](https://arxiv.org/abs/2208.12242) to learn more if you're interested). For prior preservation, you use other images of the same class as part of the training process. The nice thing is that you can generate those images using the Stable Diffusion model itself! The training script will save the generated images to a local path you specify.
-
-The authors recommend generating `num_epochs * num_samples` images for prior preservation. In most cases, 200-300 images work well.
-
-With PyTorch:
-
-```bash
-export MODEL_NAME="CompVis/stable-diffusion-v1-4"
-export INSTANCE_DIR="./dog"
-export CLASS_DIR="path_to_class_images"
-export OUTPUT_DIR="path_to_saved_model"
-
-accelerate launch train_dreambooth.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --instance_data_dir=$INSTANCE_DIR \
- --class_data_dir=$CLASS_DIR \
- --output_dir=$OUTPUT_DIR \
- --with_prior_preservation --prior_loss_weight=1.0 \
- --instance_prompt="a photo of sks dog" \
- --class_prompt="a photo of dog" \
- --resolution=512 \
- --train_batch_size=1 \
- --gradient_accumulation_steps=1 \
- --learning_rate=5e-6 \
- --lr_scheduler="constant" \
- --lr_warmup_steps=0 \
- --num_class_images=200 \
- --max_train_steps=800 \
- --push_to_hub
-```
-
-And with Flax:
-
-```bash
-export MODEL_NAME="duongna/stable-diffusion-v1-4-flax"
-export INSTANCE_DIR="./dog"
-export CLASS_DIR="path-to-class-images"
-export OUTPUT_DIR="path-to-save-model"
-
-python train_dreambooth_flax.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --instance_data_dir=$INSTANCE_DIR \
- --class_data_dir=$CLASS_DIR \
- --output_dir=$OUTPUT_DIR \
- --with_prior_preservation --prior_loss_weight=1.0 \
- --instance_prompt="a photo of sks dog" \
- --class_prompt="a photo of dog" \
- --resolution=512 \
- --train_batch_size=1 \
- --learning_rate=5e-6 \
- --num_class_images=200 \
- --max_train_steps=800 \
- --push_to_hub
-```
-
-## Finetuning the text encoder and UNet
-
-The script also allows you to finetune the `text_encoder` along with the `unet`. In our experiments (check out the [Training Stable Diffusion with DreamBooth using 🧨 Diffusers](https://huggingface.co/blog/dreambooth) post for more details), this yields much better results, especially when generating images of faces.
-
-Training the text encoder requires additional memory and it won't fit on a 16GB GPU. You'll need at least 24GB VRAM to use this option.
-
-Pass the `--train_text_encoder` argument to the training script to enable finetuning the `text_encoder` and `unet`:
-
-With PyTorch:
-
-```bash
-export MODEL_NAME="CompVis/stable-diffusion-v1-4"
-export INSTANCE_DIR="./dog"
-export CLASS_DIR="path_to_class_images"
-export OUTPUT_DIR="path_to_saved_model"
-
-accelerate launch train_dreambooth.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --train_text_encoder \
- --instance_data_dir=$INSTANCE_DIR \
- --class_data_dir=$CLASS_DIR \
- --output_dir=$OUTPUT_DIR \
- --with_prior_preservation --prior_loss_weight=1.0 \
- --instance_prompt="a photo of sks dog" \
- --class_prompt="a photo of dog" \
- --resolution=512 \
- --train_batch_size=1 \
- --use_8bit_adam \
- --gradient_checkpointing \
- --learning_rate=2e-6 \
- --lr_scheduler="constant" \
- --lr_warmup_steps=0 \
- --num_class_images=200 \
- --max_train_steps=800 \
- --push_to_hub
-```
-
-And with Flax:
-
-```bash
-export MODEL_NAME="duongna/stable-diffusion-v1-4-flax"
-export INSTANCE_DIR="./dog"
-export CLASS_DIR="path-to-class-images"
-export OUTPUT_DIR="path-to-save-model"
-
-python train_dreambooth_flax.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --train_text_encoder \
- --instance_data_dir=$INSTANCE_DIR \
- --class_data_dir=$CLASS_DIR \
- --output_dir=$OUTPUT_DIR \
- --with_prior_preservation --prior_loss_weight=1.0 \
- --instance_prompt="a photo of sks dog" \
- --class_prompt="a photo of dog" \
- --resolution=512 \
- --train_batch_size=1 \
- --learning_rate=2e-6 \
- --num_class_images=200 \
- --max_train_steps=800 \
- --push_to_hub
-```
-
-## Finetuning with LoRA
-
-You can also use Low-Rank Adaptation of Large Language Models (LoRA), a fine-tuning technique for accelerating training large models, on DreamBooth. For more details, take a look at the [LoRA training](./lora#dreambooth) guide.
-
-## Saving checkpoints while training
-
-It's easy to overfit while training with Dreambooth, so sometimes it's useful to save regular checkpoints during the training process. One of the intermediate checkpoints might actually work better than the final model! Pass the following argument to the training script to enable saving checkpoints:
-
-```bash
- --checkpointing_steps=500
-```
-
-This saves the full training state in subfolders of your `output_dir`. Subfolder names begin with the prefix `checkpoint-`, followed by the number of steps performed so far; for example, `checkpoint-1500` would be a checkpoint saved after 1500 training steps.
-
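-The contents of `output_dir` might then look something like this (a hypothetical listing; the exact names depend on your configuration and how long training ran):
-
-```bash
-ls path_to_saved_model
-# checkpoint-500  checkpoint-1000  checkpoint-1500  logs
-```
-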
-### Resume training from a saved checkpoint
-
-If you want to resume training from any of the saved checkpoints, you can pass the argument `--resume_from_checkpoint` to the script and specify the name of the checkpoint you want to use. You can also use the special string `"latest"` to resume from the last saved checkpoint (the one with the largest number of steps). For example, the following would resume training from the checkpoint saved after 1500 steps:
-
-```bash
- --resume_from_checkpoint="checkpoint-1500"
-```
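-
-Or, to resume from the most recently saved checkpoint:
-
-```bash
-  --resume_from_checkpoint="latest"
-```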
-
-This is a good opportunity to tweak some of your hyperparameters if you wish.
-
-### Inference from a saved checkpoint
-
-Saved checkpoints are stored in a format suitable for resuming training. They not only include the model weights, but also the state of the optimizer, data loaders, and learning rate.
-
-If you have **`"accelerate>=0.16.0"`** installed, use the following code to run
-inference from an intermediate checkpoint.
-
-```python
-from diffusers import DiffusionPipeline, UNet2DConditionModel
-from transformers import CLIPTextModel
-import torch
-
-# Load the pipeline with the same arguments (model, revision) that were used for training
-model_id = "CompVis/stable-diffusion-v1-4"
-
-unet = UNet2DConditionModel.from_pretrained("/sddata/dreambooth/daruma-v2-1/checkpoint-100/unet")
-
-# if you trained with `--train_text_encoder`, make sure to also load the text encoder
-text_encoder = CLIPTextModel.from_pretrained("/sddata/dreambooth/daruma-v2-1/checkpoint-100/text_encoder")
-
-pipeline = DiffusionPipeline.from_pretrained(model_id, unet=unet, text_encoder=text_encoder, torch_dtype=torch.float16)
-pipeline.to("cuda")
-
-# Perform inference, or save, or push to the hub
-pipeline.save_pretrained("dreambooth-pipeline")
-```
-
-If you have **`"accelerate<0.16.0"`** installed, you need to convert it to an inference pipeline first:
-
-```python
-from accelerate import Accelerator
-from diffusers import DiffusionPipeline
-
-# Load the pipeline with the same arguments (model, revision) that were used for training
-model_id = "CompVis/stable-diffusion-v1-4"
-pipeline = DiffusionPipeline.from_pretrained(model_id)
-
-accelerator = Accelerator()
-
-# Use text_encoder if `--train_text_encoder` was used for the initial training
-unet, text_encoder = accelerator.prepare(pipeline.unet, pipeline.text_encoder)
-
-# Restore state from a checkpoint path. You have to use the absolute path here.
-accelerator.load_state("/sddata/dreambooth/daruma-v2-1/checkpoint-100")
-
-# Rebuild the pipeline with the unwrapped models (assignment to .unet and .text_encoder should work too)
-pipeline = DiffusionPipeline.from_pretrained(
- model_id,
- unet=accelerator.unwrap_model(unet),
- text_encoder=accelerator.unwrap_model(text_encoder),
-)
-
-# Perform inference, or save, or push to the hub
-pipeline.save_pretrained("dreambooth-pipeline")
-```
-
-## Optimizations for different GPU sizes
-
-Depending on your hardware, there are a few different ways to optimize DreamBooth on GPUs from 16GB to just 8GB!
-
-### xFormers
-
-[xFormers](https://github.com/facebookresearch/xformers) is a toolbox for optimizing Transformers, and it includes a [memory-efficient attention](https://facebookresearch.github.io/xformers/components/ops.html#module-xformers.ops) mechanism that is used in 🧨 Diffusers. You'll need to [install xFormers](../optimization/xformers) and then add the following argument to your training script:
-
-```bash
- --enable_xformers_memory_efficient_attention
-```
-
-xFormers is not available in Flax.
-
-### Set gradients to none
-
-Another way you can lower your memory footprint is to [set the gradients](https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.zero_grad.html) to `None` instead of zero. However, this may change certain behaviors, so if you run into any issues, try removing this argument. Add the following argument to your training script to set the gradients to `None`:
-
-```bash
- --set_grads_to_none
-```
-
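-Under the hood, this flag corresponds to PyTorch's `Optimizer.zero_grad(set_to_none=True)`. A minimal sketch of the equivalent call in a hand-written training loop:
-
-```py
-# equivalent of --set_grads_to_none in a custom loop
-optimizer.zero_grad(set_to_none=True)
-```
-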
-### 16GB GPU
-
-With the help of gradient checkpointing and [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) 8-bit optimizer, it's possible to train DreamBooth on a 16GB GPU. Make sure you have bitsandbytes installed:
-
-```bash
-pip install bitsandbytes
-```
-
-Then pass the `--use_8bit_adam` option to the training script:
-
-```bash
-export MODEL_NAME="CompVis/stable-diffusion-v1-4"
-export INSTANCE_DIR="./dog"
-export CLASS_DIR="path_to_class_images"
-export OUTPUT_DIR="path_to_saved_model"
-
-accelerate launch train_dreambooth.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --instance_data_dir=$INSTANCE_DIR \
- --class_data_dir=$CLASS_DIR \
- --output_dir=$OUTPUT_DIR \
- --with_prior_preservation --prior_loss_weight=1.0 \
- --instance_prompt="a photo of sks dog" \
- --class_prompt="a photo of dog" \
- --resolution=512 \
- --train_batch_size=1 \
- --gradient_accumulation_steps=2 --gradient_checkpointing \
- --use_8bit_adam \
- --learning_rate=5e-6 \
- --lr_scheduler="constant" \
- --lr_warmup_steps=0 \
- --num_class_images=200 \
- --max_train_steps=800 \
- --push_to_hub
-```
-
-### 12GB GPU
-
-To run DreamBooth on a 12GB GPU, you'll need to enable gradient checkpointing, the 8-bit optimizer, xFormers, and set the gradients to `None`:
-
-```bash
-export MODEL_NAME="CompVis/stable-diffusion-v1-4"
-export INSTANCE_DIR="./dog"
-export CLASS_DIR="path-to-class-images"
-export OUTPUT_DIR="path-to-save-model"
-
-accelerate launch train_dreambooth.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --instance_data_dir=$INSTANCE_DIR \
- --class_data_dir=$CLASS_DIR \
- --output_dir=$OUTPUT_DIR \
- --with_prior_preservation --prior_loss_weight=1.0 \
- --instance_prompt="a photo of sks dog" \
- --class_prompt="a photo of dog" \
- --resolution=512 \
- --train_batch_size=1 \
- --gradient_accumulation_steps=1 --gradient_checkpointing \
- --use_8bit_adam \
- --enable_xformers_memory_efficient_attention \
- --set_grads_to_none \
- --learning_rate=2e-6 \
- --lr_scheduler="constant" \
- --lr_warmup_steps=0 \
- --num_class_images=200 \
- --max_train_steps=800 \
- --push_to_hub
-```
-
-### 8 GB GPU
-
-For 8GB GPUs, you'll need the help of [DeepSpeed](https://www.deepspeed.ai/) to offload some
-tensors from the VRAM to either the CPU or NVMe, enabling training with less GPU memory.
-
-Run the following command to configure your 🤗 Accelerate environment:
-
-```bash
-accelerate config
-```
-
-During configuration, confirm that you want to use DeepSpeed. Now it's possible to train on under 8GB VRAM by combining DeepSpeed stage 2, fp16 mixed precision, and offloading the model parameters and the optimizer state to the CPU. The drawback is that this requires more system RAM, about 25 GB. See [the DeepSpeed documentation](https://huggingface.co/docs/accelerate/usage_guides/deepspeed) for more configuration options.
-
-You should also change the default Adam optimizer to DeepSpeed's optimized version of Adam
-[`deepspeed.ops.adam.DeepSpeedCPUAdam`](https://deepspeed.readthedocs.io/en/latest/optimizers.html#adam-cpu) for a substantial speedup. Enabling `DeepSpeedCPUAdam` requires your system's CUDA toolchain version to be the same as the one installed with PyTorch.
-
-8-bit optimizers don't seem to be compatible with DeepSpeed at the moment.
-
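-As a rough sketch (hypothetical; the exact wiring depends on your Accelerate and DeepSpeed configuration), constructing the CPU-offloaded optimizer directly looks like this, assuming a `unet` module as in the training script:
-
-```py
-from deepspeed.ops.adam import DeepSpeedCPUAdam
-
-# drop-in Adam variant whose optimizer state lives in CPU memory
-optimizer = DeepSpeedCPUAdam(unet.parameters(), lr=5e-6)
-```
-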
-Launch training with the following command:
-
-```bash
-export MODEL_NAME="CompVis/stable-diffusion-v1-4"
-export INSTANCE_DIR="./dog"
-export CLASS_DIR="path_to_class_images"
-export OUTPUT_DIR="path_to_saved_model"
-
-accelerate launch train_dreambooth.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --instance_data_dir=$INSTANCE_DIR \
- --class_data_dir=$CLASS_DIR \
- --output_dir=$OUTPUT_DIR \
- --with_prior_preservation --prior_loss_weight=1.0 \
- --instance_prompt="a photo of sks dog" \
- --class_prompt="a photo of dog" \
- --resolution=512 \
- --train_batch_size=1 \
- --sample_batch_size=1 \
- --gradient_accumulation_steps=1 --gradient_checkpointing \
- --learning_rate=5e-6 \
- --lr_scheduler="constant" \
- --lr_warmup_steps=0 \
- --num_class_images=200 \
- --max_train_steps=800 \
- --mixed_precision=fp16 \
- --push_to_hub
-```
-
-## Inference
-
-Once you have trained a model, specify the path to where the model is saved, and use it for inference in the [`StableDiffusionPipeline`]. Make sure your prompts include the special `identifier` used during training (`sks` in the previous examples).
-
-If you have **`"accelerate>=0.16.0"`** installed, you can use the following code to run
-inference from an intermediate checkpoint:
-
-```python
-from diffusers import DiffusionPipeline
-import torch
-
-model_id = "path_to_saved_model"
-pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
-
-prompt = "A photo of sks dog in a bucket"
-image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
-
-image.save("dog-bucket.png")
-```
-
-You may also run inference from any of the [saved training checkpoints](#inference-from-a-saved-checkpoint).
-
-## IF
-
-You can use the LoRA and full DreamBooth scripts to train the text-to-image [IF model](https://huggingface.co/DeepFloyd/IF-I-XL-v1.0) and the stage II upscaler
-[IF model](https://huggingface.co/DeepFloyd/IF-II-L-v1.0).
-
-Note that IF has a predicted variance, and our finetuning scripts only train the model's predicted error, so for finetuned IF models we switch to a fixed
-variance schedule. The full finetuning scripts will update the scheduler config for the full saved model. However, when loading saved LoRA weights, you
-must also update the pipeline's scheduler config.
-
-```py
-from diffusers import DiffusionPipeline
-
-pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0")
-
-pipe.load_lora_weights("<lora weights path>")
-
-# Update scheduler config to fixed variance schedule
-pipe.scheduler = pipe.scheduler.__class__.from_config(pipe.scheduler.config, variance_type="fixed_small")
-```
-
-Additionally, a few alternative CLI flags are needed for IF.
-
-`--resolution=64`: IF is a pixel-space diffusion model. In order to operate on uncompressed pixels, the input images are of a much smaller resolution.
-
-`--pre_compute_text_embeddings`: IF uses [T5](https://huggingface.co/docs/transformers/model_doc/t5) for its text encoder. To save GPU memory, we precompute all text embeddings and then deallocate
-T5.
-
-`--tokenizer_max_length=77`: T5 has a longer default text length, but the default IF encoding procedure uses a smaller number.
-
-`--text_encoder_use_attention_mask`: Pass the attention mask to the text encoder, as T5 expects it.
-
-### Tips and Tricks
-We find LoRA to be sufficient for finetuning the stage I model, as the model's low resolution makes representing fine-grained detail hard regardless.
-
-For common and/or visually simple object concepts, you can get away with not finetuning the upscaler. Just be sure to adjust the prompt passed to the
-upscaler to remove the new token from the instance prompt. For example, if your stage I prompt is "a sks dog", use "a dog" for your stage II prompt.
-
-For fine-grained detail like faces that aren't present in the original training set, we find that full finetuning of the stage II upscaler works better than
-LoRA finetuning of stage II.
-
-For fine-grained detail like faces, we find that lower learning rates along with larger batch sizes work best.
-
-For stage II, we find that lower learning rates are also needed.
-
-We found experimentally that the DDPM scheduler, with its default larger number of denoising steps, sometimes works better than the DPM Solver scheduler
-used in the training scripts.
-
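-A minimal sketch of swapping in the DDPM scheduler at inference time (assuming a `pipe` loaded as in the earlier snippets):
-
-```py
-from diffusers import DDPMScheduler
-
-pipe.scheduler = DDPMScheduler.from_config(pipe.scheduler.config)
-```
-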
-### Stage II additional validation images
-
-Stage II validation requires images to upscale, so we can download a downsized version of the training set:
-
-```py
-from huggingface_hub import snapshot_download
-
-local_dir = "./dog_downsized"
-snapshot_download(
- "diffusers/dog-example-downsized",
- local_dir=local_dir,
- repo_type="dataset",
- ignore_patterns=".gitattributes",
-)
-```
-
-### IF stage I LoRA Dreambooth
-This training configuration requires ~28 GB VRAM.
-
-```sh
-export MODEL_NAME="DeepFloyd/IF-I-XL-v1.0"
-export INSTANCE_DIR="dog"
-export OUTPUT_DIR="dreambooth_dog_lora"
-
-accelerate launch train_dreambooth_lora.py \
- --report_to wandb \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --instance_data_dir=$INSTANCE_DIR \
- --output_dir=$OUTPUT_DIR \
- --instance_prompt="a sks dog" \
- --resolution=64 \
- --train_batch_size=4 \
- --gradient_accumulation_steps=1 \
- --learning_rate=5e-6 \
- --scale_lr \
- --max_train_steps=1200 \
- --validation_prompt="a sks dog" \
- --validation_epochs=25 \
- --checkpointing_steps=100 \
- --pre_compute_text_embeddings \
- --tokenizer_max_length=77 \
- --text_encoder_use_attention_mask
-```
-
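-Once training completes, the LoRA weights can be loaded back into the stage I pipeline. A minimal sketch, reusing the fixed-variance scheduler fix from earlier and the `dreambooth_dog_lora` output directory above:
-
-```py
-from diffusers import DiffusionPipeline
-
-pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0")
-pipe.load_lora_weights("dreambooth_dog_lora")
-
-# LoRA checkpoints don't update the saved scheduler config, so switch to fixed variance
-pipe.scheduler = pipe.scheduler.__class__.from_config(pipe.scheduler.config, variance_type="fixed_small")
-```
-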
-### IF stage II LoRA Dreambooth
-
-`--validation_images`: These images are upscaled during validation steps.
-
-`--class_labels_conditioning=timesteps`: Pass additional conditioning to the UNet needed for stage II.
-
-`--learning_rate=1e-6`: Lower learning rate than stage I.
-
-`--resolution=256`: The upscaler expects higher-resolution inputs.
-
-```sh
-export MODEL_NAME="DeepFloyd/IF-II-L-v1.0"
-export INSTANCE_DIR="dog"
-export OUTPUT_DIR="dreambooth_dog_upscale"
-export VALIDATION_IMAGES="dog_downsized/image_1.png dog_downsized/image_2.png dog_downsized/image_3.png dog_downsized/image_4.png"
-
-python train_dreambooth_lora.py \
- --report_to wandb \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --instance_data_dir=$INSTANCE_DIR \
- --output_dir=$OUTPUT_DIR \
- --instance_prompt="a sks dog" \
- --resolution=256 \
- --train_batch_size=4 \
- --gradient_accumulation_steps=1 \
- --learning_rate=1e-6 \
- --max_train_steps=2000 \
- --validation_prompt="a sks dog" \
- --validation_epochs=100 \
- --checkpointing_steps=500 \
- --pre_compute_text_embeddings \
- --tokenizer_max_length=77 \
- --text_encoder_use_attention_mask \
- --validation_images $VALIDATION_IMAGES \
- --class_labels_conditioning=timesteps
-```
-
-### IF Stage I Full Dreambooth
-`--skip_save_text_encoder`: When training the full model, this will skip saving the entire T5 with the finetuned model. You can still load the pipeline
-with a T5 loaded from the original model.
-
-`--use_8bit_adam`: Due to the size of the optimizer states, we recommend training the full XL IF model with 8-bit Adam.
-
-`--learning_rate=1e-7`: For full DreamBooth, IF requires very low learning rates; with higher learning rates, model quality degrades. Note that the
-learning rate can likely be increased with larger batch sizes.
-
-Using 8-bit Adam and a batch size of 4, the model can be trained in ~48 GB VRAM.
-
-```sh
-export MODEL_NAME="DeepFloyd/IF-I-XL-v1.0"
-
-export INSTANCE_DIR="dog"
-export OUTPUT_DIR="dreambooth_if"
-
-accelerate launch train_dreambooth.py \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --instance_data_dir=$INSTANCE_DIR \
- --output_dir=$OUTPUT_DIR \
- --instance_prompt="a photo of sks dog" \
- --resolution=64 \
- --train_batch_size=4 \
- --gradient_accumulation_steps=1 \
- --learning_rate=1e-7 \
- --max_train_steps=150 \
- --validation_prompt "a photo of sks dog" \
- --validation_steps 25 \
- --text_encoder_use_attention_mask \
- --tokenizer_max_length 77 \
- --pre_compute_text_embeddings \
- --use_8bit_adam \
- --set_grads_to_none \
- --skip_save_text_encoder \
- --push_to_hub
-```
-
-### IF Stage II Full Dreambooth
-
-`--learning_rate=5e-6`: With a smaller effective batch size of 4, we found that we required learning rates as low as
-1e-8.
-
-`--resolution=256`: The upscaler expects higher-resolution inputs.
-
-`--train_batch_size=2` and `--gradient_accumulation_steps=6`: We found that full training of stage II, particularly with
-faces, required large effective batch sizes.
-
-```sh
-export MODEL_NAME="DeepFloyd/IF-II-L-v1.0"
-export INSTANCE_DIR="dog"
-export OUTPUT_DIR="dreambooth_dog_upscale"
-export VALIDATION_IMAGES="dog_downsized/image_1.png dog_downsized/image_2.png dog_downsized/image_3.png dog_downsized/image_4.png"
-
-accelerate launch train_dreambooth.py \
- --report_to wandb \
- --pretrained_model_name_or_path=$MODEL_NAME \
- --instance_data_dir=$INSTANCE_DIR \
- --output_dir=$OUTPUT_DIR \
- --instance_prompt="a sks dog" \
- --resolution=256 \
- --train_batch_size=2 \
- --gradient_accumulation_steps=6 \
- --learning_rate=5e-6 \
- --max_train_steps=2000 \
- --validation_prompt="a sks dog" \
- --validation_steps=150 \
- --checkpointing_steps=500 \
- --pre_compute_text_embeddings \
- --tokenizer_max_length=77 \
- --text_encoder_use_attention_mask \
- --validation_images $VALIDATION_IMAGES \
- --class_labels_conditioning timesteps \
- --push_to_hub
-```
-
-## Stable Diffusion XL
-
-We support fine-tuning of the UNet shipped in [Stable Diffusion XL](https://huggingface.co/papers/2307.01952) with DreamBooth and LoRA via the `train_dreambooth_lora_sdxl.py` script. Please refer to the docs [here](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sdxl.md).
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/alt_diffusion/modeling_roberta_series.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/alt_diffusion/modeling_roberta_series.py
deleted file mode 100644
index f73ef15d7de7948a9cbad246027ca71f4a6db198..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/alt_diffusion/modeling_roberta_series.py
+++ /dev/null
@@ -1,124 +0,0 @@
-from dataclasses import dataclass
-from typing import Optional, Tuple
-
-import torch
-from torch import nn
-from transformers import RobertaPreTrainedModel, XLMRobertaConfig, XLMRobertaModel
-from transformers.utils import ModelOutput
-
-
-@dataclass
-class TransformationModelOutput(ModelOutput):
- """
- Base class for text model's outputs that also contains a pooling of the last hidden states.
-
- Args:
-        projection_state (`torch.FloatTensor` of shape `(batch_size, output_dim)`, *optional*, returned when the model is initialized with `with_projection=True`):
-            The text embeddings obtained by applying the projection layer to the pooler_output.
- last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
- Sequence of hidden-states at the output of the last layer of the model.
- hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
- Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, +
- one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`.
-
- Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
- attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
- Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length,
- sequence_length)`.
-
- Attentions weights after the attention softmax, used to compute the weighted average in the self-attention
- heads.
- """
-
- projection_state: Optional[torch.FloatTensor] = None
- last_hidden_state: torch.FloatTensor = None
- hidden_states: Optional[Tuple[torch.FloatTensor]] = None
- attentions: Optional[Tuple[torch.FloatTensor]] = None
-
-
-class RobertaSeriesConfig(XLMRobertaConfig):
- def __init__(
- self,
- pad_token_id=1,
- bos_token_id=0,
- eos_token_id=2,
- project_dim=512,
- pooler_fn="cls",
- learn_encoder=False,
- use_attention_mask=True,
- **kwargs,
- ):
- super().__init__(pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id, **kwargs)
- self.project_dim = project_dim
- self.pooler_fn = pooler_fn
- self.learn_encoder = learn_encoder
- self.use_attention_mask = use_attention_mask
-
-
-class RobertaSeriesModelWithTransformation(RobertaPreTrainedModel):
- _keys_to_ignore_on_load_unexpected = [r"pooler", r"logit_scale"]
- _keys_to_ignore_on_load_missing = [r"position_ids", r"predictions.decoder.bias"]
- base_model_prefix = "roberta"
- config_class = RobertaSeriesConfig
-
- def __init__(self, config):
- super().__init__(config)
- self.roberta = XLMRobertaModel(config)
- self.transformation = nn.Linear(config.hidden_size, config.project_dim)
- self.has_pre_transformation = getattr(config, "has_pre_transformation", False)
- if self.has_pre_transformation:
- self.transformation_pre = nn.Linear(config.hidden_size, config.project_dim)
- self.pre_LN = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
- self.post_init()
-
- def forward(
- self,
- input_ids: Optional[torch.Tensor] = None,
- attention_mask: Optional[torch.Tensor] = None,
- token_type_ids: Optional[torch.Tensor] = None,
- position_ids: Optional[torch.Tensor] = None,
- head_mask: Optional[torch.Tensor] = None,
- inputs_embeds: Optional[torch.Tensor] = None,
- encoder_hidden_states: Optional[torch.Tensor] = None,
- encoder_attention_mask: Optional[torch.Tensor] = None,
- output_attentions: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- ):
- r""" """
-
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- outputs = self.base_model(
- input_ids=input_ids,
- attention_mask=attention_mask,
- token_type_ids=token_type_ids,
- position_ids=position_ids,
- head_mask=head_mask,
- inputs_embeds=inputs_embeds,
- encoder_hidden_states=encoder_hidden_states,
- encoder_attention_mask=encoder_attention_mask,
- output_attentions=output_attentions,
- output_hidden_states=True if self.has_pre_transformation else output_hidden_states,
- return_dict=return_dict,
- )
-
- if self.has_pre_transformation:
- sequence_output2 = outputs["hidden_states"][-2]
- sequence_output2 = self.pre_LN(sequence_output2)
- projection_state2 = self.transformation_pre(sequence_output2)
-
- return TransformationModelOutput(
- projection_state=projection_state2,
- last_hidden_state=outputs.last_hidden_state,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- )
- else:
- projection_state = self.transformation(outputs.last_hidden_state)
- return TransformationModelOutput(
- projection_state=projection_state,
- last_hidden_state=outputs.last_hidden_state,
- hidden_states=outputs.hidden_states,
- attentions=outputs.attentions,
- )
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/shap_e/test_shap_e.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/shap_e/test_shap_e.py
deleted file mode 100644
index 90ff37de6e9a7e71087919977fad42fb69392df9..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/shap_e/test_shap_e.py
+++ /dev/null
@@ -1,265 +0,0 @@
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import gc
-import unittest
-
-import numpy as np
-import torch
-from transformers import CLIPTextConfig, CLIPTextModelWithProjection, CLIPTokenizer
-
-from diffusers import HeunDiscreteScheduler, PriorTransformer, ShapEPipeline
-from diffusers.pipelines.shap_e import ShapERenderer
-from diffusers.utils import load_numpy, slow
-from diffusers.utils.testing_utils import require_torch_gpu, torch_device
-
-from ..test_pipelines_common import PipelineTesterMixin, assert_mean_pixel_difference
-
-
-class ShapEPipelineFastTests(PipelineTesterMixin, unittest.TestCase):
- pipeline_class = ShapEPipeline
- params = ["prompt"]
- batch_params = ["prompt"]
- required_optional_params = [
- "num_images_per_prompt",
- "num_inference_steps",
- "generator",
- "latents",
- "guidance_scale",
- "frame_size",
- "output_type",
- "return_dict",
- ]
- test_xformers_attention = False
-
- @property
- def text_embedder_hidden_size(self):
- return 32
-
- @property
- def time_input_dim(self):
- return 32
-
- @property
- def time_embed_dim(self):
- return self.time_input_dim * 4
-
- @property
- def renderer_dim(self):
- return 8
-
- @property
- def dummy_tokenizer(self):
- tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
- return tokenizer
-
- @property
- def dummy_text_encoder(self):
- torch.manual_seed(0)
- config = CLIPTextConfig(
- bos_token_id=0,
- eos_token_id=2,
- hidden_size=self.text_embedder_hidden_size,
- projection_dim=self.text_embedder_hidden_size,
- intermediate_size=37,
- layer_norm_eps=1e-05,
- num_attention_heads=4,
- num_hidden_layers=5,
- pad_token_id=1,
- vocab_size=1000,
- )
- return CLIPTextModelWithProjection(config)
-
- @property
- def dummy_prior(self):
- torch.manual_seed(0)
-
- model_kwargs = {
- "num_attention_heads": 2,
- "attention_head_dim": 16,
- "embedding_dim": self.time_input_dim,
- "num_embeddings": 32,
- "embedding_proj_dim": self.text_embedder_hidden_size,
- "time_embed_dim": self.time_embed_dim,
- "num_layers": 1,
- "clip_embed_dim": self.time_input_dim * 2,
- "additional_embeddings": 0,
- "time_embed_act_fn": "gelu",
- "norm_in_type": "layer",
- "encoder_hid_proj_type": None,
- "added_emb_type": None,
- }
-
- model = PriorTransformer(**model_kwargs)
- return model
-
- @property
- def dummy_renderer(self):
- torch.manual_seed(0)
-
- model_kwargs = {
- "param_shapes": (
- (self.renderer_dim, 93),
- (self.renderer_dim, 8),
- (self.renderer_dim, 8),
- (self.renderer_dim, 8),
- ),
- "d_latent": self.time_input_dim,
- "d_hidden": self.renderer_dim,
- "n_output": 12,
- "background": (
- 0.1,
- 0.1,
- 0.1,
- ),
- }
- model = ShapERenderer(**model_kwargs)
- return model
-
- def get_dummy_components(self):
- prior = self.dummy_prior
- text_encoder = self.dummy_text_encoder
- tokenizer = self.dummy_tokenizer
- shap_e_renderer = self.dummy_renderer
-
- scheduler = HeunDiscreteScheduler(
- beta_schedule="exp",
- num_train_timesteps=1024,
- prediction_type="sample",
- use_karras_sigmas=True,
- clip_sample=True,
- clip_sample_range=1.0,
- )
- components = {
- "prior": prior,
- "text_encoder": text_encoder,
- "tokenizer": tokenizer,
- "shap_e_renderer": shap_e_renderer,
- "scheduler": scheduler,
- }
-
- return components
-
- def get_dummy_inputs(self, device, seed=0):
- if str(device).startswith("mps"):
- generator = torch.manual_seed(seed)
- else:
- generator = torch.Generator(device=device).manual_seed(seed)
- inputs = {
- "prompt": "horse",
- "generator": generator,
- "num_inference_steps": 1,
- "frame_size": 32,
- "output_type": "np",
- }
- return inputs
-
- def test_shap_e(self):
- device = "cpu"
-
- components = self.get_dummy_components()
-
- pipe = self.pipeline_class(**components)
- pipe = pipe.to(device)
-
- pipe.set_progress_bar_config(disable=None)
-
- output = pipe(**self.get_dummy_inputs(device))
- image = output.images[0]
- image_slice = image[0, -3:, -3:, -1]
-
- assert image.shape == (20, 32, 32, 3)
-
- expected_slice = np.array(
- [
- 0.00039216,
- 0.00039216,
- 0.00039216,
- 0.00039216,
- 0.00039216,
- 0.00039216,
- 0.00039216,
- 0.00039216,
- 0.00039216,
- ]
- )
-
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-
- def test_inference_batch_consistent(self):
- # NOTE: Larger batch sizes cause this test to timeout, only test on smaller batches
- self._test_inference_batch_consistent(batch_sizes=[1, 2])
-
- def test_inference_batch_single_identical(self):
- test_max_difference = torch_device == "cpu"
- relax_max_difference = True
-
- self._test_inference_batch_single_identical(
- batch_size=2,
- test_max_difference=test_max_difference,
- relax_max_difference=relax_max_difference,
- )
-
- def test_num_images_per_prompt(self):
- components = self.get_dummy_components()
- pipe = self.pipeline_class(**components)
- pipe = pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
-
- batch_size = 1
- num_images_per_prompt = 2
-
- inputs = self.get_dummy_inputs(torch_device)
-
- for key in inputs.keys():
- if key in self.batch_params:
- inputs[key] = batch_size * [inputs[key]]
-
- images = pipe(**inputs, num_images_per_prompt=num_images_per_prompt)[0]
-
- assert images.shape[0] == batch_size * num_images_per_prompt
-
-
-@slow
-@require_torch_gpu
-class ShapEPipelineIntegrationTests(unittest.TestCase):
- def tearDown(self):
- # clean up the VRAM after each test
- super().tearDown()
- gc.collect()
- torch.cuda.empty_cache()
-
- def test_shap_e(self):
- expected_image = load_numpy(
- "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
- "/shap_e/test_shap_e_np_out.npy"
- )
- pipe = ShapEPipeline.from_pretrained("openai/shap-e")
- pipe = pipe.to(torch_device)
- pipe.set_progress_bar_config(disable=None)
-
- generator = torch.Generator(device=torch_device).manual_seed(0)
-
- images = pipe(
- "a shark",
- generator=generator,
- guidance_scale=15.0,
- num_inference_steps=64,
- frame_size=64,
- output_type="np",
- ).images[0]
-
- assert images.shape == (20, 64, 64, 3)
-
- assert_mean_pixel_difference(images, expected_image)
diff --git a/spaces/Andy1621/uniformer_video_demo/README.md b/spaces/Andy1621/uniformer_video_demo/README.md
deleted file mode 100644
index 83f8ad2b61fb7d558d63376f8c2ad99048dfd320..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_video_demo/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Uniformer_video_demo
-emoji: 📹
-colorFrom: pink
-colorTo: green
-sdk: gradio
-sdk_version: 3.0.3
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/superboogav2/api.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/superboogav2/api.py
deleted file mode 100644
index 993e2b7d6d99bf3c5b8c79e82d54b592bdf76f21..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/superboogav2/api.py
+++ /dev/null
@@ -1,207 +0,0 @@
-"""
-This module is responsible for the VectorDB API. It currently supports:
-* DELETE api/v1/clear
- - Clears the whole DB.
-* POST api/v1/add
- - Add some corpus to the DB. You can also specify metadata to be added alongside it.
-* POST api/v1/delete
- - Delete specific records with given metadata.
-* POST api/v1/get
- - Get results from chromaDB.
-"""
-
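-# A minimal sketch of exercising this API with curl (the port below is hypothetical;
-# use whatever port was passed to APIManager.start_server):
-#
-#   curl -X POST http://127.0.0.1:5002/api/v1/add \
-#        -H "Content-Type: application/json" \
-#        -d '{"corpus": "text to index", "metadata": {"source": "notes"}}'
-#
-#   curl -X POST "http://127.0.0.1:5002/api/v1/get?sort=distance" \
-#        -H "Content-Type: application/json" \
-#        -d '{"search_strings": ["text"], "n_results": 1}'
-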
-import json
-from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
-from urllib.parse import urlparse, parse_qs
-from threading import Thread
-
-from modules import shared
-from modules.logging_colors import logger
-
-from .chromadb import ChromaCollector
-from .data_processor import process_and_add_to_collector
-
-import extensions.superboogav2.parameters as parameters
-
-
-class CustomThreadingHTTPServer(ThreadingHTTPServer):
- def __init__(self, server_address, RequestHandlerClass, collector: ChromaCollector, bind_and_activate=True):
- self.collector = collector
- super().__init__(server_address, RequestHandlerClass, bind_and_activate)
-
- def finish_request(self, request, client_address):
- self.RequestHandlerClass(request, client_address, self, self.collector)
-
-
-class Handler(BaseHTTPRequestHandler):
- def __init__(self, request, client_address, server, collector: ChromaCollector):
- self.collector = collector
- super().__init__(request, client_address, server)
-
-
- def _send_412_error(self, message):
- self.send_response(412)
- self.send_header("Content-type", "application/json")
- self.end_headers()
- response = json.dumps({"error": message})
- self.wfile.write(response.encode('utf-8'))
-
-
- def _send_404_error(self):
- self.send_response(404)
- self.send_header("Content-type", "application/json")
- self.end_headers()
- response = json.dumps({"error": "Resource not found"})
- self.wfile.write(response.encode('utf-8'))
-
-
- def _send_400_error(self, error_message: str):
- self.send_response(400)
- self.send_header("Content-type", "application/json")
- self.end_headers()
- response = json.dumps({"error": error_message})
- self.wfile.write(response.encode('utf-8'))
-
-
- def _send_200_response(self, message: str):
- self.send_response(200)
- self.send_header("Content-type", "application/json")
- self.end_headers()
-
- if isinstance(message, str):
- response = json.dumps({"message": message})
- else:
- response = json.dumps(message)
-
- self.wfile.write(response.encode('utf-8'))
-
-
- def _handle_get(self, search_strings: list[str], n_results: int, max_token_count: int, sort_param: str):
- if sort_param == parameters.SORT_DISTANCE:
- results = self.collector.get_sorted_by_dist(search_strings, n_results, max_token_count)
- elif sort_param == parameters.SORT_ID:
- results = self.collector.get_sorted_by_id(search_strings, n_results, max_token_count)
- else: # Default is dist
- results = self.collector.get_sorted_by_dist(search_strings, n_results, max_token_count)
-
- return {
- "results": results
- }
-
-
- def do_GET(self):
- self._send_404_error()
-
-
- def do_POST(self):
- try:
- content_length = int(self.headers['Content-Length'])
- body = json.loads(self.rfile.read(content_length).decode('utf-8'))
-
- parsed_path = urlparse(self.path)
- path = parsed_path.path
- query_params = parse_qs(parsed_path.query)
-
- if path in ['/api/v1/add', '/api/add']:
- corpus = body.get('corpus')
- if corpus is None:
- self._send_412_error("Missing parameter 'corpus'")
- return
-
- clear_before_adding = body.get('clear_before_adding', False)
- metadata = body.get('metadata')
- process_and_add_to_collector(corpus, self.collector, clear_before_adding, metadata)
- self._send_200_response("Data successfully added")
-
- elif path in ['/api/v1/delete', '/api/delete']:
- metadata = body.get('metadata')
-                if metadata is None:
- self._send_412_error("Missing parameter 'metadata'")
- return
-
- self.collector.delete(ids_to_delete=None, where=metadata)
- self._send_200_response("Data successfully deleted")
-
- elif path in ['/api/v1/get', '/api/get']:
- search_strings = body.get('search_strings')
- if search_strings is None:
- self._send_412_error("Missing parameter 'search_strings'")
- return
-
- n_results = body.get('n_results')
- if n_results is None:
- n_results = parameters.get_chunk_count()
-
- max_token_count = body.get('max_token_count')
- if max_token_count is None:
- max_token_count = parameters.get_max_token_count()
-
- sort_param = query_params.get('sort', ['distance'])[0]
-
- results = self._handle_get(search_strings, n_results, max_token_count, sort_param)
- self._send_200_response(results)
-
- else:
- self._send_404_error()
- except Exception as e:
- self._send_400_error(str(e))
-
-
- def do_DELETE(self):
- try:
- parsed_path = urlparse(self.path)
- path = parsed_path.path
- query_params = parse_qs(parsed_path.query)
-
- if path in ['/api/v1/clear', '/api/clear']:
- self.collector.clear()
- self._send_200_response("Data successfully cleared")
- else:
- self._send_404_error()
- except Exception as e:
- self._send_400_error(str(e))
-
-
- def do_OPTIONS(self):
- self.send_response(200)
- self.end_headers()
-
-
- def end_headers(self):
- self.send_header('Access-Control-Allow-Origin', '*')
- self.send_header('Access-Control-Allow-Methods', '*')
- self.send_header('Access-Control-Allow-Headers', '*')
- self.send_header('Cache-Control', 'no-store, no-cache, must-revalidate')
- super().end_headers()
-
-
-class APIManager:
- def __init__(self, collector: ChromaCollector):
- self.server = None
- self.collector = collector
- self.is_running = False
-
- def start_server(self, port: int):
- if self.server is not None:
- print("Server already running.")
- return
-
- address = '0.0.0.0' if shared.args.listen else '127.0.0.1'
- self.server = CustomThreadingHTTPServer((address, port), Handler, self.collector)
-
- logger.info(f'Starting chromaDB API at http://{address}:{port}/api')
-
- Thread(target=self.server.serve_forever, daemon=True).start()
-
- self.is_running = True
-
- def stop_server(self):
- if self.server is not None:
-            logger.info('Stopping chromaDB API.')
- self.server.shutdown()
- self.server.server_close()
- self.server = None
- self.is_running = False
-
- def is_server_running(self):
- return self.is_running
\ No newline at end of file
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/webencodings/labels.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/webencodings/labels.py
deleted file mode 100644
index 29cbf91ef79b89971e51db9ddfc3720d8b4db82a..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/webencodings/labels.py
+++ /dev/null
@@ -1,231 +0,0 @@
-"""
-
- webencodings.labels
- ~~~~~~~~~~~~~~~~~~~
-
- Map encoding labels to their name.
-
- :copyright: Copyright 2012 by Simon Sapin
- :license: BSD, see LICENSE for details.
-
-"""
-
-# XXX Do not edit!
-# This file is automatically generated by mklabels.py
-
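-# Example lookup: LABELS["latin1"] -> "windows-1252" (the WHATWG Encoding standard
-# maps legacy Latin-1 labels to windows-1252).
-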
-LABELS = {
- 'unicode-1-1-utf-8': 'utf-8',
- 'utf-8': 'utf-8',
- 'utf8': 'utf-8',
- '866': 'ibm866',
- 'cp866': 'ibm866',
- 'csibm866': 'ibm866',
- 'ibm866': 'ibm866',
- 'csisolatin2': 'iso-8859-2',
- 'iso-8859-2': 'iso-8859-2',
- 'iso-ir-101': 'iso-8859-2',
- 'iso8859-2': 'iso-8859-2',
- 'iso88592': 'iso-8859-2',
- 'iso_8859-2': 'iso-8859-2',
- 'iso_8859-2:1987': 'iso-8859-2',
- 'l2': 'iso-8859-2',
- 'latin2': 'iso-8859-2',
- 'csisolatin3': 'iso-8859-3',
- 'iso-8859-3': 'iso-8859-3',
- 'iso-ir-109': 'iso-8859-3',
- 'iso8859-3': 'iso-8859-3',
- 'iso88593': 'iso-8859-3',
- 'iso_8859-3': 'iso-8859-3',
- 'iso_8859-3:1988': 'iso-8859-3',
- 'l3': 'iso-8859-3',
- 'latin3': 'iso-8859-3',
- 'csisolatin4': 'iso-8859-4',
- 'iso-8859-4': 'iso-8859-4',
- 'iso-ir-110': 'iso-8859-4',
- 'iso8859-4': 'iso-8859-4',
- 'iso88594': 'iso-8859-4',
- 'iso_8859-4': 'iso-8859-4',
- 'iso_8859-4:1988': 'iso-8859-4',
- 'l4': 'iso-8859-4',
- 'latin4': 'iso-8859-4',
- 'csisolatincyrillic': 'iso-8859-5',
- 'cyrillic': 'iso-8859-5',
- 'iso-8859-5': 'iso-8859-5',
- 'iso-ir-144': 'iso-8859-5',
- 'iso8859-5': 'iso-8859-5',
- 'iso88595': 'iso-8859-5',
- 'iso_8859-5': 'iso-8859-5',
- 'iso_8859-5:1988': 'iso-8859-5',
- 'arabic': 'iso-8859-6',
- 'asmo-708': 'iso-8859-6',
- 'csiso88596e': 'iso-8859-6',
- 'csiso88596i': 'iso-8859-6',
- 'csisolatinarabic': 'iso-8859-6',
- 'ecma-114': 'iso-8859-6',
- 'iso-8859-6': 'iso-8859-6',
- 'iso-8859-6-e': 'iso-8859-6',
- 'iso-8859-6-i': 'iso-8859-6',
- 'iso-ir-127': 'iso-8859-6',
- 'iso8859-6': 'iso-8859-6',
- 'iso88596': 'iso-8859-6',
- 'iso_8859-6': 'iso-8859-6',
- 'iso_8859-6:1987': 'iso-8859-6',
- 'csisolatingreek': 'iso-8859-7',
- 'ecma-118': 'iso-8859-7',
- 'elot_928': 'iso-8859-7',
- 'greek': 'iso-8859-7',
- 'greek8': 'iso-8859-7',
- 'iso-8859-7': 'iso-8859-7',
- 'iso-ir-126': 'iso-8859-7',
- 'iso8859-7': 'iso-8859-7',
- 'iso88597': 'iso-8859-7',
- 'iso_8859-7': 'iso-8859-7',
- 'iso_8859-7:1987': 'iso-8859-7',
- 'sun_eu_greek': 'iso-8859-7',
- 'csiso88598e': 'iso-8859-8',
- 'csisolatinhebrew': 'iso-8859-8',
- 'hebrew': 'iso-8859-8',
- 'iso-8859-8': 'iso-8859-8',
- 'iso-8859-8-e': 'iso-8859-8',
- 'iso-ir-138': 'iso-8859-8',
- 'iso8859-8': 'iso-8859-8',
- 'iso88598': 'iso-8859-8',
- 'iso_8859-8': 'iso-8859-8',
- 'iso_8859-8:1988': 'iso-8859-8',
- 'visual': 'iso-8859-8',
- 'csiso88598i': 'iso-8859-8-i',
- 'iso-8859-8-i': 'iso-8859-8-i',
- 'logical': 'iso-8859-8-i',
- 'csisolatin6': 'iso-8859-10',
- 'iso-8859-10': 'iso-8859-10',
- 'iso-ir-157': 'iso-8859-10',
- 'iso8859-10': 'iso-8859-10',
- 'iso885910': 'iso-8859-10',
- 'l6': 'iso-8859-10',
- 'latin6': 'iso-8859-10',
- 'iso-8859-13': 'iso-8859-13',
- 'iso8859-13': 'iso-8859-13',
- 'iso885913': 'iso-8859-13',
- 'iso-8859-14': 'iso-8859-14',
- 'iso8859-14': 'iso-8859-14',
- 'iso885914': 'iso-8859-14',
- 'csisolatin9': 'iso-8859-15',
- 'iso-8859-15': 'iso-8859-15',
- 'iso8859-15': 'iso-8859-15',
- 'iso885915': 'iso-8859-15',
- 'iso_8859-15': 'iso-8859-15',
- 'l9': 'iso-8859-15',
- 'iso-8859-16': 'iso-8859-16',
- 'cskoi8r': 'koi8-r',
- 'koi': 'koi8-r',
- 'koi8': 'koi8-r',
- 'koi8-r': 'koi8-r',
- 'koi8_r': 'koi8-r',
- 'koi8-u': 'koi8-u',
- 'csmacintosh': 'macintosh',
- 'mac': 'macintosh',
- 'macintosh': 'macintosh',
- 'x-mac-roman': 'macintosh',
- 'dos-874': 'windows-874',
- 'iso-8859-11': 'windows-874',
- 'iso8859-11': 'windows-874',
- 'iso885911': 'windows-874',
- 'tis-620': 'windows-874',
- 'windows-874': 'windows-874',
- 'cp1250': 'windows-1250',
- 'windows-1250': 'windows-1250',
- 'x-cp1250': 'windows-1250',
- 'cp1251': 'windows-1251',
- 'windows-1251': 'windows-1251',
- 'x-cp1251': 'windows-1251',
- 'ansi_x3.4-1968': 'windows-1252',
- 'ascii': 'windows-1252',
- 'cp1252': 'windows-1252',
- 'cp819': 'windows-1252',
- 'csisolatin1': 'windows-1252',
- 'ibm819': 'windows-1252',
- 'iso-8859-1': 'windows-1252',
- 'iso-ir-100': 'windows-1252',
- 'iso8859-1': 'windows-1252',
- 'iso88591': 'windows-1252',
- 'iso_8859-1': 'windows-1252',
- 'iso_8859-1:1987': 'windows-1252',
- 'l1': 'windows-1252',
- 'latin1': 'windows-1252',
- 'us-ascii': 'windows-1252',
- 'windows-1252': 'windows-1252',
- 'x-cp1252': 'windows-1252',
- 'cp1253': 'windows-1253',
- 'windows-1253': 'windows-1253',
- 'x-cp1253': 'windows-1253',
- 'cp1254': 'windows-1254',
- 'csisolatin5': 'windows-1254',
- 'iso-8859-9': 'windows-1254',
- 'iso-ir-148': 'windows-1254',
- 'iso8859-9': 'windows-1254',
- 'iso88599': 'windows-1254',
- 'iso_8859-9': 'windows-1254',
- 'iso_8859-9:1989': 'windows-1254',
- 'l5': 'windows-1254',
- 'latin5': 'windows-1254',
- 'windows-1254': 'windows-1254',
- 'x-cp1254': 'windows-1254',
- 'cp1255': 'windows-1255',
- 'windows-1255': 'windows-1255',
- 'x-cp1255': 'windows-1255',
- 'cp1256': 'windows-1256',
- 'windows-1256': 'windows-1256',
- 'x-cp1256': 'windows-1256',
- 'cp1257': 'windows-1257',
- 'windows-1257': 'windows-1257',
- 'x-cp1257': 'windows-1257',
- 'cp1258': 'windows-1258',
- 'windows-1258': 'windows-1258',
- 'x-cp1258': 'windows-1258',
- 'x-mac-cyrillic': 'x-mac-cyrillic',
- 'x-mac-ukrainian': 'x-mac-cyrillic',
- 'chinese': 'gbk',
- 'csgb2312': 'gbk',
- 'csiso58gb231280': 'gbk',
- 'gb2312': 'gbk',
- 'gb_2312': 'gbk',
- 'gb_2312-80': 'gbk',
- 'gbk': 'gbk',
- 'iso-ir-58': 'gbk',
- 'x-gbk': 'gbk',
- 'gb18030': 'gb18030',
- 'hz-gb-2312': 'hz-gb-2312',
- 'big5': 'big5',
- 'big5-hkscs': 'big5',
- 'cn-big5': 'big5',
- 'csbig5': 'big5',
- 'x-x-big5': 'big5',
- 'cseucpkdfmtjapanese': 'euc-jp',
- 'euc-jp': 'euc-jp',
- 'x-euc-jp': 'euc-jp',
- 'csiso2022jp': 'iso-2022-jp',
- 'iso-2022-jp': 'iso-2022-jp',
- 'csshiftjis': 'shift_jis',
- 'ms_kanji': 'shift_jis',
- 'shift-jis': 'shift_jis',
- 'shift_jis': 'shift_jis',
- 'sjis': 'shift_jis',
- 'windows-31j': 'shift_jis',
- 'x-sjis': 'shift_jis',
- 'cseuckr': 'euc-kr',
- 'csksc56011987': 'euc-kr',
- 'euc-kr': 'euc-kr',
- 'iso-ir-149': 'euc-kr',
- 'korean': 'euc-kr',
- 'ks_c_5601-1987': 'euc-kr',
- 'ks_c_5601-1989': 'euc-kr',
- 'ksc5601': 'euc-kr',
- 'ksc_5601': 'euc-kr',
- 'windows-949': 'euc-kr',
- 'csiso2022kr': 'iso-2022-kr',
- 'iso-2022-kr': 'iso-2022-kr',
- 'utf-16be': 'utf-16be',
- 'utf-16': 'utf-16le',
- 'utf-16le': 'utf-16le',
- 'x-user-defined': 'x-user-defined',
-}
diff --git a/spaces/Awiny/Image2Paragraph/models/blip2_model.py b/spaces/Awiny/Image2Paragraph/models/blip2_model.py
deleted file mode 100644
index 3abbe17338835b6c76d45e8edbb069b2aeee2963..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/blip2_model.py
+++ /dev/null
@@ -1,46 +0,0 @@
-from PIL import Image
-import requests
-from transformers import Blip2Processor, Blip2ForConditionalGeneration, BlipProcessor, BlipForConditionalGeneration
-import torch
-from utils.util import resize_long_edge
-
-
-class ImageCaptioning:
- def __init__(self, device, captioner_base_model='blip'):
- self.device = device
- self.captioner_base_model = captioner_base_model
- self.processor, self.model = self.initialize_model()
-
- def initialize_model(self,):
- if self.device == 'cpu':
- self.data_type = torch.float32
- else:
- self.data_type = torch.float16
- if self.captioner_base_model == 'blip2':
- processor = Blip2Processor.from_pretrained("pretrained_models/blip2-opt-2.7b")
- model = Blip2ForConditionalGeneration.from_pretrained(
- "pretrained_models/blip2-opt-2.7b", torch_dtype=self.data_type
- )
- # for gpu with small memory
- elif self.captioner_base_model == 'blip':
- processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
- model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base", torch_dtype=self.data_type)
- else:
- raise ValueError('arch not supported')
- model.to(self.device)
- return processor, model
-
- def image_caption(self, image_src):
- image = Image.open(image_src)
- image = resize_long_edge(image, 384)
- inputs = self.processor(images=image, return_tensors="pt").to(self.device, self.data_type)
- generated_ids = self.model.generate(**inputs)
- generated_text = self.processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
- print('\033[1;35m' + '*' * 100 + '\033[0m')
- print('\nStep1, BLIP2 caption:')
- print(generated_text)
- print('\033[1;35m' + '*' * 100 + '\033[0m')
- return generated_text
-
- def image_caption_debug(self, image_src):
- return "A dish with salmon, broccoli, and something yellow."
\ No newline at end of file
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/cocoeval/cocoeval.h b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/cocoeval/cocoeval.h
deleted file mode 100644
index db246e49a026b7cd989b305f4d3d98100be3c912..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/cocoeval/cocoeval.h
+++ /dev/null
@@ -1,88 +0,0 @@
-// Copyright (c) Facebook, Inc. and its affiliates.
-#pragma once
-
-#include <pybind11/numpy.h>
-#include <pybind11/pybind11.h>
-#include <pybind11/stl.h>
-#include <pybind11/stl_bind.h>
-#include <vector>
-
-namespace py = pybind11;
-
-namespace detectron2 {
-
-namespace COCOeval {
-
-// Annotation data for a single object instance in an image
-struct InstanceAnnotation {
- InstanceAnnotation(
- uint64_t id,
- double score,
- double area,
- bool is_crowd,
- bool ignore)
- : id{id}, score{score}, area{area}, is_crowd{is_crowd}, ignore{ignore} {}
- uint64_t id;
- double score = 0.;
- double area = 0.;
- bool is_crowd = false;
- bool ignore = false;
-};
-
-// Stores intermediate results for evaluating detection results for a single
-// image that has D detected instances and G ground truth instances. This stores
-// matches between detected and ground truth instances
-struct ImageEvaluation {
-  // For each of the D detected instances, the id of the matched ground truth
-  // instance, or 0 if unmatched
-  std::vector<uint64_t> detection_matches;
-
-  // The detection score of each of the D detected instances
-  std::vector<double> detection_scores;
-
-  // Marks whether or not each of the G instances was ignored from evaluation
-  // (e.g., because it's outside area_range)
-  std::vector<bool> ground_truth_ignores;
-
-  // Marks whether or not each of the D instances was ignored from evaluation
-  // (e.g., because it's outside area_range)
-  std::vector<bool> detection_ignores;
-};
-
-template <class T>
-using ImageCategoryInstances = std::vector<std::vector<std::vector<T>>>;
-
-// C++ implementation of COCO API cocoeval.py::COCOeval.evaluateImg(). For each
-// combination of image, category, area range settings, and IOU thresholds to
-// evaluate, it matches detected instances to ground truth instances and stores
-// the results into a vector of ImageEvaluation results, which will be
-// interpreted by the COCOeval::Accumulate() function to produce precision-recall
-// curves. The parameters of nested vectors have the following semantics:
-//   image_category_ious[i][c][d][g] is the intersection over union of the d'th
-//     detected instance and g'th ground truth instance of
-//     category category_ids[c] in image image_ids[i]
-//   image_category_ground_truth_instances[i][c] is a vector of ground truth
-//     instances in image image_ids[i] of category category_ids[c]
-//   image_category_detection_instances[i][c] is a vector of detected
-//     instances in image image_ids[i] of category category_ids[c]
-std::vector<ImageEvaluation> EvaluateImages(
-    const std::vector<std::array<double, 2>>& area_ranges, // vector of 2-tuples
-    int max_detections,
-    const std::vector<double>& iou_thresholds,
-    const ImageCategoryInstances<std::vector<double>>& image_category_ious,
-    const ImageCategoryInstances<InstanceAnnotation>&
-        image_category_ground_truth_instances,
-    const ImageCategoryInstances<InstanceAnnotation>&
-        image_category_detection_instances);
-
-// C++ implementation of COCOeval.accumulate(), which generates precision
-// recall curves for each set of category, IOU threshold, detection area range,
-// and max number of detections parameters. It is assumed that the parameter
-// evaluations is the return value of the function COCOeval::EvaluateImages(),
-// which was called with the same parameter settings params
-py::dict Accumulate(
-    const py::object& params,
-    const std::vector<ImageEvaluation>& evaluations);
-
-} // namespace COCOeval
-} // namespace detectron2
diff --git a/spaces/Ayaka-daisuki/anime-remove-background/README.md b/spaces/Ayaka-daisuki/anime-remove-background/README.md
deleted file mode 100644
index 1ba3cb5ea0e994e246d57b7d62b8aa5a6331901c..0000000000000000000000000000000000000000
--- a/spaces/Ayaka-daisuki/anime-remove-background/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Anime Remove Background
-emoji: 🪄🖼️
-colorFrom: indigo
-colorTo: pink
-sdk: gradio
-sdk_version: 3.1.4
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: skytnt/anime-remove-background
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Basil2k4/botbasil203/Dockerfile b/spaces/Basil2k4/botbasil203/Dockerfile
deleted file mode 100644
index 70f7f0efcb6aa465cbe627f15e1828cfe92d31bf..0000000000000000000000000000000000000000
--- a/spaces/Basil2k4/botbasil203/Dockerfile
+++ /dev/null
@@ -1,106 +0,0 @@
-# syntax=docker/dockerfile:experimental
-
-# ./hooks/build latest
-# ./hooks/test latest
-
-### Example: Build and test 'dev' tag locally like
-### ./hooks/build dev
-### ./hooks/test dev
-### or with additional arguments
-### ./hooks/build dev --no-cache
-### ./hooks/test dev
-### or using the utility
-### ./utils/util-hdx.sh Dockerfile 3
-### ./utils/util-hdx.sh Dockerfile 4
-### The last output line should be '+ exit 0'
-### If '+ exit 1' then adjust the version sticker
-### variables in script './hooks/env'
-
-ARG BASETAG=latest
-
-FROM accetto/ubuntu-vnc-xfce:${BASETAG} as stage-install
-
-### Be sure to use root user
-USER 0
-
-### 'apt-get clean' runs automatically
-RUN apt-get update \
- && DEBIAN_FRONTEND=noninteractive apt-get install -y \
- chromium-browser \
- neofetch \
- python3-pip \
- firefox \
- sudo \
- unzip \
- git \
- curl \
- default-jdk \
- snapd \
-    && curl -sL https://deb.nodesource.com/setup_16.x | sudo -E bash - \
-    && apt-get install -y nodejs \
- && apt-get -y autoremove \
- && rm -rf /var/lib/apt/lists/*
-### Chromium browser requires some presets
-### Note that 'no-sandbox' flag is required, but intended for development only
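-### ${VNC_RESOLUTION%x*} / ${VNC_RESOLUTION#*x} split e.g. "1360x768" into width and height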
-RUN echo "CHROMIUM_FLAGS='--no-sandbox --disable-gpu --user-data-dir --window-size=${VNC_RESOLUTION%x*},${VNC_RESOLUTION#*x} --window-position=0,0'" > ${HOME}/.chromium-browser.init
-
-FROM stage-install as stage-config
-
-### Arguments can be provided during build
-ARG ARG_VNC_USER
-
-ENV VNC_USER=${ARG_VNC_USER:-headless:headless}
-
-WORKDIR ${HOME}
-SHELL ["/bin/bash", "-c"]
-
-COPY [ "./src/create_user_and_fix_permissions.sh", "./" ]
-
-### 'sync' mitigates automated build failures
-RUN chmod +x \
- ./create_user_and_fix_permissions.sh \
- && sync \
- && ./create_user_and_fix_permissions.sh $STARTUPDIR $HOME \
- && rm ./create_user_and_fix_permissions.sh
-
-FROM stage-config as stage-final
-
-### Arguments can be provided during build
-ARG ARG_REFRESHED_AT
-ARG ARG_VCS_REF
-ARG ARG_VERSION_STICKER
-ARG ARG_VNC_BLACKLIST_THRESHOLD
-ARG ARG_VNC_BLACKLIST_TIMEOUT
-ARG ARG_VNC_RESOLUTION
-
-LABEL \
- any.accetto.description="Headless Ubuntu VNC/noVNC container with Xfce desktop and Chromium Browser" \
- any.accetto.display-name="Headless Ubuntu/Xfce VNC/noVNC container with Firefox and Chromium" \
- any.accetto.tags="ubuntu, xfce, vnc, novnc, chromium" \
- version-sticker="${ARG_VERSION_STICKER}" \
- org.label-schema.vcs-ref="${ARG_VCS_REF}" \
- org.label-schema.vcs-url="https://github.com/accetto/ubuntu-vnc-xfce-chromium"
-
-ENV \
- REFRESHED_AT=${ARG_REFRESHED_AT} \
- VERSION_STICKER=${ARG_VERSION_STICKER} \
- VNC_BLACKLIST_THRESHOLD=${ARG_VNC_BLACKLIST_THRESHOLD:-20} \
- VNC_BLACKLIST_TIMEOUT=${ARG_VNC_BLACKLIST_TIMEOUT:-0} \
- VNC_RESOLUTION=${ARG_VNC_RESOLUTION:-1360x768}
-
-### Preconfigure Xfce
-COPY [ "./src/home/Desktop", "./Desktop/" ]
-COPY [ "./src/home/config/xfce4/panel", "./.config/xfce4/panel/" ]
-COPY [ "./src/home/config/xfce4/xfconf/xfce-perchannel-xml", "./.config/xfce4/xfconf/xfce-perchannel-xml/" ]
-COPY [ "./src/startup/version_sticker.sh", "${STARTUPDIR}/" ]
-
-### Fix permissions
-RUN \
- chmod a+wx "${STARTUPDIR}"/version_sticker.sh \
- && "${STARTUPDIR}"/set_user_permissions.sh "${STARTUPDIR}" "${HOME}"
-
-### Stay on the root user (the upstream base image switches to a non-root user here)
-USER 0
-
-### Issue #7 (base): Mitigating problems with foreground mode
-WORKDIR ${STARTUPDIR}
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/tomli/_parser.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/tomli/_parser.py
deleted file mode 100644
index f1bb0aa19a556725aa2ae2b8cea95489c99a9078..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_vendor/tomli/_parser.py
+++ /dev/null
@@ -1,691 +0,0 @@
-# SPDX-License-Identifier: MIT
-# SPDX-FileCopyrightText: 2021 Taneli Hukkinen
-# Licensed to PSF under a Contributor Agreement.
-
-from __future__ import annotations
-
-from collections.abc import Iterable
-import string
-from types import MappingProxyType
-from typing import Any, BinaryIO, NamedTuple
-
-from ._re import (
- RE_DATETIME,
- RE_LOCALTIME,
- RE_NUMBER,
- match_to_datetime,
- match_to_localtime,
- match_to_number,
-)
-from ._types import Key, ParseFloat, Pos
-
-ASCII_CTRL = frozenset(chr(i) for i in range(32)) | frozenset(chr(127))
-
-# Neither of these sets include quotation mark or backslash. They are
-# currently handled as separate cases in the parser functions.
-ILLEGAL_BASIC_STR_CHARS = ASCII_CTRL - frozenset("\t")
-ILLEGAL_MULTILINE_BASIC_STR_CHARS = ASCII_CTRL - frozenset("\t\n")
-
-ILLEGAL_LITERAL_STR_CHARS = ILLEGAL_BASIC_STR_CHARS
-ILLEGAL_MULTILINE_LITERAL_STR_CHARS = ILLEGAL_MULTILINE_BASIC_STR_CHARS
-
-ILLEGAL_COMMENT_CHARS = ILLEGAL_BASIC_STR_CHARS
-
-TOML_WS = frozenset(" \t")
-TOML_WS_AND_NEWLINE = TOML_WS | frozenset("\n")
-BARE_KEY_CHARS = frozenset(string.ascii_letters + string.digits + "-_")
-KEY_INITIAL_CHARS = BARE_KEY_CHARS | frozenset("\"'")
-HEXDIGIT_CHARS = frozenset(string.hexdigits)
-
-BASIC_STR_ESCAPE_REPLACEMENTS = MappingProxyType(
- {
- "\\b": "\u0008", # backspace
- "\\t": "\u0009", # tab
- "\\n": "\u000A", # linefeed
- "\\f": "\u000C", # form feed
- "\\r": "\u000D", # carriage return
- '\\"': "\u0022", # quote
- "\\\\": "\u005C", # backslash
- }
-)
-
-
-class TOMLDecodeError(ValueError):
- """An error raised if a document is not valid TOML."""
-
-
-def load(__fp: BinaryIO, *, parse_float: ParseFloat = float) -> dict[str, Any]:
- """Parse TOML from a binary file object."""
- b = __fp.read()
- try:
- s = b.decode()
- except AttributeError:
- raise TypeError(
- "File must be opened in binary mode, e.g. use `open('foo.toml', 'rb')`"
- ) from None
- return loads(s, parse_float=parse_float)
-
-
-def loads(__s: str, *, parse_float: ParseFloat = float) -> dict[str, Any]: # noqa: C901
- """Parse TOML from a string."""
-
- # The spec allows converting "\r\n" to "\n", even in string
- # literals. Let's do so to simplify parsing.
- src = __s.replace("\r\n", "\n")
- pos = 0
- out = Output(NestedDict(), Flags())
- header: Key = ()
- parse_float = make_safe_parse_float(parse_float)
-
- # Parse one statement at a time
- # (typically means one line in TOML source)
- while True:
- # 1. Skip line leading whitespace
- pos = skip_chars(src, pos, TOML_WS)
-
- # 2. Parse rules. Expect one of the following:
- # - end of file
- # - end of line
- # - comment
- # - key/value pair
- # - append dict to list (and move to its namespace)
- # - create dict (and move to its namespace)
- # Skip trailing whitespace when applicable.
- try:
- char = src[pos]
- except IndexError:
- break
- if char == "\n":
- pos += 1
- continue
- if char in KEY_INITIAL_CHARS:
- pos = key_value_rule(src, pos, out, header, parse_float)
- pos = skip_chars(src, pos, TOML_WS)
- elif char == "[":
- try:
- second_char: str | None = src[pos + 1]
- except IndexError:
- second_char = None
- out.flags.finalize_pending()
- if second_char == "[":
- pos, header = create_list_rule(src, pos, out)
- else:
- pos, header = create_dict_rule(src, pos, out)
- pos = skip_chars(src, pos, TOML_WS)
- elif char != "#":
- raise suffixed_err(src, pos, "Invalid statement")
-
- # 3. Skip comment
- pos = skip_comment(src, pos)
-
- # 4. Expect end of line or end of file
- try:
- char = src[pos]
- except IndexError:
- break
- if char != "\n":
- raise suffixed_err(
- src, pos, "Expected newline or end of document after a statement"
- )
- pos += 1
-
- return out.data.dict
-
-
-class Flags:
- """Flags that map to parsed keys/namespaces."""
-
- # Marks an immutable namespace (inline array or inline table).
- FROZEN = 0
- # Marks a nest that has been explicitly created and can no longer
- # be opened using the "[table]" syntax.
- EXPLICIT_NEST = 1
-
- def __init__(self) -> None:
- self._flags: dict[str, dict] = {}
- self._pending_flags: set[tuple[Key, int]] = set()
-
- def add_pending(self, key: Key, flag: int) -> None:
- self._pending_flags.add((key, flag))
-
- def finalize_pending(self) -> None:
- for key, flag in self._pending_flags:
- self.set(key, flag, recursive=False)
- self._pending_flags.clear()
-
- def unset_all(self, key: Key) -> None:
- cont = self._flags
- for k in key[:-1]:
- if k not in cont:
- return
- cont = cont[k]["nested"]
- cont.pop(key[-1], None)
-
- def set(self, key: Key, flag: int, *, recursive: bool) -> None: # noqa: A003
- cont = self._flags
- key_parent, key_stem = key[:-1], key[-1]
- for k in key_parent:
- if k not in cont:
- cont[k] = {"flags": set(), "recursive_flags": set(), "nested": {}}
- cont = cont[k]["nested"]
- if key_stem not in cont:
- cont[key_stem] = {"flags": set(), "recursive_flags": set(), "nested": {}}
- cont[key_stem]["recursive_flags" if recursive else "flags"].add(flag)
-
- def is_(self, key: Key, flag: int) -> bool:
- if not key:
- return False # document root has no flags
- cont = self._flags
- for k in key[:-1]:
- if k not in cont:
- return False
- inner_cont = cont[k]
- if flag in inner_cont["recursive_flags"]:
- return True
- cont = inner_cont["nested"]
- key_stem = key[-1]
- if key_stem in cont:
- cont = cont[key_stem]
- return flag in cont["flags"] or flag in cont["recursive_flags"]
- return False
-
-
-class NestedDict:
- def __init__(self) -> None:
- # The parsed content of the TOML document
- self.dict: dict[str, Any] = {}
-
- def get_or_create_nest(
- self,
- key: Key,
- *,
- access_lists: bool = True,
- ) -> dict:
- cont: Any = self.dict
- for k in key:
- if k not in cont:
- cont[k] = {}
- cont = cont[k]
- if access_lists and isinstance(cont, list):
- cont = cont[-1]
- if not isinstance(cont, dict):
- raise KeyError("There is no nest behind this key")
- return cont
-
- def append_nest_to_list(self, key: Key) -> None:
- cont = self.get_or_create_nest(key[:-1])
- last_key = key[-1]
- if last_key in cont:
- list_ = cont[last_key]
- if not isinstance(list_, list):
- raise KeyError("An object other than list found behind this key")
- list_.append({})
- else:
- cont[last_key] = [{}]
-
-
-class Output(NamedTuple):
- data: NestedDict
- flags: Flags
-
-
-def skip_chars(src: str, pos: Pos, chars: Iterable[str]) -> Pos:
- try:
- while src[pos] in chars:
- pos += 1
- except IndexError:
- pass
- return pos
-
-
-def skip_until(
- src: str,
- pos: Pos,
- expect: str,
- *,
- error_on: frozenset[str],
- error_on_eof: bool,
-) -> Pos:
- try:
- new_pos = src.index(expect, pos)
- except ValueError:
- new_pos = len(src)
- if error_on_eof:
- raise suffixed_err(src, new_pos, f"Expected {expect!r}") from None
-
- if not error_on.isdisjoint(src[pos:new_pos]):
- while src[pos] not in error_on:
- pos += 1
- raise suffixed_err(src, pos, f"Found invalid character {src[pos]!r}")
- return new_pos
-
-
-def skip_comment(src: str, pos: Pos) -> Pos:
- try:
- char: str | None = src[pos]
- except IndexError:
- char = None
- if char == "#":
- return skip_until(
- src, pos + 1, "\n", error_on=ILLEGAL_COMMENT_CHARS, error_on_eof=False
- )
- return pos
-
-
-def skip_comments_and_array_ws(src: str, pos: Pos) -> Pos:
- while True:
- pos_before_skip = pos
- pos = skip_chars(src, pos, TOML_WS_AND_NEWLINE)
- pos = skip_comment(src, pos)
- if pos == pos_before_skip:
- return pos
-
-
-def create_dict_rule(src: str, pos: Pos, out: Output) -> tuple[Pos, Key]:
- pos += 1 # Skip "["
- pos = skip_chars(src, pos, TOML_WS)
- pos, key = parse_key(src, pos)
-
- if out.flags.is_(key, Flags.EXPLICIT_NEST) or out.flags.is_(key, Flags.FROZEN):
- raise suffixed_err(src, pos, f"Cannot declare {key} twice")
- out.flags.set(key, Flags.EXPLICIT_NEST, recursive=False)
- try:
- out.data.get_or_create_nest(key)
- except KeyError:
- raise suffixed_err(src, pos, "Cannot overwrite a value") from None
-
- if not src.startswith("]", pos):
- raise suffixed_err(src, pos, "Expected ']' at the end of a table declaration")
- return pos + 1, key
-
-
-def create_list_rule(src: str, pos: Pos, out: Output) -> tuple[Pos, Key]:
- pos += 2 # Skip "[["
- pos = skip_chars(src, pos, TOML_WS)
- pos, key = parse_key(src, pos)
-
- if out.flags.is_(key, Flags.FROZEN):
- raise suffixed_err(src, pos, f"Cannot mutate immutable namespace {key}")
- # Free the namespace now that it points to another empty list item...
- out.flags.unset_all(key)
- # ...but this key precisely is still prohibited from table declaration
- out.flags.set(key, Flags.EXPLICIT_NEST, recursive=False)
- try:
- out.data.append_nest_to_list(key)
- except KeyError:
- raise suffixed_err(src, pos, "Cannot overwrite a value") from None
-
- if not src.startswith("]]", pos):
- raise suffixed_err(src, pos, "Expected ']]' at the end of an array declaration")
- return pos + 2, key
-
-
-def key_value_rule(
- src: str, pos: Pos, out: Output, header: Key, parse_float: ParseFloat
-) -> Pos:
- pos, key, value = parse_key_value_pair(src, pos, parse_float)
- key_parent, key_stem = key[:-1], key[-1]
- abs_key_parent = header + key_parent
-
- relative_path_cont_keys = (header + key[:i] for i in range(1, len(key)))
- for cont_key in relative_path_cont_keys:
- # Check that dotted key syntax does not redefine an existing table
- if out.flags.is_(cont_key, Flags.EXPLICIT_NEST):
- raise suffixed_err(src, pos, f"Cannot redefine namespace {cont_key}")
- # Containers in the relative path can't be opened with the table syntax or
- # dotted key/value syntax in following table sections.
- out.flags.add_pending(cont_key, Flags.EXPLICIT_NEST)
-
- if out.flags.is_(abs_key_parent, Flags.FROZEN):
- raise suffixed_err(
- src, pos, f"Cannot mutate immutable namespace {abs_key_parent}"
- )
-
- try:
- nest = out.data.get_or_create_nest(abs_key_parent)
- except KeyError:
- raise suffixed_err(src, pos, "Cannot overwrite a value") from None
- if key_stem in nest:
- raise suffixed_err(src, pos, "Cannot overwrite a value")
- # Mark inline table and array namespaces recursively immutable
- if isinstance(value, (dict, list)):
- out.flags.set(header + key, Flags.FROZEN, recursive=True)
- nest[key_stem] = value
- return pos
-
-
-def parse_key_value_pair(
- src: str, pos: Pos, parse_float: ParseFloat
-) -> tuple[Pos, Key, Any]:
- pos, key = parse_key(src, pos)
- try:
- char: str | None = src[pos]
- except IndexError:
- char = None
- if char != "=":
- raise suffixed_err(src, pos, "Expected '=' after a key in a key/value pair")
- pos += 1
- pos = skip_chars(src, pos, TOML_WS)
- pos, value = parse_value(src, pos, parse_float)
- return pos, key, value
-
-
-def parse_key(src: str, pos: Pos) -> tuple[Pos, Key]:
- pos, key_part = parse_key_part(src, pos)
- key: Key = (key_part,)
- pos = skip_chars(src, pos, TOML_WS)
- while True:
- try:
- char: str | None = src[pos]
- except IndexError:
- char = None
- if char != ".":
- return pos, key
- pos += 1
- pos = skip_chars(src, pos, TOML_WS)
- pos, key_part = parse_key_part(src, pos)
- key += (key_part,)
- pos = skip_chars(src, pos, TOML_WS)
-
-
-def parse_key_part(src: str, pos: Pos) -> tuple[Pos, str]:
- try:
- char: str | None = src[pos]
- except IndexError:
- char = None
- if char in BARE_KEY_CHARS:
- start_pos = pos
- pos = skip_chars(src, pos, BARE_KEY_CHARS)
- return pos, src[start_pos:pos]
- if char == "'":
- return parse_literal_str(src, pos)
- if char == '"':
- return parse_one_line_basic_str(src, pos)
- raise suffixed_err(src, pos, "Invalid initial character for a key part")
-
-
-def parse_one_line_basic_str(src: str, pos: Pos) -> tuple[Pos, str]:
- pos += 1
- return parse_basic_str(src, pos, multiline=False)
-
-
-def parse_array(src: str, pos: Pos, parse_float: ParseFloat) -> tuple[Pos, list]:
- pos += 1
- array: list = []
-
- pos = skip_comments_and_array_ws(src, pos)
- if src.startswith("]", pos):
- return pos + 1, array
- while True:
- pos, val = parse_value(src, pos, parse_float)
- array.append(val)
- pos = skip_comments_and_array_ws(src, pos)
-
- c = src[pos : pos + 1]
- if c == "]":
- return pos + 1, array
- if c != ",":
- raise suffixed_err(src, pos, "Unclosed array")
- pos += 1
-
- pos = skip_comments_and_array_ws(src, pos)
- if src.startswith("]", pos):
- return pos + 1, array
-
-
-def parse_inline_table(src: str, pos: Pos, parse_float: ParseFloat) -> tuple[Pos, dict]:
- pos += 1
- nested_dict = NestedDict()
- flags = Flags()
-
- pos = skip_chars(src, pos, TOML_WS)
- if src.startswith("}", pos):
- return pos + 1, nested_dict.dict
- while True:
- pos, key, value = parse_key_value_pair(src, pos, parse_float)
- key_parent, key_stem = key[:-1], key[-1]
- if flags.is_(key, Flags.FROZEN):
- raise suffixed_err(src, pos, f"Cannot mutate immutable namespace {key}")
- try:
- nest = nested_dict.get_or_create_nest(key_parent, access_lists=False)
- except KeyError:
- raise suffixed_err(src, pos, "Cannot overwrite a value") from None
- if key_stem in nest:
- raise suffixed_err(src, pos, f"Duplicate inline table key {key_stem!r}")
- nest[key_stem] = value
- pos = skip_chars(src, pos, TOML_WS)
- c = src[pos : pos + 1]
- if c == "}":
- return pos + 1, nested_dict.dict
- if c != ",":
- raise suffixed_err(src, pos, "Unclosed inline table")
- if isinstance(value, (dict, list)):
- flags.set(key, Flags.FROZEN, recursive=True)
- pos += 1
- pos = skip_chars(src, pos, TOML_WS)
-
-
-def parse_basic_str_escape(
- src: str, pos: Pos, *, multiline: bool = False
-) -> tuple[Pos, str]:
- escape_id = src[pos : pos + 2]
- pos += 2
- if multiline and escape_id in {"\\ ", "\\\t", "\\\n"}:
- # Skip whitespace until next non-whitespace character or end of
- # the doc. Error if non-whitespace is found before newline.
- if escape_id != "\\\n":
- pos = skip_chars(src, pos, TOML_WS)
- try:
- char = src[pos]
- except IndexError:
- return pos, ""
- if char != "\n":
- raise suffixed_err(src, pos, "Unescaped '\\' in a string")
- pos += 1
- pos = skip_chars(src, pos, TOML_WS_AND_NEWLINE)
- return pos, ""
- if escape_id == "\\u":
- return parse_hex_char(src, pos, 4)
- if escape_id == "\\U":
- return parse_hex_char(src, pos, 8)
- try:
- return pos, BASIC_STR_ESCAPE_REPLACEMENTS[escape_id]
- except KeyError:
- raise suffixed_err(src, pos, "Unescaped '\\' in a string") from None
-
-
-def parse_basic_str_escape_multiline(src: str, pos: Pos) -> tuple[Pos, str]:
- return parse_basic_str_escape(src, pos, multiline=True)
-
-
-def parse_hex_char(src: str, pos: Pos, hex_len: int) -> tuple[Pos, str]:
- hex_str = src[pos : pos + hex_len]
- if len(hex_str) != hex_len or not HEXDIGIT_CHARS.issuperset(hex_str):
- raise suffixed_err(src, pos, "Invalid hex value")
- pos += hex_len
- hex_int = int(hex_str, 16)
- if not is_unicode_scalar_value(hex_int):
- raise suffixed_err(src, pos, "Escaped character is not a Unicode scalar value")
- return pos, chr(hex_int)
-
-
-def parse_literal_str(src: str, pos: Pos) -> tuple[Pos, str]:
- pos += 1 # Skip starting apostrophe
- start_pos = pos
- pos = skip_until(
- src, pos, "'", error_on=ILLEGAL_LITERAL_STR_CHARS, error_on_eof=True
- )
- return pos + 1, src[start_pos:pos] # Skip ending apostrophe
-
-
-def parse_multiline_str(src: str, pos: Pos, *, literal: bool) -> tuple[Pos, str]:
- pos += 3
- if src.startswith("\n", pos):
- pos += 1
-
- if literal:
- delim = "'"
- end_pos = skip_until(
- src,
- pos,
- "'''",
- error_on=ILLEGAL_MULTILINE_LITERAL_STR_CHARS,
- error_on_eof=True,
- )
- result = src[pos:end_pos]
- pos = end_pos + 3
- else:
- delim = '"'
- pos, result = parse_basic_str(src, pos, multiline=True)
-
- # Add at maximum two extra apostrophes/quotes if the end sequence
- # is 4 or 5 chars long instead of just 3.
- if not src.startswith(delim, pos):
- return pos, result
- pos += 1
- if not src.startswith(delim, pos):
- return pos, result + delim
- pos += 1
- return pos, result + (delim * 2)
-
-
-def parse_basic_str(src: str, pos: Pos, *, multiline: bool) -> tuple[Pos, str]:
- if multiline:
- error_on = ILLEGAL_MULTILINE_BASIC_STR_CHARS
- parse_escapes = parse_basic_str_escape_multiline
- else:
- error_on = ILLEGAL_BASIC_STR_CHARS
- parse_escapes = parse_basic_str_escape
- result = ""
- start_pos = pos
- while True:
- try:
- char = src[pos]
- except IndexError:
- raise suffixed_err(src, pos, "Unterminated string") from None
- if char == '"':
- if not multiline:
- return pos + 1, result + src[start_pos:pos]
- if src.startswith('"""', pos):
- return pos + 3, result + src[start_pos:pos]
- pos += 1
- continue
- if char == "\\":
- result += src[start_pos:pos]
- pos, parsed_escape = parse_escapes(src, pos)
- result += parsed_escape
- start_pos = pos
- continue
- if char in error_on:
- raise suffixed_err(src, pos, f"Illegal character {char!r}")
- pos += 1
-
-
-def parse_value( # noqa: C901
- src: str, pos: Pos, parse_float: ParseFloat
-) -> tuple[Pos, Any]:
- try:
- char: str | None = src[pos]
- except IndexError:
- char = None
-
- # IMPORTANT: order conditions based on speed of checking and likelihood
-
- # Basic strings
- if char == '"':
- if src.startswith('"""', pos):
- return parse_multiline_str(src, pos, literal=False)
- return parse_one_line_basic_str(src, pos)
-
- # Literal strings
- if char == "'":
- if src.startswith("'''", pos):
- return parse_multiline_str(src, pos, literal=True)
- return parse_literal_str(src, pos)
-
- # Booleans
- if char == "t":
- if src.startswith("true", pos):
- return pos + 4, True
- if char == "f":
- if src.startswith("false", pos):
- return pos + 5, False
-
- # Arrays
- if char == "[":
- return parse_array(src, pos, parse_float)
-
- # Inline tables
- if char == "{":
- return parse_inline_table(src, pos, parse_float)
-
- # Dates and times
- datetime_match = RE_DATETIME.match(src, pos)
- if datetime_match:
- try:
- datetime_obj = match_to_datetime(datetime_match)
- except ValueError as e:
- raise suffixed_err(src, pos, "Invalid date or datetime") from e
- return datetime_match.end(), datetime_obj
- localtime_match = RE_LOCALTIME.match(src, pos)
- if localtime_match:
- return localtime_match.end(), match_to_localtime(localtime_match)
-
- # Integers and "normal" floats.
- # The regex will greedily match any type starting with a decimal
- # char, so needs to be located after handling of dates and times.
- number_match = RE_NUMBER.match(src, pos)
- if number_match:
- return number_match.end(), match_to_number(number_match, parse_float)
-
- # Special floats
- first_three = src[pos : pos + 3]
- if first_three in {"inf", "nan"}:
- return pos + 3, parse_float(first_three)
- first_four = src[pos : pos + 4]
- if first_four in {"-inf", "+inf", "-nan", "+nan"}:
- return pos + 4, parse_float(first_four)
-
- raise suffixed_err(src, pos, "Invalid value")
-
-
-def suffixed_err(src: str, pos: Pos, msg: str) -> TOMLDecodeError:
- """Return a `TOMLDecodeError` where error message is suffixed with
- coordinates in source."""
-
- def coord_repr(src: str, pos: Pos) -> str:
- if pos >= len(src):
- return "end of document"
- line = src.count("\n", 0, pos) + 1
- if line == 1:
- column = pos + 1
- else:
- column = pos - src.rindex("\n", 0, pos)
- return f"line {line}, column {column}"
-
- return TOMLDecodeError(f"{msg} (at {coord_repr(src, pos)})")
-
-
-def is_unicode_scalar_value(codepoint: int) -> bool:
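-    # 55295 == 0xD7FF and 57344 == 0xE000: the excluded gap is the UTF-16
-    # surrogate range U+D800..U+DFFF, which is not a valid scalar value.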
- return (0 <= codepoint <= 55295) or (57344 <= codepoint <= 1114111)
-
-
-def make_safe_parse_float(parse_float: ParseFloat) -> ParseFloat:
- """A decorator to make `parse_float` safe.
-
- `parse_float` must not return dicts or lists, because these types
- would be mixed with parsed TOML tables and arrays, thus confusing
- the parser. The returned decorated callable raises `ValueError`
- instead of returning illegal types.
- """
- # The default `float` callable never returns illegal types. Optimize it.
- if parse_float is float: # type: ignore[comparison-overlap]
- return float
-
- def safe_parse_float(float_str: str) -> Any:
- float_value = parse_float(float_str)
- if isinstance(float_value, (dict, list)):
- raise ValueError("parse_float must not return dicts or lists")
- return float_value
-
- return safe_parse_float
diff --git a/spaces/Blealtan/clip-guided-binary-autoencoder/app.py b/spaces/Blealtan/clip-guided-binary-autoencoder/app.py
deleted file mode 100644
index 3dca7e85b4236809271485a1a84885175aa0cb36..0000000000000000000000000000000000000000
--- a/spaces/Blealtan/clip-guided-binary-autoencoder/app.py
+++ /dev/null
@@ -1,327 +0,0 @@
-import base64
-from huggingface_hub import hf_hub_download
-import streamlit as st
-import io
-import gc
-import json
-
-########################################################################################################
-# The RWKV Language Model - https://github.com/BlinkDL/RWKV-LM
-########################################################################################################
-
-MODEL_REPO = 'BlinkDL/clip-guided-binary-autoencoder'
-
-import torch, types
-import numpy as np
-from PIL import Image
-import torch.nn as nn
-from torch.nn import functional as F
-import torchvision as vision
-import torchvision.transforms as transforms
-from torchvision.transforms import functional as VF
-
-device = 'cuda' if torch.cuda.is_available() else 'cpu'
-
-IMG_BITS = 13
-
-
-class ResBlock(nn.Module):
-
- def __init__(self, c_x, c_hidden):
- super().__init__()
- self.B0 = nn.BatchNorm2d(c_x)
- self.C0 = nn.Conv2d(c_x, c_hidden, kernel_size=3, padding=1)
- self.C1 = nn.Conv2d(c_hidden, c_x, kernel_size=3, padding=1)
- self.C2 = nn.Conv2d(c_x, c_hidden, kernel_size=3, padding=1)
- self.C3 = nn.Conv2d(c_hidden, c_x, kernel_size=3, padding=1)
-
- def forward(self, x):
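-        # Two residual conv pairs; BatchNorm is applied only at the block entry.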
- ACT = F.mish
- x = x + self.C1(ACT(self.C0(ACT(self.B0(x)))))
- x = x + self.C3(ACT(self.C2(x)))
- return x
-
-
-class REncoderSmall(nn.Module):
-
- def __init__(self):
- super().__init__()
- dd = 8
- self.Bxx = nn.BatchNorm2d(dd * 64)
-
- self.CIN = nn.Conv2d(3, dd, kernel_size=3, padding=1)
- self.Cx0 = nn.Conv2d(dd, 32, kernel_size=3, padding=1)
- self.Cx1 = nn.Conv2d(32, dd, kernel_size=3, padding=1)
-
- self.B00 = nn.BatchNorm2d(dd * 4)
- self.C00 = nn.Conv2d(dd * 4, 256, kernel_size=3, padding=1)
- self.C01 = nn.Conv2d(256, dd * 4, kernel_size=3, padding=1)
- self.C02 = nn.Conv2d(dd * 4, 256, kernel_size=3, padding=1)
- self.C03 = nn.Conv2d(256, dd * 4, kernel_size=3, padding=1)
-
- self.B10 = nn.BatchNorm2d(dd * 16)
- self.C10 = nn.Conv2d(dd * 16, 256, kernel_size=3, padding=1)
- self.C11 = nn.Conv2d(256, dd * 16, kernel_size=3, padding=1)
- self.C12 = nn.Conv2d(dd * 16, 256, kernel_size=3, padding=1)
- self.C13 = nn.Conv2d(256, dd * 16, kernel_size=3, padding=1)
-
- self.B20 = nn.BatchNorm2d(dd * 64)
- self.C20 = nn.Conv2d(dd * 64, 256, kernel_size=3, padding=1)
- self.C21 = nn.Conv2d(256, dd * 64, kernel_size=3, padding=1)
- self.C22 = nn.Conv2d(dd * 64, 256, kernel_size=3, padding=1)
- self.C23 = nn.Conv2d(256, dd * 64, kernel_size=3, padding=1)
-
- self.COUT = nn.Conv2d(dd * 64, IMG_BITS, kernel_size=3, padding=1)
-
- def forward(self, img):
- ACT = F.mish
-
- x = self.CIN(img)
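-        # Long-range skip: stem features are pixel-unshuffled straight to 1/8
-        # resolution and re-added just before the output head.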
- xx = self.Bxx(F.pixel_unshuffle(x, 8))
- x = x + self.Cx1(ACT(self.Cx0(x)))
-
- x = F.pixel_unshuffle(x, 2)
- x = x + self.C01(ACT(self.C00(ACT(self.B00(x)))))
- x = x + self.C03(ACT(self.C02(x)))
-
- x = F.pixel_unshuffle(x, 2)
- x = x + self.C11(ACT(self.C10(ACT(self.B10(x)))))
- x = x + self.C13(ACT(self.C12(x)))
-
- x = F.pixel_unshuffle(x, 2)
- x = x + self.C21(ACT(self.C20(ACT(self.B20(x)))))
- x = x + self.C23(ACT(self.C22(x)))
-
- x = self.COUT(x + xx)
- return torch.sigmoid(x)
-
-
-class RDecoderSmall(nn.Module):
-
- def __init__(self):
- super().__init__()
- dd = 8
- self.CIN = nn.Conv2d(IMG_BITS, dd * 64, kernel_size=3, padding=1)
-
- self.B00 = nn.BatchNorm2d(dd * 64)
- self.C00 = nn.Conv2d(dd * 64, 256, kernel_size=3, padding=1)
- self.C01 = nn.Conv2d(256, dd * 64, kernel_size=3, padding=1)
- self.C02 = nn.Conv2d(dd * 64, 256, kernel_size=3, padding=1)
- self.C03 = nn.Conv2d(256, dd * 64, kernel_size=3, padding=1)
-
- self.B10 = nn.BatchNorm2d(dd * 16)
- self.C10 = nn.Conv2d(dd * 16, 256, kernel_size=3, padding=1)
- self.C11 = nn.Conv2d(256, dd * 16, kernel_size=3, padding=1)
- self.C12 = nn.Conv2d(dd * 16, 256, kernel_size=3, padding=1)
- self.C13 = nn.Conv2d(256, dd * 16, kernel_size=3, padding=1)
-
- self.B20 = nn.BatchNorm2d(dd * 4)
- self.C20 = nn.Conv2d(dd * 4, 256, kernel_size=3, padding=1)
- self.C21 = nn.Conv2d(256, dd * 4, kernel_size=3, padding=1)
- self.C22 = nn.Conv2d(dd * 4, 256, kernel_size=3, padding=1)
- self.C23 = nn.Conv2d(256, dd * 4, kernel_size=3, padding=1)
-
- self.Cx0 = nn.Conv2d(dd, 32, kernel_size=3, padding=1)
- self.Cx1 = nn.Conv2d(32, dd, kernel_size=3, padding=1)
- self.COUT = nn.Conv2d(dd, 3, kernel_size=3, padding=1)
-
- def forward(self, code):
- ACT = F.mish
- x = self.CIN(code)
-
- x = x + self.C01(ACT(self.C00(ACT(self.B00(x)))))
- x = x + self.C03(ACT(self.C02(x)))
- x = F.pixel_shuffle(x, 2)
-
- x = x + self.C11(ACT(self.C10(ACT(self.B10(x)))))
- x = x + self.C13(ACT(self.C12(x)))
- x = F.pixel_shuffle(x, 2)
-
- x = x + self.C21(ACT(self.C20(ACT(self.B20(x)))))
- x = x + self.C23(ACT(self.C22(x)))
- x = F.pixel_shuffle(x, 2)
-
- x = x + self.Cx1(ACT(self.Cx0(x)))
- x = self.COUT(x)
-
- return torch.sigmoid(x)
-
-
-class REncoderLarge(nn.Module):
-
- def __init__(self, dd, ee, ff):
- super().__init__()
- self.CXX = nn.Conv2d(3, dd, kernel_size=3, padding=1)
- self.BXX = nn.BatchNorm2d(dd)
- self.CX0 = nn.Conv2d(dd, ee, kernel_size=3, padding=1)
- self.CX1 = nn.Conv2d(ee, dd, kernel_size=3, padding=1)
- self.R0 = ResBlock(dd * 4, ff)
- self.R1 = ResBlock(dd * 16, ff)
- self.R2 = ResBlock(dd * 64, ff)
- self.CZZ = nn.Conv2d(dd * 64, IMG_BITS, kernel_size=3, padding=1)
-
- def forward(self, x):
- ACT = F.mish
- x = self.BXX(self.CXX(x))
-
- x = x + self.CX1(ACT(self.CX0(x)))
- x = F.pixel_unshuffle(x, 2)
- x = self.R0(x)
- x = F.pixel_unshuffle(x, 2)
- x = self.R1(x)
- x = F.pixel_unshuffle(x, 2)
- x = self.R2(x)
-
- x = self.CZZ(x)
- return torch.sigmoid(x)
-
-
-class RDecoderLarge(nn.Module):
-
- def __init__(self, dd, ee, ff):
- super().__init__()
- self.CZZ = nn.Conv2d(IMG_BITS, dd * 64, kernel_size=3, padding=1)
- self.BZZ = nn.BatchNorm2d(dd * 64)
- self.R0 = ResBlock(dd * 64, ff)
- self.R1 = ResBlock(dd * 16, ff)
- self.R2 = ResBlock(dd * 4, ff)
- self.CX0 = nn.Conv2d(dd, ee, kernel_size=3, padding=1)
- self.CX1 = nn.Conv2d(ee, dd, kernel_size=3, padding=1)
- self.CXX = nn.Conv2d(dd, 3, kernel_size=3, padding=1)
-
- def forward(self, x):
- ACT = F.mish
- x = self.BZZ(self.CZZ(x))
-
- x = self.R0(x)
- x = F.pixel_shuffle(x, 2)
- x = self.R1(x)
- x = F.pixel_shuffle(x, 2)
- x = self.R2(x)
- x = F.pixel_shuffle(x, 2)
- x = x + self.CX1(ACT(self.CX0(x)))
-
- x = self.CXX(x)
- return torch.sigmoid(x)
-
-
-@st.cache
-def prepare_model(model_prefix):
- gc.collect()
-
-    if model_prefix == 'out-v7c_d8_256-224-13bit-OB32x0.5-745':
-        R_ENCODER, R_DECODER = REncoderSmall(), RDecoderSmall()
-    else:
-        if 'd16_512' in model_prefix:
-            dd, ee, ff = 16, 64, 512
-        elif 'd32_1024' in model_prefix:
-            dd, ee, ff = 32, 128, 1024
-        else:
-            raise ValueError(f'Unknown model prefix: {model_prefix}')
-        R_ENCODER = REncoderLarge(dd, ee, ff)
-        R_DECODER = RDecoderLarge(dd, ee, ff)
-
- encoder = R_ENCODER.eval().to(device)
- decoder = R_DECODER.eval().to(device)
-
- encoder.load_state_dict(
- torch.load(hf_hub_download(MODEL_REPO, f'{model_prefix}-E.pth')))
- decoder.load_state_dict(
- torch.load(hf_hub_download(MODEL_REPO, f'{model_prefix}-D.pth')))
-
- return encoder, decoder
-
-
-def compute_padding(img_shape):
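-    # The encoders downsample spatially by 2 three times (8x overall), so each
-    # side is rounded up to the next multiple of 8 and padded symmetrically.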
- hsize, vsize = (img_shape[1] + 7) // 8 * 8, (img_shape[0] + 7) // 8 * 8
- hpad, vpad = hsize - img_shape[1], vsize - img_shape[0]
- left, top = hpad // 2, vpad // 2
- right, bottom = hpad - left, vpad - top
- return left, top, right, bottom
-
-
-def encode(model_prefix, img, keep_shape):
- gc.collect()
- encoder, _ = prepare_model(model_prefix)
-
- with torch.no_grad():
- img = VF.pil_to_tensor(img.convert("RGB"))
- img = VF.convert_image_dtype(img)
- img = img.unsqueeze(0).to(device)
- img_shape = img.shape[2:]
-
- if keep_shape:
- left, top, right, bottom = compute_padding(img_shape)
- img = VF.pad(img, [left, top, right, bottom], padding_mode='edge')
- else:
- img = VF.resize(img, [224, 224])
-
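-        # The encoder ends in a sigmoid, so floor(x + 0.5) rounds each channel
-        # to a hard 0/1 bit, yielding IMG_BITS binary planes per spatial position.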
- z = torch.floor(encoder(img) + 0.5)
-
- with io.BytesIO() as buffer:
- np.save(buffer, np.packbits(z.cpu().numpy().astype('bool')))
- z_b64 = base64.b64encode(buffer.getvalue()).decode()
-
- return json.dumps({
- "img_shape": img_shape,
- "z_shape": z.shape[2:],
- "keep_shape": keep_shape,
- "data": z_b64,
- })
-
-
-def decode(model_prefix, z_str):
- gc.collect()
- _, decoder = prepare_model(model_prefix)
-
- z_json = json.loads(z_str)
- with io.BytesIO() as buffer:
- buffer.write(base64.b64decode(z_json["data"]))
- buffer.seek(0)
- z = np.load(buffer)
- img_shape = z_json["img_shape"]
- z_shape = z_json["z_shape"]
- keep_shape = z_json["keep_shape"]
-
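-    # np.packbits padded the bitstream to whole bytes, so slice back to the
-    # exact IMG_BITS * H * W bits before reshaping.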
- z = np.unpackbits(z)[:IMG_BITS * z_shape[0] * z_shape[1]].astype('float')
- z = z.reshape([1, IMG_BITS] + z_shape)
-
- img = decoder(torch.Tensor(z).to(device))
-
- if keep_shape:
- left, top, right, bottom = compute_padding(img_shape)
- img = img[0, :, top:img.shape[2] - bottom, left:img.shape[3] - right]
- else:
- img = img[0]
-
- return VF.to_pil_image(img)
-
-
-st.title("Clip Guided Binary Autoencoder")
-st.write(
- "Model is from [@BlinkDL](https://huggingface.co/BlinkDL/clip-guided-binary-autoencoder)"
-)
-model_prefix = st.selectbox('The model to use',
- ('out-v7c_d8_256-224-13bit-OB32x0.5-745',
- 'out-v7d_d16_512-224-13bit-OB32x0.5-2487',
- 'out-v7d_d32_1024-224-13bit-OB32x0.5-5560'))
-
-encoder_tab, decoder_tab = st.tabs(["Encode", "Decode"])
-
-with encoder_tab:
- col_in, col_out = st.columns(2)
- keep_shape = col_in.checkbox(
- 'Use original size of input image instead of rescaling (Experimental)')
- uploaded_file = col_in.file_uploader('Choose an Image')
- if uploaded_file is not None:
- image = Image.open(uploaded_file)
- col_in.image(image, 'Input Image')
- z_str = encode(model_prefix, image, keep_shape)
- col_out.write("Encoded to:")
- col_out.code(z_str, language=None)
- col_out.image(decode(model_prefix, z_str), 'Output Image preview')
-
-with decoder_tab:
- col_in, col_out = st.columns(2)
- z_str = col_in.text_area('Paste encoded string here:')
- if len(z_str) > 0:
- image = decode(model_prefix, z_str)
- col_out.image(image, 'Output Image')
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/tests/test_fast_rcnn.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/tests/test_fast_rcnn.py
deleted file mode 100644
index 713be6feeaad26cf28a0ff994c208cd985f5ab2d..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/tests/test_fast_rcnn.py
+++ /dev/null
@@ -1,98 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-import logging
-import unittest
-import torch
-
-from detectron2.layers import ShapeSpec
-from detectron2.modeling.box_regression import Box2BoxTransform, Box2BoxTransformRotated
-from detectron2.modeling.roi_heads.fast_rcnn import FastRCNNOutputLayers
-from detectron2.modeling.roi_heads.rotated_fast_rcnn import RotatedFastRCNNOutputLayers
-from detectron2.structures import Boxes, Instances, RotatedBoxes
-from detectron2.utils.events import EventStorage
-
-logger = logging.getLogger(__name__)
-
-
-class FastRCNNTest(unittest.TestCase):
- def test_fast_rcnn(self):
- torch.manual_seed(132)
-
- box_head_output_size = 8
-
- box_predictor = FastRCNNOutputLayers(
- ShapeSpec(channels=box_head_output_size), Box2BoxTransform(weights=(10, 10, 5, 5)), 5
- )
- feature_pooled = torch.rand(2, box_head_output_size)
- predictions = box_predictor(feature_pooled)
-
- proposal_boxes = torch.tensor([[0.8, 1.1, 3.2, 2.8], [2.3, 2.5, 7, 8]], dtype=torch.float32)
- gt_boxes = torch.tensor([[1, 1, 3, 3], [2, 2, 6, 6]], dtype=torch.float32)
- proposal = Instances((10, 10))
- proposal.proposal_boxes = Boxes(proposal_boxes)
- proposal.gt_boxes = Boxes(gt_boxes)
- proposal.gt_classes = torch.tensor([1, 2])
-
- with EventStorage(): # capture events in a new storage to discard them
- losses = box_predictor.losses(predictions, [proposal])
-
- expected_losses = {
- "loss_cls": torch.tensor(1.7951188087),
- "loss_box_reg": torch.tensor(4.0357131958),
- }
- for name in expected_losses.keys():
- assert torch.allclose(losses[name], expected_losses[name])
-
- def test_fast_rcnn_empty_batch(self):
- box_predictor = FastRCNNOutputLayers(
- ShapeSpec(channels=10), Box2BoxTransform(weights=(10, 10, 5, 5)), 8
- )
-
- logits = torch.randn(0, 100, requires_grad=True)
- deltas = torch.randn(0, 4, requires_grad=True)
- losses = box_predictor.losses([logits, deltas], [])
- for value in losses.values():
- self.assertTrue(torch.allclose(value, torch.zeros_like(value)))
- sum(losses.values()).backward()
- self.assertTrue(logits.grad is not None)
- self.assertTrue(deltas.grad is not None)
-
- predictions, _ = box_predictor.inference([logits, deltas], [])
- self.assertEqual(len(predictions), 0)
-
- def test_fast_rcnn_rotated(self):
- torch.manual_seed(132)
- box_head_output_size = 8
-
- box_predictor = RotatedFastRCNNOutputLayers(
- ShapeSpec(channels=box_head_output_size),
- Box2BoxTransformRotated(weights=(10, 10, 5, 5, 1)),
- 5,
- )
- feature_pooled = torch.rand(2, box_head_output_size)
- predictions = box_predictor(feature_pooled)
- proposal_boxes = torch.tensor(
- [[2, 1.95, 2.4, 1.7, 0], [4.65, 5.25, 4.7, 5.5, 0]], dtype=torch.float32
- )
- gt_boxes = torch.tensor([[2, 2, 2, 2, 0], [4, 4, 4, 4, 0]], dtype=torch.float32)
- proposal = Instances((10, 10))
- proposal.proposal_boxes = RotatedBoxes(proposal_boxes)
- proposal.gt_boxes = RotatedBoxes(gt_boxes)
- proposal.gt_classes = torch.tensor([1, 2])
-
- with EventStorage(): # capture events in a new storage to discard them
- losses = box_predictor.losses(predictions, [proposal])
-
- # Note: the expected losses are slightly different even if
- # the boxes are essentially the same as in the FastRCNNOutput test, because
- # bbox_pred in FastRCNNOutputLayers have different Linear layers/initialization
- # between the two cases.
- expected_losses = {
- "loss_cls": torch.tensor(1.7920907736),
- "loss_box_reg": torch.tensor(4.0410838127),
- }
- for name in expected_losses.keys():
- assert torch.allclose(losses[name], expected_losses[name])
-
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/mmnasnet/model_cfgs.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/mmnasnet/model_cfgs.py
deleted file mode 100644
index 2a9e36eef60556c5dc2315a48bd337b00f6cbb50..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/mmnasnet/model_cfgs.py
+++ /dev/null
@@ -1,28 +0,0 @@
-# --------------------------------------------------------
-# OpenVQA
-# Written by Zhenwei Shao https://github.com/ParadoxZW
-# --------------------------------------------------------
-
-from openvqa.core.base_cfgs import BaseCfgs
-
-
-class Cfgs(BaseCfgs):
- def __init__(self):
- super(Cfgs, self).__init__()
-
- self.ARCH = {
- 'enc': ['SA', 'SA', 'SA', 'SA', 'FFN', 'FFN', 'FFN', 'FFN', 'SA', 'FFN', 'FFN', 'FFN'],
- 'dec': ['GA', 'GA', 'FFN', 'FFN', 'GA', 'FFN', 'RSA', 'GA', 'FFN', 'GA', 'RSA', 'FFN', 'RSA', 'SA', 'FFN', 'RSA', 'GA', 'FFN']
- }
- self.HIDDEN_SIZE = 512
- self.BBOXFEAT_EMB_SIZE = 2048
- self.FF_SIZE = 2048
- self.MULTI_HEAD = 8
- self.DROPOUT_R = 0.1
- self.FLAT_MLP_SIZE = 512
- self.FLAT_GLIMPSES = 1
- self.FLAT_OUT_SIZE = 1024
- self.USE_AUX_FEAT = False
- self.USE_BBOX_FEAT = False
- self.REL_HBASE = 64
- self.REL_SIZE = 64
diff --git a/spaces/CVPR/LIVE/thrust/thrust/mr/fancy_pointer_resource.h b/spaces/CVPR/LIVE/thrust/thrust/mr/fancy_pointer_resource.h
deleted file mode 100644
index 53ffc7eb76baf00f291e05e22dc9a49c2224e8f8..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/mr/fancy_pointer_resource.h
+++ /dev/null
@@ -1,61 +0,0 @@
-/*
- * Copyright 2018 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-#include <thrust/mr/memory_resource.h>
-#include <thrust/mr/validator.h>
-
-namespace thrust
-{
-namespace mr
-{
-
-template<typename Pointer, typename Upstream>
-class fancy_pointer_resource THRUST_FINAL : public memory_resource<Pointer>, private validator<Upstream>
-{
-public:
-    fancy_pointer_resource() : m_upstream(get_global_resource<Upstream>())
- {
- }
-
- fancy_pointer_resource(Upstream * upstream) : m_upstream(upstream)
- {
- }
-
- THRUST_NODISCARD
- virtual Pointer do_allocate(std::size_t bytes, std::size_t alignment = THRUST_MR_DEFAULT_ALIGNMENT) THRUST_OVERRIDE
- {
-        return static_cast<Pointer>(m_upstream->do_allocate(bytes, alignment));
- }
-
- virtual void do_deallocate(Pointer p, std::size_t bytes, std::size_t alignment) THRUST_OVERRIDE
- {
-        return m_upstream->do_deallocate(
-            static_cast<void *>(
-                thrust::detail::pointer_traits<Pointer>::get(p)),
-            bytes, alignment);
- }
-
-private:
- Upstream * m_upstream;
-};
-
-} // end mr
-} // end thrust
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/transform_reduce.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/transform_reduce.h
deleted file mode 100644
index 23123fa4952f7b30619ad502ff13f7de7a245445..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/transform_reduce.h
+++ /dev/null
@@ -1,53 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/system/detail/generic/tag.h>
-
-namespace thrust
-{
-namespace system
-{
-namespace detail
-{
-namespace generic
-{
-
-
-template<typename DerivedPolicy,
-         typename InputIterator,
-         typename UnaryFunction,
-         typename OutputType,
-         typename BinaryFunction>
-__host__ __device__
-  OutputType transform_reduce(thrust::execution_policy<DerivedPolicy> &exec,
- InputIterator first,
- InputIterator last,
- UnaryFunction unary_op,
- OutputType init,
- BinaryFunction binary_op);
-
-
-} // end namespace generic
-} // end namespace detail
-} // end namespace system
-} // end namespace thrust
-
-#include <thrust/system/detail/generic/transform_reduce.inl>
-
diff --git a/spaces/CVPR/lama-example/bin/analyze_errors.py b/spaces/CVPR/lama-example/bin/analyze_errors.py
deleted file mode 100644
index a11f9478de76ede162f5511449ac98e549ff4b6e..0000000000000000000000000000000000000000
--- a/spaces/CVPR/lama-example/bin/analyze_errors.py
+++ /dev/null
@@ -1,316 +0,0 @@
-#!/usr/bin/env python3
-import cv2
-import numpy as np
-import sklearn
-import torch
-import os
-import pickle
-import pandas as pd
-import matplotlib.pyplot as plt
-from joblib import Parallel, delayed
-
-from saicinpainting.evaluation.data import PrecomputedInpaintingResultsDataset, load_image
-from saicinpainting.evaluation.losses.fid.inception import InceptionV3
-from saicinpainting.evaluation.utils import load_yaml
-from saicinpainting.training.visualizers.base import visualize_mask_and_images
-
-
-def draw_score(img, score):
- img = np.transpose(img, (1, 2, 0))
- cv2.putText(img, f'{score:.2f}',
- (40, 40),
- cv2.FONT_HERSHEY_SIMPLEX,
- 1,
- (0, 1, 0),
- thickness=3)
- img = np.transpose(img, (2, 0, 1))
- return img
-
-
-def save_global_samples(global_mask_fnames, mask2real_fname, mask2fake_fname, out_dir, real_scores_by_fname, fake_scores_by_fname):
- for cur_mask_fname in global_mask_fnames:
- cur_real_fname = mask2real_fname[cur_mask_fname]
- orig_img = load_image(cur_real_fname, mode='RGB')
- fake_img = load_image(mask2fake_fname[cur_mask_fname], mode='RGB')[:, :orig_img.shape[1], :orig_img.shape[2]]
- mask = load_image(cur_mask_fname, mode='L')[None, ...]
-
- draw_score(orig_img, real_scores_by_fname.loc[cur_real_fname, 'real_score'])
- draw_score(fake_img, fake_scores_by_fname.loc[cur_mask_fname, 'fake_score'])
-
- cur_grid = visualize_mask_and_images(dict(image=orig_img, mask=mask, fake=fake_img),
- keys=['image', 'fake'],
- last_without_mask=True)
- cur_grid = np.clip(cur_grid * 255, 0, 255).astype('uint8')
- cur_grid = cv2.cvtColor(cur_grid, cv2.COLOR_RGB2BGR)
- cv2.imwrite(os.path.join(out_dir, os.path.splitext(os.path.basename(cur_mask_fname))[0] + '.jpg'),
- cur_grid)
-
-
-def save_samples_by_real(worst_best_by_real, mask2fake_fname, fake_info, out_dir):
- for real_fname in worst_best_by_real.index:
- worst_mask_path = worst_best_by_real.loc[real_fname, 'worst']
- best_mask_path = worst_best_by_real.loc[real_fname, 'best']
- orig_img = load_image(real_fname, mode='RGB')
- worst_mask_img = load_image(worst_mask_path, mode='L')[None, ...]
- worst_fake_img = load_image(mask2fake_fname[worst_mask_path], mode='RGB')[:, :orig_img.shape[1], :orig_img.shape[2]]
- best_mask_img = load_image(best_mask_path, mode='L')[None, ...]
- best_fake_img = load_image(mask2fake_fname[best_mask_path], mode='RGB')[:, :orig_img.shape[1], :orig_img.shape[2]]
-
- draw_score(orig_img, worst_best_by_real.loc[real_fname, 'real_score'])
- draw_score(worst_fake_img, worst_best_by_real.loc[real_fname, 'worst_score'])
- draw_score(best_fake_img, worst_best_by_real.loc[real_fname, 'best_score'])
-
- cur_grid = visualize_mask_and_images(dict(image=orig_img, mask=np.zeros_like(worst_mask_img),
- worst_mask=worst_mask_img, worst_img=worst_fake_img,
- best_mask=best_mask_img, best_img=best_fake_img),
- keys=['image', 'worst_mask', 'worst_img', 'best_mask', 'best_img'],
- rescale_keys=['worst_mask', 'best_mask'],
- last_without_mask=True)
- cur_grid = np.clip(cur_grid * 255, 0, 255).astype('uint8')
- cur_grid = cv2.cvtColor(cur_grid, cv2.COLOR_RGB2BGR)
- cv2.imwrite(os.path.join(out_dir,
- os.path.splitext(os.path.basename(real_fname))[0] + '.jpg'),
- cur_grid)
-
- fig, (ax1, ax2) = plt.subplots(1, 2)
- cur_stat = fake_info[fake_info['real_fname'] == real_fname]
- cur_stat['fake_score'].hist(ax=ax1)
- cur_stat['real_score'].hist(ax=ax2)
- fig.tight_layout()
- fig.savefig(os.path.join(out_dir,
- os.path.splitext(os.path.basename(real_fname))[0] + '_scores.png'))
- plt.close(fig)
-
-
-def extract_overlapping_masks(mask_fnames, cur_i, fake_scores_table, max_overlaps_n=2):
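-    # For the mask at index cur_i, collect up to max_overlaps_n later masks that
-    # spatially overlap it, returning the pairs and their fake-score deltas.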
- result_pairs = []
- result_scores = []
- mask_fname_a = mask_fnames[cur_i]
- mask_a = load_image(mask_fname_a, mode='L')[None, ...] > 0.5
- cur_score_a = fake_scores_table.loc[mask_fname_a, 'fake_score']
- for mask_fname_b in mask_fnames[cur_i + 1:]:
- mask_b = load_image(mask_fname_b, mode='L')[None, ...] > 0.5
- if not np.any(mask_a & mask_b):
- continue
- cur_score_b = fake_scores_table.loc[mask_fname_b, 'fake_score']
- result_pairs.append((mask_fname_a, mask_fname_b))
- result_scores.append(cur_score_b - cur_score_a)
- if len(result_pairs) >= max_overlaps_n:
- break
- return result_pairs, result_scores
-
-
-def main(args):
- config = load_yaml(args.config)
-
- latents_dir = os.path.join(args.outpath, 'latents')
- os.makedirs(latents_dir, exist_ok=True)
- global_worst_dir = os.path.join(args.outpath, 'global_worst')
- os.makedirs(global_worst_dir, exist_ok=True)
- global_best_dir = os.path.join(args.outpath, 'global_best')
- os.makedirs(global_best_dir, exist_ok=True)
- worst_best_by_best_worst_score_diff_max_dir = os.path.join(args.outpath, 'worst_best_by_real', 'best_worst_score_diff_max')
- os.makedirs(worst_best_by_best_worst_score_diff_max_dir, exist_ok=True)
- worst_best_by_best_worst_score_diff_min_dir = os.path.join(args.outpath, 'worst_best_by_real', 'best_worst_score_diff_min')
- os.makedirs(worst_best_by_best_worst_score_diff_min_dir, exist_ok=True)
- worst_best_by_real_best_score_diff_max_dir = os.path.join(args.outpath, 'worst_best_by_real', 'real_best_score_diff_max')
- os.makedirs(worst_best_by_real_best_score_diff_max_dir, exist_ok=True)
- worst_best_by_real_best_score_diff_min_dir = os.path.join(args.outpath, 'worst_best_by_real', 'real_best_score_diff_min')
- os.makedirs(worst_best_by_real_best_score_diff_min_dir, exist_ok=True)
- worst_best_by_real_worst_score_diff_max_dir = os.path.join(args.outpath, 'worst_best_by_real', 'real_worst_score_diff_max')
- os.makedirs(worst_best_by_real_worst_score_diff_max_dir, exist_ok=True)
- worst_best_by_real_worst_score_diff_min_dir = os.path.join(args.outpath, 'worst_best_by_real', 'real_worst_score_diff_min')
- os.makedirs(worst_best_by_real_worst_score_diff_min_dir, exist_ok=True)
-
- if not args.only_report:
- block_idx = InceptionV3.BLOCK_INDEX_BY_DIM[2048]
- inception_model = InceptionV3([block_idx]).eval().cuda()
-
- dataset = PrecomputedInpaintingResultsDataset(args.datadir, args.predictdir, **config.dataset_kwargs)
-
- real2vector_cache = {}
-
- real_features = []
- fake_features = []
-
- orig_fnames = []
- mask_fnames = []
- mask2real_fname = {}
- mask2fake_fname = {}
-
- for batch_i, batch in enumerate(dataset):
- orig_img_fname = dataset.img_filenames[batch_i]
- mask_fname = dataset.mask_filenames[batch_i]
- fake_fname = dataset.pred_filenames[batch_i]
- mask2real_fname[mask_fname] = orig_img_fname
- mask2fake_fname[mask_fname] = fake_fname
-
- cur_real_vector = real2vector_cache.get(orig_img_fname, None)
- if cur_real_vector is None:
- with torch.no_grad():
- in_img = torch.from_numpy(batch['image'][None, ...]).cuda()
- cur_real_vector = inception_model(in_img)[0].squeeze(-1).squeeze(-1).cpu().numpy()
- real2vector_cache[orig_img_fname] = cur_real_vector
-
- pred_img = torch.from_numpy(batch['inpainted'][None, ...]).cuda()
- cur_fake_vector = inception_model(pred_img)[0].squeeze(-1).squeeze(-1).cpu().numpy()
-
- real_features.append(cur_real_vector)
- fake_features.append(cur_fake_vector)
-
- orig_fnames.append(orig_img_fname)
- mask_fnames.append(mask_fname)
-
- ids_features = np.concatenate(real_features + fake_features, axis=0)
- ids_labels = np.array(([1] * len(real_features)) + ([0] * len(fake_features)))
-
-        with open(os.path.join(latents_dir, 'features.pkl'), 'wb') as f:
- pickle.dump(ids_features, f, protocol=3)
- with open(os.path.join(latents_dir, 'labels.pkl'), 'wb') as f:
- pickle.dump(ids_labels, f, protocol=3)
- with open(os.path.join(latents_dir, 'orig_fnames.pkl'), 'wb') as f:
- pickle.dump(orig_fnames, f, protocol=3)
- with open(os.path.join(latents_dir, 'mask_fnames.pkl'), 'wb') as f:
- pickle.dump(mask_fnames, f, protocol=3)
- with open(os.path.join(latents_dir, 'mask2real_fname.pkl'), 'wb') as f:
- pickle.dump(mask2real_fname, f, protocol=3)
- with open(os.path.join(latents_dir, 'mask2fake_fname.pkl'), 'wb') as f:
- pickle.dump(mask2fake_fname, f, protocol=3)
-
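-        # A linear SVM is fit to separate real from inpainted features; its
-        # signed decision function then serves as a per-image realism score.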
- svm = sklearn.svm.LinearSVC(dual=False)
- svm.fit(ids_features, ids_labels)
-
- pred_scores = svm.decision_function(ids_features)
- real_scores = pred_scores[:len(real_features)]
- fake_scores = pred_scores[len(real_features):]
-
- with open(os.path.join(latents_dir, 'pred_scores.pkl'), 'wb') as f:
- pickle.dump(pred_scores, f, protocol=3)
- with open(os.path.join(latents_dir, 'real_scores.pkl'), 'wb') as f:
- pickle.dump(real_scores, f, protocol=3)
- with open(os.path.join(latents_dir, 'fake_scores.pkl'), 'wb') as f:
- pickle.dump(fake_scores, f, protocol=3)
- else:
- with open(os.path.join(latents_dir, 'orig_fnames.pkl'), 'rb') as f:
- orig_fnames = pickle.load(f)
- with open(os.path.join(latents_dir, 'mask_fnames.pkl'), 'rb') as f:
- mask_fnames = pickle.load(f)
- with open(os.path.join(latents_dir, 'mask2real_fname.pkl'), 'rb') as f:
- mask2real_fname = pickle.load(f)
- with open(os.path.join(latents_dir, 'mask2fake_fname.pkl'), 'rb') as f:
- mask2fake_fname = pickle.load(f)
- with open(os.path.join(latents_dir, 'real_scores.pkl'), 'rb') as f:
- real_scores = pickle.load(f)
- with open(os.path.join(latents_dir, 'fake_scores.pkl'), 'rb') as f:
- fake_scores = pickle.load(f)
-
- real_info = pd.DataFrame(data=[dict(real_fname=fname,
- real_score=score)
- for fname, score
- in zip(orig_fnames, real_scores)])
- real_info.set_index('real_fname', drop=True, inplace=True)
-
- fake_info = pd.DataFrame(data=[dict(mask_fname=fname,
- fake_fname=mask2fake_fname[fname],
- real_fname=mask2real_fname[fname],
- fake_score=score)
- for fname, score
- in zip(mask_fnames, fake_scores)])
- fake_info = fake_info.join(real_info, on='real_fname', how='left')
- fake_info.drop_duplicates(['fake_fname', 'real_fname'], inplace=True)
-
- fake_stats_by_real = fake_info.groupby('real_fname')['fake_score'].describe()[['mean', 'std']].rename(
- {'mean': 'mean_fake_by_real', 'std': 'std_fake_by_real'}, axis=1)
- fake_info = fake_info.join(fake_stats_by_real, on='real_fname', rsuffix='stat_by_real')
- fake_info.drop_duplicates(['fake_fname', 'real_fname'], inplace=True)
- fake_info.to_csv(os.path.join(latents_dir, 'join_scores_table.csv'), sep='\t', index=False)
-
- fake_scores_table = fake_info.set_index('mask_fname')['fake_score'].to_frame()
- real_scores_table = fake_info.set_index('real_fname')['real_score'].drop_duplicates().to_frame()
-
- fig, (ax1, ax2) = plt.subplots(1, 2)
- ax1.hist(fake_scores)
- ax2.hist(real_scores)
- fig.tight_layout()
- fig.savefig(os.path.join(args.outpath, 'global_scores_hist.png'))
- plt.close(fig)
-
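- # Globally rank inpaintings by realism score and keep the bottom/top take_global_top samples.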
- global_worst_masks = fake_info.sort_values('fake_score', ascending=True)['mask_fname'].iloc[:config.take_global_top].to_list()
- global_best_masks = fake_info.sort_values('fake_score', ascending=False)['mask_fname'].iloc[:config.take_global_top].to_list()
- save_global_samples(global_worst_masks, mask2real_fname, mask2fake_fname, global_worst_dir, real_scores_table, fake_scores_table)
- save_global_samples(global_best_masks, mask2real_fname, mask2fake_fname, global_best_dir, real_scores_table, fake_scores_table)
-
- # grouped by real
- worst_samples_by_real = fake_info.groupby('real_fname').apply(
- lambda d: d.set_index('mask_fname')['fake_score'].idxmin()).to_frame().rename({0: 'worst'}, axis=1)
- best_samples_by_real = fake_info.groupby('real_fname').apply(
- lambda d: d.set_index('mask_fname')['fake_score'].idxmax()).to_frame().rename({0: 'best'}, axis=1)
- worst_best_by_real = pd.concat([worst_samples_by_real, best_samples_by_real], axis=1)
-
- worst_best_by_real = worst_best_by_real.join(fake_scores_table.rename({'fake_score': 'worst_score'}, axis=1),
- on='worst')
- worst_best_by_real = worst_best_by_real.join(fake_scores_table.rename({'fake_score': 'best_score'}, axis=1),
- on='best')
- worst_best_by_real = worst_best_by_real.join(real_scores_table)
-
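- # Three gap metrics per real image: the best-vs-worst mask spread, and how far the best/worst inpainting falls below the real image's own score.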
- worst_best_by_real['best_worst_score_diff'] = worst_best_by_real['best_score'] - worst_best_by_real['worst_score']
- worst_best_by_real['real_best_score_diff'] = worst_best_by_real['real_score'] - worst_best_by_real['best_score']
- worst_best_by_real['real_worst_score_diff'] = worst_best_by_real['real_score'] - worst_best_by_real['worst_score']
-
- worst_best_by_best_worst_score_diff_min = worst_best_by_real.sort_values('best_worst_score_diff', ascending=True).iloc[:config.take_worst_best_top]
- worst_best_by_best_worst_score_diff_max = worst_best_by_real.sort_values('best_worst_score_diff', ascending=False).iloc[:config.take_worst_best_top]
- save_samples_by_real(worst_best_by_best_worst_score_diff_min, mask2fake_fname, fake_info, worst_best_by_best_worst_score_diff_min_dir)
- save_samples_by_real(worst_best_by_best_worst_score_diff_max, mask2fake_fname, fake_info, worst_best_by_best_worst_score_diff_max_dir)
-
- worst_best_by_real_best_score_diff_min = worst_best_by_real.sort_values('real_best_score_diff', ascending=True).iloc[:config.take_worst_best_top]
- worst_best_by_real_best_score_diff_max = worst_best_by_real.sort_values('real_best_score_diff', ascending=False).iloc[:config.take_worst_best_top]
- save_samples_by_real(worst_best_by_real_best_score_diff_min, mask2fake_fname, fake_info, worst_best_by_real_best_score_diff_min_dir)
- save_samples_by_real(worst_best_by_real_best_score_diff_max, mask2fake_fname, fake_info, worst_best_by_real_best_score_diff_max_dir)
-
- worst_best_by_real_worst_score_diff_min = worst_best_by_real.sort_values('real_worst_score_diff', ascending=True).iloc[:config.take_worst_best_top]
- worst_best_by_real_worst_score_diff_max = worst_best_by_real.sort_values('real_worst_score_diff', ascending=False).iloc[:config.take_worst_best_top]
- save_samples_by_real(worst_best_by_real_worst_score_diff_min, mask2fake_fname, fake_info, worst_best_by_real_worst_score_diff_min_dir)
- save_samples_by_real(worst_best_by_real_worst_score_diff_max, mask2fake_fname, fake_info, worst_best_by_real_worst_score_diff_max_dir)
-
- # analyze what change of mask causes bigger change of score
- overlapping_mask_fname_pairs = []
- overlapping_mask_fname_score_diffs = []
- for cur_real_fname in orig_fnames:
- cur_fakes_info = fake_info[fake_info['real_fname'] == cur_real_fname]
- cur_mask_fnames = sorted(cur_fakes_info['mask_fname'].unique())
-
- cur_mask_pairs_and_scores = Parallel(args.n_jobs)(
- delayed(extract_overlapping_masks)(cur_mask_fnames, i, fake_scores_table)
- for i in range(len(cur_mask_fnames) - 1)
- )
- for cur_pairs, cur_scores in cur_mask_pairs_and_scores:
- overlapping_mask_fname_pairs.extend(cur_pairs)
- overlapping_mask_fname_score_diffs.extend(cur_scores)
-
- overlapping_mask_fname_pairs = np.asarray(overlapping_mask_fname_pairs)
- overlapping_mask_fname_score_diffs = np.asarray(overlapping_mask_fname_score_diffs)
- overlapping_sort_idx = np.argsort(overlapping_mask_fname_score_diffs)
- overlapping_mask_fname_pairs = overlapping_mask_fname_pairs[overlapping_sort_idx]
- overlapping_mask_fname_score_diffs = overlapping_mask_fname_score_diffs[overlapping_sort_idx]
-
-
-if __name__ == '__main__':
- import argparse
-
- aparser = argparse.ArgumentParser()
- aparser.add_argument('config', type=str, help='Path to config for dataset generation')
- aparser.add_argument('datadir', type=str,
- help='Path to folder with images and masks (output of gen_mask_dataset.py)')
- aparser.add_argument('predictdir', type=str,
- help='Path to folder with predicts (e.g. predict_hifill_baseline.py)')
- aparser.add_argument('outpath', type=str, help='Where to put results')
- aparser.add_argument('--only-report', action='store_true',
- help='Whether to skip prediction and feature extraction, '
- 'load all the possible latents and proceed with report only')
- aparser.add_argument('--n-jobs', type=int, default=8, help='how many processes to use for pair mask mining')
-
- main(aparser.parse_args())
diff --git a/spaces/CVPR/lama-example/saicinpainting/evaluation/data.py b/spaces/CVPR/lama-example/saicinpainting/evaluation/data.py
deleted file mode 100644
index 69ddb8d3c12d0261e459f7c4f66a702d0c477df0..0000000000000000000000000000000000000000
--- a/spaces/CVPR/lama-example/saicinpainting/evaluation/data.py
+++ /dev/null
@@ -1,167 +0,0 @@
-import glob
-import os
-
-import cv2
-import PIL.Image as Image
-import numpy as np
-
-from torch.utils.data import Dataset
-import torch.nn.functional as F
-
-
-def load_image(fname, mode='RGB', return_orig=False):
- img = np.array(Image.open(fname).convert(mode))
- if img.ndim == 3:
- img = np.transpose(img, (2, 0, 1))
- out_img = img.astype('float32') / 255
- if return_orig:
- return out_img, img
- else:
- return out_img
-
-
-def ceil_modulo(x, mod):
- if x % mod == 0:
- return x
- return (x // mod + 1) * mod
-
-
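-# Pad height/width up to the next multiple of mod so fully-convolutional models can downsample cleanly.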
-def pad_img_to_modulo(img, mod):
- channels, height, width = img.shape
- out_height = ceil_modulo(height, mod)
- out_width = ceil_modulo(width, mod)
- return np.pad(img, ((0, 0), (0, out_height - height), (0, out_width - width)), mode='symmetric')
-
-
-def pad_tensor_to_modulo(img, mod):
- batch_size, channels, height, width = img.shape
- out_height = ceil_modulo(height, mod)
- out_width = ceil_modulo(width, mod)
- return F.pad(img, pad=(0, out_width - width, 0, out_height - height), mode='reflect')
-
-
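-# Resize a CHW (or 1xHxW) image by factor, moving channels last for cv2 and back afterwards.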
-def scale_image(img, factor, interpolation=cv2.INTER_AREA):
- if img.shape[0] == 1:
- img = img[0]
- else:
- img = np.transpose(img, (1, 2, 0))
-
- img = cv2.resize(img, dsize=None, fx=factor, fy=factor, interpolation=interpolation)
-
- if img.ndim == 2:
- img = img[None, ...]
- else:
- img = np.transpose(img, (2, 0, 1))
- return img
-
-
-class InpaintingDataset(Dataset):
- def __init__(self, datadir, img_suffix='.jpg', pad_out_to_modulo=None, scale_factor=None):
- self.datadir = datadir
- self.mask_filenames = sorted(list(glob.glob(os.path.join(self.datadir, '**', '*mask*.png'), recursive=True)))
- self.img_filenames = [fname.rsplit('_mask', 1)[0] + img_suffix for fname in self.mask_filenames]
- self.pad_out_to_modulo = pad_out_to_modulo
- self.scale_factor = scale_factor
-
- def __len__(self):
- return len(self.mask_filenames)
-
- def __getitem__(self, i):
- image = load_image(self.img_filenames[i], mode='RGB')
- mask = load_image(self.mask_filenames[i], mode='L')
- result = dict(image=image, mask=mask[None, ...])
-
- if self.scale_factor is not None:
- result['image'] = scale_image(result['image'], self.scale_factor)
- result['mask'] = scale_image(result['mask'], self.scale_factor, interpolation=cv2.INTER_NEAREST)
-
- if self.pad_out_to_modulo is not None and self.pad_out_to_modulo > 1:
- result['image'] = pad_img_to_modulo(result['image'], self.pad_out_to_modulo)
- result['mask'] = pad_img_to_modulo(result['mask'], self.pad_out_to_modulo)
-
- return result
-
-class OurInpaintingDataset(Dataset):
- def __init__(self, datadir, img_suffix='.jpg', pad_out_to_modulo=None, scale_factor=None):
- self.datadir = datadir
- self.mask_filenames = sorted(list(glob.glob(os.path.join(self.datadir, 'mask', '**', '*mask*.png'), recursive=True)))
- self.img_filenames = [os.path.join(self.datadir, 'img', os.path.basename(fname.rsplit('-', 1)[0].rsplit('_', 1)[0]) + '.png') for fname in self.mask_filenames]
- self.pad_out_to_modulo = pad_out_to_modulo
- self.scale_factor = scale_factor
-
- def __len__(self):
- return len(self.mask_filenames)
-
- def __getitem__(self, i):
- result = dict(image=load_image(self.img_filenames[i], mode='RGB'),
- mask=load_image(self.mask_filenames[i], mode='L')[None, ...])
-
- if self.scale_factor is not None:
- result['image'] = scale_image(result['image'], self.scale_factor)
- result['mask'] = scale_image(result['mask'], self.scale_factor)
-
- if self.pad_out_to_modulo is not None and self.pad_out_to_modulo > 1:
- result['image'] = pad_img_to_modulo(result['image'], self.pad_out_to_modulo)
- result['mask'] = pad_img_to_modulo(result['mask'], self.pad_out_to_modulo)
-
- return result
-
-class PrecomputedInpaintingResultsDataset(InpaintingDataset):
- def __init__(self, datadir, predictdir, inpainted_suffix='_inpainted.jpg', **kwargs):
- super().__init__(datadir, **kwargs)
- if not datadir.endswith('/'):
- datadir += '/'
- self.predictdir = predictdir
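- # Mirror each mask's path relative to datadir under predictdir, swapping the extension for inpainted_suffix.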
- self.pred_filenames = [os.path.join(predictdir, os.path.splitext(fname[len(datadir):])[0] + inpainted_suffix)
- for fname in self.mask_filenames]
-
- def __getitem__(self, i):
- result = super().__getitem__(i)
- result['inpainted'] = load_image(self.pred_filenames[i])
- if self.pad_out_to_modulo is not None and self.pad_out_to_modulo > 1:
- result['inpainted'] = pad_img_to_modulo(result['inpainted'], self.pad_out_to_modulo)
- return result
-
-class OurPrecomputedInpaintingResultsDataset(OurInpaintingDataset):
- def __init__(self, datadir, predictdir, inpainted_suffix="png", **kwargs):
- super().__init__(datadir, **kwargs)
- if not datadir.endswith('/'):
- datadir += '/'
- self.predictdir = predictdir
- self.pred_filenames = [os.path.join(predictdir, os.path.basename(os.path.splitext(fname)[0]) + f'_inpainted.{inpainted_suffix}')
- for fname in self.mask_filenames]
- # self.pred_filenames = [os.path.join(predictdir, os.path.splitext(fname[len(datadir):])[0] + inpainted_suffix)
- # for fname in self.mask_filenames]
-
- def __getitem__(self, i):
- result = super().__getitem__(i)
- result['inpainted'] = load_image(self.pred_filenames[i])
-
- if self.pad_out_to_modulo is not None and self.pad_out_to_modulo > 1:
- result['inpainted'] = pad_img_to_modulo(result['inpainted'], self.pad_out_to_modulo)
- return result
-
-class InpaintingEvalOnlineDataset(Dataset):
- def __init__(self, indir, mask_generator, img_suffix='.jpg', pad_out_to_modulo=None, scale_factor=None, **kwargs):
- self.indir = indir
- self.mask_generator = mask_generator
- self.img_filenames = sorted(list(glob.glob(os.path.join(self.indir, '**', f'*{img_suffix}'), recursive=True)))
- self.pad_out_to_modulo = pad_out_to_modulo
- self.scale_factor = scale_factor
-
- def __len__(self):
- return len(self.img_filenames)
-
- def __getitem__(self, i):
- img, raw_image = load_image(self.img_filenames[i], mode='RGB', return_orig=True)
- mask = self.mask_generator(img, raw_image=raw_image)
- result = dict(image=img, mask=mask)
-
- if self.scale_factor is not None:
- result['image'] = scale_image(result['image'], self.scale_factor)
- result['mask'] = scale_image(result['mask'], self.scale_factor, interpolation=cv2.INTER_NEAREST)
-
- if self.pad_out_to_modulo is not None and self.pad_out_to_modulo > 1:
- result['image'] = pad_img_to_modulo(result['image'], self.pad_out_to_modulo)
- result['mask'] = pad_img_to_modulo(result['mask'], self.pad_out_to_modulo)
- return result
\ No newline at end of file
diff --git a/spaces/CVPR/monoscene_lite/monoscene/.ipynb_checkpoints/unet3d_kitti-checkpoint.py b/spaces/CVPR/monoscene_lite/monoscene/.ipynb_checkpoints/unet3d_kitti-checkpoint.py
deleted file mode 100644
index 91d5339fbdf34e28d017d7e4e29ce4923169bef5..0000000000000000000000000000000000000000
--- a/spaces/CVPR/monoscene_lite/monoscene/.ipynb_checkpoints/unet3d_kitti-checkpoint.py
+++ /dev/null
@@ -1,88 +0,0 @@
-# encoding: utf-8
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from monoscene.modules import SegmentationHead
-from monoscene.CRP3D import CPMegaVoxels
-from monoscene.modules import Process, Upsample, Downsample
-
-
-class UNet3D(nn.Module):
- def __init__(
- self,
- class_num,
- norm_layer,
- full_scene_size,
- feature,
- project_scale,
- context_prior=None,
- bn_momentum=0.1,
- ):
- super(UNet3D, self).__init__()
- self.business_layer = []
- self.project_scale = project_scale
- self.full_scene_size = full_scene_size
- self.feature = feature
-
- size_l1 = (
- int(self.full_scene_size[0] / project_scale),
- int(self.full_scene_size[1] / project_scale),
- int(self.full_scene_size[2] / project_scale),
- )
- size_l2 = (size_l1[0] // 2, size_l1[1] // 2, size_l1[2] // 2)
- size_l3 = (size_l2[0] // 2, size_l2[1] // 2, size_l2[2] // 2)
-
- dilations = [1, 2, 3]
- self.process_l1 = nn.Sequential(
- Process(self.feature, norm_layer, bn_momentum, dilations=[1, 2, 3]),
- Downsample(self.feature, norm_layer, bn_momentum),
- )
- self.process_l2 = nn.Sequential(
- Process(self.feature * 2, norm_layer, bn_momentum, dilations=[1, 2, 3]),
- Downsample(self.feature * 2, norm_layer, bn_momentum),
- )
-
- self.up_13_l2 = Upsample(
- self.feature * 4, self.feature * 2, norm_layer, bn_momentum
- )
- self.up_12_l1 = Upsample(
- self.feature * 2, self.feature, norm_layer, bn_momentum
- )
- self.up_l1_lfull = Upsample(
- self.feature, self.feature // 2, norm_layer, bn_momentum
- )
-
- self.ssc_head = SegmentationHead(
- self.feature // 2, self.feature // 2, class_num, dilations
- )
-
- self.context_prior = context_prior
- if context_prior:
- self.CP_mega_voxels = CPMegaVoxels(
- self.feature * 4, size_l3, bn_momentum=bn_momentum
- )
-
- def forward(self, input_dict):
- res = {}
-
- x3d_l1 = input_dict["x3d"]
-
- x3d_l2 = self.process_l1(x3d_l1)
-
- x3d_l3 = self.process_l2(x3d_l2)
-
- if self.context_prior:
- ret = self.CP_mega_voxels(x3d_l3)
- x3d_l3 = ret["x"]
- for k in ret.keys():
- res[k] = ret[k]
-
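- # Decoder: upsample and fuse with the encoder features at each scale via additive skip connections.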
- x3d_up_l2 = self.up_13_l2(x3d_l3) + x3d_l2
- x3d_up_l1 = self.up_12_l1(x3d_up_l2) + x3d_l1
- x3d_up_lfull = self.up_l1_lfull(x3d_up_l1)
-
- ssc_logit_full = self.ssc_head(x3d_up_lfull)
-
- res["ssc_logit"] = ssc_logit_full
-
- return res
diff --git a/spaces/CVPR/regionclip-demo/detectron2/export/api.py b/spaces/CVPR/regionclip-demo/detectron2/export/api.py
deleted file mode 100644
index e80989231ea5233e40f48a76e375a5a3c39208b1..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/export/api.py
+++ /dev/null
@@ -1,273 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import copy
-import logging
-import os
-import torch
-from caffe2.proto import caffe2_pb2
-from torch import nn
-
-from detectron2.config import CfgNode
-from detectron2.utils.file_io import PathManager
-
-from .caffe2_inference import ProtobufDetectionModel
-from .caffe2_modeling import META_ARCH_CAFFE2_EXPORT_TYPE_MAP, convert_batched_inputs_to_c2_format
-from .shared import get_pb_arg_vali, get_pb_arg_vals, save_graph
-
-__all__ = [
- "add_export_config",
- "export_caffe2_model",
- "Caffe2Model",
- "export_onnx_model",
- "Caffe2Tracer",
-]
-
-
-def add_export_config(cfg):
- """
- Add options needed by caffe2 export.
-
- Args:
- cfg (CfgNode): a detectron2 config
-
- Returns:
- CfgNode:
- an updated config with new options that will be used by :class:`Caffe2Tracer`.
- """
- is_frozen = cfg.is_frozen()
- cfg.defrost()
- cfg.EXPORT_CAFFE2 = CfgNode()
- cfg.EXPORT_CAFFE2.USE_HEATMAP_MAX_KEYPOINT = False
- if is_frozen:
- cfg.freeze()
- return cfg
-
-
-class Caffe2Tracer:
- """
- Make a detectron2 model traceable with Caffe2 operators.
- This class creates a traceable version of a detectron2 model which:
-
- 1. Rewrite parts of the model using ops in Caffe2. Note that some ops do
- not have GPU implementation in Caffe2.
- 2. Remove post-processing and only produce raw layer outputs.
-
- After making a traceable model, the class provides methods to export such a
- model to different deployment formats.
- The exported graph produced by this class takes two input tensors:
-
- 1. (1, C, H, W) float "data" which is an image (usually in [0, 255]).
- (H, W) often has to be padded to a multiple of 32 (depending on the model
- architecture).
- 2. 1x3 float "im_info", each row of which is (height, width, 1.0).
- Height and width are true image shapes before padding.
-
- The class currently only supports models using builtin meta architectures.
- Batch inference is not supported, and contributions are welcome.
- """
-
- def __init__(self, cfg: CfgNode, model: nn.Module, inputs):
- """
- Args:
- cfg (CfgNode): a detectron2 config, with extra export-related options
- added by :func:`add_export_config`. It's used to construct
- caffe2-compatible model.
- model (nn.Module): An original pytorch model. Must be among a few official models
- in detectron2 that can be converted to become caffe2-compatible automatically.
- Weights have to be already loaded to this model.
- inputs: sample inputs that the given model takes for inference.
- Will be used to trace the model. For most models, random inputs with
- no detected objects will not work as they lead to wrong traces.
- """
- assert isinstance(cfg, CfgNode), cfg
- assert isinstance(model, torch.nn.Module), type(model)
-
- if "EXPORT_CAFFE2" not in cfg:
- cfg = add_export_config(cfg) # will just use the defaults
- # TODO make it support custom models, by passing in c2 model directly
- C2MetaArch = META_ARCH_CAFFE2_EXPORT_TYPE_MAP[cfg.MODEL.META_ARCHITECTURE]
- self.traceable_model = C2MetaArch(cfg, copy.deepcopy(model))
- self.inputs = inputs
- self.traceable_inputs = self.traceable_model.get_caffe2_inputs(inputs)
-
- def export_caffe2(self):
- """
- Export the model to Caffe2's protobuf format.
- The returned object can be saved with its :meth:`.save_protobuf()` method.
- The result can be loaded and executed using Caffe2 runtime.
-
- Returns:
- :class:`Caffe2Model`
- """
- from .caffe2_export import export_caffe2_detection_model
-
- predict_net, init_net = export_caffe2_detection_model(
- self.traceable_model, self.traceable_inputs
- )
- return Caffe2Model(predict_net, init_net)
-
- def export_onnx(self):
- """
- Export the model to ONNX format.
- Note that the exported model contains custom ops only available in caffe2, therefore it
- cannot be directly executed by other runtimes (such as onnxruntime or TensorRT).
- Post-processing or transformation passes may be applied on the model to accommodate
- different runtimes, but we currently do not provide support for them.
-
- Returns:
- onnx.ModelProto: an onnx model.
- """
- from .caffe2_export import export_onnx_model as export_onnx_model_impl
-
- return export_onnx_model_impl(self.traceable_model, (self.traceable_inputs,))
-
- def export_torchscript(self):
- """
- Export the model to a ``torch.jit.TracedModule`` by tracing.
- The returned object can be saved to a file by ``.save()``.
-
- Returns:
- torch.jit.TracedModule: a torch TracedModule
- """
- logger = logging.getLogger(__name__)
- logger.info("Tracing the model with torch.jit.trace ...")
- with torch.no_grad():
- return torch.jit.trace(self.traceable_model, (self.traceable_inputs,))
-
-
-class Caffe2Model(nn.Module):
- """
- A wrapper around the traced model in Caffe2's protobuf format.
- The exported graph has different inputs/outputs from the original Pytorch
- model, as explained in :class:`Caffe2Tracer`. This class wraps around the
- exported graph to simulate the same interface as the original Pytorch model.
- It also provides functions to save/load models in Caffe2's format.
-
- Examples:
- ::
- c2_model = Caffe2Tracer(cfg, torch_model, inputs).export_caffe2()
- inputs = [{"image": img_tensor_CHW}]
- outputs = c2_model(inputs)
- orig_outputs = torch_model(inputs)
- """
-
- def __init__(self, predict_net, init_net):
- super().__init__()
- self.eval() # always in eval mode
- self._predict_net = predict_net
- self._init_net = init_net
- self._predictor = None
-
- __init__.__HIDE_SPHINX_DOC__ = True
-
- @property
- def predict_net(self):
- """
- caffe2.core.Net: the underlying caffe2 predict net
- """
- return self._predict_net
-
- @property
- def init_net(self):
- """
- caffe2.core.Net: the underlying caffe2 init net
- """
- return self._init_net
-
- def save_protobuf(self, output_dir):
- """
- Save the model as caffe2's protobuf format.
- It saves the following files:
-
- * "model.pb": definition of the graph. Can be visualized with
- tools like `netron <https://github.com/lutzroeder/netron>`_.
- * "model_init.pb": model parameters
- * "model.pbtxt": human-readable definition of the graph. Not
- needed for deployment.
-
- Args:
- output_dir (str): the output directory to save protobuf files.
- """
- logger = logging.getLogger(__name__)
- logger.info("Saving model to {} ...".format(output_dir))
- if not PathManager.exists(output_dir):
- PathManager.mkdirs(output_dir)
-
- with PathManager.open(os.path.join(output_dir, "model.pb"), "wb") as f:
- f.write(self._predict_net.SerializeToString())
- with PathManager.open(os.path.join(output_dir, "model.pbtxt"), "w") as f:
- f.write(str(self._predict_net))
- with PathManager.open(os.path.join(output_dir, "model_init.pb"), "wb") as f:
- f.write(self._init_net.SerializeToString())
-
- def save_graph(self, output_file, inputs=None):
- """
- Save the graph as SVG format.
-
- Args:
- output_file (str): a SVG file
- inputs: optional inputs given to the model.
- If given, the inputs will be used to run the graph to record
- shape of every tensor. The shape information will be
- saved together with the graph.
- """
- from .caffe2_export import run_and_save_graph
-
- if inputs is None:
- save_graph(self._predict_net, output_file, op_only=False)
- else:
- size_divisibility = get_pb_arg_vali(self._predict_net, "size_divisibility", 0)
- device = get_pb_arg_vals(self._predict_net, "device", b"cpu").decode("ascii")
- inputs = convert_batched_inputs_to_c2_format(inputs, size_divisibility, device)
- inputs = [x.cpu().numpy() for x in inputs]
- run_and_save_graph(self._predict_net, self._init_net, inputs, output_file)
-
- @staticmethod
- def load_protobuf(dir):
- """
- Args:
- dir (str): a directory used to save Caffe2Model with
- :meth:`save_protobuf`.
- The files "model.pb" and "model_init.pb" are needed.
-
- Returns:
- Caffe2Model: the caffe2 model loaded from this directory.
- """
- predict_net = caffe2_pb2.NetDef()
- with PathManager.open(os.path.join(dir, "model.pb"), "rb") as f:
- predict_net.ParseFromString(f.read())
-
- init_net = caffe2_pb2.NetDef()
- with PathManager.open(os.path.join(dir, "model_init.pb"), "rb") as f:
- init_net.ParseFromString(f.read())
-
- return Caffe2Model(predict_net, init_net)
-
- def __call__(self, inputs):
- """
- An interface that wraps around a Caffe2 model and mimics detectron2's models'
- input/output format. See details about the format at :doc:`/tutorials/models`.
- This is used to compare the outputs of caffe2 model with its original torch model.
-
- Due to the extra conversion between PyTorch and Caffe2, this method is not meant for
- benchmarking. Because of the conversion, this method also depends
- on detectron2 in order to convert to detectron2's output format.
- """
- if self._predictor is None:
- self._predictor = ProtobufDetectionModel(self._predict_net, self._init_net)
- return self._predictor(inputs)
-
-
-def export_caffe2_model(cfg, model, inputs):
- logger = logging.getLogger(__name__)
- logger.warning(
- "export_caffe2_model() is deprecated. Please use `Caffe2Tracer().export_caffe2() instead."
- )
- return Caffe2Tracer(cfg, model, inputs).export_caffe2()
-
-
-def export_onnx_model(cfg, model, inputs):
- logger = logging.getLogger(__name__)
- logger.warning(
- "export_caffe2_model() is deprecated. Please use `Caffe2Tracer().export_onnx() instead."
- )
- return Caffe2Tracer(cfg, model, inputs).export_onnx()
diff --git a/spaces/CVPR/unicl-zero-shot-img-recog/model/image_encoder/__init__.py b/spaces/CVPR/unicl-zero-shot-img-recog/model/image_encoder/__init__.py
deleted file mode 100644
index bb90d1229f08d046446b3c7059aa930d553989f3..0000000000000000000000000000000000000000
--- a/spaces/CVPR/unicl-zero-shot-img-recog/model/image_encoder/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-from .build import build_model as build_image_encoder
\ No newline at end of file
diff --git a/spaces/CaliforniaHealthCollaborative/README/README.md b/spaces/CaliforniaHealthCollaborative/README/README.md
deleted file mode 100644
index 91e5016c63b2bc392da4f307b4ea76e2550b25df..0000000000000000000000000000000000000000
--- a/spaces/CaliforniaHealthCollaborative/README/README.md
+++ /dev/null
@@ -1,43 +0,0 @@
----
-title: 🔥README🔥
-emoji: 📚
-colorFrom: yellow
-colorTo: red
-sdk: static
-pinned: true
-license: mit
----
-
-
-
-
-## Summary
-
-This research proposal aims to develop an innovative approach for language retrieval by leveraging emoji-to-Kaktovik translation. The objective is to build a more interpretable and interoperable language retrieval process that surpasses traditional binary fragmenting techniques. By translating emojis to Kaktovik numerals, we can capture their inherent meaning and relationships, enabling precise and resource-efficient language retrieval. This proposal is being submitted to Microsoft AI Research and prospective universities to explore the potential of emoji-based language retrieval.
-
-## Objectives
-
-1. Develop an emoji clustering, indexing, and fragmentation method that organizes emojis into semantically meaningful groups, surpassing the limitations of binary fragmenting techniques.
-2. Investigate techniques for translating emojis to Kaktovik numerals, preserving their inherent meaning and relationships in the translation process.
-3. Design algorithms and models that leverage the translated Kaktovik numerals for precise and resource-efficient language retrieval.
-4. Evaluate the interpretability and interoperability of the proposed approach, comparing it to traditional binary fragmenting techniques.
-5. Demonstrate the practical applications of the emoji-to-Kaktovik translation in real-world language retrieval scenarios, showcasing its advantages in precision, efficiency, and interpretability.
-
-## Methodology
-
-1. **Emoji Clustering**: Develop an advanced clustering method that groups visually and semantically similar emojis together. Explore techniques that consider various factors, such as visual characteristics, semantic meanings, and user interpretations, to form cohesive and meaningful clusters.
-2. **Kaktovik Numerals Encoding**: Investigate methods for translating emojis to Kaktovik numerals while preserving their inherent meaning and relationships in the translation process. Design encoding algorithms that capture the nuanced representations of emojis in Kaktovik numerals, enabling precise and interpretable language retrieval (see the sketch after this list).
-3. **Translation Models**: Develop machine learning models and algorithms that translate emojis to their corresponding Kaktovik numerals. Train these models using large-scale annotated datasets of emojis and their Kaktovik numeral translations to ensure accurate and context-aware translations.
-4. **Language Retrieval Integration**: Integrate the translated Kaktovik numerals into the language retrieval process. Develop efficient indexing and retrieval techniques that leverage the inherent meaning and relationships captured in the Kaktovik numerals to enhance the precision and efficiency of language retrieval.
-5. **Evaluation and Analysis**: Evaluate the performance of the proposed approach by measuring metrics such as retrieval accuracy, precision, recall, and resource efficiency. Compare the results against traditional binary fragmenting techniques to assess the interpretability and interoperability advantages of the emoji-to-Kaktovik translation.
-6. **Real-World Applications**: Deploy the developed language retrieval system in real-world scenarios, such as information retrieval, chatbots, and recommendation systems. Demonstrate the practical benefits of the emoji-to-Kaktovik translation approach in terms of improved precision, interpretability, and reduced resource requirements.
-
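-As a minimal illustrative sketch of the encoding step (the function and constant names here are hypothetical, not the project's actual code), a cluster index could be rendered in base-20 Kaktovik digits, which Unicode assigns to the range U+1D2C0..U+1D2D3:
-
-```python
-# Hypothetical sketch: render a non-negative cluster index in base-20 Kaktovik numerals.
-KAKTOVIK_ZERO = 0x1D2C0  # first code point of the Unicode "Kaktovik Numerals" block
-
-def to_kaktovik(n: int) -> str:
-    if n == 0:
-        return chr(KAKTOVIK_ZERO)
-    digits = []
-    while n:
-        n, d = divmod(n, 20)
-        digits.append(chr(KAKTOVIK_ZERO + d))
-    return ''.join(reversed(digits))
-
-print(to_kaktovik(42))  # 42 = 2*20 + 2, so two Kaktovik "2" digits
-```
-
-Base-20 matches the vigesimal structure of the Kaktovik system, so each digit carries more information than a binary fragment while remaining a single glyph.
-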
-## Expected Outcomes
-
-1. Enhanced precision and interpretability in language retrieval through the emoji-to-Kaktovik translation approach.
-2. Improved interoperability of the language retrieval system, surpassing the limitations of binary fragmenting techniques.
-3. Resource-efficient language retrieval process, enabling faster and more precise retrieval of relevant information.
-4. Insights into the potential applications and advantages of emoji-to-Kaktovik translation in various language processing tasks.
-5. Collaboration and knowledge exchange opportunities with Microsoft AI Research and prospective universities in advancing the field of interpretable and interoperable language retrieval.
-
\ No newline at end of file
diff --git a/spaces/Chaitanya01/InvestingPlatform/mapping.py b/spaces/Chaitanya01/InvestingPlatform/mapping.py
deleted file mode 100644
index b7cedb84e033d07c253efacdb9152708452da8a3..0000000000000000000000000000000000000000
--- a/spaces/Chaitanya01/InvestingPlatform/mapping.py
+++ /dev/null
@@ -1,2 +0,0 @@
-# This mapping is required to get the company name from the ticker symbol for US stocks
-us_stocks_mapping = {"A": "Agilent Technologies Inc. Common Stock", "AA": "Alcoa Corporation Common Stock ", "AAC": "Ares Acquisition Corporation Class A Ordinary Shares", "AACG": "ATA Creativity Global American Depositary Shares", "AACIU": "Armada Acquisition Corp. I Unit", "AADI": "Aadi Bioscience Inc. Common Stock", "AAIC": "Arlington Asset Investment Corp Class A (new)", "AAIC^B": "Arlington Asset Investment Corp 7.00% ", "AAIC^C": "Arlington Asset Investment Corp 8.250% Seies C Fixed-to-Floating Rate Cumulative Redeemable Preferred Stock ", "AAIN": "Arlington Asset Investment Corp 6.000% Senior Notes Due 2026", "AAL": "American Airlines Group Inc. Common Stock", "AAMC": "Altisource Asset Management Corp Com", "AAME": "Atlantic American Corporation Common Stock", "AAN": "Aarons Holdings Company Inc. Common Stock ", "AAOI": "Applied Optoelectronics Inc. Common Stock", "AAON": "AAON Inc. Common Stock", "AAP": "Advance Auto Parts Inc Advance Auto Parts Inc W/I", "AAPL": "Apple Inc. Common Stock", "AAQC": "Accelerate Acquisition Corp. Class A Common Stock", "AAT": "American Assets Trust Inc. Common Stock", "AATC": "Autoscope Technologies Corporation Common Stock", "AAU": "Almaden Minerals Ltd. Common Shares", "AAWW": "Atlas Air Worldwide Holdings NEW Common Stock", "AB": "AllianceBernstein Holding L.P. Units", "ABB": "ABB Ltd Common Stock", "ABBV": "AbbVie Inc. Common Stock", "ABC": "AmerisourceBergen Corporation Common Stock", "ABCB": "Ameris Bancorp Common Stock", "ABCL": "AbCellera Biologics Inc. Common Shares", "ABCM": "Abcam plc American Depositary Shares", "ABEO": "Abeona Therapeutics Inc. Common Stock", "ABEV": "Ambev S.A. American Depositary Shares (Each representing 1 Common Share)", "ABG": "Asbury Automotive Group Inc Common Stock", "ABGI": "ABG Acquisition Corp. I Class A Ordinary Shares", "ABIO": "ARCA biopharma Inc. Common Stock", "ABM": "ABM Industries Incorporated Common Stock", "ABMD": "ABIOMED Inc. Common Stock", "ABNB": "Airbnb Inc. Class A Common Stock", "ABOS": "Acumen Pharmaceuticals Inc. Common Stock", "ABR": "Arbor Realty Trust Common Stock", "ABR^D": "Arbor Realty Trust 6.375% Series D Cumulative Redeemable Preferred Stock Liquidation Preference $25.00 per Share", "ABR^E": "Arbor Realty Trust 6.25% Series E Cumulative Redeemable Preferred Stock", "ABSI": "Absci Corporation Common Stock", "ABST": "Absolute Software Corporation Common Stock", "ABT": "Abbott Laboratories Common Stock", "ABTX": "Allegiance Bancshares Inc. Common Stock", "ABUS": "Arbutus Biopharma Corporation Common Stock", "ABVC": "ABVC Biopharma Inc. Common Stock", "AC": "Associated Capital Group Inc. Common Stock", "ACA": "Arcosa Inc. Common Stock ", "ACAD": "ACADIA Pharmaceuticals Inc. Common Stock", "ACAH": "Atlantic Coastal Acquisition Corp. Class A Common Stock", "ACAHU": "Atlantic Coastal Acquisition Corp. Unit", "ACAHW": "Atlantic Coastal Acquisition Corp. Warrant", "ACB": "Aurora Cannabis Inc. Common Shares", "ACBA": "Ace Global Business Acquisition Limited Ordinary Shares", "ACBAU": "Ace Global Business Acquisition Limited Unit", "ACBAW": "Ace Global Business Acquisition Limited Warrant", "ACBI": "Atlantic Capital Bancshares Inc. Common Stock", "ACC": "American Campus Communities Inc Common Stock", "ACCD": "Accolade Inc. Common Stock", "ACCO": "Acco Brands Corporation Common Stock", "ACEL": "Accel Entertainment Inc. ", "ACER": "Acer Therapeutics Inc. Common Stock (DE)", "ACET": "Adicet Bio Inc. Common Stock ", "ACEV": "ACE Convergence Acquisition Corp. 
Class A Ordinary Shares", "ACEVU": "ACE Convergence Acquisition Corp. Unit", "ACEVW": "ACE Convergence Acquisition Corp. Warrant", "ACGL": "Arch Capital Group Ltd. Common Stock", "ACGLN": "Arch Capital Group Ltd. Depositary Shares each Representing a 1/1000th Interest in a 4.550% Non-Cumulative Preferred Share Series G", "ACGLO": "Arch Capital Group Ltd. Depositary Shares Each Representing 1/1000th Interest in a Share of 5.45% Non-Cumulative Preferred Shares Series F", "ACH": "Aluminum Corporation of China Limited American Depositary Shares", "ACHC": "Acadia Healthcare Company Inc. Common Stock", "ACHL": "Achilles Therapeutics plc American Depositary Shares", "ACHR": "Archer Aviation Inc. Class A Common Stock", "ACHV": "Achieve Life Sciences Inc. Common Shares", "ACI": "Albertsons Companies Inc. Class A Common Stock", "ACII": "Atlas Crest Investment Corp. II Class A Common Stock", "ACIU": "AC Immune SA Common Stock", "ACIW": "ACI Worldwide Inc. Common Stock", "ACKIT": "Ackrell SPAC Partners I Co. Subunits", "ACKIU": "Ackrell SPAC Partners I Co. Units", "ACKIW": "Ackrell SPAC Partners I Co. Warrants", "ACLS": "Axcelis Technologies Inc. Common Stock", "ACM": "AECOM Common Stock", "ACMR": "ACM Research Inc. Class A Common Stock", "ACN": "Accenture plc Class A Ordinary Shares (Ireland)", "ACNB": "ACNB Corporation Common Stock", "ACOR": "Acorda Therapeutics Inc. Common Stock", "ACP": "Aberdeen Income Credit Strategies Fund Common Shares", "ACP^A": "Aberdeen Income Credit Strategies Fund 5.250% Series A Perpetual Preferred Stock", "ACQR": "Independence Holdings Corp. Class A Ordinary Share", "ACQRU": "Independence Holdings Corp. Units", "ACQRW": "Independence Holdings Corp. Warrant", "ACR": "ACRES Commercial Realty Corp. Common Stock", "ACR^C": "ACRES Commercial Realty Corp. 8.625% Fixed-to-Floating Series C Cumulative Redeemable Preferred Stock ", "ACR^D": "ACRES Commercial Realty Corp. 7.875% Series D Cumulative Redeemable Preferred Stock", "ACRE": "Ares Commercial Real Estate Corporation Common Stock", "ACRO": "Acropolis Infrastructure Acquisition Corp. Class A Common Stock", "ACRS": "Aclaris Therapeutics Inc. Common Stock", "ACRX": "AcelRx Pharmaceuticals Inc. Common Stock", "ACST": "Acasti Pharma Inc. Class A Common Stock", "ACT": "Enact Holdings Inc. Common Stock", "ACTD": "ArcLight Clean Transition Corp. II Class A Ordinary Share", "ACTDU": "ArcLight Clean Transition Corp. II Unit", "ACTDW": "ArcLight Clean Transition Corp. II Warrant", "ACTG": "Acacia Research Corporation (Acacia Tech) Common Stock", "ACU": "Acme United Corporation. Common Stock", "ACV": "Virtus AllianzGI Diversified Income & Convertible Fund Common Shares of Beneficial Interest", "ACVA": "ACV Auctions Inc. Class A Common Stock", "ACXP": "Acurx Pharmaceuticals Inc. Common Stock", "ACY": "AeroCentury Corp. Common Stock", "ADAG": "Adagene Inc. American Depositary Shares", "ADAP": "Adaptimmune Therapeutics plc American Depositary Shares", "ADBE": "Adobe Inc. Common Stock", "ADC": "Agree Realty Corporation Common Stock", "ADC^A": "Agree Realty Corporation Depositary Shares each representing 1/1000th of a 4.250% Series A Cumulative Redeemable Preferred Stock", "ADCT": "ADC Therapeutics SA Common Shares", "ADER": "26 Capital Acquisition Corp. Class A Common Stock", "ADERU": "26 Capital Acquisition Corp. Unit", "ADERW": "26 Capital Acquisition Corp. Warrant", "ADES": "Advanced Emissions Solutions Inc. Common Stock", "ADEX": "Adit EdTech Acquisition Corp. Common Stock", "ADF": "Aldel Financial Inc. 
Class A Common Stock", "ADGI": "Adagio Therapeutics Inc. Common Stock", "ADI": "Analog Devices Inc. Common Stock", "ADIL": "Adial Pharmaceuticals Inc Common Stock", "ADILW": "Adial Pharmaceuticals Inc Warrant", "ADM": "Archer-Daniels-Midland Company Common Stock", "ADMA": "ADMA Biologics Inc Common Stock", "ADMP": "Adamis Pharmaceuticals Corporation Common Stock", "ADMS": "Adamas Pharmaceuticals Inc. Common Stock", "ADN": "Advent Technologies Holdings Inc. Class A Common Stock", "ADNT": "Adient plc Ordinary Shares ", "ADNWW": "Advent Technologies Holdings Inc. Warrant", "ADOC": "Edoc Acquisition Corp. Class A Ordinary Share", "ADOCR": "Edoc Acquisition Corp. Right", "ADOCW": "Edoc Acquisition Corp. Warrant", "ADP": "Automatic Data Processing Inc. Common Stock", "ADPT": "Adaptive Biotechnologies Corporation Common Stock", "ADRA": "Adara Acquisition Corp. Class A Common Stock", "ADS": "Alliance Data Systems Corporation Common Stock", "ADSK": "Autodesk Inc. Common Stock", "ADT": "ADT Inc. Common Stock", "ADTN": "ADTRAN Inc. Common Stock", "ADTX": "Aditxt Inc. Common Stock", "ADUS": "Addus HomeCare Corporation Common Stock", "ADV": "Advantage Solutions Inc. Class A Common Stock", "ADVM": "Adverum Biotechnologies Inc. Common Stock", "ADVWW": "Advantage Solutions Inc. Warrant", "ADX": "Adams Diversified Equity Fund Inc.", "ADXN": "Addex Therapeutics Ltd American Depositary Shares", "ADXS": "Advaxis Inc. Common Stock", "AE": "Adams Resources & Energy Inc. Common Stock", "AEAC": "Authentic Equity Acquisition Corp. Class A ordinary share", "AEACU": "Authentic Equity Acquisition Corp. Unit", "AEACW": "Authentic Equity Acquisition Corp. Warrant", "AEE": "Ameren Corporation Common Stock", "AEF": "Aberdeen Emerging Markets Equity Income Fund Inc. Common Stock", "AEFC": "Aegon Funding Company LLC 5.10% Subordinated Notes due 2049", "AEG": "AEGON N.V. Common Stock", "AEHAU": "Aesther Healthcare Acquisition Corp Units", "AEHL": "Antelope Enterprise Holdings Limited Common Stock (0.024 par)", "AEHR": "Aehr Test Systems Common Stock", "AEI": "Alset EHome International Inc. Common Stock", "AEIS": "Advanced Energy Industries Inc. Common Stock", "AEL": "American Equity Investment Life Holding Company Common Stock", "AEL^A": "American Equity Investment Life Holding Company Depositary Shares each representing a 1/1000th interest in a share of 5.95% Fixed-Rate Reset Non-Cumulative Preferred Stock Series A", "AEL^B": "American Equity Investment Life Holding Company Depositary Shares each representing a 1/1000th interest in a share of 6.625% Fixed-Rate Reset Non-Cumulative Preferred Stock Series B", "AEM": "Agnico Eagle Mines Limited Common Stock", "AEMD": "Aethlon Medical Inc. Common Stock", "AENZ": "Aenza S.A.A. American Depositary Shares", "AEO": "American Eagle Outfitters Inc. Common Stock", "AEP": "American Electric Power Company Inc. Common Stock", "AEPPL": "American Electric Power Company Inc. Corporate Unit", "AEPPZ": "American Electric Power Company Inc. Corporate Units", "AER": "AerCap Holdings N.V. Ordinary Shares", "AERI": "Aerie Pharmaceuticals Inc. Common Stock", "AES": "The AES Corporation Common Stock", "AESC": "The AES Corporation Corporate Units", "AESE": "Allied Esports Entertainment Inc. Common Stock", "AEVA": "Aeva Technologies Inc. Common Stock", "AEY": "ADDvantage Technologies Group Inc. Common Stock", "AEYE": "AudioEye Inc. Common Stock", "AEZS": "Aeterna Zentaris Inc. Common Stock", "AFAQ": "AF Acquisition Corp. Class A Common Stock", "AFAQU": "AF Acquisition Corp. 
Units", "AFAQW": "AF Acquisition Corp. Warrants", "AFB": "AllianceBernstein National Municipal Income Fund Inc", "AFBI": "Affinity Bancshares Inc. Common Stock (MD)", "AFCG": "AFC Gamma Inc. Common Stock", "AFG": "American Financial Group Inc. Common Stock", "AFGB": "American Financial Group Inc. 5.875% Subordinated Debentures due 2059", "AFGC": "American Financial Group Inc. 5.125% Subordinated Debentures due 2059", "AFGD": "American Financial Group Inc. 5.625% Subordinated Debentures due 2060", "AFGE": "American Financial Group Inc. 4.500% Subordinated Debentures due 2060", "AFI": "Armstrong Flooring Inc. Common Stock", "AFIB": "Acutus Medical Inc. Common Stock", "AFIN": "American Finance Trust Inc. Class A Common Stock", "AFINO": "American Finance Trust Inc. 7.375% Series C Cumulative Redeemable Preferred Stock", "AFINP": "American Finance Trust Inc. 7.50% Series A Cumulative Redeemable Perpetual Preferred Stock", "AFL": "AFLAC Incorporated Common Stock", "AFMD": "Affimed N.V.", "AFRM": "Affirm Holdings Inc. Class A Common Stock", "AFT": "Apollo Senior Floating Rate Fund Inc. Common Stock", "AFTR": "AfterNext HealthTech Acquisition Corp.", "AFYA": "Afya Limited Class A Common Shares", "AG": "First Majestic Silver Corp. Ordinary Shares (Canada)", "AGAC": "African Gold Acquisition Corporation Class A Ordinary Shares", "AGBA": "AGBA Acquisition Limited Ordinary Share", "AGBAR": "AGBA Acquisition Limited Right", "AGBAW": "AGBA Acquisition Limited Warrant", "AGC": "Altimeter Growth Corp. Class A Ordinary Shares", "AGCB": "Altimeter Growth Corp. 2 Class A Ordinary Shares", "AGCO": "AGCO Corporation Common Stock", "AGCUU": "Altimeter Growth Corp. Unit", "AGCWW": "Altimeter Growth Corp. Warrant", "AGD": "Aberdeen Global Dynamic Dividend Fund", "AGE": "AgeX Therapeutics Inc. Common Stock", "AGEN": "Agenus Inc. Common Stock", "AGFS": "AgroFresh Solutions Inc. Common Stock", "AGFY": "Agrify Corporation Common Stock", "AGGR": "Agile Growth Corp. Class A Ordinary Share", "AGGRU": "Agile Growth Corp. Units", "AGGRW": "Agile Growth Corp. Warrant.", "AGI": "Alamos Gold Inc. Class A Common Shares", "AGIL": "AgileThought Inc. Class A Common Stock", "AGILW": "AgileThought Inc. Warrant", "AGIO": "Agios Pharmaceuticals Inc. Common Stock", "AGL": "agilon health inc. Common Stock", "AGLE": "Aeglea BioTherapeutics Inc. Common Stock", "AGM": "Federal Agricultural Mortgage Corporation Common Stock", "AGM^C": "Federal Agricultural Mortgage Corporation Preferred Series C Fixed to Fltg", "AGM^D": "Federal Agricultural Mortgage Corporation 5.700% Non-Cumulative Preferred Stock Series D", "AGM^E": "Federal Agricultural Mortgage Corporation 5.750% Non-Cumulative Preferred Stock Series E", "AGM^F": "Federal Agricultural Mortgage Corporation 5.250% Non-Cumulative Preferred Stock Series F", "AGM^G": "Federal Agricultural Mortgage Corporation 4.875% Non-Cumulative Preferred Stock Series G", "AGMH": "AGM Group Holdings Inc. Class A Ordinary Shares", "AGNC": "AGNC Investment Corp. Common Stock", "AGNCM": "AGNC Investment Corp. Depositary Shares rep 6.875% Series D Fixed-to-Floating Cumulative Redeemable Preferred Stock", "AGNCN": "AGNC Investment Corp. Depositary Shares Each Representing a 1/1000th Interest in a Share of 7.00% Series C Fixed-To-Floating Rate Cumulative Redeemable Preferred Stock", "AGNCO": "AGNC Investment Corp. Depositary Shares each representing a 1/1000th interest in a share of 6.50% Series E Fixed-to-Floating Cumulative Redeemable Preferred Stock", "AGNCP": "AGNC Investment Corp. 
Depositary Shares Each Representing a 1/1000th Interest in a Share of 6.125% Series F Fixed-to-Floating Rate Cumulative Redeemable Preferred Stock", "AGO": "Assured Guaranty Ltd. Common Stock", "AGR": "Avangrid Inc. Common Stock", "AGRI": "AgriFORCE Growing Systems Ltd. Common Shares", "AGRIW": "AgriFORCE Growing Systems Ltd. Warrant", "AGRO": "Adecoagro S.A. Common Shares", "AGRX": "Agile Therapeutics Inc. Common Stock", "AGS": "PlayAGS Inc. Common Stock", "AGTC": "Applied Genetic Technologies Corporation Common Stock", "AGTI": "Agiliti Inc. Common Stock", "AGX": "Argan Inc. Common Stock", "AGYS": "Agilysys Inc. Common Stock", "AHCO": "AdaptHealth Corp. Common Stock", "AHH": "Armada Hoffler Properties Inc. Common Stock", "AHH^A": "Armada Hoffler Properties Inc. 6.75% Series A Cumulative Redeemable Perpetual Preferred Stock", "AHL^C": "Aspen Insurance Holdings Limited 5.95% Fixed-to-Floating Rate Perpetual Non-Cumulative Preference Shares", "AHL^D": "Aspen Insurance Holdings Limited 5.625% Perpetual Non-Cumulative Preference Shares", "AHL^E": "Aspen Insurance Holdings Limited Depositary Shares each representing a 1/1000th interest in a share of 5.625% Perpetual Non-Cumulative Preference Shares", "AHPA": "Avista Public Acquisition Corp. II Class A Ordinary Shares", "AHPAU": "Avista Public Acquisition Corp. II Unit", "AHPAW": "Avista Public Acquisition Corp. II Warrant", "AHPI": "Allied Healthcare Products Inc. Common Stock", "AHT": "Ashford Hospitality Trust Inc Common Stock", "AHT^D": "Ashford Hospitality Trust Inc 8.45% Series D Cumulative Preferred Stock", "AHT^F": "Ashford Hospitality Trust Inc 7.375% Series F Cumulative Preferred Stock", "AHT^G": "Ashford Hospitality Trust Inc 7.375% Series G Cumulative Preferred Stock", "AHT^H": "Ashford Hospitality Trust Inc 7.50% Series H Cumulative Preferred Stock", "AHT^I": "Ashford Hospitality Trust Inc 7.50% Series I Cumulative Preferred Stock", "AI": "C3.ai Inc. Class A Common Stock", "AIC": "Arlington Asset Investment Corp 6.750% Notes due 2025", "AIF": "Apollo Tactical Income Fund Inc. Common Stock", "AIG": "American International Group Inc. New Common Stock", "AIG^A": "American International Group Inc. Depositary Shares Each Representing a 1/1000th Interest in a Share of Series A 5.85% Non-Cumulative Perpetual Preferred Stock", "AIH": "Aesthetic Medical International Holdings Group Ltd. American Depositary Shares", "AIHS": "Senmiao Technology Limited Common Stock", "AIKI": "AIkido Pharma Inc. Common Stock", "AIM": "AIM ImmunoTech Inc. Common Stock", "AIMC": "Altra Industrial Motion Corp. Common Stock", "AIN": "Albany International Corporation Common Stock", "AINC": "Ashford Inc. (Holding Company) Common Stock", "AINV": "Apollo Investment Corporation Common Stock", "AIO": "Virtus AllianzGI Artificial Intelligence & Technology Opportunities Fund Common Shares", "AIR": "AAR Corp. Common Stock", "AIRC": "Apartment Income REIT Corp. Common Stock", "AIRG": "Airgain Inc. Common Stock", "AIRI": "Air Industries Group Common Stock", "AIRT": "Air T Inc. Common Stock", "AIRTP": "Air T Inc. Air T Funding Alpha Income Trust Preferred Securities", "AIT": "Applied Industrial Technologies Inc. Common Stock", "AIV": "Apartment Investment and Management Company Common Stock", "AIZ": "Assurant Inc. Common Stock", "AIZN": "Assurant Inc. 5.25% Subordinated Notes due 2061", "AJG": "Arthur J. Gallagher & Co. Common Stock", "AJRD": "Aerojet Rocketdyne Holdings Inc. Common Stock", "AJX": "Great Ajax Corp. Common Stock", "AJXA": "Great Ajax Corp. 
7.25% Convertible Senior Notes due 2024", "AKA": "a.k.a. Brands Holding Corp. Common Stock", "AKAM": "Akamai Technologies Inc. Common Stock", "AKBA": "Akebia Therapeutics Inc. Common Stock", "AKIC": "Sports Ventures Acquisition Corp. Class A Ordinary Shares", "AKICU": "Sports Ventures Acquisition Corp. Unit", "AKICW": "Sports Ventures Acquisition Corp. Warrant", "AKO/A": "Embotelladora Andina S.A.", "AKO/B": "Embotelladora Andina S.A.", "AKR": "Acadia Realty Trust Common Stock", "AKRO": "Akero Therapeutics Inc. Common Stock", "AKTS": "Akoustis Technologies Inc. Common Stock", "AKTX": "Akari Therapeutics plc ADR (0.01 USD)", "AKU": "Akumin Inc. Common Shares", "AKUS": "Akouos Inc. Common Stock", "AKYA": "Akoya BioSciences Inc. Common Stock", "AL": "Air Lease Corporation Class A Common Stock", "AL^A": "Air Lease Corporation 6.150% Fixed-to-Floating Rate Non-Cumulative Perpetual Preferred Stock Series A", "ALAC": "Alberton Acquisition Corporation Ordinary Shares", "ALACR": "Alberton Acquisition Corporation Rights exp April 26 2021", "ALACU": "Alberton Acquisition Corporation Unit", "ALACW": "Alberton Acquisition Corporation Warrant", "ALB": "Albemarle Corporation Common Stock", "ALBO": "Albireo Pharma Inc. Common Stock", "ALC": "Alcon Inc. Ordinary Shares", "ALCC": "AltC Acquisition Corp. Class A Common Stock", "ALCO": "Alico Inc. Common Stock", "ALDX": "Aldeyra Therapeutics Inc. Common Stock", "ALE": "Allete Inc.", "ALEC": "Alector Inc. Common Stock", "ALEX": "Alexander & Baldwin Inc. Common Stock REIT Holding Company", "ALF": "ALFI Inc. Common Stock", "ALFIW": "ALFI Inc. Warrant", "ALG": "Alamo Group Inc. Common Stock", "ALGM": "Allegro MicroSystems Inc. Common Stock", "ALGN": "Align Technology Inc. Common Stock", "ALGS": "Aligos Therapeutics Inc. Common Stock", "ALGT": "Allegiant Travel Company Common Stock", "ALHC": "Alignment Healthcare Inc. Common Stock", "ALIM": "Alimera Sciences Inc. Common Stock", "ALIN^A": "Altera Infrastructure L.P. 7.25% Series A ", "ALIN^B": "Altera Infrastructure L.P. 8.50% Series B ", "ALIN^E": "Altera Infrastructure L.P. 8.875% Series E ", "ALIT": "Alight Inc. Class A Common Stock", "ALJJ": "ALJ Regional Holdings Inc. Common Stock", "ALK": "Alaska Air Group Inc. Common Stock", "ALKS": "Alkermes plc Ordinary Shares", "ALKT": "Alkami Technology Inc. Common Stock", "ALL": "Allstate Corporation (The) Common Stock", "ALL^B": "Allstate Corporation (The) 5.100% Fixed-to-Floating Rate Subordinated Debentures due 2053", "ALL^G": "Allstate Corporation (The) Depositary Shares each representing a 1/1000th interest in a share of Fixed Rate Noncumulative Perpetual Preferred Stock Series G", "ALL^H": "Allstate Corporation (The) Depositary Shares each representing a 1/1000th interest in a share of Fixed Rate Noncumulative Perpetual Preferred Stock Series H", "ALL^I": "Allstate Corporation (The) Depositary Shares each representing a 1/1000th interest in a share of Fixed Rate Noncumulative Perpetual Preferred Stock Series I", "ALLE": "Allegion plc Ordinary Shares", "ALLK": "Allakos Inc. Common Stock", "ALLO": "Allogene Therapeutics Inc. Common Stock", "ALLT": "Allot Ltd. Ordinary Shares", "ALLY": "Ally Financial Inc. Common Stock", "ALLY^A": "GMAC Capital Trust I Fixed Rate Floating Rate Trust Preferred Securities Series 2", "ALNA": "Allena Pharmaceuticals Inc. Common Stock", "ALNY": "Alnylam Pharmaceuticals Inc. Common Stock", "ALOT": "AstroNova Inc. 
Common Stock", "ALP^Q": "Alabama Power Company 5.00% Class A Preferred Stock Cumulative Par Value $1 Per Share (Stated Capital $25 Per Share)", "ALPA": "Alpha Healthcare Acquisition Corp. III Class A Common Stock", "ALPAW": "Alpha Healthcare Acquisition Corp. III Warrant", "ALPN": "Alpine Immune Sciences Inc. Common Stock", "ALRM": "Alarm.com Holdings Inc. Common Stock", "ALRN": "Aileron Therapeutics Inc. Common Stock", "ALRS": "Alerus Financial Corporation Common Stock", "ALSN": "Allison Transmission Holdings Inc. Common Stock", "ALT": "Altimmune Inc. Common Stock", "ALTG": "Alta Equipment Group Inc. Class A Common Stock", "ALTG^A": "Alta Equipment Group Inc. Depositary Shares (each representing 1/1000th in a share of 10% Series A Cumulative Perpetual Preferred Stock)", "ALTM": "Altus Midstream Company Class A Common Stock", "ALTO": "Alto Ingredients Inc. Common Stock", "ALTR": "Altair Engineering Inc. Class A Common Stock", "ALTU": "Altitude Acquisition Corp. Class A Common Stock", "ALTUU": "Altitude Acquisition Corp. Unit", "ALTUW": "Altitude Acquisition Corp. Warrant", "ALV": "Autoliv Inc. Common Stock", "ALVR": "AlloVir Inc. Common Stock", "ALX": "Alexander's Inc. Common Stock", "ALXO": "ALX Oncology Holdings Inc. Common Stock", "ALYA": "Alithya Group inc. Class A Subordinate Voting Shares", "ALZN": "Alzamend Neuro Inc. Common Stock", "AM": "Antero Midstream Corporation Common Stock", "AMAL": "Amalgamated Financial Corp. Common Stock (DE)", "AMAM": "Ambrx Biopharma Inc. American Depositary Shares (each representing seven Ordinary Shares)", "AMAO": "American Acquisition Opportunity Inc. Class A Common Stock", "AMAOU": "American Acquisition Opportunity Inc. Units", "AMAOW": "American Acquisition Opportunity Inc. Warrant", "AMAT": "Applied Materials Inc. Common Stock", "AMBA": "Ambarella Inc. Ordinary Shares", "AMBC": "Ambac Financial Group Inc. Common Stock", "AMBO": "Ambow Education Holding Ltd. American Depository Shares each representing two Class A ordinary shares", "AMBP": "Ardagh Metal Packaging S.A. Ordinary Shares", "AMC": "AMC Entertainment Holdings Inc. Class A Common Stock", "AMCI": "AMCI Acquisition Corp. II Class A Common Stock", "AMCIU": "AMCI Acquisition Corp. II Units", "AMCIW": "AMCI Acquisition Corp. II Warrant", "AMCR": "Amcor plc Ordinary Shares", "AMCX": "AMC Networks Inc. Class A Common Stock", "AMD": "Advanced Micro Devices Inc. Common Stock", "AME": "AMETEK Inc.", "AMED": "Amedisys Inc Common Stock", "AMEH": "Apollo Medical Holdings Inc. Common Stock", "AMG": "Affiliated Managers Group Inc. Common Stock", "AMGN": "Amgen Inc. Common Stock", "AMH": "American Homes 4 Rent Common Shares of Beneficial Interest", "AMH^F": "American Homes 4 Rent 5.875% Series F Cumulative Redeemable Perpetual Preferred Shares", "AMH^G": "American Homes 4 Rent Series G cumulative redeemable perpetual preferred shares of beneficial interest", "AMH^H": "American Homes 4 Rent Series H cumulative redeemable perpetual Preferred Shares of Beneficial Interest", "AMK": "AssetMark Financial Holdings Inc. Common Stock", "AMKR": "Amkor Technology Inc. Common Stock", "AMN": "AMN Healthcare Services Inc AMN Healthcare Services Inc", "AMNB": "American National Bankshares Inc. Common Stock", "AMOT": "Allied Motion Technologies Inc.", "AMOV": "America Movil S.A.B. de C.V. Class A American Depositary Shares", "AMP": "Ameriprise Financial Inc. Common Stock", "AMPE": "Ampio Pharmaceuticals Inc.", "AMPG": "Amplitech Group Inc. Common Stock", "AMPGW": "Amplitech Group Inc. 
Warrants", "AMPH": "Amphastar Pharmaceuticals Inc. Common Stock", "AMPI": "Advanced Merger Partners Inc. Class A Common Stock", "AMPL": "Amplitude Inc. Class A Common Stock", "AMPY": "Amplify Energy Corp. Common Stock", "AMR": "Alpha Metallurgical Resources Inc. Common Stock", "AMRC": "Ameresco Inc. Class A Common Stock", "AMRK": "A-Mark Precious Metals Inc. Common Stock", "AMRN": "Amarin Corporation plc", "AMRS": "Amyris Inc. Common Stock", "AMRX": "Amneal Pharmaceuticals Inc. Class A Common Stock", "AMS": "American Shared Hospital Services Common Stock", "AMSC": "American Superconductor Corporation Common Stock", "AMSF": "AMERISAFE Inc. Common Stock", "AMST": "Amesite Inc. Common Stock", "AMSWA": "American Software Inc. Class A Common Stock", "AMT": "American Tower Corporation (REIT) Common Stock", "AMTB": "Amerant Bancorp Inc. Class A Common Stock", "AMTBB": "Amerant Bancorp Inc. Class B Common Stock", "AMTI": "Applied Molecular Transport Inc. Common Stock", "AMTX": "Aemetis Inc. Common Stock", "AMWD": "American Woodmark Corporation Common Stock", "AMWL": "American Well Corporation Class A Common Stock", "AMX": "America Movil S.A.B. de C.V. American Depository Receipt Series L", "AMYT": "Amryt Pharma plc American Depositary Shares", "AMZN": "Amazon.com Inc. Common Stock", "AN": "AutoNation Inc. Common Stock", "ANAB": "AnaptysBio Inc. Common Stock", "ANAC": "Arctos NorthStar Acquisition Corp. Class A Ordinary Shares", "ANAT": "American National Group Inc. Common Stock", "ANDE": "Andersons Inc. (The) Common Stock", "ANEB": "Anebulo Pharmaceuticals Inc. Common Stock", "ANET": "Arista Networks Inc. Common Stock", "ANF": "Abercrombie & Fitch Company Common Stock", "ANGI": "Angi Inc. Class A Common Stock", "ANGN": "Angion Biomedica Corp. Common Stock", "ANGO": "AngioDynamics Inc. Common Stock", "ANIK": "Anika Therapeutics Inc. Common Stock", "ANIP": "ANI Pharmaceuticals Inc.", "ANIX": "Anixa Biosciences Inc. Common Stock", "ANNX": "Annexon Inc. Common Stock", "ANPC": "AnPac Bio-Medical Science Co. Ltd. American Depositary Shares", "ANSS": "ANSYS Inc. Common Stock", "ANTE": "AirNet Technology Inc. American Depositary Shares", "ANTM": "Anthem Inc. Common Stock", "ANVS": "Annovis Bio Inc. Common Stock", "ANY": "Sphere 3D Corp. Common Shares", "ANZU": "Anzu Special Acquisition Corp I Class A Common Stock", "ANZUU": "Anzu Special Acquisition Corp I Units", "ANZUW": "Anzu Special Acquisition Corp I Warrant", "AOD": "Aberdeen Total Dynamic Dividend Fund Common Stock", "AOMR": "Angel Oak Mortgage Inc. Common Stock", "AON": "Aon plc Class A Ordinary Shares (Ireland)", "AOS": "A.O. Smith Corporation Common Stock", "AOSL": "Alpha and Omega Semiconductor Limited Common Shares", "AOUT": "American Outdoor Brands Inc. Common Stock ", "AP": "Ampco-Pittsburgh Corporation Common Stock", "APA": "APA Corporation Common Stock", "APAC": "StoneBridge Acquisition Corporation Class A Ordinary Shares", "APACU": "StoneBridge Acquisition Corporation Unit", "APACW": "StoneBridge Acquisition Corporation Warrant", "APAM": "Artisan Partners Asset Management Inc. Class A Common Stock", "APD": "Air Products and Chemicals Inc. Common Stock", "APDN": "Applied DNA Sciences Inc. Common Stock", "APEI": "American Public Education Inc. Common Stock", "APEN": "Apollo Endosurgery Inc. Common Stock", "APG": "APi Group Corporation Common Stock", "APGB": "Apollo Strategic Growth Capital II Class A Ordinary Shares", "APH": "Amphenol Corporation Common Stock", "API": "Agora Inc. 
American Depositary Shares", "APLE": "Apple Hospitality REIT Inc. Common Shares", "APLS": "Apellis Pharmaceuticals Inc. Common Stock", "APLT": "Applied Therapeutics Inc. Common Stock", "APM": "Aptorum Group Limited Class A Ordinary Shares", "APMIU": "AxonPrime Infrastructure Acquisition Corporation Unit", "APO": "Apollo Global Management Inc. Class A Common Stock", "APO^A": "Apollo Global Management Inc. 6.375% Series A Preferred Shares", "APO^B": "Apollo Global Management Inc 6.375% Series B Preferred Shares", "APOG": "Apogee Enterprises Inc. Common Stock", "APOP": "Cellect Biotechnology Ltd. American Depositary Shares", "APP": "Applovin Corporation Class A Common Stock", "APPF": "AppFolio Inc. Class A Common Stock", "APPH": "AppHarvest Inc. Common Stock", "APPHW": "AppHarvest Inc. Warrants", "APPN": "Appian Corporation Class A Common Stock", "APPS": "Digital Turbine Inc. Common Stock", "APR": "Apria Inc. Common Stock", "APRE": "Aprea Therapeutics Inc. Common stock", "APRN": "Blue Apron Holdings Inc. Class A Common Stock", "APSG": "Apollo Strategic Growth Capital Class A Ordinary Shares", "APT": "Alpha Pro Tech Ltd. Common Stock", "APTM": "Alpha Partners Technology Merger Corp. Class A Ordinary Shares", "APTMU": "Alpha Partners Technology Merger Corp. Unit", "APTMW": "Alpha Partners Technology Merger Corp. Warrant", "APTO": "Aptose Biosciences Inc. Common Shares", "APTS": "Preferred Apartment Communities Inc. Common Stock", "APTV": "Aptiv PLC Ordinary Shares", "APTV^A": "Aptiv PLC 5.50% Series A Mandatory Convertible Preferred Shares", "APTX": "Aptinyx Inc. Common Stock", "APVO": "Aptevo Therapeutics Inc. Common Stock", "APWC": "Asia Pacific Wire & Cable Corporation Ltd. Ordinary Shares (Bermuda)", "APYX": "Apyx Medical Corporation Common Stock", "AQB": "AquaBounty Technologies Inc. Common Stock", "AQMS": "Aqua Metals Inc. Common Stock", "AQN": "Algonquin Power & Utilities Corp. Common Shares", "AQNA": "Algonquin Power & Utilities Corp. 6.875% Fixed-to-Floating Rate Subordinated Notes Series 2018-A due October 17 2078", "AQNB": "Algonquin Power & Utilities Corp. 6.20% Fixed-to-Floating Subordinated Notes Series 2019-A due July 1 2079", "AQNU": "Algonquin Power & Utilities Corp. Corporate Units", "AQST": "Aquestive Therapeutics Inc. Common Stock", "AQUA": "Evoqua Water Technologies Corp. Common Stock", "AR": "Antero Resources Corporation Common Stock", "ARAV": "Aravive Inc. Common Stock", "ARAY": "Accuray Incorporated Common Stock", "ARBG": "Aequi Acquisition Corp. Class A Common Stock", "ARBGU": "Aequi Acquisition Corp. Unit", "ARBGW": "Aequi Acquisition Corp. warrants", "ARBK": "Argo Blockchain plc American Depositary Shares", "ARC": "ARC Document Solutions Inc. Common Stock", "ARCB": "ArcBest Corporation Common Stock", "ARCC": "Ares Capital Corporation Common Stock", "ARCE": "Arco Platform Limited Class A Common Shares", "ARCH": "Arch Resources Inc. Class A Common Stock", "ARCO": "Arcos Dorados Holdings Inc. Class A Shares", "ARCT": "Arcturus Therapeutics Holdings Inc. Common Stock", "ARD": "Ardagh Group S.A. Common Shares", "ARDC": "Ares Dynamic Credit Allocation Fund Inc. Common Shares", "ARDS": "Aridis Pharmaceuticals Inc. Common Stock", "ARDX": "Ardelyx Inc. Common Stock", "ARE": "Alexandria Real Estate Equities Inc. Common Stock", "AREC": "American Resources Corporation Class A Common Stock", "ARES": "Ares Management Corporation Class A Common Stock", "ARGD": "Argo Group International Holdings Ltd. 
6.5% Senior Notes Due 2042", "ARGO": "Argo Group International Holdings Ltd.", "ARGO^A": "Argo Group International Holdings Ltd. Depositary Shares Each Representing a 1/1000th Interest in a 7.00% Resettable Fixed Rate Preference Share Series A", "ARGUU": "Argus Capital Corp. Unit", "ARGX": "argenx SE American Depositary Shares", "ARI": "Apollo Commercial Real Estate Finance Inc", "ARKO": "ARKO Corp. Common Stock", "ARKOW": "ARKO Corp. Warrant", "ARKR": "Ark Restaurants Corp. Common Stock", "ARL": "American Realty Investors Inc. Common Stock", "ARLO": "Arlo Technologies Inc. Common Stock", "ARLP": "Alliance Resource Partners L.P. Common Units representing Limited Partners Interests", "ARMK": "Aramark Common Stock", "ARMP": "Armata Pharmaceuticals Inc. Common Stock", "ARNA": "Arena Pharmaceuticals Inc. Common Stock", "ARNC": "Arconic Corporation Common Stock ", "AROC": "Archrock Inc. Common Stock", "AROW": "Arrow Financial Corporation Common Stock", "ARQQ": "Arqit Quantum Inc. Ordinary Shares", "ARQQW": "Arqit Quantum Inc. Warrants", "ARQT": "Arcutis Biotherapeutics Inc. Common Stock", "ARR": "ARMOUR Residential REIT Inc.", "ARR^C": "ARMOUR Residential REIT Inc. 7% Series C Cumulative Redeemable Preferred Stock (liquidation preference $25.00 per share)", "ARRW": "Arrowroot Acquisition Corp. Class A common stock", "ARRWU": "Arrowroot Acquisition Corp. Unit", "ARRWW": "Arrowroot Acquisition Corp. Warrant", "ARRY": "Array Technologies Inc. Common Stock", "ARTA": "Artisan Acquisition Corp. Class A Ordinary Shares ", "ARTAU": "Artisan Acquisition Corp. Units", "ARTAW": "Artisan Acquisition Corp. Warrants", "ARTEU": "Artemis Strategic Investment Corporation Unit", "ARTL": "Artelo Biosciences Inc. Common Stock", "ARTLW": "Artelo Biosciences Inc. Warrant", "ARTNA": "Artesian Resources Corporation Class A Common Stock", "ARTW": "Art's-Way Manufacturing Co. Inc. Common Stock", "ARVL": "Arrival Ordinary Shares", "ARVN": "Arvinas Inc. Common Stock", "ARW": "Arrow Electronics Inc. Common Stock", "ARWR": "Arrowhead Pharmaceuticals Inc. Common Stock", "ARYD": "ARYA Sciences Acquisition Corp IV Class A Odinary Shares", "ARYE": "ARYA Sciences Acquisition Corp V Class A Ordinary Shares", "ASA": "ASA Gold and Precious Metals Limited", "ASAI": "Sendas Distribuidora S A ADS", "ASAN": "Asana Inc. Class A Common Stock", "ASAQ": "Atlantic Street Acquisition Corp Class A Common Stock", "ASAX": "Astrea Acquisition Corp. Class A Common Stock", "ASAXU": "Astrea Acquisition Corp. Unit", "ASAXW": "Astrea Acquisition Corp. Warrant", "ASB": "Associated Banc-Corp Common Stock", "ASB^E": "Associated Banc-Corp Depositary Shares each representing a 1/40th interest in a share of 5.875% Non-Cumulative Perpetual Preferred Stock Series E", "ASB^F": "Associated Banc-Corp Depositary Shares each representing a 1/40th interest in a share of Associated Banc-Corp 5.625% Non-Cumulative Perpetual Preferred Stock Series F", "ASC": "Ardmore Shipping Corporation Common Stock", "ASG": "Liberty All-Star Growth Fund Inc.", "ASGI": "Aberdeen Standard Global Infrastructure Income Fund Common Shares of Beneficial Interest", "ASGN": "ASGN Incorporated Common Stock", "ASH": "Ashland Global Holdings Inc. Common Stock", "ASIX": "AdvanSix Inc. Common Stock ", "ASLE": "AerSale Corporation Common Stock", "ASLEW": "AerSale Corporation Warrants", "ASLN": "ASLAN Pharmaceuticals Limited American Depositary Shares", "ASM": "Avino Silver & Gold Mines Ltd. Common Shares (Canada)", "ASMB": "Assembly Biosciences Inc. Common Stock", "ASML": "ASML Holding N.V. 
New York Registry Shares", "ASND": "Ascendis Pharma A/S American Depositary Shares", "ASO": "Academy Sports and Outdoors Inc. Common Stock", "ASPA": "ABRI SPAC I INC. Common Stock", "ASPAW": "ABRI SPAC I INC. Warrant", "ASPC": "Alpha Capital Acquisition Company One Class A Ordinary Share", "ASPCU": "Alpha Capital Acquisition Company Unit", "ASPCW": "Alpha Capital Acquisition Company Warrant", "ASPN": "Aspen Aerogels Inc. Common Stock", "ASPS": "Altisource Portfolio Solutions S.A. Common Stock", "ASPU": "Aspen Group Inc. Common Stock", "ASR": "Grupo Aeroportuario del Sureste S.A. de C.V. Common Stock", "ASRT": "Assertio Holdings Inc. Common Stock", "ASRV": "AmeriServ Financial Inc. Common Stock", "ASTC": "Astrotech Corporation (DE) Common Stock", "ASTE": "Astec Industries Inc. Common Stock", "ASTR": "Astra Space Inc. Class A Common Stock ", "ASTRW": "Astra Space Inc. Warrant", "ASTS": "AST SpaceMobile Inc. Class A Common Stock", "ASTSW": "AST SpaceMobile Inc. Warrant", "ASUR": "Asure Software Inc Common Stock", "ASX": "ASE Technology Holding Co. Ltd. American Depositary Shares (each representing Two Common Shares) ", "ASXC": "Asensus Surgical Inc. Common Stock", "ASYS": "Amtech Systems Inc. Common Stock", "ASZ": "Austerlitz Acquisition Corporation II Class A Ordinary Shares", "ATA": "Americas Technology Acquisition Corp. Ordinary Shares", "ATAI": "ATAI Life Sciences N.V. Common Shares", "ATAQ": "Altimar Acquisition Corp. III Class A Ordinary Shares", "ATAX": "America First Multifamily Investors L.P. Beneficial Unit Certificates (BUCs) representing Limited Partnership Interests", "ATC": "Atotech Limited Common Shares", "ATCO": "Atlas Corp. Common Shares", "ATCO^D": "Atlas Corp. 7.95% Series D", "ATCO^H": "Atlas Corp. 7.875% Series H", "ATCO^I": "Atlas Corp. Series I Fixed-to-Floating ", "ATCOL": "Atlas Corp. 7.125% Notes due 2027", "ATCX": "Atlas Technical Consultants Inc. Class A Common Stock", "ATEC": "Alphatec Holdings Inc. Common Stock", "ATEN": "A10 Networks Inc. Common Stock", "ATER": "Aterian Inc. Common Stock", "ATEX": "Anterix Inc. Common Stock", "ATGE": "Adtalem Global Education Inc. Common Stock", "ATH": "Athene Holding Ltd. Class A Common Shares", "ATH^A": "Athene Holding Ltd. Depositary Shares Each Representing a 1/1000th Interest in a 6.35% Fixed-to-Floating Rate Perpetual Non-Cumulative Preference Share Series A", "ATH^B": "Athene Holding Ltd. Depositary Shares Each Representing a 1/1000th Interest in a 5.625% Fixed Rate Perpetual Non- Cumulative Preference Share Series B par value $1.00 per share", "ATH^C": "Athene Holding Ltd. Depositary Shares each representing a 1/1000th Interest in a Share of 6.375% Fixed-Rate Reset Perpetual Non-Cumulative Preference Shares Series C", "ATH^D": "Athene Holding Ltd. Depositary Shares Each Representing a 1/1000th Interest in a 4.875% Fixed-Rate Perpetual Non-Cumulative Preference Share Series D", "ATHA": "Athira Pharma Inc. Common Stock", "ATHE": "Alterity Therapeutics Limited American Depositary Shares", "ATHM": "Autohome Inc. American Depositary Shares each representing four class A ordinary shares.", "ATHN": "Athena Technology Acquisition Corp. Class A Common Stock", "ATHX": "Athersys Inc. Common Stock", "ATI": "Allegheny Technologies Incorporated Common Stock", "ATIF": "ATIF Holdings Limited Ordinary Shares", "ATIP": "ATI Physical Therapy Inc. Class A Common Stock", "ATKR": "Atkore Inc. 
Common Stock", "ATLC": "Atlanticus Holdings Corporation Common Stock", "ATLCP": "Atlanticus Holdings Corporation 7.625% Series B Cumulative Perpetual Preferred Stock no par value per share", "ATLO": "Ames National Corporation Common Stock", "ATMR": "Altimar Acquisition Corp. II Class A Ordinary Shares", "ATNF": "180 Life Sciences Corp. Common Stock", "ATNFW": "180 Life Sciences Corp. Warrant", "ATNI": "ATN International Inc. Common Stock", "ATNM": "Actinium Pharmaceuticals Inc. (Delaware) Common Stock", "ATNX": "Athenex Inc. Common Stock", "ATO": "Atmos Energy Corporation Common Stock", "ATOM": "Atomera Incorporated Common Stock", "ATOS": "Atossa Therapeutics Inc. Common Stock", "ATR": "AptarGroup Inc. Common Stock", "ATRA": "Atara Biotherapeutics Inc. Common Stock", "ATRC": "AtriCure Inc. Common Stock", "ATRI": "Atrion Corporation Common Stock", "ATRO": "Astronics Corporation Common Stock", "ATRS": "Antares Pharma Inc. Common Stock", "ATSG": "Air Transport Services Group Inc", "ATSPT": "Archimedes Tech SPAC Partners Co. Subunit", "ATSPU": "Archimedes Tech SPAC Partners Co. Unit", "ATSPW": "Archimedes Tech SPAC Partners Co. Warrant", "ATTO": "Atento S.A. Ordinary Shares", "ATUS": "Altice USA Inc. Class A Common Stock", "ATVC": "Tribe Capital Growth Corp I Class A common stock", "ATVCU": "Tribe Capital Growth Corp I Units", "ATVCW": "Tribe Capital Growth Corp I Warrant", "ATVI": "Activision Blizzard Inc. Common Stock", "ATXI": "Avenue Therapeutics Inc. Common Stock", "ATXS": "Astria Therapeutics Inc. Common Stock", "ATY": "AcuityAds Holdings Inc. Common Shares", "AU": "AngloGold Ashanti Limited Common Stock", "AUB": "Atlantic Union Bankshares Corporation Common Stock", "AUBAP": "Atlantic Union Bankshares Corporation Depositary Shares each representing a 1/400th ownership interest in a share of 6.875% Perpetual Non-Cumulative Preferred Stock Series A", "AUBN": "Auburn National Bancorporation Inc. Common Stock", "AUD": "Audacy Common Stock", "AUDC": "AudioCodes Ltd. Common Stock", "AUID": "Ipsidy Inc. Common Stock", "AUMN": "Golden Minerals Company Common Stock", "AUPH": "Aurinia Pharmaceuticals Inc Ordinary Shares", "AURC": "Aurora Acquisition Corp. Class A Ordinary Shares", "AURCU": "Aurora Acquisition Corp. Unit", "AURCW": "Aurora Acquisition Corp. Warrant", "AUS": "Austerlitz Acquisition Corporation I Class A Ordinary Shares", "AUTL": "Autolus Therapeutics plc American Depositary Share", "AUTO": "AutoWeb Inc. Common Stock", "AUUD": "Auddia Inc. Common Stock", "AUUDW": "Auddia Inc. Warrants", "AUVI": "Applied UV Inc. Common Stock", "AUVIP": "Applied UV Inc. 10.5% Series A Cumulative Perpetual Preferred Stock $0.0001 par value per share", "AUY": "Yamana Gold Inc. Ordinary Shares (Canada)", "AVA": "Avista Corporation Common Stock", "AVAH": "Aveanna Healthcare Holdings Inc. Common Stock", "AVAL": "Grupo Aval Acciones y Valores S.A. ADR (Each representing 20 preferred shares)", "AVAN": "Avanti Acquisition Corp. Class A Ordinary Shares", "AVAV": "AeroVironment Inc. Common Stock", "AVB": "AvalonBay Communities Inc. Common Stock", "AVCO": "Avalon GloboCare Corp. Common Stock", "AVCT": "American Virtual Cloud Technologies Inc. Common Stock ", "AVCTW": "American Virtual Cloud Technologies Inc. Warrant expiring 4/7/2025", "AVD": "American Vanguard Corporation Common Stock ($0.10 Par Value)", "AVDL": "Avadel Pharmaceuticals plc American Depositary Shares", "AVEO": "AVEO Pharmaceuticals Inc. Common Stock", "AVGO": "Broadcom Inc. Common Stock", "AVGOP": "Broadcom Inc. 
8.00% Mandatory Convertible Preferred Stock Series A", "AVGR": "Avinger Inc. Common Stock", "AVID": "Avid Technology Inc. Common Stock", "AVIR": "Atea Pharmaceuticals Inc. Common Stock", "AVK": "Advent Convertible and Income Fund", "AVLR": "Avalara Inc. Common Stock", "AVNS": "Avanos Medical Inc. Common Stock", "AVNT": "Avient Corporation Common Stock", "AVNW": "Aviat Networks Inc. Common Stock", "AVO": "Mission Produce Inc. Common Stock", "AVPT": "AvePoint Inc. Class A Common Stock", "AVPTW": "AvePoint Inc. Warrant", "AVRO": "AVROBIO Inc. Common Stock", "AVT": "Avnet Inc. Common Stock", "AVTE": "Aerovate Therapeutics Inc. Common Stock", "AVTR": "Avantor Inc. Common Stock", "AVTR^A": "Avantor Inc. Series A Mandatory Convertible Preferred Stock", "AVTX": "Avalo Therapeutics Inc. Common Stock", "AVXL": "Anavex Life Sciences Corp. Common Stock", "AVY": "Avery Dennison Corporation Common Stock", "AVYA": "Avaya Holdings Corp. Common Stock", "AWF": "Alliancebernstein Global High Income Fund", "AWH": "Aspira Women's Health Inc. Common Stock", "AWI": "Armstrong World Industries Inc Common Stock", "AWK": "American Water Works Company Inc. Common Stock", "AWP": "Aberdeen Global Premier Properties Fund Common Shares of Beneficial Interest", "AWR": "American States Water Company Common Stock", "AWRE": "Aware Inc. Common Stock", "AWX": "Avalon Holdings Corporation Common Stock", "AX": "Axos Financial Inc. Common Stock", "AXDX": "Accelerate Diagnostics Inc. Common Stock", "AXGN": "Axogen Inc. Common Stock", "AXL": "American Axle & Manufacturing Holdings Inc. Common Stock", "AXLA": "Axcella Health Inc. Common Stock", "AXNX": "Axonics Inc. Common Stock", "AXON": "Axon Enterprise Inc. Common Stock", "AXP": "American Express Company Common Stock", "AXR": "AMREP Corporation Common Stock", "AXS": "Axis Capital Holdings Limited Common Stock", "AXS^E": "Axis Capital Holdings Limited Depositary Shares each representing 1/100th interest in a share of a 5.50% Series E Preferred Shares", "AXSM": "Axsome Therapeutics Inc. Common Stock", "AXTA": "Axalta Coating Systems Ltd. Common Shares", "AXTI": "AXT Inc Common Stock", "AXU": "Alexco Resource Corp Common Shares (Canada)", "AY": "Atlantica Sustainable Infrastructure plc Ordinary Shares", "AYI": "Acuity Brands Inc. ", "AYLA": "Ayala Pharmaceuticals Inc. Common Stock", "AYRO": "AYRO Inc. Common Stock", "AYTU": "Aytu BioPharma Inc. Common Stock", "AYX": "Alteryx Inc. Class A Common Stock", "AZEK": "The AZEK Company Inc. Class A Common Stock", "AZN": "AstraZeneca PLC American Depositary Shares", "AZO": "AutoZone Inc. Common Stock", "AZPN": "Aspen Technology Inc. Common Stock", "AZRE": "Azure Power Global Limited Equity Shares", "AZUL": "Azul S.A. American Depositary Shares (each representing three preferred shares)", "AZYO": "Aziyo Biologics Inc. Class A Common Stock", "AZZ": "AZZ Inc.", "B": "Barnes Group Inc. 
Common Stock", "BA": "Boeing Company (The) Common Stock", "BABA": "Alibaba Group Holding Limited American Depositary Shares each representing eight Ordinary share", "BAC": "Bank of America Corporation Common Stock", "BAC^B": "Bank of America Corporation Depositary Shares each representing a 1/1000th interest in a share of 6.000% Non-Cumulative Preferred Stock Series GG", "BAC^E": "Bank of America Corporation Depositary Sh repstg 1/1000th Perp Pfd Ser E", "BAC^K": "Bank of America Corporation Depositary Shares each representing a 1/1000th interest in a share of 5.875% Non- Cumulative Preferred Stock Series HH", "BAC^L": "Bank of America Corporation Non Cumulative Perpetual Conv Pfd Ser L", "BAC^M": "Bank of America Corporation Depositary Shares each representing a 1/1000th interest in a share of 5.375% Non-Cumulative Preferred Stock Series KK", "BAC^N": "Bank of America Corporation Depositary shares each representing 1/1000th interest in a share of 5.000% Non-Cumulative Preferred Stock Series LL", "BAC^O": "Bank of America Corporation Depositary shares each representing 1/1000th interest in a share of 4.375% Non-Cumulative Preferred Stock Series NN", "BAC^P": "Bank of America Corporation Depositary Shares each representing a 1/1000th interest in a share of 4.125% Non-Cumulative Preferred Stock Series PP", "BAH": "Booz Allen Hamilton Holding Corporation Common Stock", "BAK": "Braskem SA ADR", "BALY": "Bally's Corporation Common Stock", "BAM": "Brookfield Asset Management Inc. Common Stock", "BAMH": "Brookfield Finance Inc. 4.625% Subordinated Notes due October 16 2080", "BAMI": "Brookfield Finance Inc. 4.50% Perpetual Subordinated Notes", "BAMR": "Brookfield Asset Management Reinsurance Partners Ltd. Class A Exchangeable Limited Voting Shares", "BANC": "Banc of California Inc. Common Stock", "BANC^E": "Banc of California Inc. Depositary Shares Each Representing a 1/40th Interest in a Share of 7.000% Non-Cumulative Perpetual Preferred Stock Series E", "BAND": "Bandwidth Inc. Class A Common Stock", "BANF": "BancFirst Corporation Common Stock", "BANFP": "BancFirst Corporation - BFC Capital Trust II Cumulative Trust Preferred Securities", "BANR": "Banner Corporation Common Stock", "BANX": "StoneCastle Financial Corp Common Stock", "BAOS": "Baosheng Media Group Holdings Limited Ordinary shares", "BAP": "Credicorp Ltd. Common Stock", "BARK": "The Original BARK Company Common Stock", "BASE": "Couchbase Inc. Common Stock", "BATL": "Battalion Oil Corporation Common Stock", "BATRA": "Liberty Media Corporation Series A Liberty Braves Common Stock", "BATRK": "Liberty Media Corporation Series C Liberty Braves Common Stock", "BAX": "Baxter International Inc. Common Stock", "BB": "BlackBerry Limited Common Stock", "BBAR": "Banco BBVA Argentina S.A. ADS", "BBBY": "Bed Bath & Beyond Inc. Common Stock", "BBCP": "Concrete Pumping Holdings Inc. Common Stock", "BBD": "Banco Bradesco Sa American Depositary Shares", "BBDC": "Barings BDC Inc. Common Stock", "BBDO": "Banco Bradesco Sa American Depositary Shares (each representing one Common Share)", "BBGI": "Beasley Broadcast Group Inc. Class A Common Stock", "BBI": "Brickell Biotech Inc. Common Stock", "BBIG": "Vinco Ventures Inc. Common Stock", "BBIO": "BridgeBio Pharma Inc. Common Stock", "BBL": "BHP Group PlcSponsored ADR", "BBN": "BlackRock Taxable Municipal Bond Trust Common Shares of Beneficial Interest", "BBQ": "BBQ Holdings Inc. Common Stock", "BBSI": "Barrett Business Services Inc. Common Stock", "BBU": "Brookfield Business Partners L.P. 
Limited Partnership Units ", "BBVA": "Banco Bilbao Vizcaya Argentaria S.A. Common Stock", "BBW": "Build-A-Bear Workshop Inc. Common Stock", "BBWI": "Bath & Body Works Inc.", "BBY": "Best Buy Co. Inc. Common Stock", "BC": "Brunswick Corporation Common Stock", "BC^A": "Brunswick Corporation 6.500% Senior Notes due 2048", "BC^B": "Brunswick Corporation 6.625% Senior Notes due 2049", "BC^C": "Brunswick Corporation 6.375% Notes due 2049", "BCAB": "BioAtla Inc. Common Stock", "BCAC": "Brookline Capital Acquisition Corp. Common Stock", "BCACU": "Brookline Capital Acquisition Corp. Units", "BCACW": "Brookline Capital Acquisition Corp. Warrant", "BCAT": "BlackRock Capital Allocation Trust Common Shares of Beneficial Interest", "BCBP": "BCB Bancorp Inc. (NJ) Common Stock", "BCC": "Boise Cascade L.L.C. Common Stock", "BCDA": "BioCardia Inc. Common Stock", "BCDAW": "BioCardia Inc. Warrant", "BCE": "BCE Inc. Common Stock", "BCEI": "Bonanza Creek Energy Inc. Common Stock", "BCEL": "Atreca Inc. Class A Common Stock", "BCH": "Banco De Chile Banco De Chile ADS", "BCLI": "Brainstorm Cell Therapeutics Inc. Common Stock", "BCML": "BayCom Corp Common Stock", "BCO": "Brinks Company (The) Common Stock", "BCOR": "Blucora Inc. Common Stock", "BCOV": "Brightcove Inc. Common Stock", "BCOW": "1895 Bancorp of Wisconsin Inc. (MD) Common Stock", "BCPC": "Balchem Corporation Common Stock", "BCRX": "BioCryst Pharmaceuticals Inc. Common Stock", "BCS": "Barclays PLC Common Stock", "BCSF": "Bain Capital Specialty Finance Inc. Common Stock", "BCTX": "BriaCell Therapeutics Corp. Common Shares", "BCTXW": "BriaCell Therapeutics Corp. Warrant", "BCV": "Bancroft Fund Ltd.", "BCV^A": "Bancroft Fund Limited 5.375% Series A Cumulative Preferred Shares", "BCX": "BlackRock Resources Common Shares of Beneficial Interest", "BCYC": "Bicycle Therapeutics plc American Depositary Shares", "BCYP": "Big Cypress Acquisition Corp. Common stock", "BCYPU": "Big Cypress Acquisition Corp. Unit", "BCYPW": "Big Cypress Acquisition Corp. Warrant", "BDC": "Belden Inc Common Stock", "BDJ": "Blackrock Enhanced Equity Dividend Trust", "BDL": "Flanigan's Enterprises Inc. Common Stock", "BDN": "Brandywine Realty Trust Common Stock", "BDR": "Blonder Tongue Laboratories Inc. Common Stock", "BDSI": "BioDelivery Sciences International Inc. Common Stock", "BDSX": "Biodesix Inc. Common Stock", "BDTX": "Black Diamond Therapeutics Inc. Common Stock", "BDX": "Becton Dickinson and Company Common Stock", "BDXB": "Becton Dickinson and Company Depositary Shares each Representing a 1/20th Interest in a Share of 6.00% Mandatory Convertible Preferred Stock Series B", "BE": "Bloom Energy Corporation Class A Common Stock", "BEAM": "Beam Therapeutics Inc. Common Stock", "BECN": "Beacon Roofing Supply Inc. Common Stock", "BEDU": "Bright Scholar Education Holdings Limited American Depositary Shares each representing one Class A Ordinary Share", "BEEM": "Beam Global Common Stock", "BEEMW": "Beam Global Warrant", "BEKE": "KE Holdings Inc American Depositary Shares (each representing three Class A Ordinary Shares)", "BELFA": "Bel Fuse Inc. Class A Common Stock", "BELFB": "Bel Fuse Inc. Class B Common Stock", "BEN": "Franklin Resources Inc. Common Stock", "BENE": "Benessere Capital Acquisition Corp. Class A Common Stock", "BENER": "Benessere Capital Acquisition Corp. Right", "BENEU": "Benessere Capital Acquisition Corp. Unit", "BENEW": "Benessere Capital Acquisition Corp. Warrant", "BEP": "Brookfield Renewable Partners L.P. ", "BEP^A": "Brookfield Renewable Partners L.P. 
5.25% Class A Preferred Limited Partnership Units Series 17", "BEPC": "Brookfield Renewable Corporation Class A Subordinate Voting Shares ", "BEPH": "Brookfield BRP Holdings (Canada) Inc. 4.625% Perpetual Subordinated Notes", "BERY": "Berry Global Group Inc. Common Stock", "BEST": "BEST Inc. American Depositary Shares each representing one Class A Ordinary Share", "BF/A": "Brown Forman Corporation", "BF/B": "Brown Forman Corporation", "BFAM": "Bright Horizons Family Solutions Inc. Common Stock", "BFC": "Bank First Corporation Common Stock", "BFI": "BurgerFi International Inc. Common Stock ", "BFIIW": "BurgerFi International Inc. Warrant ", "BFIN": "BankFinancial Corporation Common Stock", "BFK": "BlackRock Municipal Income Trust", "BFLY": "Butterfly Network Inc. Class A Common Stock", "BFRA": "Biofrontera AG American Depositary Shares", "BFS": "Saul Centers Inc. Common Stock", "BFS^D": "Saul Centers Inc. Depositary Shares each representing 1/100th of a share of 6.125% Series D Cumulative Redeemable Preferred Stock", "BFS^E": "Saul Centers Inc. Depositary shares each representing a 1/100th fractional interest in a share of 6.000% Series E Cumulative Redeemable Preferred Stock", "BFST": "Business First Bancshares Inc. Common Stock", "BFZ": "BlackRock California Municipal Income Trust", "BG": "Bunge Limited Bunge Limited", "BGB": "Blackstone Strategic Credit Fund Common Shares", "BGCP": "BGC Partners Inc Class A Common Stock", "BGFV": "Big 5 Sporting Goods Corporation Common Stock", "BGH": "Barings Global Short Duration High Yield Fund Common Shares of Beneficial Interests", "BGI": "Birks Group Inc. Common Stock", "BGIO": "BlackRock 2022 Global Income Opportunity Trust Common Shares of Beneficial Interest", "BGNE": "BeiGene Ltd. American Depositary Shares", "BGR": "BlackRock Energy and Resources Trust", "BGRY": "Berkshire Grey Inc. Class A Common Stock", "BGRYW": "Berkshire Grey Inc. Warrant", "BGS": "B&G Foods Inc. B&G Foods Inc. Common Stock", "BGSF": "BGSF Inc. Common Stock", "BGSX": "Build Acquisition Corp. Class A Common Stock", "BGT": "BlackRock Floating Rate Income Trust", "BGX": "Blackstone Long Short Credit Income Fund Common Shares", "BGY": "Blackrock Enhanced International Dividend Trust", "BH": "Biglari Holdings Inc. Class B Common Stock", "BHAT": "Blue Hat Interactive Entertainment Technology Ordinary Shares", "BHB": "Bar Harbor Bankshares Inc. Common Stock", "BHC": "Bausch Health Companies Inc. Common Stock", "BHE": "Benchmark Electronics Inc. Common Stock", "BHF": "Brighthouse Financial Inc. Common Stock", "BHFAL": "Brighthouse Financial Inc. 6.25% Junior Subordinated Debentures due 2058", "BHFAN": "Brighthouse Financial Inc. Depositary shares each representing a 1/1000th interest in a share of 5.375% Non-Cumulative Preferred Stock Series C", "BHFAO": "Brighthouse Financial Inc. Depositary Shares 6.75% Non-Cumulative Preferred Stock Series B", "BHFAP": "Brighthouse Financial Inc. Depositary Shares 6.6% Non-Cumulative Preferred Stock Series A", "BHG": "Bright Health Group Inc. Common Stock", "BHIL": "Benson Hill Inc. Common Stock", "BHK": "Blackrock Core Bond Trust Blackrock Core Bond Trust", "BHLB": "Berkshire Hills Bancorp Inc. Common Stock", "BHP": "BHP Group Limited American Depositary Shares (Each representing two Ordinary Shares)", "BHR": "Braemar Hotels & Resorts Inc. Common Stock", "BHR^B": "Braemar Hotels & Resorts Inc. 5.50% Series B Cumulative Convertible Preferred Stock par value $0.01 per share", "BHR^D": "Braemar Hotels & Resorts Inc. 
8.25% Series D Cumulative Preferred Stock par value $0.01 per share", "BHSE": "Bull Horn Holdings Corp. Ordinary Shares", "BHSEU": "Bull Horn Holdings Corp. Unit", "BHSEW": "Bull Horn Holdings Corp. Warrants", "BHTG": "BioHiTech Global Inc. Common Stock", "BHV": "BlackRock Virginia Municipal Bond Trust", "BHVN": "Biohaven Pharmaceutical Holding Company Ltd. Common Shares", "BIDU": "Baidu Inc. ADS", "BIF": "Boulder Growth & Income Fund Inc.", "BIG": "Big Lots Inc. Common Stock", "BIGC": "BigCommerce Holdings Inc. Series 1 Common Stock", "BIGZ": "BlackRock Innovation and Growth Trust Common Shares of Beneficial Interest", "BIIB": "Biogen Inc. Common Stock", "BILI": "Bilibili Inc. American Depositary Shares", "BILL": "Bill.com Holdings Inc. Common Stock", "BIMI": "BIMI International Medical Inc. Common Stock", "BIO": "Bio-Rad Laboratories Inc. Class A Common Stock", "BIO/B": "Bio-Rad Laboratories Inc.", "BIOC": "Biocept Inc. Common Stock", "BIOL": "Biolase Inc. Common Stock", "BIOT": "Biotech Acquisition Company Class A Ordinary Shares", "BIOTU": "Biotech Acquisition Company Unit", "BIOTW": "Biotech Acquisition Company Warrant", "BIOX": "Bioceres Crop Solutions Corp. Ordinary Shares", "BIP": "Brookfield Infrastructure Partners LP Limited Partnership Units", "BIP^A": "Brookfield Infrastructure Partners LP 5.125% Class A Preferred Limited Partnership Units Series 13", "BIP^B": "Brookfield Infrastructure Partners LP 5.000% Class A Preferred Limited Partnership Units Series 14", "BIPC": "Brookfield Infrastructure Partners LP Class A Subordinate Voting Shares ", "BIPH": "Brookfield Infrastructure Corporation 5.000% Subordinated Notes due 2081", "BIT": "BlackRock Multi-Sector Income Trust Common Shares of Beneficial Interest", "BITE": "Bite Acquisition Corp. Common Stock", "BITF": "Bitfarms Ltd. Common Stock", "BIVI": "BioVie Inc. Class A Common Stock", "BJ": "BJ's Wholesale Club Holdings Inc. Common Stock", "BJRI": "BJ's Restaurants Inc. Common Stock", "BK": "The Bank of New York Mellon Corporation Common Stock", "BKCC": "BlackRock Capital Investment Corporation Common Stock", "BKD": "Brookdale Senior Living Inc. Common Stock", "BKE": "Buckle Inc. (The) Common Stock", "BKEP": "Blueknight Energy Partners L.P. Common Units", "BKEPP": "Blueknight Energy Partners L.P. Series A Preferred Units", "BKH": "Black Hills Corporation Common Stock", "BKI": "Black Knight Inc. Common Stock ", "BKN": "BlackRock Investment Quality Municipal Trust Inc. (The)", "BKNG": "Booking Holdings Inc. Common Stock", "BKR": "Baker Hughes Company Class A Common Stock", "BKSC": "Bank of South Carolina Corp. Common Stock", "BKSY": "BlackSky Technology Inc. Class A Common Stock", "BKT": "BlackRock Income Trust Inc. (The)", "BKTI": "BK Technologies Corporation Common Stock", "BKU": "BankUnited Inc. Common Stock", "BKYI": "BIO-key International Inc. Common Stock", "BL": "BlackLine Inc. Common Stock", "BLBD": "Blue Bird Corporation Common Stock", "BLCM": "Bellicum Pharmaceuticals Inc. Common Stock", "BLCT": "BlueCity Holdings Limited American Depositary Shares", "BLD": "TopBuild Corp. Common Stock", "BLDE": "Blade Air Mobility Inc. Class A Common Stock", "BLDEW": "Blade Air Mobility Inc. Warrants", "BLDP": "Ballard Power Systems Inc. Common Shares", "BLDR": "Builders FirstSource Inc. Common Stock", "BLE": "BlackRock Municipal Income Trust II", "BLFS": "BioLife Solutions Inc. Common Stock", "BLFY": "Blue Foundry Bancorp Common Stock", "BLI": "Berkeley Lights Inc. Common Stock", "BLIN": "Bridgeline Digital Inc. 
Common Stock", "BLK": "BlackRock Inc. Common Stock", "BLKB": "Blackbaud Inc. Common Stock", "BLL": "Ball Corporation Common Stock", "BLMN": "Bloomin' Brands Inc. Common Stock", "BLND": "Blend Labs Inc. Class A Common Stock", "BLNG": "Belong Acquisition Corp. Class A Common Stock", "BLNGU": "Belong Acquisition Corp. Units", "BLNGW": "Belong Acquisition Corp. Warrant", "BLNK": "Blink Charging Co. Common Stock", "BLNKW": "Blink Charging Co. Warrant", "BLPH": "Bellerophon Therapeutics Inc. Common Stock", "BLRX": "BioLineRx Ltd. American Depositary Shares", "BLSA": "BCLS Acquisition Corp. Class A Ordinary Shares", "BLTS": "Bright Lights Acquisition Corp. Class A Common Stock", "BLTSU": "Bright Lights Acquisition Corp. Unit", "BLTSW": "Bright Lights Acquisition Corp. Warrant", "BLU": "BELLUS Health Inc. Common Shares", "BLUA": "BlueRiver Acquisition Corp. Class A Ordinary Shares", "BLUE": "bluebird bio Inc. Common Stock", "BLW": "Blackrock Limited Duration Income Trust", "BLX": "Banco Latinoamericano de Comercio Exterior S.A.", "BMA": "Banco Macro S.A. ADR (representing Ten Class B Common Shares)", "BMBL": "Bumble Inc. Class A Common Stock", "BME": "Blackrock Health Sciences Trust", "BMEA": "Biomea Fusion Inc. Common Stock", "BMEZ": "BlackRock Health Sciences Trust II Common Shares of Beneficial Interest", "BMI": "Badger Meter Inc. Common Stock", "BML^G": "Bank of America Corporation Bank of America Corporation Depositary Shares (Each representing a 1/1200th interest in a share of Floating Rate Non-Cumulative Preferred Stock Series 1)", "BML^H": "Bank of America Corporation Bank of America Corporation Depositary Shares (Each representing a 1/1200th interest in a Share of Floating Rate Non-Cumulative Preferred Stock Series 2)", "BML^J": "Bank of America Corporation Bank of America Corporation Depositary Shares (Each representing a 1/1200th interest in a Share of Floating Rate Non-Cumulative Preferred Stock Series 4)", "BML^L": "Bank of America Corporation Bank of America Corporation Depositary Shares (Each representing a 1/1200th Interest in a Share of Floating Rate Non-Cumulative Preferred Stock Series 5)", "BMO": "Bank Of Montreal Common Stock", "BMRA": "Biomerica Inc. Common Stock", "BMRC": "Bank of Marin Bancorp Common Stock", "BMRN": "BioMarin Pharmaceutical Inc. Common Stock", "BMTC": "Bryn Mawr Bank Corporation Common Stock", "BMTX": "BM Technologies Inc. Common Stock", "BMY": "Bristol-Myers Squibb Company Common Stock", "BNED": "Barnes & Noble Education Inc Common Stock", "BNFT": "Benefitfocus Inc. Common Stock", "BNGO": "Bionano Genomics Inc. Common Stock", "BNGOW": "Bionano Genomics Inc. Warrant", "BNIXU": "Bannix Acquisition Corp. Unit", "BNL": "Broadstone Net Lease Inc. Common Stock", "BNNRU": "Banner Acquisition Corp. Units", "BNR": "Burning Rock Biotech Limited American Depositary Shares", "BNS": "Bank Nova Scotia Halifax Pfd 3 Ordinary Shares", "BNSO": "Bonso Electronics International Inc. Common Stock", "BNTC": "Benitec Biopharma Inc. Common Stock", "BNTX": "BioNTech SE American Depositary Share", "BNY": "BlackRock New York Municipal Income Trust", "BOAC": "Bluescape Opportunities Acquisition Corp. Class A Ordinary Shares", "BOAS": "BOA Acquisition Corp. Class A Common Stock", "BODY": "The Beachbody Company Inc. 
Class A Common Stock", "BOE": "Blackrock Enhanced Global Dividend Trust Common Shares of Beneficial Interest", "BOH": "Bank of Hawaii Corporation Common Stock", "BOH^A": "Bank of Hawaii Corporation Depositary Shares Each Representing a 1/40th Interest in a Share of 4.375% Fixed Rate Non-Cumulative Perpetual Preferred Stock Series A", "BOKF": "BOK Financial Corporation Common Stock", "BOLT": "Bolt Biotherapeutics Inc. Common Stock", "BOMN": "Boston Omaha Corporation Class A Common Stock", "BON": "Bon Natural Life Limited Ordinary Shares", "BOOM": "DMC Global Inc. Common Stock", "BOOT": "Boot Barn Holdings Inc. Common Stock", "BORR": "Borr Drilling Limited Common Shares", "BOSC": "B.O.S. Better Online Solutions Common Stock", "BOTJ": "Bank of the James Financial Group Inc. Common Stock", "BOWX": "BowX Acquisition Corp. Class A Common Stock", "BOWXU": "BowX Acquisition Corp. Unit", "BOWXW": "BowX Acquisition Corp. Warrant", "BOX": "Box Inc. Class A Common Stock", "BOXL": "Boxlight Corporation Class A Common Stock", "BP": "BP p.l.c. Common Stock", "BPMC": "Blueprint Medicines Corporation Common Stock", "BPMP": "BP Midstream Partners LP Common Units representing Limited Partner Interests", "BPOP": "Popular Inc. Common Stock", "BPOPM": "Popular Inc. Popular Capital Trust II - 6.125% Cumulative Monthly Income Trust Preferred Securities", "BPOPN": "Popular Inc. 6.70% Cumulative Monthly Income Trust Preferred Securities", "BPRN": "The Bank of Princeton Common Stock", "BPT": "BP Prudhoe Bay Royalty Trust Common Stock", "BPTH": "Bio-Path Holdings Inc. Common Stock", "BPTS": "Biophytis SA American Depositary Share", "BPYPM": "Brookfield Property Partners L.P. 6.25% Class A Cumulative Redeemable Preferred Units Series 1", "BPYPN": "Brookfield Property Partners L.P. 5.750% Class A Cumulative Redeemable Perpetual Preferred Units Series 3", "BPYPO": "Brookfield Property Partners L.P. 6.375% Class A Cumulative Redeemable Perpetual Preferred Units Series 2", "BPYPP": "Brookfield Property Partners L.P. 6.50% Class A Cumulative Redeemable Perpetual Preferred Units", "BQ": "Boqii Holding Limited American Depositary Shares representing Class A Ordinary Shares", "BR": "Broadridge Financial Solutions Inc.Common Stock", "BRAG": "Bragg Gaming Group Inc. Common Shares", "BRBR": "BellRing Brands Inc. Class A Common Stock", "BRBS": "Blue Ridge Bankshares Inc. Common Stock", "BRC": "Brady Corporation Common Stock", "BRCN": "Burcon NutraScience Corp. Common Stock", "BRDG": "Bridge Investment Group Holdings Inc. Class A Common Stock", "BREZ": "Breeze Holdings Acquisition Corp. Common Stock", "BREZR": "Breeze Holdings Acquisition Corp. Right", "BREZW": "Breeze Holdings Acquisition Corp. Warrant", "BRFS": "BRF S.A.", "BRG": "Bluerock Residential Growth REIT Inc. Class A Common Stock", "BRG^C": "Bluerock Residential Growth REIT Inc. 7.625% Series C Cumulative Redeemable Preferred Stock", "BRG^D": "Bluerock Residential Growth REIT Inc. 7.125% Series D Cumulative Preferred Stock ($0.01 par value per share)", "BRID": "Bridgford Foods Corporation Common Stock", "BRIV": "B. Riley Principal 250 Merger Corp. Class A common stock", "BRIVU": "B. Riley Principal 250 Merger Corp. Units", "BRIVW": "B. Riley Principal 250 Merger Corp. Warrant", "BRK/A": "Berkshire Hathaway Inc.", "BRK/B": "Berkshire Hathaway Inc.", "BRKL": "Brookline Bancorp Inc. 
Common Stock", "BRKR": "Bruker Corporation Common Stock", "BRKS": "Brooks Automation Inc.", "BRLI": "Brilliant Acquisition Corporation Ordinary Shares", "BRLIR": "Brilliant Acquisition Corporation Rights", "BRLIU": "Brilliant Acquisition Corporation Unit", "BRLIW": "Brilliant Acquisition Corporation Warrants", "BRLT": "Brilliant Earth Group Inc. Class A Common Stock", "BRMK": "Broadmark Realty Capital Inc. Common Stock", "BRN": "Barnwell Industries Inc. Common Stock", "BRO": "Brown & Brown Inc. Common Stock", "BROG": "Brooge Energy Limited Ordinary Shares", "BROGW": "Brooge Holdings Limited Warrant expiring 12/20/2024", "BROS": "Dutch Bros Inc. Class A Common Stock", "BRP": "BRP Group Inc. (Insurance Company) Class A Common Stock", "BRPM": "B. Riley Principal 150 Merger Corp. Class A Common Stock", "BRPMW": "B. Riley Principal 150 Merger Corp. Warrant", "BRQS": "Borqs Technologies Inc. Ordinary Shares", "BRSP": "BrightSpire Capital Inc. Class A Common Stock", "BRT": "BRT Apartments Corp. (MD) Common Stock", "BRW": "Saba Capital Income & Opportunities Fund SBI", "BRX": "Brixmor Property Group Inc. Common Stock", "BRY": "Berry Corporation (bry) Common Stock", "BSA": "BrightSphere Investment Group Inc. 5.125% Notes due 2031", "BSAC": "Banco Santander - Chile ADS", "BSAQ": "Black Spade Acquisition Co Class A Ordinary Shares", "BSBK": "Bogota Financial Corp. Common Stock", "BSBR": "Banco Santander Brasil SA American Depositary Shares each representing one unit", "BSET": "Bassett Furniture Industries Incorporated Common Stock", "BSGA": "Blue Safari Group Acquisition Corp. Class A Ordinary Share", "BSGAR": "Blue Safari Group Acquisition Corp. Right", "BSGAU": "Blue Safari Group Acquisition Corp. Unit", "BSGM": "BioSig Technologies Inc. Common Stock", "BSIG": "BrightSphere Investment Group Inc. Common Stock", "BSKY": "Big Sky Growth Partners Inc. Class A Common Stock", "BSKYU": "Big Sky Growth Partners Inc. Unit", "BSKYW": "Big Sky Growth Partners Inc. Warrant", "BSL": "Blackstone Senior Floating Rate Term Fund Common Shares of Beneficial Interest", "BSM": "Black Stone Minerals L.P. Common units representing limited partner interests", "BSMX": "Banco Santander Mexico S.A. Institucion de Banca Multiple Grupo Financiero Santander Mexico", "BSN": "Broadstone Acquisition Corp. Class A Ordinary Shares", "BSQR": "BSQUARE Corporation Common Stock", "BSRR": "Sierra Bancorp Common Stock", "BST": "BlackRock Science and Technology Trust Common Shares of Beneficial Interest", "BSTZ": "BlackRock Science and Technology Trust II Common Shares of Beneficial Interest", "BSVN": "Bank7 Corp. Common stock", "BSX": "Boston Scientific Corporation Common Stock", "BSX^A": "Boston Scientific Corporation 5.50% Mandatory Convertible Preferred Stock Series A", "BSY": "Bentley Systems Incorporated Class B Common Stock", "BTA": "BlackRock Long-Term Municipal Advantage Trust BlackRock Long-Term Municipal Advantage Trust Common Shares of Beneficial Interest", "BTAI": "BioXcel Therapeutics Inc. Common Stock", "BTAQ": "Burgundy Technology Acquisition Corporation Class A Ordinary shares", "BTAQU": "Burgundy Technology Acquisition Corporation Unit", "BTAQW": "Burgundy Technology Acquisition Corporation Warrant", "BTB": "Bit Brother Limited Ordinary Shares", "BTBT": "Bit Digital Inc. Ordinary Shares", "BTCM": "BIT Mining Limited ADS", "BTCS": "BTCS Inc. Common Stock", "BTCY": "Biotricity Inc. Common Stock", "BTG": "B2Gold Corp Common shares (Canada)", "BTI": "British American Tobacco Industries p.l.c. 
Common Stock ADR", "BTN": "Ballantyne Strong Inc. Common Stock", "BTNB": "Bridgetown 2 Holdings Limited Class A Ordinary Shares", "BTO": "John Hancock Financial Opportunities Fund Common Stock", "BTRS": "BTRS Holdings Inc. Class 1 Common Stock", "BTRSW": "BTRS Holdings Inc. Warrants", "BTT": "BlackRock Municipal 2030 Target Term Trust", "BTTR": "Better Choice Company Inc. Common Stock", "BTU": "Peabody Energy Corporation Common Stock ", "BTWN": "Bridgetown Holdings Limited Class A Ordinary Shares", "BTWNU": "Bridgetown Holdings Limited Units", "BTWNW": "Bridgetown Holdings Limited Warrants", "BTX": "Brooklyn ImmunoTherapeutics Inc. Common Stock", "BTZ": "BlackRock Credit Allocation Income Trust", "BUD": "Anheuser-Busch Inbev SA Sponsored ADR (Belgium)", "BUI": "BlackRock Utility Infrastructure & Power Opportunities Trust", "BUR": "Burford Capital Limited Ordinary Shares", "BURL": "Burlington Stores Inc. Common Stock", "BUSE": "First Busey Corporation Class A Common Stock", "BV": "BrightView Holdings Inc. Common Stock", "BVH": "Bluegreen Vacations Holding Corporation Class A Common Stock", "BVN": "Buenaventura Mining Company Inc.", "BVS": "Bioventus Inc. Class A Common Stock", "BVXV": "BiondVax Pharmaceuticals Ltd. American Depositary Shares", "BW": "Babcock & Wilcox Enterprises Inc. Common Stock", "BW^A": "Babcock & Wilcox Enterprises Inc. 7.75% Series A Cumulative Perpetual Preferred Stock", "BWA": "BorgWarner Inc. Common Stock", "BWAC": "Better World Acquisition Corp. Common Stock", "BWACU": "Better World Acquisition Corp. Unit", "BWACW": "Better World Acquisition Corp. Warrants", "BWAY": "BrainsWay Ltd. American Depositary Shares", "BWB": "Bridgewater Bancshares Inc. Common Stock", "BWBBP": "Bridgewater Bancshares Inc. Depositary Shares Each Representing a 1/100th Interest in a Share of 5.875% Non-Cumulative Perpetual Preferred Stock Series A", "BWCAU": "Blue Whale Acquisition Corp I Unit", "BWEN": "Broadwind Inc. Common Stock", "BWFG": "Bankwell Financial Group Inc. Common Stock", "BWG": "BrandywineGLOBAL Global Income Opportunities Fund Inc.", "BWMN": "Bowman Consulting Group Ltd. Common Stock", "BWMX": "Betterware de Mexico S.A.B. de C.V. Ordinary Shares", "BWSN": "Babcock & Wilcox Enterprises Inc. 8.125% Senior Notes due 2026", "BWXT": "BWX Technologies Inc. Common Stock", "BX": "Blackstone Inc. Common Stock", "BXC": "Bluelinx Holdings Inc. Common Stock", "BXMT": "Blackstone Mortgage Trust Inc. Common Stock", "BXMX": "Nuveen S&P 500 Buy-Write Income Fund Common Shares of Beneficial Interest", "BXP": "Boston Properties Inc. Common Stock", "BXRX": "Baudax Bio Inc. Common Stock", "BXS": "BancorpSouth Bank Common Stock", "BXS^A": "BancorpSouth Bank 5.50% Series A Non-Cumulative Perpetual Preferred Stock", "BY": "Byline Bancorp Inc. Common Stock", "BYD": "Boyd Gaming Corporation Common Stock", "BYFC": "Broadway Financial Corporation Common Stock", "BYM": "Blackrock Municipal Income Quality Trust Common Shares of Beneficial Interest", "BYND": "Beyond Meat Inc. Common Stock", "BYRN": "Byrna Technologies Inc. Common Stock", "BYSI": "BeyondSpring Inc. Ordinary Shares", "BYTS": "BYTE Acquisition Corp. Class A Ordinary Shares", "BYTSU": "BYTE Acquisition Corp. Units", "BYTSW": "BYTE Acquisition Corp. Warrants", "BZ": "KANZHUN LIMITED American Depository Shares", "BZH": "Beazer Homes USA Inc. Common Stock", "BZUN": "Baozun Inc. American Depositary Shares", "C": "Citigroup Inc. Common Stock", "C^J": "Citigroup Inc. Dep Shs Repstg 1/1000 Pfd Ser J Fixed/Fltg", "C^K": "Citigroup Inc. 
Dep Shs Repstg 1/1000th Pfd Ser K", "C^N": "Citigroup Capital XIII 7.875% Fixed rate Floating Rate trust Preferred Securities (TruPS)", "CAAP": "Corporacion America Airports SA Common Shares", "CAAS": "China Automotive Systems Inc. Common Stock", "CABA": "Cabaletta Bio Inc. Common Stock", "CABO": "Cable One Inc. Common Stock", "CAC": "Camden National Corporation Common Stock", "CACC": "Credit Acceptance Corporation Common Stock", "CACI": "CACI International Inc. Class A Common Stock", "CADE": "Cadence Bancorporation Class A Common Stock", "CADL": "Candel Therapeutics Inc. Common Stock", "CAE": "CAE Inc. Ordinary Shares", "CAF": "Morgan Stanley China A Share Fund Inc. Common Stock", "CAG": "ConAgra Brands Inc. Common Stock", "CAH": "Cardinal Health Inc. Common Stock", "CAI": "CAI International Inc. Common Stock", "CAI^A": "CAI International Inc. 8.50% Series A Fixed-to-Floating Rate Cumulative Redeemable Perpetual Preferred Stock", "CAI^B": "CAI International Inc. 8.50% Series B Fixed-to-Floating Rate Cumulative Redeemable Perpetual Preferred Stock", "CAJ": "Canon Inc. American Depositary Shares", "CAKE": "Cheesecake Factory Incorporated (The) Common Stock", "CAL": "Caleres Inc. Common Stock", "CALA": "Calithera Biosciences Inc. Common Stock", "CALB": "California BanCorp Common Stock", "CALM": "Cal-Maine Foods Inc. Common Stock", "CALT": "Calliditas Therapeutics AB American Depositary Shares", "CALX": "Calix Inc Common Stock", "CAMP": "CalAmp Corp. Common Stock", "CAMT": "Camtek Ltd. Ordinary Shares", "CAN": "Canaan Inc. American Depositary Shares", "CANF": "Can-Fite Biopharma Ltd Sponsored ADR (Israel)", "CANG": "Cango Inc. American Depositary Shares each representing two (2) Class A Ordinary Shares", "CANO": "Cano Health Inc. Class A Common Stock", "CAPL": "CrossAmerica Partners LP Common Units representing limited partner interests", "CAPR": "Capricor Therapeutics Inc. Common Stock", "CAR": "Avis Budget Group Inc. Common Stock", "CARA": "Cara Therapeutics Inc. Common Stock", "CARE": "Carter Bankshares Inc. Common Stock", "CARG": "CarGurus Inc. Class A Common Stock ", "CARR": "Carrier Global Corporation Common Stock ", "CARS": "Cars.com Inc. Common Stock ", "CARV": "Carver Bancorp Inc. Common Stock", "CAS": "Cascade Acquisition Corp. Class A Common Stock", "CASA": "Casa Systems Inc. Common Stock", "CASH": "Meta Financial Group Inc. Common Stock", "CASI": "CASI Pharmaceuticals Inc. Common Stock", "CASS": "Cass Information Systems Inc Common Stock", "CASY": "Casey's General Stores Inc. Common Stock", "CAT": "Caterpillar Inc. Common Stock", "CATC": "Cambridge Bancorp Common Stock", "CATO": "Cato Corporation (The) Class A Common Stock", "CATY": "Cathay General Bancorp Common Stock", "CB": "Chubb Limited Common Stock", "CBAH": "CBRE Acquisition Holdings Inc. Class A Common Stock", "CBAN": "Colony Bankcorp Inc. Common Stock", "CBAT": "CBAK Energy Technology Inc. Common Stock", "CBAY": "CymaBay Therapeutics Inc. Common Stock", "CBD": "Companhia Brasileira de Distribuicao American Depsitary Shares; each representing one Common Share", "CBFV": "CB Financial Services Inc. Common Stock", "CBH": "Virtus AllianzGI Convertible & Income 2024 Target Term Fund Common Shares of Beneficial Interest", "CBIO": "Catalyst Biosciences Inc. Common Stock", "CBMB": "CBM Bancorp Inc. Common Stock", "CBNK": "Capital Bancorp Inc. Common Stock", "CBOE": "Cboe Global Markets Inc. 
Common Stock", "CBRE": "CBRE Group Inc Common Stock Class A", "CBRL": "Cracker Barrel Old Country Store Inc Common Stock", "CBSH": "Commerce Bancshares Inc. Common Stock", "CBT": "Cabot Corporation Common Stock", "CBTX": "CBTX Inc. Common Stock", "CBU": "Community Bank System Inc. Common Stock", "CBZ": "CBIZ Inc. Common Stock", "CC": "Chemours Company (The) Common Stock", "CCAC": "CITIC Capital Acquisition Corp. Class A Ordinary Shares", "CCAIU": "Cascadia Acquisition Corp. Unit", "CCAP": "Crescent Capital BDC Inc. Common stock", "CCB": "Coastal Financial Corporation Common Stock", "CCBG": "Capital City Bank Group Common Stock", "CCCC": "C4 Therapeutics Inc. Common Stock", "CCCS": "CCC Intelligent Solutions Holdings Inc. Common Stock", "CCD": "Calamos Dynamic Convertible & Income Fund Common Stock", "CCEL": "Cryo-Cell International Inc. Common Stock", "CCEP": "Coca-Cola Europacific Partners plc Ordinary Shares", "CCF": "Chase Corporation Common Stock", "CCI": "Crown Castle International Corp. (REIT) Common Stock", "CCJ": "Cameco Corporation Common Stock", "CCK": "Crown Holdings Inc.", "CCL": "Carnival Corporation Common Stock", "CCLP": "CSI Compressco LP Common Units", "CCM": "Concord Medical Services Holdings Limited ADS (Each represents three ordinary shares)", "CCMP": "CMC Materials Inc. Common Stock", "CCNC": "Code Chain New Continent Limited Common Stock", "CCNE": "CNB Financial Corporation Common Stock", "CCNEP": "CNB Financial Corporation Depositary Shares each representing a 1/40th ownership interest in a share of 7.125% Series A Fixed-Rate Non-Cumulative Perpetual Preferred Stock", "CCO": "Clear Channel Outdoor Holdings Inc. Common Stock", "CCOI": "Cogent Communications Holdings Inc.", "CCRN": "Cross Country Healthcare Inc. Common Stock $0.0001 Par Value", "CCS": "Century Communities Inc. Common Stock", "CCSIV": "Consensus Cloud Solutions Inc. Common Stock When-Issued", "CCU": "Compania Cervecerias Unidas S.A. Common Stock", "CCV": "Churchill Capital Corp V Class A Common Stock", "CCVI": "Churchill Capital Corp VI Class A Common Stock", "CCXI": "ChemoCentryx Inc. Common Stock", "CCZ": "Comcast Holdings ZONES", "CD": "Chindata Group Holdings Limited American Depositary Shares", "CDAK": "Codiak BioSciences Inc. Common Stock", "CDAY": "Ceridian HCM Holding Inc. Common Stock", "CDE": "Coeur Mining Inc. Common Stock", "CDEV": "Centennial Resource Development Inc. Class A Common Stock", "CDK": "CDK Global Inc. Common Stock", "CDLX": "Cardlytics Inc. Common Stock", "CDMO": "Avid Bioservices Inc. Common Stock", "CDNA": "CareDx Inc. Common Stock", "CDNS": "Cadence Design Systems Inc. Common Stock", "CDOR": "Condor Hospitality Trust Inc. Common Stock", "CDR": "Cedar Realty Trust Inc. Common Stock", "CDR^B": "Cedar Realty Trust Inc. 7.25% Series B Cumulative Redeemable Preferred Stock", "CDR^C": "Cedar Realty Trust Inc. 6.50% Series C Cumulative Redeemable Preferred Stock", "CDTX": "Cidara Therapeutics Inc. Common Stock", "CDW": "CDW Corporation Common Stock", "CDXC": "ChromaDex Corporation Common Stock", "CDXS": "Codexis Inc. Common Stock", "CDZI": "CADIZ Inc. Common Stock", "CDZIP": "Cadiz Inc. Depositary Shares", "CE": "Celanese Corporation Celanese Corporation Common Stock", "CEA": "China Eastern Airlines Corporation Ltd. Common Stock", "CECE": "CECO Environmental Corp. Common Stock", "CEE": "The Central and Eastern Europe Fund Inc. (The) Common Stock", "CEI": "Camber Energy Inc. Common Stock", "CEIX": "CONSOL Energy Inc. Common Stock ", "CELC": "Celcuity Inc. 
Common Stock", "CELH": "Celsius Holdings Inc. Common Stock", "CELP": "Cypress Environmental Partners L.P. Common Units representing limited partner interests", "CELU": "Celularity Inc. Class A Common Stock", "CELUW": "Celularity Inc. Warrant", "CEM": "ClearBridge MLP and Midstream Fund Inc. Common Stock", "CEMI": "Chembio Diagnostics Inc. Common Stock", "CEN": "Center Coast Brookfield MLP & Energy Infrastructure Fund", "CENQ": "CENAQ Energy Corp. Class A Ordinary Shares", "CENQU": "CENAQ Energy Corp. Unit", "CENQW": "CENAQ Energy Corp. Warrant", "CENT": "Central Garden & Pet Company Common Stock", "CENTA": "Central Garden & Pet Company Class A Common Stock Nonvoting", "CENX": "Century Aluminum Company Common Stock", "CEPU": "Central Puerto S.A. American Depositary Shares (each represents ten Common Shares)", "CEQP": "Crestwood Equity Partners LP", "CEQP^": "Crestwood Equity Partners LP Preferred Units representing limited partner interests", "CERE": "Cerevel Therapeutics Holdings Inc. Common Stock", "CERN": "Cerner Corporation Common Stock", "CERS": "Cerus Corporation Common Stock", "CERT": "Certara Inc. Common Stock", "CET": "Central Securities Corporation Common Stock", "CETX": "Cemtrex Inc. Common Stock", "CETXP": "Cemtrex Inc. Series 1 Preferred Stock", "CETXW": "Cemtrex Inc. Series 1 Warrant", "CEV": "Eaton Vance California Municipal Income Trust Shares of Beneficial Interest", "CEVA": "CEVA Inc. Common Stock", "CF": "CF Industries Holdings Inc. Common Stock", "CFB": "CrossFirst Bankshares Inc. Common Stock", "CFBK": "CF Bankshares Inc. Common Stock", "CFFE": "CF Acquisition Corp. VIII Class A Common Stock", "CFFEU": "CF Acquisition Corp. VIII Unit", "CFFEW": "CF Acquisition Corp. VIII Warrant", "CFFI": "C&F Financial Corporation Common Stock", "CFFN": "Capitol Federal Financial Inc. Common Stock", "CFFVU": "CF Acquisition Corp. V Unit", "CFFVW": "CF Acquisition Corp. V Warrant", "CFG": "Citizens Financial Group Inc. Common Stock", "CFG^D": "Citizens Financial Group Inc. Depositary Shares each representing a 1/40th Interest in a Share of 6.350% Fixed-to-Floating Rate Non-Cumulative Perpetual Preferred Stock Series D", "CFG^E": "Citizens Financial Group Inc. Depositary Shares Each Representing 1/40th Interest in a Share of 5.000% Fixed-Rate Non-Cumulative Perpetual Preferred Stock Series E", "CFIV": "CF Acquisition Corp. IV Class A common stock", "CFIVU": "CF Acquisition Corp. IV Unit", "CFIVW": "CF Acquisition Corp. IV Warrant", "CFLT": "Confluent Inc. Class A Common Stock", "CFMS": "Conformis Inc. Common Stock", "CFR": "Cullen/Frost Bankers Inc. Common Stock", "CFR^B": "Cullen/Frost Bankers Inc. Depositary Shares each representing a 1/40th ownership interest in a share of 4.450% non-cumulative perpetual preferred stock Series B", "CFRX": "ContraFect Corporation Common Stock", "CFV": "CF Acquisition Corp. V Class A Common Stock", "CFVI": "CF Acquisition Corp. VI Class A Common Stock", "CFVIU": "CF Acquisition Corp. VI Unit", "CFVIW": "CF Acquisition Corp. VI Warrant", "CFX": "Colfax Corporation Common Stock", "CFXA": "Colfax Corporation 5.75% Tangible Equity Units", "CG": "The Carlyle Group Inc. Common Stock", "CGA": "China Green Agriculture Inc. Common Stock", "CGABL": "The Carlyle Group Inc. 4.625% Subordinated Notes due 2061", "CGAU": "Centerra Gold Inc. Common Shares", "CGBD": "TCG BDC Inc. Common Stock", "CGC": "Canopy Growth Corporation Common Shares", "CGEM": "Cullinan Oncology Inc. Common Stock", "CGEN": "Compugen Ltd. Ordinary Shares", "CGNT": "Cognyte Software Ltd. 
Ordinary Shares", "CGNX": "Cognex Corporation Common Stock", "CGO": "Calamos Global Total Return Fund Common Stock", "CGRN": "Capstone Green Energy Corporation Common Stock", "CHAA": "Catcha Investment Corp. Class A Ordinary Shares", "CHCI": "Comstock Holding Companies Inc. Class A Common Stock", "CHCO": "City Holding Company Common Stock", "CHCT": "Community Healthcare Trust Incorporated Common Stock", "CHD": "Church & Dwight Company Inc. Common Stock", "CHDN": "Churchill Downs Incorporated Common Stock", "CHE": "Chemed Corp", "CHEF": "The Chefs' Warehouse Inc. Common Stock", "CHEK": "Check-Cap Ltd. Ordinary Share", "CHEKZ": "Check-Cap Ltd. Series C Warrant", "CHGG": "Chegg Inc. Common Stock", "CHH": "Choice Hotels International Inc. Common Stock", "CHI": "Calamos Convertible Opportunities and Income Fund Common Stock", "CHK": "Chesapeake Energy Corporation Common Stock", "CHKEL": "Chesapeake Energy Corporation Class C Warrants", "CHKEW": "Chesapeake Energy Corporation Class A Warrants", "CHKEZ": "Chesapeake Energy Corporation Class B Warrants", "CHKP": "Check Point Software Technologies Ltd. Ordinary Shares", "CHMG": "Chemung Financial Corp Common Stock", "CHMI": "Cherry Hill Mortgage Investment Corporation Common Stock", "CHMI^A": "Cherry Hill Mortgage Investment Corporation 8.20% Series A Cumulative Redeemable Preferred Stock", "CHMI^B": "Cherry Hill Mortgage Investment Corporation 8.250% Series B Fixed-to-Floating Rate Cumulative Redeemable Preferred Stock", "CHN": "China Fund Inc. (The) Common Stock", "CHNG": "Change Healthcare Inc. Common Stock", "CHNGU": "Change Healthcare Inc. Tangible Equity Units", "CHNR": "China Natural Resources Inc. Common Stock", "CHPM": "CHP Merger Corp. Class A Common Stock", "CHPMU": "CHP Merger Corp. Unit", "CHPMW": "CHP Merger Corp. Warrant", "CHPT": "ChargePoint Holdings Inc. Common Stock", "CHRA": "Charah Solutions Inc. Common Stock", "CHRB": "Charah Solutions Inc. 8.50% Senior Notes due 2026", "CHRS": "Coherus BioSciences Inc. Common Stock", "CHRW": "C.H. Robinson Worldwide Inc. Common Stock", "CHS": "Chico's FAS Inc. Common Stock", "CHSCL": "CHS Inc Class B Cumulative Redeemable Preferred Stock Series 4", "CHSCM": "CHS Inc Class B Reset Rate Cumulative Redeemable Preferred Stock Series 3", "CHSCN": "CHS Inc Preferred Class B Series 2 Reset Rate", "CHSCO": "CHS Inc. Class B Cumulative Redeemable Preferred Stock", "CHSCP": "CHS Inc. 8% Cumulative Redeemable Preferred Stock", "CHT": "Chunghwa Telecom Co. Ltd.", "CHTR": "Charter Communications Inc. Class A Common Stock New", "CHUY": "Chuy's Holdings Inc. Common Stock", "CHW": "Calamos Global Dynamic Income Fund Common Stock", "CHWA": "CHW Acquisition Corporation Ordinary Share", "CHWAU": "CHW Acquisition Corporation Unit", "CHWAW": "CHW Acquisition Corporation Warrant", "CHWY": "Chewy Inc. Class A Common Stock", "CHX": "ChampionX Corporation Common Stock ", "CHY": "Calamos Convertible and High Income Fund Common Stock", "CI": "Cigna Corporation Common Stock", "CIA": "Citizens Inc. Class A Common Stock ($1.00 Par)", "CIB": "BanColombia S.A. Common Stock", "CIDM": "Cinedigm Corp. Class A Common Stock", "CIEN": "Ciena Corporation Common Stock", "CIF": "MFS Intermediate High Income Fund Common Stock", "CIFR": "Cipher Mining Inc. Common Stock", "CIFRW": "Cipher Mining Inc. Warrant", "CIG": "Comp En De Mn Cemig ADS American Depositary Shares", "CIGI": "Colliers International Group Inc. 
Subordinate Voting Shares", "CIH": "China Index Holdings Limited American Depository Shares", "CII": "Blackrock Capital and Income Fund Inc.", "CIIGU": "CIIG Capital Partners II Inc. Unit", "CIK": "Credit Suisse Asset Management Income Fund Inc. Common Stock", "CIM": "Chimera Investment Corporation Common Stock", "CIM^A": "Chimera Investment Corporation 8.00% Series A Cumulative Redeemable Preferred Stock", "CIM^B": "Chimera Investment Corporation 8.00% Series B Fixed-to-Floating Rate Cumulative Redeemable Preferred Stock", "CIM^C": "Chimera Investment Corporation 7.75% Series C Fixed-to-Floating Rate Cumulative Redeemable Preferred Stock", "CIM^D": "Chimera Investment Corporation 8.00% Series D Fixed-to-Floating Rate Cumulative Redeemable Preferred Stock", "CINF": "Cincinnati Financial Corporation Common Stock", "CINR": "Ciner Resources LP Common Units representing Limited Partner Interests", "CIO": "City Office REIT Inc. Common Stock", "CIO^A": "City Office REIT Inc. 6.625% Series A Cumulative Redeemable Preferred Stock", "CIR": "CIRCOR International Inc. Common Stock", "CIT": "CIT Group Inc (DEL) Common Stock", "CIT^B": "CIT Group Inc (DEL) 5.625 % Non-Cumulative Perpetual Preferred Stock Series B", "CIVB": "Civista Bancshares Inc. Common Stock", "CIX": "CompX International Inc. Common Stock", "CIXX": "CI Financial Corp. Common Shares", "CIZN": "Citizens Holding Company Common Stock", "CJJD": "China Jo-Jo Drugstores Inc. (Cayman Islands) Ordinary Shares", "CKPT": "Checkpoint Therapeutics Inc. Common Stock", "CKX": "CKX Lands Inc. Common Stock", "CL": "Colgate-Palmolive Company Common Stock", "CLAA": "Colonnade Acquisition Corp. II Class A Ordinary Shares", "CLAQ": "CleanTech Acquisition Corp. Common stock", "CLAQR": "CleanTech Acquisition Corp. Rights", "CLAQU": "CleanTech Acquisition Corp. Units", "CLAQW": "CleanTech Acquisition Corp. Warrant", "CLAR": "Clarus Corporation Common Stock", "CLAS": "Class Acceleration Corp. Class A Common Stock", "CLAY": "Chavant Capital Acquisition Corp. Ordinary Shares", "CLAYU": "Chavant Capital Acquisition Corp. Unit", "CLB": "Core Laboratories N.V. Common Stock", "CLBK": "Columbia Financial Inc. Common Stock", "CLBR": "Colombier Acquisition Corp. Class A Common Stock", "CLBS": "Caladrius Biosciences Inc. Common Stock", "CLBT": "Cellebrite DI Ltd. Ordinary Shares", "CLBTW": "Cellebrite DI Ltd. Warrants", "CLDB": "Cortland Bancorp Common Stock", "CLDR": "Cloudera Inc. Common Stock", "CLDT": "Chatham Lodging Trust (REIT) Common Shares of Beneficial Interest", "CLDT^A": "Chatham Lodging Trust (REIT) 6.625% Series A Cumulative Redeemable Preferred Shares of Beneficial Interest", "CLDX": "Celldex Therapeutics Inc.", "CLEU": "China Liberal Education Holdings Limited Ordinary Shares", "CLF": "Cleveland-Cliffs Inc. Common Stock", "CLFD": "Clearfield Inc. Common Stock", "CLGN": "CollPlant Biotechnologies Ltd Ordinary Shares", "CLH": "Clean Harbors Inc. Common Stock", "CLI": "Mack-Cali Realty Corporation Common Stock", "CLIM": "Climate Real Impact Solutions II Acquisition Corporation Class A Common Stock", "CLIR": "ClearSign Technologies Corporation Common Stock", "CLLS": "Cellectis S.A. American Depositary Shares", "CLM": "Cornerstone Strategic Value Fund Inc. New Common Stock", "CLMT": "Calumet Specialty Products Partners L.P. Common Units", "CLNE": "Clean Energy Fuels Corp. Common Stock", "CLNN": "Clene Inc. Common Stock", "CLNNW": "Clene Inc. Warrant", "CLOE": "Clover Leaf Capital Corp. Class A Common Stock", "CLOER": "Clover Leaf Capital Corp. 
Rights", "CLOEU": "Clover Leaf Capital Corp. Unit", "CLOV": "Clover Health Investments Corp. Class A Common Stock", "CLPR": "Clipper Realty Inc. Common Stock", "CLPS": "CLPS Incorporation Common Stock", "CLPT": "ClearPoint Neuro Inc. Common Stock", "CLR": "Continental Resources Inc. Common Stock", "CLRB": "Cellectar Biosciences Inc. Common Stock", "CLRM": "Clarim Acquisition Corp. Class A Common Stock", "CLRMU": "Clarim Acquisition Corp. Unit", "CLRMW": "Clarim Acquisition Corp. Warrant", "CLRO": "ClearOne Inc. (DE) Common Stock", "CLS": "Celestica Inc. Common Stock", "CLSD": "Clearside Biomedical Inc. Common Stock", "CLSK": "CleanSpark Inc. Common Stock", "CLSN": "Celsion Corporation Common Stock", "CLVR": "Clever Leaves Holdings Inc. Common Shares", "CLVRW": "Clever Leaves Holdings Inc. Warrant", "CLVS": "Clovis Oncology Inc. Common Stock", "CLVT": "Clarivate Plc Ordinary Shares", "CLVT^A": "Clarivate Plc 5.25% Series A Mandatory Convertible Preferred Shares", "CLW": "Clearwater Paper Corporation Common Stock", "CLWT": "Euro Tech Holdings Company Limited Common Stock", "CLX": "Clorox Company (The) Common Stock", "CLXT": "Calyxt Inc. Common Stock", "CM": "Canadian Imperial Bank of Commerce Common Stock", "CMA": "Comerica Incorporated Common Stock", "CMAX": "CareMax Inc. Class A Common Stock", "CMAXW": "CareMax Inc. Warrant", "CMBM": "Cambium Networks Corporation Ordinary Shares", "CMC": "Commercial Metals Company Common Stock", "CMCL": "Caledonia Mining Corporation Plc Common Shares", "CMCM": "Cheetah Mobile Inc. American Depositary Shares each representing 10 Class Ordinary Shares", "CMCO": "Columbus McKinnon Corporation Common Stock", "CMCSA": "Comcast Corporation Class A Common Stock", "CMCT": "CIM Commercial Trust Corporation Common stock", "CME": "CME Group Inc. Class A Common Stock", "CMG": "Chipotle Mexican Grill Inc. Common Stock", "CMI": "Cummins Inc. Common Stock", "CMLS": "Cumulus Media Inc. Class A Common Stock", "CMLT": "CM Life Sciences III Inc. Class A Common Stock", "CMLTU": "CM Life Sciences III Inc. Unit", "CMLTW": "CM Life Sciences III Inc. Warrant", "CMMB": "Chemomab Therapeutics Ltd. American Depositary Share", "CMO": "Capstead Mortgage Corporation Common Stock", "CMO^E": "Capstead Mortgage Corporation Pfd Ser E", "CMP": "Compass Minerals Intl Inc Common Stock", "CMPI": "Checkmate Pharmaceuticals Inc. Common Stock", "CMPR": "Cimpress plc Ordinary Shares (Ireland)", "CMPS": "COMPASS Pathways Plc American Depository Shares", "CMPX": "Compass Therapeutics Inc.", "CMRE": "Costamare Inc. Common Stock $0.0001 par value", "CMRE^B": "Costamare Inc. Perpetual Preferred Stock Series B (Marshall Islands)", "CMRE^C": "Costamare Inc. Perpetual Preferred Series C (Marshall Islands)", "CMRE^D": "Costamare Inc. 8.75% Series D Cumulative Redeemable Perpetual Preferred Stock", "CMRE^E": "Costamare Inc. 8.875% Series E Cumulative Redeemable Perpetual Preferred Stock par value $0.0001", "CMRX": "Chimerix Inc. 
Common Stock", "CMS": "CMS Energy Corporation Common Stock", "CMS^B": "CMS Energy Corporation Preferred Stock", "CMS^C": "CMS Energy Corporation Depositary Shares each representing a 1/1000th interest in a share of 4.200% Cumulative Redeemable Perpetual Preferred Stock Series C", "CMSA": "CMS Energy Corporation 5.625% Junior Subordinated Notes due 2078", "CMSC": "CMS Energy Corporation 5.875% Junior Subordinated Notes due 2078", "CMSD": "CMS Energy Corporation 5.875% Junior Subordinated Notes due 2079", "CMT": "Core Molding Technologies Inc Common Stock", "CMTL": "Comtech Telecommunications Corp. Common Stock", "CMU": "MFS Municipal Income Trust Common Stock", "CNA": "CNA Financial Corporation Common Stock", "CNBKA": "Century Bancorp Inc. Class A Common Stock", "CNC": "Centene Corporation Common Stock", "CNCE": "Concert Pharmaceuticals Inc. Common Stock", "CND": "Concord Acquisition Corp. Class A Common Stock", "CNDT": "Conduent Incorporated Common Stock ", "CNET": "ZW Data Action Technologies Inc. Common Stock", "CNEY": "CN Energy Group Inc. Ordinary Shares", "CNF": "CNFinance Holdings Limited American Depositary Shares each representing twenty (20) Ordinary Shares", "CNFR": "Conifer Holdings Inc. Common Stock", "CNFRL": "Conifer Holdings Inc. 6.75% Senior Unsecured Notes due 2023", "CNHI": "CNH Industrial N.V. Common Shares", "CNI": "Canadian National Railway Company Common Stock", "CNK": "Cinemark Holdings Inc Cinemark Holdings Inc. Common Stock", "CNM": "Core & Main Inc. Class A Common Stock", "CNMD": "CONMED Corporation Common Stock", "CNNB": "Cincinnati Bancorp Inc. Common Stock", "CNNE": "Cannae Holdings Inc. Common Stock", "CNO": "CNO Financial Group Inc. Common Stock", "CNO^A": "CNO Financial Group Inc. 5.125% Subordinated Debentures due 2060", "CNOB": "ConnectOne Bancorp Inc. Common Stock", "CNOBP": "ConnectOne Bancorp Inc. Depositary Shares each representing a 1/40th interest in a share of 5.25% Fixed-Rate Reset Non-Cumulative Perpetual Preferred Stock Series A", "CNP": "CenterPoint Energy Inc (Holding Co) Common Stock", "CNQ": "Canadian Natural Resources Limited Common Stock", "CNR": "Cornerstone Building Brands Inc. Common Stock", "CNS": "Cohen & Steers Inc Common Stock", "CNSL": "Consolidated Communications Holdings Inc. Common Stock", "CNSP": "CNS Pharmaceuticals Inc. Common Stock", "CNTA": "Centessa Pharmaceuticals plc American Depositary Shares", "CNTB": "Connect Biopharma Holdings Limited American Depositary Shares", "CNTG": "Centogene N.V. Common Shares", "CNTQ": "Chardan NexTech Acquisition 2 Corp. Class A Common Stock", "CNTQU": "Chardan NexTech Acquisition 2 Corp. Unit", "CNTQW": "Chardan NexTech Acquisition 2 Corp. Warrant", "CNTY": "Century Casinos Inc. Common Stock", "CNVY": "Convey Holding Parent Inc. Common Stock", "CNX": "CNX Resources Corporation Common Stock", "CNXC": "Concentrix Corporation Common Stock", "CNXN": "PC Connection Inc. Common Stock", "CO": "Global Cord Blood Corporation Common Stock", "COCP": "Cocrystal Pharma Inc. Common Stock", "CODA": "Coda Octopus Group Inc. 
Common stock", "CODI": "D/B/A Compass Diversified Holdings Shares of Beneficial Interest", "CODI^A": "Compass Diversified Holdings 7.250% Series A Preferred Shares representing beneficial interest in Compass Diversified Holdings", "CODI^B": "Compass Diversified Holdings 7.875% Series B Fixed-to-Floating Rate Cumulative Preferred Shares representing beneficial interests in Compass Diversified Holdings", "CODI^C": "Compass Diversified Holdings 7.875% Series C Cumulative Preferred Shares", "CODX": "Co-Diagnostics Inc. Common Stock", "COE": "China Online Education Group American depositary shares each representing 15 Class A ordinary shares", "COF": "Capital One Financial Corporation Common Stock", "COF^G": "Capital One Financial Corporation Depositary Shares Each Representing a 1/40th Interest in a Share of Fixed Rate Non-Cumulative Perpetual Preferred Stock Series G", "COF^H": "Capital One Financial Corporation Depositary Shares Each Representing 1/40th Interest in a Share of Fixed Rate Non-Cumulative Perpetual Preferred Stock Series H", "COF^I": "Capital One Financial Corporation Depositary shares each representing a 1/40th interest in a share of Fixed Rate Non-Cumulative Perpetual Preferred Stock Series I of the Issuer", "COF^J": "Capital One Financial Corporation Depositary Shares Each Representing a 1/40th Interest in a Share of Fixed Rate Non- Cumulative Perpetual Preferred Stock Series J", "COF^K": "Capital One Financial Corporation Depositary Shares Each Representing a 1/40th Ownership Interest in a Share of Fixed Rate Non-Cumulative Perpetual Preferred Stock Series K", "COF^L": "Capital One Financial Corporation Depositary Shares Each Representing a 1/40th Interest in a Share of Fixed Rate Non-Cumulative Perpetual Preferred Stock Series L", "COF^N": "Capital One Financial Corporation Depositary Shares Each Representing a 1/40th Ownership Interest in a Share of Fixed Rate Non-Cumulative Perpetual Preferred Stock Series N", "COFS": "ChoiceOne Financial Services Inc. Common Stock", "COGT": "Cogent Biosciences Inc. Common Stock", "COHN": "Cohen & Company Inc.", "COHR": "Coherent Inc. Common Stock", "COHU": "Cohu Inc. Common Stock", "COIN": "Coinbase Global Inc. Class A Common Stock", "COKE": "Coca-Cola Consolidated Inc. Common Stock", "COLB": "Columbia Banking System Inc. Common Stock", "COLD": "Americold Realty Trust Common Shares", "COLI": "Colicity Inc. Class A Common Stock", "COLIU": "Colicity Inc. Units", "COLIW": "Colicity Inc. Warrant", "COLL": "Collegium Pharmaceutical Inc. Common Stock", "COLM": "Columbia Sportswear Company Common Stock", "COMM": "CommScope Holding Company Inc. Common Stock", "COMP": "Compass Inc. Class A Common Stock", "COMS": "ComSovereign Holding Corp. Common Stock", "COMSW": "ComSovereign Holding Corp. Warrants", "CONE": "CyrusOne Inc Common Stock", "CONN": "Conn's Inc. Common Stock", "CONX": "CONX Corp. Class A Common Stock", "CONXU": "CONX Corp. Unit", "CONXW": "CONX Corp. Warrant", "COO": "The Cooper Companies Inc. Common Stock", "COOK": "Traeger Inc. Common Stock", "COOL": "Corner Growth Acquisition Corp. Class A Ordinary Shares", "COOLU": "Corner Growth Acquisition Corp. Unit", "COOLW": "Corner Growth Acquisition Corp. Warrant", "COOP": "Mr. Cooper Group Inc. Common Stock", "COP": "ConocoPhillips Common Stock", "COR": "CoreSite Realty Corporation Common Stock", "CORR": "CorEnergy Infrastructure Trust Inc. Common Stock", "CORR^A": "CorEnergy Infrastructure Trust Inc. 
Depositary Shares each representing a 1/100th fractional interest of a share of 7.375% Series A Cumulative Redeemable Preferred Stock", "CORS": "Corsair Partnering Corporation Class A Ordinary Shares", "CORT": "Corcept Therapeutics Incorporated Common Stock", "COST": "Costco Wholesale Corporation Common Stock", "COTY": "Coty Inc. Class A Common Stock", "COUP": "Coupa Software Incorporated Common Stock", "COUR": "Coursera Inc. Common Stock", "COVA": "COVA Acquisition Corp. Class A Ordinary Share", "COVAU": "COVA Acquisition Corp. Unit", "COVAW": "COVA Acquisition Corp. Warrants to purchase Class A ordinary shares", "COWN": "Cowen Inc. Class A Common Stock", "COWNL": "Cowen Inc. 7.75% Senior Notes due 2033", "CP": "Canadian Pacific Railway Limited Common Stock", "CPA": "Copa Holdings S.A. Copa Holdings S.A. Class A Common Stock", "CPAAU": "Conyers Park III Acquisition Corp. Unit", "CPAAW": "Conyers Park III Acquisition Corp. Warrants", "CPAC": "Cementos Pacasmayo S.A.A. American Depositary Shares (Each representing five Common Shares)", "CPAR": "Catalyst Partners Acquisition Corp. Class A Ordinary Share", "CPARU": "Catalyst Partners Acquisition Corp. Unit", "CPARW": "Catalyst Partners Acquisition Corp. Warrant", "CPB": "Campbell Soup Company Common Stock", "CPE": "Callon Petroleum Company Common Stock", "CPF": "Central Pacific Financial Corp New", "CPG": "Crescent Point Energy Corporation Ordinary Shares (Canada)", "CPHC": "Canterbury Park Holding Corporation 'New' Common Stock", "CPHI": "China Pharma Holdings Inc. Common Stock", "CPIX": "Cumberland Pharmaceuticals Inc. Common Stock", "CPK": "Chesapeake Utilities Corporation Common Stock", "CPLG": "CorePoint Lodging Inc. Common Stock ", "CPLP": "Capital Product Partners L.P. Common Units", "CPNG": "Coupang Inc. Class A Common Stock", "CPOP": "Pop Culture Group Co. Ltd Class A Ordinary Shares", "CPRI": "Capri Holdings Limited Ordinary Shares", "CPRT": "Copart Inc. (DE) Common Stock", "CPRX": "Catalyst Pharmaceuticals Inc. Common Stock", "CPS": "Cooper-Standard Holdings Inc. Common Stock", "CPSH": "CPS Technologies Corp. Common Stock", "CPSI": "Computer Programs and Systems Inc. Common Stock", "CPSR": "Capstar Special Purpose Acquisition Corp. Class A Common Stock", "CPSS": "Consumer Portfolio Services Inc. Common Stock", "CPT": "Camden Property Trust Common Stock", "CPTAG": "Capitala Finance Corp. 5.75% Convertible Notes Due 2022", "CPTAL": "Capitala Finance Corp. 6% Notes Due 2022", "CPTK": "Crown PropTech Acquisitions Class A Ordinary Shares", "CPUH": "Compute Health Acquisition Corp. Class A Common Stock", "CPZ": "Calamos Long/Short Equity & Dynamic Income Trust Common Stock", "CQP": "Cheniere Energy Partners LP Cheniere Energy Partners LP Common Units", "CR": "Crane Co. Common Stock", "CRAI": "CRA International Inc. Common Stock", "CRBP": "Corbus Pharmaceuticals Holdings Inc. Common Stock", "CRBU": "Caribou Biosciences Inc. Common Stock", "CRC": "California Resources Corporation Common Stock", "CRCT": "Cricut Inc. Class A Common Stock", "CRD/A": "Crawford & Company", "CRD/B": "Crawford & Company", "CRDF": "Cardiff Oncology Inc. Common Stock", "CRDL": "Cardiol Therapeutics Inc. Class A Common Shares", "CREG": "China Recycling Energy Corporation Common Stock", "CRESW": "Cresud S.A.C.I.F. y A. Warrant", "CRESY": "Cresud S.A.C.I.F. y A. American Depositary Shares", "CREX": "Creative Realities Inc. Common Stock", "CRF": "Cornerstone Total Return Fund Inc. 
(The) Common Stock", "CRH": "CRH PLC American Depositary Shares", "CRHC": "Cohn Robbins Holdings Corp. Class A Ordinary Shares", "CRI": "Carter's Inc. Common Stock", "CRIS": "Curis Inc. Common Stock", "CRK": "Comstock Resources Inc. Common Stock", "CRKN": "Crown Electrokinetics Corp. Common Stock", "CRL": "Charles River Laboratories International Inc. Common Stock", "CRM": "Salesforce.com Inc Common Stock", "CRMD": "CorMedix Inc. Common Stock", "CRMT": "America's Car-Mart Inc Common Stock", "CRNC": "Cerence Inc. Common Stock", "CRNT": "Ceragon Networks Ltd. Ordinary Shares", "CRNX": "Crinetics Pharmaceuticals Inc. Common Stock", "CRON": "Cronos Group Inc. Common Share", "CROX": "Crocs Inc. Common Stock", "CRS": "Carpenter Technology Corporation Common Stock", "CRSP": "CRISPR Therapeutics AG Common Shares", "CRSR": "Corsair Gaming Inc. Common Stock", "CRT": "Cross Timbers Royalty Trust Common Stock", "CRTD": "Creatd Inc. Common Stock", "CRTDW": "Creatd Inc. Warrant", "CRTO": "Criteo S.A. American Depositary Shares", "CRTX": "Cortexyme Inc. Common Stock", "CRU": "Crucible Acquisition Corporation Class A Common Stock", "CRUS": "Cirrus Logic Inc. Common Stock", "CRVL": "CorVel Corp. Common Stock", "CRVS": "Corvus Pharmaceuticals Inc. Common Stock", "CRWD": "CrowdStrike Holdings Inc. Class A Common Stock", "CRWS": "Crown Crafts Inc Common Stock", "CRXT": "Clarus Therapeutics Holdings Inc. Common Stock", "CRXTW": "Clarus Therapeutics Holdings Inc. Warrants", "CRY": "CryoLife Inc. Common Stock", "CRZN": "Corazon Capital V838 Monoceros Corp Class A Ordinary Shares", "CRZNW": "Corazon Capital V838 Monoceros Corp Warrant", "CS": "Credit Suisse Group American Depositary Shares", "CSAN": "Cosan S.A. ADS", "CSBR": "Champions Oncology Inc. Common Stock", "CSCO": "Cisco Systems Inc. Common Stock (DE)", "CSCW": "Color Star Technology Co. Ltd. Ordinary Shares", "CSGP": "CoStar Group Inc. Common Stock", "CSGS": "CSG Systems International Inc. Common Stock", "CSII": "Cardiovascular Systems Inc. Common Stock", "CSIQ": "Canadian Solar Inc. Common Shares (BC)", "CSL": "Carlisle Companies Incorporated Common Stock", "CSLT": "Castlight Health Inc. Class B Common Stock", "CSOD": "Cornerstone OnDemand Inc. Common Stock", "CSPI": "CSP Inc. Common Stock", "CSPR": "Casper Sleep Inc. Common Stock", "CSQ": "Calamos Strategic Total Return Common Stock", "CSR": "D/B/A Centerspace Common Stock", "CSR^C": "D/B/A Centerspace 6.625% Series C ", "CSSE": "Chicken Soup for the Soul Entertainment Inc. Class A Common Stock", "CSSEN": "Chicken Soup for the Soul Entertainment Inc. 9.50% Notes due 2025", "CSSEP": "Chicken Soup for the Soul Entertainment Inc. 9.75% Series A Cumulative Redeemable Perpetual Preferred Stock", "CSTA": "Constellation Acquisition Corp I Class A Ordinary Shares", "CSTE": "Caesarstone Ltd. Ordinary Shares", "CSTL": "Castle Biosciences Inc. Common Stock", "CSTM": "Constellium SE Ordinary Shares (France)", "CSTR": "CapStar Financial Holdings Inc. Common Stock", "CSU": "Capital Senior Living Corporation Common Stock", "CSV": "Carriage Services Inc. Common Stock", "CSWC": "Capital Southwest Corporation Common Stock", "CSWI": "CSW Industrials Inc. Common Stock", "CSX": "CSX Corporation Common Stock", "CTA^A": "E.I. du Pont de Nemours and Company Preferred Stock", "CTA^B": "E.I. du Pont de Nemours and Company Preferred Stock", "CTAQ": "Carney Technology Acquisition Corp. II Class A Common Stock", "CTAQU": "Carney Technology Acquisition Corp. II Units", "CTAQW": "Carney Technology Acquisition Corp. 
II Warrant", "CTAS": "Cintas Corporation Common Stock", "CTBB": "Qwest Corporation 6.5% Notes due 2056", "CTBI": "Community Trust Bancorp Inc. Common Stock", "CTDD": "Qwest Corporation 6.75% Notes due 2057", "CTEK": "CynergisTek Inc. Common Stock", "CTG": "Computer Task Group Inc. Common Stock", "CTHR": "Charles & Colvard Ltd Common Stock", "CTIB": "Yunhong CTI Ltd. Common Stock", "CTIC": "CTI BioPharma Corp. (DE) Common Stock", "CTK": "CooTek (Cayman) Inc. American Depositary Shares each representing 50 Class A Ordinary Shares", "CTKB": "Cytek Biosciences Inc. Common Stock", "CTLP": "Cantaloupe Inc. Common Stock", "CTLT": "Catalent Inc. Common Stock", "CTMX": "CytomX Therapeutics Inc. Common Stock", "CTO": "CTO Realty Growth Inc. Common Stock", "CTO^A": "CTO Realty Growth Inc. 6.375% Series A Cumulative Redeemable Preferred Stock", "CTOS": "Custom Truck One Source Inc. Common Stock", "CTR": "ClearBridge MLP and Midstream Total Return Fund Inc. Common Stock", "CTRA": "Coterra Energy Inc.", "CTRE": "CareTrust REIT Inc. Common Stock", "CTRM": "Castor Maritime Inc. Common Shares", "CTRN": "Citi Trends Inc. Common Stock", "CTS": "CTS Corporation Common Stock", "CTSH": "Cognizant Technology Solutions Corporation Class A Common Stock", "CTSO": "Cytosorbents Corporation Common Stock", "CTT": "CatchMark Timber Trust Inc. Class A Common Stock", "CTVA": "Corteva Inc. Common Stock ", "CTXR": "Citius Pharmaceuticals Inc. Common Stock", "CTXRW": "Citius Pharmaceuticals Inc. Warrant", "CTXS": "Citrix Systems Inc. Common Stock", "CUBA": "Herzfeld Caribbean Basin Fund Inc. (The) Common Stock", "CUBB": "Customers Bancorp Inc 5.375% Subordinated Notes Due 2034", "CUBE": "CubeSmart Common Shares", "CUBI": "Customers Bancorp Inc Common Stock", "CUBI^E": "Customers Bancorp Inc Fixed-to-Floating Rate Non-Cumulative Perpetual Preferred Stock Series E", "CUBI^F": "Customers Bancorp Inc Fixed-to-Floating Rate Non-Cumulative Perpetual Preferred Stock Series F", "CUE": "Cue Biopharma Inc. Common Stock", "CUEN": "Cuentas Inc. Common Stock", "CUK": "Carnival Plc ADS ADS", "CULL": "Cullman Bancorp Inc. Common Stock", "CULP": "Culp Inc. Common Stock", "CURI": "CuriosityStream Inc. Class A Common Stock", "CURIW": "CuriosityStream Inc. Warrant", "CURO": "CURO Group Holdings Corp. Common Stock", "CURV": "Torrid Holdings Inc. Common Stock", "CUTR": "Cutera Inc. Common Stock", "CUZ": "Cousins Properties Incorporated Common Stock", "CVA": "Covanta Holding Corporation Common Stock", "CVAC": "CureVac N.V. Ordinary Shares", "CVBF": "CVB Financial Corporation Common Stock", "CVCO": "Cavco Industries Inc. Common Stock When Issued", "CVCY": "Central Valley Community Bancorp Common Stock", "CVE": "Cenovus Energy Inc Common Stock", "CVEO": "Civeo Corporation (Canada) Common Shares", "CVET": "Covetrus Inc. Common Stock", "CVGI": "Commercial Vehicle Group Inc. Common Stock", "CVGW": "Calavo Growers Inc. Common Stock", "CVI": "CVR Energy Inc. Common Stock", "CVII": "Churchill Capital Corp VII Class A Common Stock", "CVLG": "Covenant Logistics Group Inc. Class A Common Stock", "CVLT": "Commvault Systems Inc. Common Stock", "CVLY": "Codorus Valley Bancorp Inc Common Stock", "CVM": "Cel-Sci Corporation Common Stock", "CVNA": "Carvana Co. Class A Common Stock", "CVR": "Chicago Rivet & Machine Co. Common Stock", "CVRX": "CVRx Inc. Common Stock", "CVS": "CVS Health Corporation Common Stock", "CVU": "CPI Aerostructures Inc. 
Common Stock", "CVV": "CVD Equipment Corporation Common Stock", "CVX": "Chevron Corporation Common Stock", "CW": "Curtiss-Wright Corporation Common Stock", "CWAN": "Clearwater Analytics Holdings Inc. Class A Common Stock", "CWBC": "Community West Bancshares Common Stock", "CWBR": "CohBar Inc. Common Stock", "CWCO": "Consolidated Water Co. Ltd. Ordinary Shares", "CWEN": "Clearway Energy Inc. Class C Common Stock", "CWH": "Camping World Holdings Inc. Class A Commom Stock", "CWK": "Cushman & Wakefield plc Ordinary Shares", "CWST": "Casella Waste Systems Inc. Class A Common Stock", "CWT": "California Water Service Group Common Stock", "CX": "Cemex S.A.B. de C.V. Sponsored ADR", "CXDC": "China XD Plastics Company Limited Common Stock", "CXDO": "Crexendo Inc. Common Stock", "CXE": "MFS High Income Municipal Trust Common Stock", "CXH": "MFS Investment Grade Municipal Trust Common Stock", "CXM": "Sprinklr Inc. Class A Common Stock", "CXP": "Columbia Property Trust Inc. Common Stock", "CXW": "CoreCivic Inc. Common Stock", "CYAD": "Celyad Oncology SA American Depositary Shares", "CYAN": "Cyanotech Corporation Common Stock", "CYBE": "CyberOptics Corporation Common Stock", "CYBN": "Cybin Inc. Common Shares", "CYBR": "CyberArk Software Ltd. Ordinary Shares", "CYCC": "Cyclacel Pharmaceuticals Inc. Common Stock", "CYCCP": "Cyclacel Pharmaceuticals Inc. 6% Convertible Preferred Stock", "CYCN": "Cyclerion Therapeutics Inc. Common Stock", "CYD": "China Yuchai International Limited Common Stock", "CYH": "Community Health Systems Inc. Common Stock", "CYRN": "CYREN Ltd. Ordinary Shares", "CYRX": "CryoPort Inc. Common Stock", "CYT": "Cyteir Therapeutics Inc. Common Stock", "CYTH": "Cyclo Therapeutics Inc. Common Stock", "CYTHW": "Cyclo Therapeutics Inc. Warrant", "CYTK": "Cytokinetics Incorporated Common Stock", "CYTO": "Altamira Therapeutics Ltd. Common Shares 0.01 SF (Bermuda)", "CYXT": "Cyxtera Technologies Inc. Class A Common Stock", "CYXTW": "Cyxtera Technologies Inc. Warrant", "CZNC": "Citizens & Northern Corp Common Stock", "CZOO": "Cazoo Group Ltd Class A Ordinary Shares", "CZR": "Caesars Entertainment Inc. Common Stock", "CZWI": "Citizens Community Bancorp Inc. Common Stock", "D": "Dominion Energy Inc. Common Stock", "DAC": "Danaos Corporation Common Stock", "DADA": "Dada Nexus Limited American Depositary Shares", "DAIO": "Data I/O Corporation Common Stock", "DAKT": "Daktronics Inc. Common Stock", "DAL": "Delta Air Lines Inc. Common Stock", "DALN": "DallasNews Corporation Series A Common Stock", "DALS": "DA32 Life Science Tech Acquisition Corp. Class A Common Stock", "DAN": "Dana Incorporated Common Stock ", "DAO": "Youdao Inc. American Depositary Shares each representing one Class A Ordinary Share", "DAR": "Darling Ingredients Inc. Common Stock", "DARE": "Dare Bioscience Inc. Common Stock", "DASH": "DoorDash Inc. Class A Common Stock", "DATS": "DatChat Inc. Common Stock", "DATSW": "DatChat Inc. Series A Warrant", "DAVA": "Endava plc American Depositary Shares (each representing one Class A Ordinary Share)", "DAWN": "Day One Biopharmaceuticals Inc. Common Stock", "DB": "Deutsche Bank AG Common Stock", "DBD": "Diebold Nixdorf Incorporated Common Stock", "DBDR": "Roman DBDR Tech Acquisition Corp. Class A Common Stock", "DBDRW": "Roman DBDR Tech Acquisition Corp. Warrant", "DBGI": "Digital Brands Group Inc. Common Stock", "DBGIW": "Digital Brands Group Inc. Warrant", "DBI": "Designer Brands Inc. 
Class A Common Stock", "DBL": "DoubleLine Opportunistic Credit Fund Common Shares of Beneficial Interest", "DBRG": "DigitalBridge Group Inc.", "DBRG^H": "DigitalBridge Group Inc. 7.125% Series H ", "DBRG^I": "DigitalBridge Group Inc. 7.15% Series I ", "DBRG^J": "DigitalBridge Group Inc. 7.125% Series J ", "DBTX": "Decibel Therapeutics Inc. Common Stock", "DBVT": "DBV Technologies S.A. American Depositary Shares", "DBX": "Dropbox Inc. Class A Common Stock", "DCBO": "Docebo Inc. Common Shares", "DCF": "BNY Mellon Alcentra Global Credit Income 2024 Target Term Fund Inc. Common Stock", "DCI": "Donaldson Company Inc. Common Stock", "DCO": "Ducommun Incorporated Common Stock", "DCOM": "Dime Community Bancshares Inc. Common Stock", "DCOMP": "Dime Community Bancshares Inc. Fixed-Rate Non-Cumulative Perpetual Preferred Stock Series A", "DCP": "DCP Midstream LP Common Units ", "DCP^B": "DCP Midstream LP 7.875% Series B Fixed-to-Floating Rate Cumulative Redeemable Perpetual Preferred Units", "DCP^C": "DCP Midstream LP 7.95% Series C Fixed-to-Floating Rate Cumulative Redeemable Perpetual Preferred Units", "DCPH": "Deciphera Pharmaceuticals Inc. Common Stock", "DCRC": "Decarbonization Plus Acquisition Corporation III Class A Common Stock", "DCRCU": "Decarbonization Plus Acquisition Corporation III Unit", "DCRCW": "Decarbonization Plus Acquisition Corporation III Warrant", "DCRDU": "Decarbonization Plus Acquisition Corporation IV Unit", "DCRDW": "Decarbonization Plus Acquisition Corporation IV Warrant", "DCRN": "Decarbonization Plus Acquisition Corporation II Class A Common stock", "DCRNU": "Decarbonization Plus Acquisition Corporation II Unit", "DCRNW": "Decarbonization Plus Acquisition Corporation II Warrant", "DCT": "Duck Creek Technologies Inc. Common Stock", "DCTH": "Delcath Systems Inc. Common Stock", "DCUE": "Dominion Energy Inc. 2019 Series A Corporate Units", "DD": "DuPont de Nemours Inc. Common Stock", "DDD": "3D Systems Corporation Common Stock", "DDF": "Delaware Investments Dividend & Income Fund Inc. Common Stock", "DDI": "DoubleDown Interactive Co. Ltd. American Depository Shares", "DDL": "Dingdong (Cayman) Limited American Depositary Shares (each two representing three Ordinary Shares)", "DDMX": "DD3 Acquisition Corp. II Class A Common Stock", "DDMXU": "DD3 Acquisition Corp. II Unit", "DDMXW": "DD3 Acquisition Corp. II Warrant", "DDOG": "Datadog Inc. Class A Common Stock", "DDS": "Dillard's Inc. Common Stock", "DDT": "Dillard's Capital Trust I", "DE": "Deere & Company Common Stock", "DEA": "Easterly Government Properties Inc. Common Stock", "DECK": "Deckers Outdoor Corporation Common Stock", "DEI": "Douglas Emmett Inc. Common Stock", "DELL": "Dell Technologies Inc. Class C Common Stock ", "DEN": "Denbury Inc. Common Stock", "DENN": "Denny's Corporation Common Stock", "DEO": "Diageo plc Common Stock", "DESP": "Despegar.com Corp. Ordinary Shares", "DEX": "Delaware Enhanced Global Dividend Common Shares of Beneficial Interest", "DFFN": "Diffusion Pharmaceuticals Inc. Common Stock", "DFH": "Dream Finders Homes Inc. Class A Common Stock", "DFIN": "Donnelley Financial Solutions Inc. Common Stock ", "DFP": "Flaherty & Crumrine Dynamic Preferred and Income Fund Inc. Common Stock", "DFPH": "DFP Healthcare Acquisitions Corp. Class A Common Stock", "DFPHU": "DFP Healthcare Acquisitions Corp. Unit", "DFPHW": "DFP Healthcare Acquisitions Corp. Warrant", "DFS": "Discover Financial Services Common Stock", "DG": "Dollar General Corporation Common Stock", "DGICA": "Donegal Group Inc. 
Class A Common Stock", "DGICB": "Donegal Group Inc. Class B Common Stock", "DGII": "Digi International Inc. Common Stock", "DGLY": "Digital Ally Inc. Common Stock", "DGNS": "Dragoneer Growth Opportunities Corp. II Class A Ordinary Shares", "DGNU": "Dragoneer Growth Opportunities Corp. III Class A Ordinary Shares", "DGX": "Quest Diagnostics Incorporated Common Stock", "DH": "Definitive Healthcare Corp. Class A Common Stock", "DHBC": "DHB Capital Corp. Class A common stock", "DHBCU": "DHB Capital Corp. Unit", "DHBCW": "DHB Capital Corp. Warrant", "DHC": "Diversified Healthcare Trust Common Shares of Beneficial Interest", "DHCA": "DHC Acquisition Corp. Class A ordinary share", "DHCAW": "DHC Acquisition Corp. Warrant", "DHCNI": "Diversified Healthcare Trust 5.625% Senior Notes due 2042", "DHCNL": "Diversified Healthcare Trust 6.25% Senior Notes Due 2046", "DHF": "BNY Mellon High Yield Strategies Fund Common Stock", "DHHC": "DiamondHead Holdings Corp. Class A Common Stock", "DHHCU": "DiamondHead Holdings Corp. Unit", "DHHCW": "DiamondHead Holdings Corp. Warrant", "DHI": "D.R. Horton Inc. Common Stock", "DHIL": "Diamond Hill Investment Group Inc. Class A Common Stock", "DHR": "Danaher Corporation Common Stock", "DHR^A": "Danaher Corporation 4.75% Mandatory Convertible Preferred Stock Series A", "DHR^B": "Danaher Corporation 5.00% Mandatory Convertible Preferred Stock Series B", "DHT": "DHT Holdings Inc.", "DHX": "DHI Group Inc. Common Stock", "DHY": "Credit Suisse High Yield Bond Fund Common Stock", "DIAX": "Nuveen Dow 30SM Dynamic Overwrite Fund Common Shares of Beneficial Interest", "DIBS": "1stdibs.com Inc. Common Stock", "DICE": "DICE Therapeutics Inc. Common Stock", "DIDI": "DiDi Global Inc. American Depositary Shares (each four representing one Class A Ordinary Share)", "DILA": "DILA Capital Acquisition Corp. Class A Common Stock", "DILAU": "DILA Capital Acquisition Corp. Unit", "DILAW": "DILA Capital Acquisition Corp. Warrant", "DIN": "Dine Brands Global Inc. Common Stock", "DIOD": "Diodes Incorporated Common Stock", "DIS": "Walt Disney Company (The) Common Stock", "DISA": "Disruptive Acquisition Corporation I Class A Ordinary Shares", "DISAW": "Disruptive Acquisition Corporation I Warrant", "DISCA": "Discovery Inc. Series A Common Stock", "DISCB": "Discovery Inc. Series B Common Stock", "DISCK": "Discovery Inc. Series C Common Stock", "DISH": "DISH Network Corporation Class A Common Stock", "DIT": "AMCON Distributing Company Common Stock", "DJCO": "Daily Journal Corp. (S.C.) Common Stock", "DK": "Delek US Holdings Inc. Common Stock", "DKDCA": "Data Knights Acquisition Corp. Class A Common Stock", "DKDCW": "Data Knights Acquisition Corp. Warrant", "DKL": "Delek Logistics Partners L.P. Common Units representing Limited Partner Interests", "DKNG": "DraftKings Inc. Class A Common Stock", "DKS": "Dick's Sporting Goods Inc Common Stock", "DLA": "Delta Apparel Inc. Common Stock", "DLB": "Dolby Laboratories Common Stock", "DLCA": "Deep Lake Capital Acquisition Corp. Class A Ordinary Shares", "DLCAU": "Deep Lake Capital Acquisition Corp. Unit", "DLCAW": "Deep Lake Capital Acquisition Corp. 
Warrant", "DLHC": "DLH Holdings Corp.", "DLNG": "Dynagas LNG Partners LP Common Units", "DLNG^A": "Dynagas LNG Partners LP 9.00% Series A Cumulative Redeemable Preferred Units", "DLNG^B": "Dynagas LNG Partners LP 8.75% Series B Fixed to Floating Rate Cumulative Redeemable Perpetual Preferred Units liquidation preference $25.00 per Uni", "DLO": "DLocal Limited Class A Common Shares", "DLPN": "Dolphin Entertainment Inc. Common Stock", "DLR": "Digital Realty Trust Inc. Common Stock", "DLR^J": "Digital Realty Trust Inc. 5.250% Series J Cumulative Redeemable Preferred Stock", "DLR^K": "Digital Realty Trust Inc. 5.850% Series K Cumulative Redeemable Preferred Stock par value $0.01 per share", "DLR^L": "Digital Realty Trust Inc. 5.200% Series L Cumulative Redeemable Preferred Stock", "DLTH": "Duluth Holdings Inc. Class B Common Stock", "DLTR": "Dollar Tree Inc. Common Stock", "DLX": "Deluxe Corporation Common Stock", "DLY": "DoubleLine Yield Opportunities Fund Common Shares of Beneficial Interest", "DM": "Desktop Metal Inc. Class A Common Stock", "DMAC": "DiaMedica Therapeutics Inc. Common Stock", "DMB": "BNY Mellon Municipal Bond Infrastructure Fund Inc. Common Stock", "DMF": "BNY Mellon Municipal Income Inc. Common Stock", "DMLP": "Dorchester Minerals L.P. Common Units Representing Limited Partnership Interests", "DMO": "Western Asset Mortgage Opportunity Fund Inc. Common Stock", "DMRC": "Digimarc Corporation Common Stock", "DMS": "Digital Media Solutions Inc. Class A Ordinary Shares", "DMTK": "DermTech Inc. Common Stock", "DMYQ": "dMY Technology Group Inc. IV Class A Common Stock", "DNA": "Ginkgo Bioworks Holdings Inc. Class A Common Stock", "DNAA": "Social Capital Suvretta Holdings Corp. I Class A Ordinary Share", "DNAB": "Social Capital Suvretta Holdings Corp. II Class A Ordinary Shares", "DNAC": "Social Capital Suvretta Holdings Corp. III Class A ordinary shares", "DNAD": "Social Capital Suvretta Holdings Corp. IV Class A Ordinary Shares", "DNAY": "Codex DNA Inc. Common Stock", "DNB": "Dun & Bradstreet Holdings Inc. Common Stock", "DNLI": "Denali Therapeutics Inc. Common Stock", "DNMR": "Danimer Scientific Inc. Common Stock", "DNN": "Denison Mines Corp Ordinary Shares (Canada)", "DNOW": "NOW Inc. Common Stock", "DNP": "DNP Select Income Fund Inc. Common Stock", "DNUT": "Krispy Kreme Inc. Common Stock", "DNZ": "D and Z Media Acquisition Corp. Class A Common Stock", "DOC": "Physicians Realty Trust Common Shares of Beneficial Interest", "DOCN": "DigitalOcean Holdings Inc. Common Stock", "DOCS": "Doximity Inc. Class A Common Stock", "DOCU": "DocuSign Inc. Common Stock", "DOGZ": "Dogness (International) Corporation Class A Common Stock", "DOLE": "Dole plc Ordinary Shares", "DOMA": "Doma Holdings Inc. Common Stock", "DOMO": "Domo Inc. Class B Common Stock", "DOOO": "BRP Inc. (Recreational Products) Common Subordinate Voting Shares", "DOOR": "Masonite International Corporation Ordinary Shares (Canada)", "DORM": "Dorman Products Inc. Common Stock", "DOV": "Dover Corporation Common Stock", "DOW": "Dow Inc. Common Stock ", "DOX": "Amdocs Limited Ordinary Shares", "DOYU": "DouYu International Holdings Limited ADS", "DPG": "Duff & Phelps Utility and Infrastructure Fund Inc.", "DPRO": "Draganfly Inc. Common Shares", "DPW": "Ault Global Holdings Inc. Common Stock", "DPZ": "Domino's Pizza Inc Common Stock", "DQ": "DAQO New Energy Corp. American Depositary Shares each representing five ordinary shares", "DRAY": "Macondray Capital Acquisition Corp. 
I Class A Ordinary Shares", "DRAYU": "Macondray Capital Acquisition Corp. I Unit", "DRAYW": "Macondray Capital Acquisition Corp. I Warrant", "DRD": "DRDGOLD Limited American Depositary Shares", "DRE": "Duke Realty Corporation Common Stock", "DRH": "Diamondrock Hospitality Company Common Stock", "DRH^A": "Diamondrock Hospitality Company 8.250% Series A Cumulative Redeemable Preferred Stock", "DRI": "Darden Restaurants Inc. Common Stock", "DRIO": "DarioHealth Corp. Common Stock", "DRMA": "Dermata Therapeutics Inc. Common Stock", "DRMAW": "Dermata Therapeutics Inc. Warrant", "DRNA": "Dicerna Pharmaceuticals Inc. Common Stock", "DRQ": "Dril-Quip Inc. Common Stock", "DRRX": "DURECT Corporation Common Stock", "DRTT": "DIRTT Environmental Solutions Ltd. Common Shares", "DRVN": "Driven Brands Holdings Inc. Common Stock", "DS": "Drive Shack Inc.", "DS^B": "Drive Shack Inc. Preferred Series B", "DS^C": "Drive Shack Inc. Preferred Series C", "DS^D": "Drive Shack Inc. Pfd Ser D", "DSAC": "Duddell Street Acquisition Corp. Class A Ordinary Shares", "DSACW": "Duddell Street Acquisition Corp. Warrant", "DSEY": "Diversey Holdings Ltd. Ordinary Shares", "DSGN": "Design Therapeutics Inc. Common Stock", "DSGX": "Descartes Systems Group Inc. (The) Common Stock", "DSKE": "Daseke Inc. Common Stock", "DSKEW": "Daseke Inc. Warrant", "DSL": "DoubleLine Income Solutions Fund Common Shares of Beneficial Interests", "DSM": "BNY Mellon Strategic Municipal Bond Fund Inc. Common Stock", "DSP": "Viant Technology Inc. Class A Common Stock", "DSPG": "DSP Group Inc. Common Stock", "DSS": "DSS Inc. Common Stock", "DSU": "Blackrock Debt Strategies Fund Inc. Common Stock", "DSWL": "Deswell Industries Inc. Common Shares", "DSX": "Diana Shipping inc. common stock", "DSX^B": "Diana Shipping Inc. Perpetual Preferred Shares Series B (Marshall Islands)", "DT": "Dynatrace Inc. Common Stock", "DTB": "DTE Energy Company 2020 Series G 4.375% Junior Subordinated Debentures due 2080", "DTE": "DTE Energy Company Common Stock", "DTEA": "DAVIDsTEA Inc. Common Stock", "DTF": "DTF Tax-Free Income Inc. Common Stock", "DTIL": "Precision BioSciences Inc. Common Stock", "DTLA^": "Brookfield DTLA Inc. 7.625% Series A Cumulative Redeemable Preferred Stock", "DTM": "DT Midstream Inc. Common Stock ", "DTOC": "Digital Transformation Opportunities Corp. Class A Common Stock", "DTOCW": "Digital Transformation Opportunities Corp. Warrant", "DTP": "DTE Energy Company 6.25% Corporate Units", "DTRTU": "DTRT Health Acquisition Corp. Unit", "DTSS": "Datasea Inc. Common Stock", "DTST": "Data Storage Corporation Common Stock", "DTSTW": "Data Storage Corporation Warrant", "DTW": "DTE Energy Company 2017 Series E 5.25% Junior Subordinated Debentures due 2077", "DTY": "DTE Energy Company 2016 Series F 6.00% Junior Subordinated Debentures due 2076", "DUK": "Duke Energy Corporation (Holding Company) Common Stock", "DUK^A": "Duke Energy Corporation Depositary Shares each representing a 1/1000th interest in a share of 5.75% Series A Cumulative Redeemable Perpetual Preferred Stock", "DUKB": "Duke Energy Corporation 5.625% Junior Subordinated Debentures due 2078", "DUKH": "Duke Energy Corporation 5.125% Junior Subordinated Debentures due 2073", "DUNE": "Dune Acquisition Corporation Class A Common Stock", "DUNEU": "Dune Acquisition Corporation Unit", "DUNEW": "Dune Acquisition Corporation Warrant", "DUO": "Fangdd Network Group Ltd. American Depositary Shares", "DUOL": "Duolingo Inc. Class A Common Stock", "DUOT": "Duos Technologies Group Inc. 
Common Stock", "DV": "DoubleVerify Holdings Inc. Common Stock", "DVA": "DaVita Inc. Common Stock", "DVAX": "Dynavax Technologies Corporation Common Stock", "DVD": "Dover Motorsports Inc. Common Stock", "DVN": "Devon Energy Corporation Common Stock", "DWAC": "Digital World Acquisition Corp. Class A Common Stock", "DWACU": "Digital World Acquisition Corp. Units", "DWACW": "Digital World Acquisition Corp. Warrants", "DWIN": "Delwinds Insurance Acquisition Corp. Class A Common Stock", "DWSN": "Dawson Geophysical Company Common Stock", "DX": "Dynex Capital Inc. Common Stock", "DX^C": "Dynex Capital Inc. 6.900% Series C Fixed-to-Floating Rate Cumulative Redeemable Preferred Stock", "DXC": "DXC Technology Company Common Stock ", "DXCM": "DexCom Inc. Common Stock", "DXF": "Dunxin Financial Holdings Limited American Depositary Shares", "DXLG": "Destination XL Group Inc. Common Stock", "DXPE": "DXP Enterprises Inc. Common Stock", "DXR": "Daxor Corporation Common Stock", "DXYN": "Dixie Group Inc. (The) Common Stock", "DY": "Dycom Industries Inc. Common Stock", "DYAI": "Dyadic International Inc. Common Stock", "DYFN": "Angel Oak Dynamic Financial Strategies Income Term Trust Common Shares of Beneficial Interest", "DYN": "Dyne Therapeutics Inc. Common Stock", "DYNS": "Dynamics Special Purpose Corp. Class A Common Stock", "DYNT": "Dynatronics Corporation Common Stock", "DZSI": "DZS Inc. Common Stock", "E": "ENI S.p.A. Common Stock", "EA": "Electronic Arts Inc. Common Stock", "EAC": "Edify Acquisition Corp. Class A Common Stock", "EACPU": "Edify Acquisition Corp. Units", "EACPW": "Edify Acquisition Corp. Warrant", "EAD": "Wells Fargo Income Opportunities Fund Common Shares", "EAF": "GrafTech International Ltd. Common Stock", "EAI": "Entergy Arkansas LLC First Mortgage Bonds 4.875% Series Due September 1 2066", "EAR": "Eargo Inc. Common Stock", "EARN": "Ellington Residential Mortgage REIT Common Shares of Beneficial Interest", "EAST": "Eastside Distilling Inc. Common Stock", "EAT": "Brinker International Inc. Common Stock", "EB": "Eventbrite Inc. Class A Common Stock", "EBACU": "European Biotech Acquisition Corp. Units", "EBACW": "European Biotech Acquisition Corp. Warrant", "EBAY": "eBay Inc. Common Stock", "EBC": "Eastern Bankshares Inc. Common Stock", "EBET": "Esports Technologies Inc. Common Stock", "EBF": "Ennis Inc. Common Stock", "EBIX": "Ebix Inc. Common Stock", "EBMT": "Eagle Bancorp Montana Inc. Common Stock", "EBON": "Ebang International Holdings Inc. Class A Ordinary Shares", "EBR": "Centrais Electricas Brasileiras S A American Depositary Shares (Each representing one Common Share)", "EBS": "Emergent Biosolutions Inc. Common Stock", "EBSB": "Meridian Bancorp Inc. Common Stock", "EBTC": "Enterprise Bancorp Inc Common Stock", "EC": "Ecopetrol S.A. American Depositary Shares", "ECAT": "BlackRock ESG Capital Allocation Trust Common Shares of Beneficial Interest", "ECC ": "Eagle Point Credit Company Inc. Common Stock", "ECCB": "Eagle Point Credit Company Inc. 7.75% Series B Term Preferred Stock due 2026", "ECCC": "Eagle Point Credit Company Inc. 6.50% Series C Term Preferred Stock due 2031", "ECCW": "Eagle Point Credit Company Inc. 6.75% Notes due 2031", "ECCX": "Eagle Point Credit Company Inc. 6.6875% Notes due 2028", "ECCY": "Eagle Point Credit Company Inc. 6.75% Notes due 2027", "ECF": "Ellsworth Growth and Income Fund Ltd.", "ECHO": "Echo Global Logistics Inc. Common Stock", "ECL": "Ecolab Inc. Common Stock", "ECOL": "US Ecology Inc Common Stock", "ECOLW": "US Ecology Inc. 
Warrant", "ECOM ": "ChannelAdvisor Corporation Common Stock", "ECOR": "electroCore Inc. Common Stock", "ECPG": "Encore Capital Group Inc Common Stock", "ECVT": "Ecovyst Inc. Common Stock", "ED": "Consolidated Edison Inc. Common Stock", "EDAP": "EDAP TMS S.A. American Depositary Shares", "EDD": "Morgan Stanley Emerging Markets Domestic Debt Fund Inc. Morgan Stanley Emerging Markets Domestic Debt Fund Inc. Common Stock", "EDF": "Stone Harbor Emerging Markets Income Fund Common Shares of Beneficial Interest", "EDI": "Stone Harbor Emerging Markets Total Income Fund Common Shares of Beneficial Interests", "EDIT": "Editas Medicine Inc. Common Stock", "EDN": "Empresa Distribuidora Y Comercializadora Norte S.A. (Edenor) Empresa Distribuidora Y Comercializadora Norte S.A. (Edenor) American Depositary Shares", "EDNCU": "Endurance Acquisition Corp. Unit", "EDR": "Endeavor Group Holdings Inc. Class A Common Stock", "EDRY": "EuroDry Ltd. Common Shares ", "EDSA": "Edesa Biotech Inc. Common Shares", "EDTK": "Skillful Craftsman Education Technology Limited Ordinary Share", "EDTX": "EdtechX Holdings Acquisition Corp. II Class A common stock", "EDTXU": "EdtechX Holdings Acquisition Corp. II Unit", "EDTXW": "EdtechX Holdings Acquisition Corp. II Warrant", "EDU": "New Oriental Education & Technology Group Inc. Sponsored ADR representing 1 Ordinary Share (Cayman Islands)", "EDUC": "Educational Development Corporation Common Stock", "EEA": "The European Equity Fund Inc. Common Stock", "EEFT": "Euronet Worldwide Inc. Common Stock", "EEIQ": "Elite Education Group International Ltd. Common Stock", "EEX": "Emerald Holding Inc. Common Stock", "EFC": "Ellington Financial Inc. Common Stock ", "EFC^A": "Ellington Financial Inc. 6.750% Series A Fixed-to-Floating Rate Cumulative Redeemable Preferred Stock", "EFL": "Eaton Vance Floating-Rate 2022 Target Term Trust Common Shares of Beneficial Interest", "EFOI": "Energy Focus Inc. Common Stock", "EFR": "Eaton Vance Senior Floating-Rate Fund Common Shares of Beneficial Interest", "EFSC": "Enterprise Financial Services Corporation Common Stock", "EFT": "Eaton Vance Floating Rate Income Trust Common Shares of Beneficial Interest", "EFTR": "eFFECTOR Therapeutics Inc. Common Stock", "EFTRW": "eFFECTOR Therapeutics Inc. Warrant", "EFX": "Equifax Inc. Common Stock", "EGAN": "eGain Corporation Common Stock", "EGBN": "Eagle Bancorp Inc. Common Stock", "EGF": "Blackrock Enhanced Government Fund Inc. Common Stock", "EGGF": "EG Acquisition Corp. Class A Common Stock", "EGHT": "8x8 Inc Common Stock", "EGLE": "Eagle Bulk Shipping Inc. Common Stock", "EGLX": "Enthusiast Gaming Holdings Inc. Common Stock", "EGO": "Eldorado Gold Corporation Ordinary Shares", "EGP": "EastGroup Properties Inc. Common Stock", "EGRX": "Eagle Pharmaceuticals Inc. Common Stock", "EGY": "VAALCO Energy Inc. Common Stock", "EH": "EHang Holdings Limited ADS", "EHC": "Encompass Health Corporation Common Stock", "EHI": "Western Asset Global High Income Fund Inc Common Stock", "EHTH": "eHealth Inc. Common Stock", "EIC": "Eagle Point Income Company Inc. Common Stock", "EIG": "Employers Holdings Inc Common Stock", "EIGR": "Eiger BioPharmaceuticals Inc. Common Stock", "EIM": "Eaton Vance Municipal Bond Fund Common Shares of Beneficial Interest $.01 par value", "EIX": "Edison International Common Stock", "EJFA": "EJF Acquisition Corp. Class A Ordinary Share", "EJFAU": "EJF Acquisition Corp. Unit", "EJFAW": "EJF Acquisition Corp. 
Warrant", "EJH": "E-Home Household Service Holdings Limited Ordinary Shares", "EKSO": "Ekso Bionics Holdings Inc. Common Stock", "EL": "Estee Lauder Companies Inc. (The) Common Stock", "ELA": "Envela Corporation Common Stock", "ELAN": "Elanco Animal Health Incorporated Common Stock", "ELAT": "Elanco Animal Health Incorporated 5.00% Tangible Equity Units", "ELC": "Entergy Louisiana Inc. Collateral Trust Mortgage Bonds 4.875 % Series due September 1 2066", "ELDN": "Eledon Pharmaceuticals Inc. Common Stock", "ELEV": "Elevation Oncology Inc. Common stock", "ELF": "e.l.f. Beauty Inc. Common Stock", "ELLO": "Ellomay Capital Ltd Ordinary Shares (Israel)", "ELMD": "Electromed Inc. Common Stock", "ELMS": "Electric Last Mile Solutions Inc. Class A Common stock", "ELMSW": "Electric Last Mile Solutions Inc. Warrant", "ELOX": "Eloxx Pharmaceuticals Inc. Common Stock", "ELP": "Companhia Paranaense de Energia (COPEL) American Depositary Shares (each representing one Unit consisting one Common Share and four non-voting Class B Preferred Shares)", "ELS": "Equity Lifestyle Properties Inc. Common Stock", "ELSE": "Electro-Sensors Inc. Common Stock", "ELTK": "Eltek Ltd. Ordinary Shares", "ELVT": "Elevate Credit Inc. Common Stock", "ELY": "Callaway Golf Company Common Stock", "ELYM": "Eliem Therapeutics Inc Common Stock", "ELYS": "Elys Game Technology Corp. Common Stock", "EM": "Smart Share Global Limited American Depositary Shares", "EMAN": "eMagin Corporation Common Stock", "EMCF": "Emclaire Financial Corp Common Stock", "EMD": "Western Asset Emerging Markets Debt Fund Inc Common Stock", "EME": "EMCOR Group Inc. Common Stock", "EMF": "Templeton Emerging Markets Fund Common Stock", "EMKR": "EMCORE Corporation Common Stock", "EML": "Eastern Company (The) Common Stock", "EMN": "Eastman Chemical Company Common Stock", "EMO": "ClearBridge Energy Midstream Opportunity Fund Inc. Common Stock", "EMP": "Entergy Mississippi LLC First Mortgage Bonds 4.90% Series Due October 1 2066", "EMR": "Emerson Electric Company Common Stock", "EMX": "EMX Royalty Corporation Common Shares (Canada)", "ENB": "Enbridge Inc Common Stock", "ENBA": "Enbridge Inc 6.375% Fixed-to-Floating Rate Subordinated Notes Series 2018-B due 2078", "ENBL": "Enable Midstream Partners LP Common Units representing limited partner interests", "ENDP": "Endo International plc Ordinary Shares", "ENFA": "890 5th Avenue Partners Inc. Class A Common Stock", "ENFAU": "890 5th Avenue Partners Inc. Unit", "ENFAW": "890 5th Avenue Partners Inc. Warrant", "ENG": "ENGlobal Corporation Common Stock", "ENIA": "Enel Americas S.A. American Depositary Shares", "ENIC": "Enel Chile S.A. American Depositary Shares (Each representing 50 shares of Common Stock)", "ENJ": "Entergy New Orleans LLC First Mortgage Bonds 5.0% Series due December 1 2052", "ENLC": "EnLink Midstream LLC Common Units representing Limited Partner Interests", "ENLV": "Enlivex Therapeutics Ltd. Ordinary Shares", "ENNV": "ECP Environmental Growth Opportunities Corp. Class A Common Stock", "ENNVU": "ECP Environmental Growth Opportunities Corp. Unit", "ENNVW": "ECP Environmental Growth Opportunities Corp. Warrants", "ENO": "Entergy New Orleans LLC First Mortgage Bonds 5.50% Series due April 1 2066", "ENOB": "Enochian Biosciences Inc. Common Stock", "ENPC": "Executive Network Partnering Corporation Class A Common Stock", "ENPH": "Enphase Energy Inc. Common Stock", "ENR": "Energizer Holdings Inc. Common Stock", "ENR^A": "Energizer Holdings Inc. 
Common Stock", "ICVX": "Icosavax Inc. Common Stock", "ID": "PARTS iD Inc. Class A Common Stock", "IDA": "IDACORP Inc. Common Stock", "IDBA": "IDEX Biometrics ASA American Depositary Shares", "IDCC": "InterDigital Inc. Common Stock", "IDE": "Voya Infrastructure Industrials and Materials Fund Common Shares of Beneficial Interest", "IDEX": "Ideanomics Inc. Common Stock", "IDN": "Intellicheck Inc. Common Stock", "IDRA": "Idera Pharmaceuticals Inc. Common Stock", "IDT": "IDT Corporation Class B Common Stock", "IDW": "IDW Media Holdings Class B Common Stock", "IDXX": "IDEXX Laboratories Inc. Common Stock", "IDYA": "IDEAYA Biosciences Inc. Common Stock", "IEA": "Infrastructure and Energy Alternatives Inc. Common Stock", "IEAWW": "Infrastructure and Energy Alternatives Inc. Warrant", "IEC": "IEC Electronics Corp. Common Stock", "IEP": "Icahn Enterprises L.P. Common Stock", "IESC": "IES Holdings Inc. Common Stock", "IEX": "IDEX Corporation Common Stock", "IFBD": "Infobird Co. Ltd Ordinary Shares", "IFF": "Internationa Flavors & Fragrances Inc. Common Stock", "IFMK": "iFresh Inc. Common Stock", "IFN": "India Fund Inc. (The) Common Stock", "IFRX": "InflaRx N.V. Common Stock", "IFS": "Intercorp Financial Services Inc. Common Shares", "IGA": "Voya Global Advantage and Premium Opportunity Fund Common Shares of Beneficial Interest", "IGAC": "IG Acquisition Corp. Class A Common Stock", "IGACU": "IG Acquisition Corp. Unit", "IGACW": "IG Acquisition Corp. Warrant", "IGC": "India Globalization Capital Inc. Common Stock", "IGD": "Voya Global Equity Dividend and Premium Opportunity Fund", "IGI": "Western Asset Investment Grade Defined Opportunity Trust Inc. Common Stock", "IGIC": "International General Insurance Holdings Ltd. Ordinary Share", "IGICW": "International General Insurance Holdings Ltd. Warrants expiring 03/17/2025", "IGMS": "IGM Biosciences Inc. Common Stock", "IGNYU": "Ignyte Acquisition Corp. Unit", "IGNYW": "Ignyte Acquisition Corp. Warrant", "IGR": "CBRE Clarion Global Real Estate Income Fund Common Stock", "IGT": "International Game Technology Ordinary Shares", "IH": "iHuman Inc. American depositary shares each representing five Class A ordinary shares", "IHC": "Independence Holding Company Common Stock", "IHD": "Voya Emerging Markets High Income Dividend Equity Fund Common Shares", "IHG": "Intercontinental Hotels Group American Depositary Shares (Each representing one Ordinary Share)", "IHIT": "Invesco High Income 2023 Target Term Fund Common Shares of Beneficial Interest", "IHRT": "iHeartMedia Inc. Class A Common Stock", "IHT": "InnSuites Hospitality Trust Shares of Beneficial Interest", "IHTA": "Invesco High Income 2024 Target Term Fund Common Shares of Beneficial Interest No par value per share", "IIAC": "Investindustrial Acquisition Corp. Class A Ordinary Shares", "IIF": "Morgan Stanley India Investment Fund Inc. Common Stock", "III": "Information Services Group Inc. Information Services Group Inc. Common Stock", "IIII": "INSU Acquisition Corp. III Class A Common Stock", "IIIIU": "INSU Acquisition Corp. III Unit", "IIIIW": "INSU Acquisition Corp. III Warrant", "IIIN": "Insteel Industries Inc. Common Stock", "IIIV": "i3 Verticals Inc. Class A Common Stock", "IIM": "Invesco Value Municipal Income Trust Common Stock", "IIN": "Intricon Corporation Common Stock", "IINN": "Inspira Technologies Oxy B.H.N. Ltd. Ordinary Shares", "IINNW": "Inspira Technologies Oxy B.H.N. Ltd. Warrant", "IIPR": "Innovative Industrial Properties Inc. Common Stock", "IIPR^A": "Innovative Industrial Properties Inc. 
9.00% Series A Cumulative Redeemable Preferred Stock", "IIVI": "II-VI Incorporated Common Stock", "IIVIP": "II-VI Incorporated 6.00% Series A Mandatory Convertible Preferred Stock", "IKNA": "Ikena Oncology Inc. Common Stock", "IKNX": "Ikonics Corporation", "IKT": "Inhibikase Therapeutics Inc. Common Stock", "ILMN": "Illumina Inc. Common Stock", "ILPT": "Industrial Logistics Properties Trust Common Shares of Beneficial Interest", "IMAB": "I-MAB American Depositary Shares", "IMAC": "IMAC Holdings Inc. Common Stock", "IMACW": "IMAC Holdings Inc. Warrant", "IMAQ": "International Media Acquisition Corp. Class A Common Stock", "IMAQR": "International Media Acquisition Corp. Rights", "IMAQU": "International Media Acquisition Corp. Unit", "IMAQW": "International Media Acquisition Corp. Warrants", "IMAX": "Imax Corporation Common Stock", "IMBI": "iMedia Brands Inc. Class A Common Stock", "IMBIL": "iMedia Brands Inc. 8.5% Senior Notes Due 2026", "IMCC": "IM Cannabis Corp. Common Shares", "IMCR": "Immunocore Holdings plc American Depositary Shares", "IMGN": "ImmunoGen Inc. Common Stock", "IMGO": "Imago BioSciences Inc. Common stock", "IMH": "Impac Mortgage Holdings Inc. Common Stock ", "IMKTA": "Ingles Markets Incorporated Class A Common Stock", "IMMP": "Immutep Limited American Depositary Shares", "IMMR": "Immersion Corporation Common Stock", "IMNM": "Immunome Inc. Common Stock", "IMO": "Imperial Oil Limited Common Stock", "IMOS": "ChipMOS TECHNOLOGIES INC. American Depositary Shares", "IMPL": "Impel NeuroPharma Inc. Common Stock", "IMPX": "AEA-Bridges Impact Corp. Class A Ordinary Shares", "IMRA": "IMARA Inc. Common Stock", "IMRN": "Immuron Limited American Depositary Shares", "IMRNW": "Immuron Limited Warrants", "IMRX": "Immuneering Corporation Class A Common Stock", "IMTE": "Integrated Media Technology Limited Ordinary Shares", "IMTX": "Immatics N.V. Ordinary Shares", "IMTXW": "Immatics N.V. Warrants", "IMUX": "Immunic Inc. Common Stock", "IMV": "IMV Inc. Common Shares", "IMVT": "Immunovant Inc. Common Stock", "IMXI": "International Money Express Inc. Common Stock", "INAB": "IN8bio Inc. Common Stock", "INBK": "First Internet Bancorp Common Stock", "INBKZ": "First Internet Bancorp 6.0% Fixed-to-Floating Rate Subordinated Notes Due 2029", "INBX": "Inhibrx Inc. Common Stock", "INCR": "Intercure Ltd. Ordinary Shares", "INCY": "Incyte Corp. Common Stock", "INDB": "Independent Bank Corp. Common Stock", "INDI": "indie Semiconductor Inc. Class A Common Stock", "INDIW": "indie Semiconductor Inc. Warrant", "INDO": "Indonesia Energy Corporation Limited Ordinary Shares", "INDP": "Indaptus Therapeutics Inc. Common Stock", "INDT": "INDUS Realty Trust Inc. (MD) Common Stock", "INFI": "Infinity Pharmaceuticals Inc. Common Stock", "INFN": "Infinera Corporation Common Stock", "INFO": "IHS Markit Ltd. Common Shares", "INFU": "InfuSystems Holdings Inc. Common Stock", "INFY": "Infosys Limited American Depositary Shares", "ING": "ING Group N.V. Common Stock", "INGN": "Inogen Inc Common Stock", "INGR": "Ingredion Incorporated Common Stock", "INKA": "KludeIn I Acquisition Corp. Class A Common Stock", "INKAU": "KludeIn I Acquisition Corp. Unit", "INKAW": "KludeIn I Acquisition Corp. Warrant", "INM": "InMed Pharmaceuticals Inc. Common Shares", "INMB": "INmune Bio Inc. Common stock", "INMD": "InMode Ltd. Ordinary Shares", "INN": "Summit Hotel Properties Inc. Common Stock", "INN^E": "Summit Hotel Properties Inc. 6.250% Series E Cumulative Redeemable Preferred Stock", "INN^F": "Summit Hotel Properties Inc. 
5.875% Series F Cumulative Redeemable Preferred Stock $0.01 par value per share", "INNV": "InnovAge Holding Corp. Common Stock", "INO": "Inovio Pharmaceuticals Inc. Common Stock", "INOD": "Innodata Inc. Common Stock", "INOV": "Inovalon Holdings Inc. Class A Common Stock", "INPX": "Inpixon Common Stock", "INS": "Intelligent Systems Corporation Common Stock", "INSE": "Inspired Entertainment Inc. Common Stock", "INSG": "Inseego Corp. Common Stock", "INSI": "Insight Select Income Fund", "INSM": "Insmed Inc. Common Stock", "INSP": "Inspire Medical Systems Inc. Common Stock", "INST": "Instructure Holdings Inc. Common Stock", "INSW": "International Seaways Inc. Common Stock ", "INSW^A": "International Seaways Inc. 8.50% Senior Notes due June 30 2023", "INT": "World Fuel Services Corporation Common Stock", "INTA": "Intapp Inc. Common Stock", "INTC": "Intel Corporation Common Stock", "INTG": "Intergroup Corporation (The) Common Stock", "INTT": "inTest Corporation Common Stock", "INTU": "Intuit Inc. Common Stock", "INTZ": "Intrusion Inc. Common Stock", "INUV": "Inuvo Inc.", "INVA": "Innoviva Inc. Common Stock", "INVE": "Identiv Inc. Common Stock", "INVH": "Invitation Homes Inc. Common Stock", "INVO": "INVO BioScience Inc. Common Stock", "INVZ": "Innoviz Technologies Ltd. Ordinary shares", "INVZW": "Innoviz Technologies Ltd. Warrant", "INZY": "Inozyme Pharma Inc. Common Stock", "IO": "Ion Geophysical Corporation Common Stock", "IONM": "Assure Holdings Corp. Common Stock", "IONQ": "IonQ Inc. Common Stock", "IONS": "Ionis Pharmaceuticals Inc. Common Stock", "IOR": "Income Opportunity Realty Investors Inc. Common Stock", "IOSP": "Innospec Inc. Common Stock", "IOVA": "Iovance Biotherapeutics Inc. Common Stock", "IP": "International Paper Company Common Stock", "IPA": "ImmunoPrecise Antibodies Ltd. Common Stock", "IPAR": "Inter Parfums Inc. Common Stock", "IPAXU": "Inflection Point Acquisition Corp. Units", "IPDN": "Professional Diversity Network Inc. Common Stock", "IPG": "Interpublic Group of Companies Inc. (The) Common Stock", "IPGP": "IPG Photonics Corporation Common Stock", "IPHA": "Innate Pharma S.A. ADS", "IPI": "Intrepid Potash Inc Common Stock", "IPLDP": "Interstate Power & Light Company Perp Prd Ser D", "IPOD": "Social Capital Hedosophia Holdings Corp. IV Class A Ordinary Shares", "IPOF": "Social Capital Hedosophia Holdings Corp. VI Class A Ordinary Shares", "IPSC": "Century Therapeutics Inc. Common Stock", "IPVA": "InterPrivate II Acquisition Corp. Class A Common Stock", "IPVF": "InterPrivate III Financial Partners Inc. Class A Common Stock", "IPVI": "InterPrivate IV InfraTech Partners Inc. Class A Common Stock", "IPVIU": "InterPrivate IV InfraTech Partners Inc. Units", "IPVIW": "InterPrivate IV InfraTech Partners Inc. Warrant", "IPW": "iPower Inc. Common Stock", "IPWR": "Ideal Power Inc. Common Stock", "IQ": "iQIYI Inc. American Depositary Shares", "IQI": "Invesco Quality Municipal Income Trust Common Stock", "IQV": "IQVIA Holdings Inc. Common Stock", "IR": "Ingersoll Rand Inc. Common Stock", "IRBT": "iRobot Corporation Common Stock", "IRCP": "IRSA Propiedades Comerciales S.A. American Depositary Shares", "IRDM": "Iridium Communications Inc Common Stock", "IRIX": "IRIDEX Corporation Common Stock", "IRL": "New Ireland Fund Inc (The) Common Stock", "IRM": "Iron Mountain Incorporated (Delaware)Common Stock REIT", "IRMD": "iRadimed Corporation Common Stock", "IRNT": "IronNet Inc. Common Stock", "IROQ": "IF Bancorp Inc. Common Stock", "IRS": "IRSA Inversiones Y Representaciones S.A. 
Common Stock", "IRT": "Independence Realty Trust Inc. Common Stock", "IRTC": "iRhythm Technologies Inc. Common Stock", "IRWD": "Ironwood Pharmaceuticals Inc. Class A Common Stock", "IS": "ironSource Ltd. Class A Ordinary Shares", "ISAA": "Iron Spark I Inc. Class A Common Stock", "ISBC": "Investors Bancorp Inc. Common Stock", "ISD": "PGIM High Yield Bond Fund Inc.", "ISDR": "Issuer Direct Corporation Common Stock", "ISEE": "IVERIC bio Inc. Common Stock", "ISIG": "Insignia Systems Inc. Common Stock", "ISLE": "Isleworth Healthcare Acquisition Corporation Common stock", "ISLEW": "Isleworth Healthcare Acquisition Corporation Warrant", "ISOS": "Isos Acquisition Corporation Class A Ordinary Shares", "ISPC": "iSpecimen Inc. Common Stock", "ISR": "IsoRay Inc. Common Stock (DE)", "ISRG": "Intuitive Surgical Inc. Common Stock", "ISSC": "Innovative Solutions and Support Inc. Common Stock", "ISTR": "Investar Holding Corporation Common Stock", "ISUN": "iSun Inc. Common Stock", "IT": "Gartner Inc. Common Stock", "ITAC": "Industrial Tech Acquisitions Inc. Class A common stock", "ITACU": "Industrial Tech Acquisitions Inc. Unit", "ITACW": "Industrial Tech Acquisitions Inc. Warrant", "ITCB": "Itau CorpBanca American Depositary Shares (each representing 1500 shares of Common Stock no par value)", "ITCI": "Intra-Cellular Therapies Inc. Common Stock", "ITGR": "Integer Holdings Corporation Common Stock", "ITHX": "ITHAX Acquisition Corp. Class A Ordinary Shares", "ITHXU": "ITHAX Acquisition Corp. Unit", "ITHXW": "ITHAX Acquisition Corp. Warrant", "ITI": "Iteris Inc. Common Stock", "ITIC": "Investors Title Company Common Stock", "ITMR": "Itamar Medical Ltd. American Depository Shares", "ITOS": "iTeos Therapeutics Inc. Common Stock", "ITP": "IT Tech Packaging Inc. Common Stock", "ITQ": "Itiquira Acquisition Corp. Class A Ordinary Shares", "ITQRU": "Itiquira Acquisition Corp. Unit", "ITQRW": "Itiquira Acquisition Corp. Warrant", "ITRG": "Integra Resources Corp. Common Shares", "ITRI": "Itron Inc. Common Stock", "ITRM": "Iterum Therapeutics plc Ordinary Share", "ITRN": "Ituran Location and Control Ltd. Ordinary Shares", "ITT": "ITT Inc. Common Stock ", "ITUB": "Itau Unibanco Banco Holding SA American Depositary Shares (Each repstg 500 Preferred shares)", "ITW": "Illinois Tool Works Inc. Common Stock", "IVA": "Inventiva S.A. American Depository Shares", "IVAC": "Intevac Inc. Common Stock", "IVAN": "Ivanhoe Capital Acquisition Corp. Class A Ordinary Shares", "IVC": "Invacare Corporation Common Stock", "IVH": "Delaware Ivy High Income Opportunities Fund", "IVR": "INVESCO MORTGAGE CAPITAL INC Common Stock", "IVR^B": "Invesco Mortgage Capital Inc. Preferred Series B Cum Fxd to Fltg", "IVR^C": "INVESCO MORTGAGE CAPITAL INC 7.5% Fixed-to-Floating Series C Cumulative Redeemable Preferred Stock Liquation Preference $25.00 per Share", "IVZ": "Invesco Ltd Common Stock", "IX": "Orix Corp Ads Common Stock", "IZEA": "IZEA Worldwide Inc. Common Stock", "J": "Jacobs Engineering Group Inc. Common Stock", "JACK": "Jack In The Box Inc. Common Stock", "JAGX": "Jaguar Health Inc. Common Stock", "JAKK": "JAKKS Pacific Inc. Common Stock", "JAMF": "Jamf Holding Corp. Common Stock", "JAN": "JanOne Inc. Common Stock (NV)", "JANX": "Janux Therapeutics Inc. 
Common Stock", "JAQC": "Jupiter Acquisition Corporation Common stock", "JAQCU": "Jupiter Acquisition Corporation Units", "JAQCW": "Jupiter Acquisition Corporation Warrants", "JATT": "JATT Acquisition Corp Class A Ordinary Shares", "JAZZ": "Jazz Pharmaceuticals plc Common Stock (Ireland)", "JBGS": "JBG SMITH Properties Common Shares ", "JBHT": "J.B. Hunt Transport Services Inc. Common Stock", "JBI": "Janus International Group Inc. Common Stock", "JBL": "Jabil Inc. Common Stock", "JBLU": "JetBlue Airways Corporation Common Stock", "JBSS": "John B. Sanfilippo & Son Inc. Common Stock", "JBT": "John Bean Technologies Corporation Common Stock", "JCE": "Nuveen Core Equity Alpha Fund Nuveen Core Equity Alpha Fund Common Shares of Beneficial Interest", "JCI": "Johnson Controls International plc Ordinary Share", "JCIC": "Jack Creek Investment Corp. Class A Ordinary Shares", "JCICU": "Jack Creek Investment Corp. Units", "JCICW": "Jack Creek Investment Corp. Warrants", "JCO": "Nuveen Credit Opportunities 2022 Target Term Fund Common Shares of Beneficial Interest", "JCOM": "j2 Global Inc. Common Stock", "JCS": "Communications Systems Inc. Common Stock", "JCTCF": "Jewett-Cameron Trading Company Common Shares", "JD": "JD.com Inc. American Depositary Shares", "JDD": "Nuveen Diversified Dividend and Income Fund Shares of Beneficial Interest", "JEF": "Jefferies Financial Group Inc. Common Stock", "JELD": "JELD-WEN Holding Inc. Common Stock", "JEMD": "Nuveen Emerging Markets Debt 2022 Target Term Fund Common Shares of Beneficial Interest $0.01 par value per share", "JEQ": "Aberdeen Japan Equity Fund Inc. Common Stock", "JFIN": "Jiayin Group Inc. American Depositary Shares", "JFR": "Nuveen Floating Rate Income Fund Common Stock", "JFU": "9F Inc. American Depositary Shares", "JG": "Aurora Mobile Limited American Depositary Shares", "JGH": "Nuveen Global High Income Fund Common Shares of Beneficial Interest", "JHAA": "Nuveen Corporate Income 2023 Target Term Fund", "JHB": "Nuveen Corporate Income November 2021 Target Term Fund", "JHG": "Janus Henderson Group plc Ordinary Shares", "JHI": "John Hancock Investors Trust Common Stock", "JHS": "John Hancock Income Securities Trust Common Stock", "JHX": "James Hardie Industries plc American Depositary Shares (Ireland)", "JILL": "J. Jill Inc. Common Stock", "JJSF": "J & J Snack Foods Corp. Common Stock", "JKHY": "Jack Henry & Associates Inc. Common Stock", "JKS": "JinkoSolar Holding Company Limited American Depositary Shares (each representing 4 Common Shares)", "JLL": "Jones Lang LaSalle Incorporated Common Stock", "JLS": "Nuveen Mortgage and Income Fund", "JMIA": "Jumia Technologies AG American Depositary Shares each representing two Ordinary Shares", "JMM": "Nuveen Multi-Market Income Fund (MA)", "JMP": "JMP Group LLC Common Shares", "JMPNZ": "JMP Group LLC 6.875% Senior Notes due 2029", "JNCE": "Jounce Therapeutics Inc. Common Stock", "JNJ": "Johnson & Johnson Common Stock", "JNPR": "Juniper Networks Inc. Common Stock", "JOAN": "JOANN Inc. Common Stock", "JOB": "GEE Group Inc. Common Stock", "JOBS": "51job Inc. American Depositary Shares", "JOBY": "Joby Aviation Inc. Common Stock", "JOE": "St. Joe Company (The) Common Stock", "JOF": "Japan Smaller Capitalization Fund Inc Common Stock", "JOFF": "JOFF Fintech Acquisition Corp. Class A Common Stock", "JOFFU": "JOFF Fintech Acquisition Corp. Unit", "JOFFW": "JOFF Fintech Acquisition Corp. Warrant", "JOUT": "Johnson Outdoors Inc. 
Class A Common Stock", "JP": "Jupai Holdings Limited American Depositary Shares each representing six ordinary shares", "JPC": "Nuveen Preferred & Income Opportunities Fund", "JPI": "Nuveen Preferred and Income Term Fund Common Shares of Beneficial Interest", "JPM": "JP Morgan Chase & Co. Common Stock", "JPM^C": "J P Morgan Chase & Co Depositary Shares each representing a 1/400th interest in a share of 6.00% Non-Cumulative Preferred Stock Series EE", "JPM^D": "J P Morgan Chase & Co Depositary Shares each representing a 1/400th interest in a share of 5.75% Non-Cumulative Preferred Stock Series DD", "JPM^J": "J P Morgan Chase & Co Depositary Shares each representing a 1/400th interest in a share of JPMorgan Chase & Co. 4.75% Non-Cumulative Preferred Stock Series GG", "JPM^K": "J P Morgan Chase & Co Depositary Shares each representing a 1/400th interest in a share of 4.55% Non-Cumulative Preferred Stock Series JJ", "JPM^L": "J P Morgan Chase & Co Depositary Shares each representing a 1/400th interest in a share of 4.625% Non-Cumulative Preferred Stock Series LL", "JPM^M": "J P Morgan Chase & Co Depositary Shares each representing a 1/400th interest in a share of 4.20% Non-Cumulative Preferred Stock Series MM", "JPS": "Nuveen Preferred & Income Securities Fund", "JPT": "Nuveen Preferred and Income 2022 Term Fund Common Shares of Beneficial Interest", "JQC": "Nuveen Credit Strategies Income Fund Shares of Beneficial Interest", "JRI": "Nuveen Real Asset Income and Growth Fund Common Shares of Beneficial Interest", "JRJC": "China Finance Online Co. Limited American Depositary Shares", "JRO": "Nuveen Floating Rate Income Opportuntiy Fund Shares of Beneficial Interest", "JRS": "Nuveen Real Estate Income Fund Common Shares of Beneficial Interest", "JRSH": "Jerash Holdings (US) Inc. Common Stock", "JRVR": "James River Group Holdings Ltd. Common Shares", "JSD": "Nuveen Short Duration Credit Opportunities Fund Common Shares of Beneficial Interest", "JSM": "Navient Corporation 6% Senior Notes due December 15 2043", "JSPR": "Jasper Therapeutics Inc. Common Stock", "JSPRW": "Japer Therapeutics Inc. Warrants", "JT": "Jianpu Technology Inc. American depositary shares", "JTA": "Nuveen Tax-Advantaged Total Return Strategy Fund Common Share of Beneficial Interest", "JTD": "Nuveen Tax-Advantaged Dividend Growth Fund Common Shares of Beneficial Interest", "JUGG": "Jaws Juggernaut Acquisition Corporation Class A Ordinary Share", "JUGGU": "Jaws Juggernaut Acquisition Corporation Unit", "JUGGW": "Jaws Juggernaut Acquisition Corporation Warrant", "JUPW": "Jupiter Wellness Inc. Common Stock", "JUPWW": "Jupiter Wellness Inc. Warrant", "JVA": "Coffee Holding Co. Inc. Common Stock", "JW/A": "John Wiley & Sons Inc.", "JW/B": "John Wiley & Sons Inc.", "JWEL": "Jowell Global Ltd. Ordinary Shares", "JWN": "Nordstrom Inc. Common Stock", "JWSM": "Jaws Mustang Acquisition Corp. Class A Ordinary Shares", "JXN": "Jackson Financial Inc. Class A Common Stock ", "JYAC": "Jiya Acquisition Corp. Class A Common Stock", "JYNT": "The Joint Corp. Common Stock", "JZXN": "Jiuzi Holdings Inc. Ordinary Shares", "K": "Kellogg Company Common Stock", "KAHC": "KKR Acquisition Holdings I Corp. Class A Common Stock", "KAI": "Kadant Inc Common Stock", "KAII": "Kismet Acquisition Two Corp. Class A Ordinary Shares", "KAIIU": "Kismet Acquisition Two Corp. Unit", "KAIIW": "Kismet Acquisition Two Corp. Warrant", "KAIR": "Kairos Acquisition Corp. Class A Ordinary Shares", "KAIRU": "Kairos Acquisition Corp. Unit", "KAIRW": "Kairos Acquisition Corp. 
Warrant", "KALA": "Kala Pharmaceuticals Inc. Common Stock", "KALU": "Kaiser Aluminum Corporation Common Stock", "KALV": "KalVista Pharmaceuticals Inc. Common Stock", "KAMN": "Kaman Corporation Common Stock", "KAR": "KAR Auction Services Inc Common Stock", "KARO": "Karooooo Ltd. Ordinary Shares", "KAVL": "Kaival Brands Innovations Group Inc. Common Stock", "KB": "KB Financial Group Inc", "KBAL": "Kimball International Inc. Class B Common Stock", "KBH": "KB Home Common Stock", "KBNT": "Kubient Inc. Common Stock", "KBNTW": "Kubient Inc. Warrant", "KBR": "KBR Inc. Common Stock", "KBSF": "KBS Fashion Group Limited Common Stock", "KC": "Kingsoft Cloud Holdings Limited American Depositary Shares", "KCGI": "Kensington Capital Acquisition Corp. V", "KDMN": "Kadmon Holdings Inc. Common Stock", "KDNY": "Chinook Therapeutics Inc. Common Stock", "KDP": "Keurig Dr Pepper Inc. Common Stock", "KE": "Kimball Electronics Inc. Common Stock", "KELYA": "Kelly Services Inc. Class A Common Stock", "KELYB": "Kelly Services Inc. Class B Common Stock", "KEN": "Kenon Holdings Ltd. Ordinary Shares", "KEP": "Korea Electric Power Corporation Common Stock", "KEQU": "Kewaunee Scientific Corporation Common Stock", "KERN": "Akerna Corp. Common Stock", "KERNW": "Akerna Corp Warrant", "KEX": "Kirby Corporation Common Stock", "KEY": "KeyCorp Common Stock", "KEY^I": "KeyCorp Depositary Shares Each Representing a 1/40th Ownership Interest in a Share of Fixed-to-Floating Rate Perpetual Non-Cumulative Preferred Stock Series E", "KEY^J": "KeyCorp Depositary Shares each representing a 1/40th ownership interest in a share of Fixed Rate Perpetual Non-Cumulative Preferred Stock Series F", "KEY^K": "KeyCorp Depositary Shares each representing a 1/40th ownership interest in a share of Fixed Rate Perpetual Non-Cumulative Preferred Stock Series G", "KEYS": "Keysight Technologies Inc. Common Stock", "KF": "Korea Fund Inc. (The) New Common Stock", "KFFB": "Kentucky First Federal Bancorp Common Stock", "KFRC": "Kforce Inc. Common Stock", "KFS": "Kingsway Financial Services Inc. Common Stock (DE)", "KFY": "Korn Ferry Common Stock", "KGC": "Kinross Gold Corporation Common Stock", "KHC": "The Kraft Heinz Company Common Stock", "KIDS": "OrthoPediatrics Corp. Common Stock", "KIII": "Kismet Acquisition Three Corp. Class A Ordinary Shares", "KIIIU": "Kismet Acquisition Three Corp. Unit", "KIIIW": "Kismet Acquisition Three Corp. Warrant", "KIM": "Kimco Realty Corporation Common Stock", "KIM^L": "Kimco Realty Corporation Class L Depositary Shares each of which represents a one-one thousandth fractional interest in a share of 5.125% Class L Cumulative Redeemable Preferred Stock liquidation preference $25000.00 per share", "KIM^M": "Kimco Realty Corporation Class M Depositary Shares each of which represents a one-one thousandth fractional interest in a share of 5.25% Class M Cumulative Redeemable Preferred Stock liquidation preference $25000.00 per share", "KINS": "Kingstone Companies Inc. Common Stock", "KINZ": "KINS Technology Group Inc. Class A Common Stock", "KINZU": "KINS Technology Group Inc. Unit", "KINZW": "KINS Technology Group Inc. Warrant", "KIO": "KKR Income Opportunities Fund Common Shares", "KIQ": "Kelso Technologies Inc Ordinary Shares", "KIRK": "Kirkland's Inc. COMMONSTOCK", "KKR": "KKR & Co. Inc. Common Stock", "KKR^C": "KKR & Co. Inc. 6.00% Series C Mandatory Convertible Preferred Stock", "KKRS": "KKR Group Finance Co. IX LLC 4.625% Subordinated Notes due 2061", "KL": "Kirkland Lake Gold Ltd. 
Common Shares", "KLAC": "KLA Corporation Common Stock", "KLAQ": "KL Acquisition Corp Class A Common Stock", "KLAQU": "KL Acquisition Corp Unit", "KLAQW": "KL Acquisition Corp Warrant", "KLDO": "Kaleido Biosciences Inc. Common Stock", "KLIC": "Kulicke and Soffa Industries Inc. Common Stock", "KLR": "Kaleyra Inc. Common Stock", "KLTR": "Kaltura Inc. Common Stock", "KLXE": "KLX Energy Services Holdings Inc. Common Stock", "KMB": "Kimberly-Clark Corporation Common Stock", "KMDA": "Kamada Ltd. Ordinary Shares", "KMF": "Kayne Anderson NextGen Energy & Infrastructure Inc.", "KMI": "Kinder Morgan Inc. Common Stock", "KMPH": "KemPharm Inc. Common Stock", "KMPR": "Kemper Corporation", "KMT": "Kennametal Inc. Common Stock", "KMX": "CarMax Inc", "KN": "Knowles Corporation Common Stock", "KNBE": "KnowBe4 Inc. Class A Common Stock", "KNDI": "Kandi Technologies Group Inc Common Stock", "KNOP": "KNOT Offshore Partners LP Common Units representing Limited Partner Interests", "KNSA": "Kiniksa Pharmaceuticals Ltd. Class A Common Stock", "KNSL": "Kinsale Capital Group Inc. Common Stock", "KNTE": "Kinnate Biopharma Inc. Common Stock", "KNX": "Knight-Swift Transportation Holdings Inc.", "KO": "Coca-Cola Company (The) Common Stock", "KOD": "Kodiak Sciences Inc Common Stock", "KODK": "Eastman Kodak Company Common New", "KOF": "Coca Cola Femsa S.A.B. de C.V. American Depositary Shares each representing 10 Units (each Unit consists of 3 Series B Shares and 5 Series L Shares)", "KOP": "Koppers Holdings Inc. Koppers Holdings Inc. Common Stock", "KOPN": "Kopin Corporation Common Stock", "KOR": "Corvus Gold Inc. Common Shares", "KORE": "KORE Group Holdings Inc. Common Stock", "KOS": "Kosmos Energy Ltd. Common Shares (DE)", "KOSS": "Koss Corporation Common Stock", "KPLT": "Katapult Holdings Inc. Common Stock", "KPLTW": "Katapult Holdings Inc. Warrant", "KPTI": "Karyopharm Therapeutics Inc. Common Stock", "KR": "Kroger Company (The) Common Stock", "KRA": "Kraton Corporation Common Stock", "KRBP": "Kiromic BioPharma Inc. Common Stock", "KRC": "Kilroy Realty Corporation Common Stock", "KREF": "KKR Real Estate Finance Trust Inc. Common Stock", "KREF^A": "KKR Real Estate Finance Trust Inc. 6.50% Series A Cumulative Redeemable Preferred Stock", "KRG": "Kite Realty Group Trust Common Stock", "KRKR": "36Kr Holdings Inc. American Depositary Shares", "KRMD": "Repro Med Systems Inc. Common Stock", "KRNL": "Kernel Group Holdings Inc. Class A Ordinary Shares", "KRNLU": "Kernel Group Holdings Inc. Units", "KRNLW": "Kernel Group Holdings Inc. Warrants", "KRNT": "Kornit Digital Ltd. Ordinary Shares", "KRNY": "Kearny Financial Corp Common Stock", "KRO": "Kronos Worldwide Inc Common Stock", "KRON": "Kronos Bio Inc. Common Stock", "KROS": "Keros Therapeutics Inc. Common Stock", "KRP": "Kimbell Royalty Partners Common Units Representing Limited Partner Interests", "KRT": "Karat Packaging Inc. Common Stock", "KRTX": "Karuna Therapeutics Inc. Common Stock", "KRUS": "Kura Sushi USA Inc. Class A Common Stock", "KRYS": "Krystal Biotech Inc. Common Stock", "KSI": "Kadem Sustainable Impact Corporation Class A common stock", "KSICU": "Kadem Sustainable Impact Corporation Unit", "KSICW": "Kadem Sustainable Impact Corporation Warrant", "KSM": "DWS Strategic Municipal Income Trust", "KSPN": "Kaspien Holdings Inc. Common Stock", "KSS": "Kohl's Corporation Common Stock", "KSU": "Kansas City Southern Common Stock", "KSU^": "Kansas City Southern Preferred Stock", "KT": "KT Corporation Common Stock", "KTB": "Kontoor Brands Inc. 
Common Stock ", "KTCC": "Key Tronic Corporation Common Stock", "KTF": "DWS Municipal Income Trust", "KTH": "Structures Products Cp 8% CorTS Issued by Peco Energy Cap Tr II Preferred Stock", "KTN": "Structured Products Corp 8.205% CorTS 8.205% Corporate Backed Trust Securities (CorTS)", "KTOS": "Kratos Defense & Security Solutions Inc. Common Stock", "KTRA": "Kintara Therapeutics Inc. Common Stock", "KTTA": "Pasithea Therapeutics Corp. Common Stock", "KTTAW": "Pasithea Therapeutics Corp. Warrant", "KUKE": "Kuke Music Holding Limited American Depositary Shares each representing one Ordinary Share", "KULR": "KULR Technology Group Inc. Common Stock", "KURA": "Kura Oncology Inc. Common Stock", "KURI": "Alkuri Global Acquisition Corp. Class A common stock", "KURIU": "Alkuri Global Acquisition Corp. Unit", "KURIW": "Alkuri Global Acquisition Corp. Warrant", "KVHI": "KVH Industries Inc. Common Stock", "KVSA": "Khosla Ventures Acquisition Co. Class A Common Stock", "KVSB": "Khosla Ventures Acquisition Co. II Class A Common Stock", "KVSC": "Khosla Ventures Acquisition Co. III Class A Common Stock", "KW": "Kennedy-Wilson Holdings Inc. Common Stock", "KWAC": "Kingswood Acquisition Corp. Class A Common Stock", "KWR": "Quaker Chemical Corporation Common Stock", "KXIN": "Kaixin Auto Holdings Ordinary Share", "KYMR": "Kymera Therapeutics Inc. Common Stock", "KYN": "Kayne Anderson Energy Infrastructure Fund Inc.", "KZIA": "Kazia Therapeutics Limited American Depositary Shares", "KZR": "Kezar Life Sciences Inc. Common Stock", "L": "Loews Corporation Common Stock", "LAAA": "Lakeshore Acquisition I Corp. Ordinary Shares", "LAAAU": "Lakeshore Acquisition I Corp. Unit", "LAAAW": "Lakeshore Acquisition I Corp. Warrant", "LABP": "Landos Biopharma Inc. Common Stock", "LAC": "Lithium Americas Corp. Common Shares", "LAD": "Lithia Motors Inc. Common Stock", "LADR": "Ladder Capital Corp Class A Common Stock", "LAIX": "LAIX Inc. American Depositary Shares each representing one Class A Ordinary Share", "LAKE": "Lakeland Industries Inc. Common Stock", "LAMR": "Lamar Advertising Company Class A Common Stock", "LANC": "Lancaster Colony Corporation Common Stock", "LAND": "Gladstone Land Corporation Common Stock", "LANDM": "Gladstone Land Corporation 5.00% Series D Cumulative Term Preferred Stock", "LANDO": "Gladstone Land Corporation 6.00% Series B Cumulative Redeemable Preferred Stock", "LARK": "Landmark Bancorp Inc. Common Stock", "LASR": "nLIGHT Inc. Common Stock", "LAUR": "Laureate Education Inc. Class A Common Stock", "LAW": "CS Disco Inc. Common Stock", "LAWS": "Lawson Products Inc. Common Stock", "LAZ": "Lazard LTD. Lazard LTD. Class A Common Stock", "LAZR": "Luminar Technologies Inc. Class A Common Stock", "LAZY": "Lazydays Holdings Inc. Common Stock", "LBAI": "Lakeland Bancorp Inc. Common Stock", "LBC": "Luther Burbank Corporation Common Stock", "LBPH": "Longboard Pharmaceuticals Inc. Common Stock", "LBPS": "4D pharma plc American Depositary Shares", "LBPSW": "4D pharma plc Warrant", "LBRDA": "Liberty Broadband Corporation Class A Common Stock", "LBRDK": "Liberty Broadband Corporation Class C Common Stock", "LBRDP": "Liberty Broadband Corporation Series A Cumulative Redeemable Preferred Stock", "LBRT": "Liberty Oilfield Services Inc. Class A Common Stock", "LBTYA": "Liberty Global plc Class A Ordinary Shares", "LBTYB": "Liberty Global plc Class B Ordinary Shares", "LBTYK": "Liberty Global plc Class C Ordinary Shares", "LC": "LendingClub Corporation Common Stock", "LCA": "Landcadia Holdings IV Inc. 
Class A Common Stock", "LCAA": "L Catterton Asia Acquisition Corp Class A Ordinary Shares", "LCAAU": "L Catterton Asia Acquisition Corp Units", "LCAAW": "L Catterton Asia Acquisition Corp Warrant", "LCAHU": "Landcadia Holdings IV Inc. Units", "LCAHW": "Landcadia Holdings IV Inc. Warrant ", "LCAP": "Lionheart Acquisition Corp. II Class A Common Stock", "LCAPU": "Lionheart Acquisition Corp. II Unit", "LCAPW": "Lionheart Acquisition Corp. II Warrant", "LCI": "Lannett Co Inc Common Stock", "LCID": "Lucid Group Inc. Common Stock", "LCIDW": "Lucid Group Inc. Warrant", "LCII": "LCI Industries", "LCNB": "LCNB Corporation Common Stock", "LCTX": "Lineage Cell Therapeutics Inc. Common Stock", "LCUT": "Lifetime Brands Inc. Common Stock", "LDHA": "LDH Growth Corp I Class A Ordinary Shares", "LDHAU": "LDH Growth Corp I Units", "LDHAW": "LDH Growth Corp I Warrant", "LDI": "loanDepot Inc. Class A Common Stock", "LDOS": "Leidos Holdings Inc. Common Stock", "LDP": "Cohen & Steers Limited Duration Preferred and Income Fund Inc.", "LE": "Lands' End Inc. Common Stock", "LEA": "Lear Corporation Common Stock", "LEAP": "Ribbit LEAP Ltd. Class A Ordinary Shares", "LECO": "Lincoln Electric Holdings Inc. Common Shares", "LEDS": "SemiLEDS Corporation Common Stock", "LEE": "Lee Enterprises Incorporated Common Stock", "LEG": "Leggett & Platt Incorporated Common Stock", "LEGA": "Lead Edge Growth Opportunities Ltd Class A Ordinary Shares", "LEGAU": "Lead Edge Growth Opportunities Ltd Units", "LEGAW": "Lead Edge Growth Opportunities Ltd Warrant", "LEGH": "Legacy Housing Corporation Common Stock (TX)", "LEGN": "Legend Biotech Corporation American Depositary Shares", "LEGO": "Legato Merger Corp. Common stock", "LEGOU": "Legato Merger Corp. Units", "LEGOW": "Legato Merger Corp. Warrant", "LEJU": "Leju Holdings Limited American Depositary Shares each representing one Ordinary share", "LEN": "Lennar Corporation Class A Common Stock", "LEO": "BNY Mellon Strategic Municipals Inc. Common Stock", "LESL": "Leslie's Inc. Common Stock", "LEU": "Centrus Energy Corp. Class A Common Stock", "LEV": "The Lion Electric Company Common Shares", "LEVI": "Levi Strauss & Co Class A Common Stock", "LEVL": "Level One Bancorp Inc. Common Stock", "LEVLP": "Level One Bancorp Inc. Depositary Shares Each Representing a 1/100th Interest in a Share of 7.50% Non-Cumulative Perpetual Preferred Stock Series B", "LEXX": "Lexaria Bioscience Corp. Common Stock", "LEXXW": "Lexaria Bioscience Corp. Warrant", "LFC": "China Life Insurance Company Limited American Depositary Shares", "LFG": "Archaea Energy Inc. Class A Common Stock", "LFMD": "LifeMD Inc. Common Stock", "LFST": "LifeStance Health Group Inc. Common Stock", "LFT": "Lument Finance Trust Inc. Common Stock", "LFT^A": "Lument Finance Trust Inc. 7.875% Series A Cumulative Redeemable Preferred Stock", "LFTR": "Lefteris Acquisition Corp. Class A Common Stock", "LFTRU": "Lefteris Acquisition Corp. Unit", "LFTRW": "Lefteris Acquisition Corp. Warrant", "LFUS": "Littelfuse Inc. Common Stock", "LFVN": "Lifevantage Corporation Common Stock (Delaware)", "LGAC": "Lazard Growth Acquisition Corp. I Ordinary Shares", "LGACU": "Lazard Growth Acquisition Corp. I Units", "LGACW": "Lazard Growth Acquisition Corp. I Warrants", "LGHL": "Lion Group Holding Ltd. American Depositary Share", "LGHLW": "Lion Group Holding Ltd. Warrant", "LGI": "Lazard Global Total Return and Income Fund Common Stock", "LGIH": "LGI Homes Inc. Common Stock", "LGL": "LGL Group Inc. 
(The) Common Stock", "LGND": "Ligand Pharmaceuticals Incorporated Common Stock", "LGO": "Largo Resources Ltd. Common Shares", "LGV": "Longview Acquisition Corp. II Class A Common Stock", "LGVN": "Longeveron Inc. Class A Common Stock", "LH": "Laboratory Corporation of America Holdings Common Stock", "LHAA": "Lerer Hippeau Acquisition Corp. Class A Common Stock", "LHC": "Leo Holdings Corp. II Class A Ordinary Shares", "LHCG": "LHC Group Common Stock", "LHDX": "Lucira Health Inc. Common Stock", "LHX": "L3Harris Technologies Inc. Common Stock", "LI": "Li Auto Inc. American Depositary Shares", "LICY": "Li-Cycle Holdings Corp. Common Shares", "LIDR": "AEye Inc. Class A Common Stock", "LIDRW": "AEye Inc. Warrant", "LIFE": "aTyr Pharma Inc. Common Stock", "LII": "Lennox International Inc. Common Stock", "LIII": "Leo Holdings III Corp. Class A Ordinary Shares", "LILA": "Liberty Latin America Ltd. Class A Common Stock", "LILAK": "Liberty Latin America Ltd. Class C Common Stock", "LILM": "Lilium N.V. Class A Ordinary Shares", "LILMW": "Lilium N.V. Warrants", "LIN": "Linde plc Ordinary Share", "LINC": "Lincoln Educational Services Corporation Common Stock", "LIND": "Lindblad Expeditions Holdings Inc. Common Stock", "LINK": "Interlink Electronics Inc. Common Stock", "LIQT": "LiqTech International Inc. Common Stock", "LITB": "LightInTheBox Holding Co. Ltd. American Depositary Shares each representing 2 ordinary shares", "LITE": "Lumentum Holdings Inc. Common Stock", "LITT": "Logistics Innovation Technologies Corp. Class A Common Stock", "LITTU": "Logistics Innovation Technologies Corp. Units", "LITTW": "Logistics Innovation Technologies Corp. Warrant", "LIVE": "Live Ventures Incorporated Common Stock", "LIVN": "LivaNova PLC Ordinary Shares", "LIVX": "LiveXLive Media Inc. Common Stock", "LIXT": "Lixte Biotechnology Holdings Inc. Common Stock", "LIXTW": "Lixte Biotechnology Holdings Inc. Warrants", "LIZI": "LIZHI INC. American Depositary Shares", "LJAQ": "LightJump Acquisition Corporation Common Stock", "LJAQU": "LightJump Acquisition Corporation Unit", "LJAQW": "LightJump Acquisition Corporation Warrant", "LJPC": "La Jolla Pharmaceutical Company Common Stock", "LKCO": "Luokung Technology Corp Ordinary Shares", "LKFN": "Lakeland Financial Corporation Common Stock", "LKQ": "LKQ Corporation Common Stock", "LL": "Lumber Liquidators Holdings Inc Common Stock", "LLNW": "Limelight Networks Inc. Common Stock", "LLY": "Eli Lilly and Company Common Stock", "LMACA": "Liberty Media Acquisition Corporation Series A Common Stock", "LMACU": "Liberty Media Acquisition Corporation Unit", "LMACW": "Liberty Media Acquisition Corporation Warrants", "LMAO": "LMF Acquisition Opportunities Inc. Class A common stock", "LMAOU": "LMF Acquisition Opportunities Inc. Unit", "LMAOW": "LMF Acquisition Opportunities Inc. Warrant", "LMAT": "LeMaitre Vascular Inc. Common Stock", "LMB": "Limbach Holdings Inc. Common Stock", "LMDX": "LumiraDx Limited Common Shares", "LMDXW": "LumiraDx Limited Warrant", "LMFA": "LM Funding America Inc. Common Stock", "LMND": "Lemonade Inc. Common Stock", "LMNL": "Liminal BioSciences Inc. Common Shares", "LMNR": "Limoneira Co Common Stock", "LMPX": "LMP Automotive Holdings Inc. 
Common Stock", "LMRK": "Landmark Infrastructure Partners LP Common Units", "LMRKN": "Landmark Infrastructure Partners LP 7% Series C Fltg/Fxd Perpetual Conv Preferred Stock", "LMRKO": "Landmark Infrastructure Partners LP Perpetual Preferred Units Series B 7.90%", "LMRKP": "Landmark Infrastructure Partners LP 8.00% Series A Cumulative Redeemable Perpetual Preferred Units", "LMST": "Limestone Bancorp Inc. Common Stock", "LMT": "Lockheed Martin Corporation Common Stock", "LNC": "Lincoln National Corporation Common Stock", "LND": "Brasilagro Brazilian Agric Real Estate Co Sponsored ADR (Brazil)", "LNDC": "Landec Corporation Common Stock (DE)", "LNFA": "L&F Acquisition Corp. Class A Ordinary Shares", "LNG": "Cheniere Energy Inc. Common Stock", "LNN": "Lindsay Corporation Common Stock", "LNSR": "LENSAR Inc. Common Stock", "LNT": "Alliant Energy Corporation Common Stock", "LNTH": "Lantheus Holdings Inc. Common Stock", "LOAN": "Manhattan Bridge Capital Inc", "LOB": "Live Oak Bancshares Inc. Common Stock", "LOCO": "El Pollo Loco Holdings Inc. Common Stock", "LODE": "Comstock Mining Inc. Common Stock", "LOGC": "LogicBio Therapeutics Inc. Common Stock", "LOGI": "Logitech International S.A. Ordinary Shares", "LOKB": "Live Oak Acquisition Corp. II Class A Common Stock", "LOKM": "Live Oak Mobility Acquisition Corp. Class A Common Stock", "LOMA": "Loma Negra Compania Industrial Argentina Sociedad Anonima ADS", "LOOP": "Loop Industries Inc. Common Stock", "LOPE": "Grand Canyon Education Inc. Common Stock", "LORL": "Loral Space and Communications Inc. Common Stock", "LOTZ": "CarLotz Inc. Class A Common Stock", "LOTZW": "CarLotz Inc. Warrant", "LOV": "Spark Networks Inc. American Depositary Shares (each representing one-tenth of an Ordinary Share)", "LOVE": "The Lovesac Company Common Stock", "LOW": "Lowe's Companies Inc. Common Stock", "LPCN": "Lipocine Inc. Common Stock", "LPG": "Dorian LPG Ltd. Common Stock", "LPI": "Laredo Petroleum Inc. Common Stock", "LPL": "LG Display Co Ltd AMERICAN DEPOSITORY SHARES", "LPLA": "LPL Financial Holdings Inc. Common Stock", "LPRO": "Open Lending Corporation Class A Common Stock", "LPSN": "LivePerson Inc. Common Stock", "LPTH": "LightPath Technologies Inc. Class A Common Stock", "LPTX": "Leap Therapeutics Inc. Common Stock", "LPX": "Louisiana-Pacific Corporation Common Stock", "LQDA": "Liquidia Corporation Common Stock", "LQDT": "Liquidity Services Inc. Common Stock", "LRCX": "Lam Research Corporation Common Stock", "LRFC": "Logan Ridge Finance Corporation Common Stock", "LRMR": "Larimar Therapeutics Inc. Common Stock", "LRN": "Stride Inc. Common Stock", "LSAQ": "LifeSci Acquisition II Corp. Common Stock", "LSBK": "Lake Shore Bancorp Inc. Common Stock", "LSCC": "Lattice Semiconductor Corporation Common Stock", "LSEA": "Landsea Homes Corporation Common Stock", "LSEAW": "Landsea Homes Corporation Warrant", "LSF": "Laird Superfood Inc. Common Stock", "LSI": "Life Storage Inc. Common Stock", "LSPD": "Lightspeed Commerce Inc. Subordinate Voting Shares", "LSTR": "Landstar System Inc. Common Stock", "LSXMA": "Liberty Media Corporation Series A Liberty SiriusXM Common Stock", "LSXMB": "Liberty Media Corporation Series B Liberty SiriusXM Common Stock", "LSXMK": "Liberty Media Corporation Series C Liberty SiriusXM Common Stock", "LTBR": "Lightbridge Corporation Common Stock", "LTC": "LTC Properties Inc. Common Stock", "LTCH": "Latch Inc. Common Stock", "LTCHW": "Latch Inc. Warrant expiring 6/4/2026", "LTHM": "Livent Corporation Common Stock", "LTRN": "Lantern Pharma Inc. 
Common Stock", "LTRPA": "Liberty TripAdvisor Holdings Inc. Series A Common Stock", "LTRPB": "Liberty TripAdvisor Holdings Inc. Series B Common Stock", "LTRX": "Lantronix Inc. Common Stock", "LU": "Lufax Holding Ltd American Depositary Shares two of which representing one Ordinary Share", "LUB": "Luby's Inc. Common Stock", "LULU": "lululemon athletica inc. Common Stock", "LUMN": "Lumen Technologies Inc. Common Stock", "LUMO": "Lumos Pharma Inc. Common Stock", "LUNA": "Luna Innovations Incorporated Common Stock", "LUNG": "Pulmonx Corporation Common Stock", "LUV": "Southwest Airlines Company Common Stock", "LUXA": "Lux Health Tech Acquisition Corp. Class A Common Stock", "LUXAU": "Lux Health Tech Acquisition Corp. Units", "LUXAW": "Lux Health Tech Acquisition Corp. Warrants", "LVOX": "LiveVox Holding Inc. Class A Common Stock", "LVOXU": "LiveVox Holding Inc. Unit", "LVOXW": "LiveVox Holding Inc. Warrant", "LVRA": "Levere Holdings Corp. Class A Ordinary Shares", "LVRAU": "Levere Holdings Corp. Unit", "LVRAW": "Levere Holdings Corp. Warrant", "LVS": "Las Vegas Sands Corp. Common Stock", "LVTX": "LAVA Therapeutics N.V. Ordinary Shares", "LW": "Lamb Weston Holdings Inc. Common Stock ", "LWAY": "Lifeway Foods Inc. Common Stock", "LWLG": "Lightwave Logic Inc. Common Stock", "LX": "LexinFintech Holdings Ltd. American Depositary Shares", "LXEH": "Lixiang Education Holding Co. Ltd. American Depositary Shares", "LXFR": "Luxfer Holdings PLC Ordinary Shares", "LXP": "Lexington Realty Trust Common Stock", "LXP^C": "Lexington Realty Trust Preferred Conv. Series C", "LXRX": "Lexicon Pharmaceuticals Inc. Common Stock", "LXU": "LSB Industries Inc. Common Stock", "LYB": "LyondellBasell Industries NV Ordinary Shares Class A (Netherlands)", "LYEL": "Lyell Immunopharma Inc. Common Stock", "LYFT": "Lyft Inc. Class A Common Stock", "LYG": "Lloyds Banking Group Plc American Depositary Shares", "LYL": "Dragon Victory International Limited Ordinary Shares", "LYRA": "Lyra Therapeutics Inc. Common Stock", "LYTS": "LSI Industries Inc. Common Stock", "LYV": "Live Nation Entertainment Inc. Common Stock", "LZ": "LegalZoom.com Inc. Common Stock", "LZB": "La-Z-Boy Incorporated Common Stock", "M": "Macy's Inc Common Stock", "MA": "Mastercard Incorporated Common Stock", "MAA": "Mid-America Apartment Communities Inc. Common Stock", "MAA^I": "Mid-America Apartment Communities Inc. 8.50% Series I Cumulative Redeemable Preferred Stock", "MAC": "Macerich Company (The) Common Stock", "MACA": "Moringa Acquisition Corp Class A Ordinary Shares", "MACAU": "Moringa Acquisition Corp Units", "MACAW": "Moringa Acquisition Corp Warrant", "MACC": "Mission Advancement Corp. Class A Common Stock", "MACK": "Merrimack Pharmaceuticals Inc. Common Stock", "MACQ": "MCAP Acquisition Corporation Class A Common Stock", "MACQU": "MCAP Acquisition Corporation Unit", "MACQW": "MCAP Acquisition Corporation Warrants", "MACU": "Mallard Acquisition Corp. Common stock", "MACUU": "Mallard Acquisition Corp. Unit", "MACUW": "Mallard Acquisition Corp. Warrant", "MAG": "MAG Silver Corporation Ordinary Shares", "MAIN": "Main Street Capital Corporation Common Stock", "MAN": "ManpowerGroup Common Stock", "MANH": "Manhattan Associates Inc. Common Stock", "MANT": "ManTech International Corporation Common Stock $0.01 Par Value", "MANU": "Manchester United Ltd. Class A Ordinary Shares", "MAPS": "WM Technology Inc. Class A Common Stock ", "MAPSW": "WM Technology Inc. 
Warrants ", "MAQC": "Maquia Capital Acquisition Corporation Class A Common Stock", "MAQCU": "Maquia Capital Acquisition Corporation Unit", "MAQCW": "Maquia Capital Acquisition Corporation Warrant", "MAR": "Marriott International Class A Common Stock", "MARA": "Marathon Digital Holdings Inc. Common Stock", "MARK": "Remark Holdings Inc. Common Stock", "MARPS": "Marine Petroleum Trust Units of Beneficial Interest", "MAS": "Masco Corporation Common Stock", "MASI": "Masimo Corporation Common Stock", "MASS": "908 Devices Inc. Common Stock", "MAT": "Mattel Inc. Common Stock", "MATW": "Matthews International Corporation Class A Common Stock", "MATX": "Matson Inc. Common Stock", "MAV": "Pioneer Municipal High Income Advantage Fund Inc.", "MAX": "MediaAlpha Inc. Class A Common Stock", "MAXN": "Maxeon Solar Technologies Ltd. Ordinary Shares", "MAXR": "Maxar Technologies Inc.", "MAYS": "J. W. Mays Inc. Common Stock", "MBAC": "M3-Brigade Acquisition II Corp. Class A Common Stock", "MBCN": "Middlefield Banc Corp. Common Stock", "MBI": "MBIA Inc. Common Stock", "MBII": "Marrone Bio Innovations Inc. Common Stock", "MBIN": "Merchants Bancorp Common Stock", "MBINN": "Merchants Bancorp Depositary Shares Preferred Series C", "MBINO": "Merchants Bancorp Depositary Shares Each Representing a 1/40th Interest in a Share of Series B Fixed-to-Floating Rate", "MBINP": "Merchants Bancorp 7.00% Fixed-to-Floating Rate Series A Non-Cumulative Perpetual Preferred Stock", "MBIO": "Mustang Bio Inc. Common Stock", "MBNKP": "Medallion Bank Fixed-to-Floating Rate Non-Cumulative Perpetual Preferred Stock Series F", "MBOT": "Microbot Medical Inc. Common Stock", "MBRX": "Moleculin Biotech Inc. Common Stock", "MBT": "Mobile TeleSystems PJSC", "MBTC": "Nocturne Acquisition Corporation Ordinary Shares", "MBTCR": "Nocturne Acquisition Corporation Right", "MBTCU": "Nocturne Acquisition Corporation Unit", "MBUU": "Malibu Boats Inc. Class A Common Stock", "MBWM": "Mercantile Bank Corporation Common Stock", "MC": "Moelis & Company Class A Common Stock", "MCA": "Blackrock MuniYield California Quality Fund Inc. Common Stock", "MCAD": "Mountain Crest Acquisition Corp. II Common Stock", "MCADR": "Mountain Crest Acquisition Corp. II Right", "MCADU": "Mountain Crest Acquisition Corp. II Unit", "MCAE": "Mountain Crest Acquisition Corp. III Common Stock", "MCAF": "Mountain Crest Acquisition Corp. IV Common Stock", "MCAFR": "Mountain Crest Acquisition Corp. IV Rights", "MCAFU": "Mountain Crest Acquisition Corp. IV Unit", "MCB": "Metropolitan Bank Holding Corp. Common Stock", "MCBC": "Macatawa Bank Corporation Common Stock", "MCBS": "MetroCity Bankshares Inc. Common Stock", "MCD": "McDonald's Corporation Common Stock", "MCF": "Contango Oil & Gas Company Common Stock (TX)", "MCFE": "McAfee Corp. Class A Common Stock", "MCFT": "MasterCraft Boat Holdings Inc. Common Stock", "MCG": "Membership Collective Group Inc. Class A Common Stock", "MCHP": "Microchip Technology Incorporated Common Stock", "MCHX": "Marchex Inc. Class B Common Stock", "MCI": "Barings Corporate Investors Common Stock", "MCK": "McKesson Corporation Common Stock", "MCMJ": "Merida Merger Corp. I Common Stock", "MCMJW": "Merida Merger Corp. I Warrant", "MCN": "Madison Covered Call & Equity Strategy Fund Common Stock", "MCO": "Moody's Corporation Common Stock", "MCR": "MFS Charter Income Trust Common Stock", "MCRB": "Seres Therapeutics Inc. Common Stock", "MCRI": "Monarch Casino & Resort Inc. Common Stock", "MCS": "Marcus Corporation (The) Common Stock", "MCW": "Mister Car Wash Inc. 
Common Stock", "MCY": "Mercury General Corporation Common Stock", "MD": "Mednax Inc. Common Stock", "MDB": "MongoDB Inc. Class A Common Stock", "MDC": "M.D.C. Holdings Inc. Common Stock", "MDGL": "Madrigal Pharmaceuticals Inc. Common Stock", "MDGS": "Medigus Ltd. American Depositary Shares", "MDGSW": "Medigus Ltd. Series C Warrant", "MDH": "MDH Acquisition Corp. Class A Common Stock", "MDIA": "Mediaco Holding Inc. Class A Common Stock ", "MDJH": "MDJM LTD Ordinary Share", "MDLA": "Medallia Inc. Common Stock", "MDLZ": "Mondelez International Inc. Class A Common Stock", "MDNA": "Medicenna Therapeutics Corp. Common Shares", "MDP": "Meredith Corporation Common Stock", "MDRR": "Medalist Diversified REIT Inc. Common Stock", "MDRRP": "Medalist Diversified REIT Inc. Series A Cumulative Redeemable Preferred Stock", "MDRX": "Allscripts Healthcare Solutions Inc. Common Stock", "MDT": "Medtronic plc. Ordinary Shares", "MDU": "MDU Resources Group Inc. Common Stock (Holding Company)", "MDVA": "Modiv Inc. 7.375% Series A Cumulative Redeemable Perpetual Preferred Stock", "MDVL": "MedAvail Holdings Inc. Common Stock", "MDWD": "MediWound Ltd. Ordinary Shares", "MDWT": "Midwest Holding Inc. Common Stock", "MDXG": "MiMedx Group Inc Common Stock", "ME": "23andMe Holding Co. Class A Common Stock", "MEAC": "Mercury Ecommerce Acquisition Corp Class A Common Stock", "MEACU": "Mercury Ecommerce Acquisition Corp Unit", "MEACW": "Mercury Ecommerce Acquisition Corp Warrants", "MEC": "Mayville Engineering Company Inc. Common Stock", "MED": "MEDIFAST INC Common Stock", "MEDP": "Medpace Holdings Inc. Common Stock", "MEDS": "TRxADE HEALTH Inc. Common Stock", "MEG": "Montrose Environmental Group Inc. Common Stock", "MEI": "Methode Electronics Inc. Common Stock", "MEIP": "MEI Pharma Inc. Common Stock", "MEKA": "MELI Kaszek Pioneer Corp Class A Ordinary Shares", "MELI": "MercadoLibre Inc. Common Stock", "MEOAU": "Minority Equality Opportunities Acquisition Inc. Units", "MEOH": "Methanex Corporation Common Stock", "MER^K": "Bank of America Corporation Income Capital Obligation Notes initially due December 15 2066", "MERC": "Mercer International Inc. Common Stock", "MESA": "Mesa Air Group Inc. Common Stock", "MESO": "Mesoblast Limited American Depositary Shares", "MET": "MetLife Inc. Common Stock", "MET^A": "MetLife Inc. Preferred Series A Floating Rate", "MET^E": "MetLife Inc. Depositary shares each representing a 1/1000th interest in a share of the Issuera??s 5.625% Non-Cumulative Preferred Stock Series E.", "MET^F": "MetLife Inc. Depositary Shares each representing a 1/1000th interest in a share of 4.75% Non-Cumulative Preferred Stock Series F", "METC": "Ramaco Resources Inc. Common Stock", "METCL": "Ramaco Resources Inc. 9.00% Senior Notes due 2026", "METX": "Meten Holding Group Ltd. Ordinary Shares", "METXW": "Meten Holding Group Ltd. Warrant", "MEUSW": "23andMe Holding Co. Warrant", "MF": "Missfresh Limited American Depositary Shares", "MFA": "MFA Financial Inc.", "MFA^B": "MFA Financial Inc. Preferred Series B", "MFA^C": "MFA Financial Inc. 6.50% Series C Fixed-to-Floating Rate Cumulative Redeemable Preferred Stock", "MFC": "Manulife Financial Corporation Common Stock", "MFD": "Macquarie First Trust Global Common Stock", "MFG": "Mizuho Financial Group Inc. Sponosred ADR (Japan)", "MFGP": "Micro Focus Intl PLC ADS each representing One Ord Sh", "MFH": "Mercurity Fintech Holding Inc. ADS", "MFIN": "Medallion Financial Corp. 
Common Stock", "MFL": "Blackrock MuniHoldings Investment Quality Fund Common Shares of Beneficial Interest", "MFM": "MFS Municipal Income Trust Common Stock", "MFV": "MFS Special Value Trust Common Stock", "MG": "Mistras Group Inc Common Stock", "MGA": "Magna International Inc. Common Stock", "MGEE": "MGE Energy Inc", "MGF": "MFS Government Markets Income Trust Common Stock", "MGI": "Moneygram International Inc. Common Stock", "MGIC": "Magic Software Enterprises Ltd. Ordinary Shares", "MGLN": "Magellan Health Inc. Common Stock", "MGM": "MGM Resorts International Common Stock", "MGNI": "Magnite Inc. Common Stock", "MGNX": "MacroGenics Inc. Common Stock", "MGP": "MGM Growth Properties LLC Class A common shares representing limited liability company interests", "MGPI": "MGP Ingredients Inc.", "MGR": "Affiliated Managers Group Inc. 5.875% Junior Subordinated Notes due 2059", "MGRB": "Affiliated Managers Group Inc. 4.750% Junior Subordinated Notes due 2060", "MGRC": "McGrath RentCorp Common Stock", "MGRD": "Affiliated Managers Group Inc. 4.200% Junior Subordinated Notes due 2061", "MGTA": "Magenta Therapeutics Inc. Common Stock", "MGTX": "MeiraGTx Holdings plc Ordinary Shares", "MGU": "Macquarie Global Infrastructure Total Return Fund Inc. Common Stock", "MGY": "Magnolia Oil & Gas Corporation Class A Common Stock", "MGYR": "Magyar Bancorp Inc. Common Stock", "MH^A": "Maiden Holdings Ltd. Pref Shs Ser A (Bermuda)", "MH^C": "Maiden Holdings North America Ltd. 7.125% Non-Cumulative Preference Shares Series C", "MH^D": "Maiden Holdings Ltd. 6.700% Non-Cumulative Preference Shares Series D", "MHD": "Blackrock MuniHoldings Fund Inc. Common Stock", "MHF": "Western Asset Municipal High Income Fund Inc. Common Stock", "MHH": "Mastech Digital Inc Common Stock", "MHI": "Pioneer Municipal High Income Fund Inc.", "MHK": "Mohawk Industries Inc. Common Stock", "MHLA": "Maiden Holdings Ltd. 6.625% Notes due 2046", "MHLD": "Maiden Holdings Ltd.", "MHN": "Blackrock MuniHoldings New York Quality Fund Inc. Common Stock", "MHNC": "Maiden Holdings North America Ltd. 7.75% Notes due 2043", "MHO": "M/I Homes Inc. Common Stock", "MIC": "Macquarie Infrastructure Holdings LLC Common Unit", "MICT": "MICT Inc. Common Stock", "MIDD": "Middleby Corporation (The) Common Stock", "MIGI": "Mawson Infrastructure Group Inc. Common Stock", "MILE": "Metromile Inc. Common Stock", "MILEW": "Metromile Inc. Warrant ", "MIME": "Mimecast Limited Ordinary Shares", "MIMO": "Airspan Networks Holdings Inc. Common Stock", "MIN": "MFS Intermediate Income Trust Common Stock", "MIND": "MIND Technology Inc. Common Stock (DE)", "MINDP": "MIND Technology Inc. Series A 9.00% Series A Cumulative Preferred Stock (DE)", "MINM": "Minim Inc. Common Stock", "MIO": "Pioneer Municipal High Income Opportunities Fund Inc. Common Stock", "MIRM": "Mirum Pharmaceuticals Inc. Common Stock", "MIRO": "Miromatrix Medical Inc. Common Stock", "MIST": "Milestone Pharmaceuticals Inc. Common Shares", "MIT": "Mason Industrial Technology Inc. Class A Common Stock", "MITA": "Coliseum Acquisition Corp. Class A Ordinary Share", "MITAW": "Coliseum Acquisition Corp. Warrant", "MITC": "MeaTech 3D Ltd. American Depositary Shares", "MITK": "Mitek Systems Inc. Common Stock", "MITO": "Stealth BioTherapeutics Corp. ADS", "MITQ": "Moving iMage Technologies Inc. Common Stock", "MITT": "AG Mortgage Investment Trust Inc. Common Stock", "MITT^A": "AG Mortgage Investment Trust Inc. 8.25% Preferred Series A", "MITT^B": "AG Mortgage Investment Trust Inc. 
Preferred Series B", "MITT^C": "AG Mortgage Investment Trust Inc. 8.00% Series C Fixed-to-Floating Rate Cumulative Redeemable Preferred Stock $0.01 par value per share", "MIXT": "MiX Telematics Limited American Depositary Shares each representing 25 Ordinary Shares", "MIY": "Blackrock MuniYield Michigan Quality Fund Inc. Common Stock", "MKC": "McCormick & Company Incorporated Common Stock", "MKD": "Molecular Data Inc. American Depositary Shares", "MKFG": "Markforged Holding Corporation Common Stock", "MKL": "Markel Corporation Common Stock", "MKSI": "MKS Instruments Inc. Common Stock", "MKTW": "MarketWise Inc. Class A Common Stock", "MKTWW": "MarketWise Inc. Warrant", "MKTX": "MarketAxess Holdings Inc. Common Stock", "MKTY": "Mechanical Technology Incorporated Common Stock (Nevada)", "MKTYP": "Mechanical Technology Incorporated 9.0% Series A Cumulative Perpetual Preferred Stock", "ML": "MoneyLion Inc. Class A Common Stock", "MLAB": "Mesa Laboratories Inc. Common Stock", "MLAC": "Malacca Straits Acquisition Company Limited Class A Ordinary Shares", "MLACW": "Malacca Straits Acquisition Company Limited Warrants", "MLCO": "Melco Resorts & Entertainment Limited American Depositary Shares", "MLHR": "Herman Miller Inc. Common Stock", "MLI": "Mueller Industries Inc. Common Stock", "MLM": "Martin Marietta Materials Inc. Common Stock", "MLNK": "MeridianLink Inc. Common Stock", "MLP": "Maui Land & Pineapple Company Inc. Common Stock", "MLR": "Miller Industries Inc. Common Stock", "MLSS": "Milestone Scientific Inc. Common Stock", "MLVF": "Malvern Bancorp Inc. Common Stock", "MMAT": "Meta Materials Inc. Common Stock", "MMC": "Marsh & McLennan Companies Inc. Common Stock", "MMD": "MainStay MacKay DefinedTerm Municipal Opportunities Fund", "MMI": "Marcus & Millichap Inc. Common Stock", "MMLP": "Martin Midstream Partners L.P. Limited Partnership", "MMM": "3M Company Common Stock", "MMMB": "MamaMancini's Holdings Inc. Common Stock", "MMP": "Magellan Midstream Partners L.P. Limited Partnership", "MMS": "Maximus Inc. Common Stock", "MMSI": "Merit Medical Systems Inc. Common Stock", "MMT": "MFS Multimarket Income Trust Common Stock", "MMU": "Western Asset Managed Municipals Fund Inc. Common Stock", "MMX": "Maverix Metals Inc. Common Shares", "MMYT": "MakeMyTrip Limited Ordinary Shares", "MN": "Manning & Napier Inc. Class A Common Stock", "MNDO": "MIND C.T.I. Ltd. Ordinary Shares", "MNDY": "monday.com Ltd. Ordinary Shares", "MNKD": "MannKind Corporation Common Stock", "MNMD": "Mind Medicine (MindMed) Inc. Subordinate Voting Shares", "MNOV": "Medicinova Inc Common Stock", "MNP": "Western Asset Municipal Partners Fund Inc. Common Stock", "MNPR": "Monopar Therapeutics Inc. Common Stock", "MNR": "Monmouth Real Estate Investment Corporation Class A Common Stock", "MNR^C": "Monmouth Real Estate Investment Corporation 6.125% Series C Cumulative Redeemable Preferred Stock", "MNRL": "Brigham Minerals Inc. Class A Common Stock", "MNRO": "Monro Inc. Common Stock", "MNSB": "MainStreet Bancshares Inc. Common Stock", "MNSBP": "MainStreet Bancshares Inc. Depositary Shares", "MNSO": "MINISO Group Holding Limited American Depositary Shares each representing four Class A Ordinary Shares", "MNST": "Monster Beverage Corporation", "MNTK": "Montauk Renewables Inc. Common Stock", "MNTS": "Momentus Inc. Class A Common Stock", "MNTSW": "Momentus Inc. Warrant", "MNTV": "Momentive Global Inc. Common Stock", "MNTX": "Manitex International Inc. 
Common Stock", "MO": "Altria Group Inc.", "MOD": "Modine Manufacturing Company Common Stock", "MODN": "Model N Inc. Common Stock", "MODV": "ModivCare Inc. Common Stock", "MOFG": "MidWestOne Financial Gp Common Stock", "MOGO": "Mogo Inc. Common Shares", "MOGU": "MOGU Inc. American Depositary Shares (each representing 25 Class A Ordinary Shares)", "MOH": "Molina Healthcare Inc Common Stock", "MOHO": "ECMOHO Limited American Depositary Shares", "MOLN": "Molecular Partners AG American Depositary Shares", "MOMO": "Hello Group Inc. American Depositary Shares", "MON": "Monument Circle Acquisition Corp. Class A Common Stock", "MONCU": "Monument Circle Acquisition Corp. Unit", "MONCW": "Monument Circle Acquisition Corp. Warrant", "MOR": "MorphoSys AG American Depositary Shares", "MORF": "Morphic Holding Inc. Common Stock", "MORN": "Morningstar Inc. Common Stock", "MOS": "Mosaic Company (The) Common Stock", "MOSY": "MoSys Inc. Common Stock", "MOTN": "Motion Acquisition Corp. Class A Common Stock", "MOTNU": "Motion Acquisition Corp. Unit", "MOTNW": "Motion Acquisition Corp. Warrants to purchase one Class A common", "MOTS": "Motus GI Holdings Inc. Common Stock", "MOTV": "Motive Capital Corp Class A Ordinary Shares", "MOV": "Movado Group Inc. Common Stock", "MOVE": "Movano Inc. Common Stock", "MOXC": "Moxian (BVI) Inc Ordinary Shares", "MP": "MP Materials Corp. Common Stock", "MPA": "Blackrock MuniYield Pennsylvania Quality Fund Common Stock", "MPAA": "Motorcar Parts of America Inc. Common Stock", "MPAC": "Model Performance Acquisition Corp. Class A Ordinary Share", "MPACR": "Model Performance Acquisition Corp. Right", "MPACW": "Model Performance Acquisition Corp. Warrant", "MPB": "Mid Penn Bancorp Common Stock", "MPC": "Marathon Petroleum Corporation Common Stock", "MPLN": "MultiPlan Corporation Class A Common Stock", "MPLX": "MPLX LP Common Units Representing Limited Partner Interests", "MPV": "Barings Participation Investors Common Stock", "MPW": "Medical Properties Trust Inc. common stock", "MPWR": "Monolithic Power Systems Inc. Common Stock", "MPX": "Marine Products Corporation Common Stock", "MQ": "Marqeta Inc. Class A Common Stock", "MQT": "Blackrock MuniYield Quality Fund II Inc. Common Stock", "MQY": "Blackrock MuniYield Quality Fund Inc. Common Stock", "MRAC": "Marquee Raine Acquisition Corp. Class A Ordinary Shares", "MRACU": "Marquee Raine Acquisition Corp. Unit", "MRACW": "Marquee Raine Acquisition Corp. Warrant", "MRAM": "Everspin Technologies Inc. Common Stock", "MRBK": "Meridian Corporation Common Stock", "MRC": "MRC Global Inc. Common Stock", "MRCC": "Monroe Capital Corporation Common Stock", "MRCY": "Mercury Systems Inc Common Stock", "MREO": "Mereo BioPharma Group plc American Depositary Shares", "MRIN": "Marin Software Incorporated Common Stock", "MRK": "Merck & Company Inc. Common Stock (new)", "MRKR": "Marker Therapeutics Inc. Common Stock", "MRLN": "Marlin Business Services Corp. Common Stock", "MRM": "MEDIROM Healthcare Technologies Inc. American Depositary Share", "MRNA": "Moderna Inc. Common Stock", "MRNS": "Marinus Pharmaceuticals Inc. Common Stock", "MRO": "Marathon Oil Corporation Common Stock", "MRSN": "Mersana Therapeutics Inc. Common Stock", "MRTN": "Marten Transport Ltd. Common Stock", "MRTX": "Mirati Therapeutics Inc. Common Stock", "MRUS": "Merus N.V. Common Shares", "MRVI": "Maravai LifeSciences Holdings Inc. Class A Common Stock", "MRVL": "Marvell Technology Inc. 
Common Stock", "MS": "Morgan Stanley Common Stock", "MS^A": "Morgan Stanley Dep Shs repstg 1/1000 Pfd Ser A", "MS^E": "Morgan Stanley DEPOSITARY SHARES REP 1/1000TH SHARES FIXED/FLTG PREFERRED STOCK SERIES E", "MS^F": "Morgan Stanley Dep Shs Rpstg 1/1000th Int Prd Ser F Fxd to Flag", "MS^I": "Morgan Stanley Depository Shares Representing 1/1000th Preferred Series 1 Fixed to Floating Non (Cum)", "MS^K": "Morgan Stanley Depositary Shares each representing 1/1000th of a share of Fixed-to-Floating Rate Non-Cumulative Preferred Stock Series K", "MS^L": "Morgan Stanley Depositary Shares each representing 1/1000th of a share of 4.875% Non-Cumulative Preferred Stock Series L", "MSA": "MSA Safety Incorporated Common Stock", "MSAC": "Medicus Sciences Acquisition Corp. Class A Ordinary Share", "MSACW": "Medicus Sciences Acquisition Corp. Warrant", "MSB": "Mesabi Trust Common Stock", "MSBI": "Midland States Bancorp Inc. Common Stock", "MSC": "Studio City International Holdings Limited American depositary shares each representing four Class A ordinary shares", "MSCI": "MSCI Inc Common Stock", "MSD": "Morgan Stanley Emerging Markets Debt Fund Inc. Common Stock", "MSDA": "MSD Acquisition Corp. Class A Ordinary Shares", "MSDAU": "MSD Acquisition Corp. Unit", "MSDAW": "MSD Acquisition Corp. Warrant", "MSEX": "Middlesex Water Company Common Stock", "MSFT": "Microsoft Corporation Common Stock", "MSGE": "Madison Square Garden Entertainment Corp. Class A Common Stock ", "MSGM": "Motorsport Games Inc. Class A Common Stock", "MSGS": "Madison Square Garden Sports Corp. Class A Common Stock (New)", "MSI": "Motorola Solutions Inc. Common Stock", "MSM": "MSC Industrial Direct Company Inc. Common Stock", "MSN": "Emerson Radio Corporation Common Stock", "MSON": "MISONIX Inc. Common Stock (DE)", "MSP": "Datto Holding Corp. Common Stock", "MSTR": "MicroStrategy Incorporated Common Stock Class A", "MSVB": "Mid-Southern Bancorp Inc. Common Stock", "MT": "Arcelor Mittal NY Registry Shares NEW", "MTA": "Metalla Royalty & Streaming Ltd. Common Shares", "MTAC": "MedTech Acquisition Corporation Class A Common Stock", "MTACU": "MedTech Acquisition Corporation Unit", "MTACW": "MedTech Acquisition Corporation Warrant", "MTAL": "Metals Acquisition Corp Class A Ordinary Shares", "MTB": "M&T Bank Corporation Common Stock", "MTBC": "CareCloud Inc. Common Stock", "MTBCP": "CareCloud Inc. 11% Series A Cumulative Redeemable Perpetual Preferred Stock", "MTC": "MMTec Inc. Common Shares", "MTCH": "Match Group Inc. Common Stock", "MTCN": "ArcelorMittal 5.50% Mandatorily Convertible Subordinated Notes due 2023", "MTCR": "Metacrine Inc. Common Stock", "MTD": "Mettler-Toledo International Inc. Common Stock", "MTDR": "Matador Resources Company Common Stock", "MTEM": "Molecular Templates Inc. Common Stock", "MTEX": "Mannatech Incorporated Common Stock", "MTG": "MGIC Investment Corporation Common Stock", "MTH": "Meritage Homes Corporation Common Stock", "MTL": "Mechel PAO American Depositary Shares (Each rep. 1 common shares)", "MTL^": "Mechel PAO American Depositary Shares (each representing one-half of a Preferred Share)", "MTLS": "Materialise NV American Depositary Shares", "MTN": "Vail Resorts Inc. Common Stock", "MTNB": "Matinas Biopharma Holdings Inc. Common Stock", "MTOR": "Meritor Inc. 
Common Stock", "MTP": "Midatech Pharma PLC American Depositary Shs", "MTR": "Mesa Royalty Trust Common Stock", "MTRN": "Materion Corporation", "MTRX": "Matrix Service Company Common Stock", "MTRYU": "Monterey Bio Acquisition Corporation Unit", "MTSI": "MACOM Technology Solutions Holdings Inc. Common Stock", "MTTR": "Matterport Inc. Class A Common Stock", "MTTRW": "Matterport Inc. Warrant", "MTW": "Manitowoc Company Inc. (The) Common Stock", "MTX": "Minerals Technologies Inc. Common Stock", "MTZ": "MasTec Inc. Common Stock", "MU": "Micron Technology Inc. Common Stock", "MUA": "Blackrock MuniAssets Fund Inc Common Stock", "MUC": "Blackrock MuniHoldings California Quality Fund Inc. Common Stock", "MUDS": "Mudrick Capital Acquisition Corporation II Class A Common Stock", "MUDSW": "Mudrick Capital Acquisition Corporation II Warrant", "MUE": "Blackrock MuniHoldings Quality Fund II Inc. Common Stock", "MUFG": "Mitsubishi UFJ Financial Group Inc. Common Stock", "MUI": "BlackRock Municipal Income Fund Inc. Common Stock", "MUJ": "Blackrock MuniHoldings New Jersey Quality Fund Inc. Common Stock", "MUR": "Murphy Oil Corporation Common Stock", "MUSA": "Murphy USA Inc. Common Stock", "MUX": "McEwen Mining Inc. Common Stock", "MVBF": "MVB Financial Corp. Common Stock", "MVF": "Blackrock MuniVest Fund Inc. Common Stock", "MVIS": "MicroVision Inc. Common Stock", "MVO": "MV Oil Trust Units of Beneficial Interests", "MVST": "Microvast Holdings Inc. Common Stock", "MVSTW": "Microvast Holdings Inc. Warrants", "MVT": "Blackrock MuniVest Fund II Inc. Common Stock", "MWA": "MUELLER WATER PRODUCTS Common Stock", "MX": "Magnachip Semiconductor Corporation Common Stock", "MXC": "Mexco Energy Corporation Common Stock", "MXCT": "MaxCyte Inc. Common Stock", "MXE": "Mexico Equity and Income Fund Inc. (The) Common Stock", "MXF": "Mexico Fund Inc. (The) Common Stock", "MXL": "MaxLinear Inc. Common Stock", "MYC": "Blackrock MuniYield California Fund Inc. Common Stock", "MYD": "Blackrock MuniYield Fund Inc. Common Stock", "MYE": "Myers Industries Inc. Common Stock", "MYFW": "First Western Financial Inc. Common Stock", "MYGN": "Myriad Genetics Inc. Common Stock", "MYI": "Blackrock MuniYield Quality Fund III Inc Common Stock", "MYJ": "Blackrock MuniYield New Jersey Fund Inc Common Stock", "MYMD": "MyMD Pharmaceuticals Inc. Common Stock", "MYN": "Blackrock MuniYield New York Quality Fund Inc.Common Stock", "MYO": "Myomo Inc. Common Stock", "MYOV": "Myovant Sciences Ltd. Common Shares", "MYPS": "PLAYSTUDIOS Inc. Class A Common Stock", "MYPSW": "PLAYSTUDIOS Inc. Warrant", "MYRG": "MYR Group Inc. Common Stock", "MYSZ": "My Size Inc. Common Stock", "MYTE": "MYT Netherlands Parent B.V. American Depositary Shares each representing one Ordinary Share", "NAAC": "North Atlantic Acquisition Corporation Class A Ordinary Share", "NAACU": "North Atlantic Acquisition Corporation Unit", "NAACW": "North Atlantic Acquisition Corporation Warrant", "NABL": "N-able Inc. Common Stock", "NAC": "Nuveen California Quality Municipal Income Fund", "NAD": "Nuveen Quality Municipal Income Fund Common Stock", "NAII": "Natural Alternatives International Inc. Common Stock", "NAK": "Northern Dynasty Minerals Ltd. Common Stock", "NAKD": "Naked Brand Group Limited Ordinary Shares", "NAN": "Nuveen New York Quality Municipal Income Fund Common Stock", "NAOV": "NanoVibronix Inc. Common Stock", "NAPA": "The Duckhorn Portfolio Inc. Common Stock", "NARI": "Inari Medical Inc. 
Common Stock", "NAT": "Nordic American Tankers Limited Common Stock", "NATH": "Nathan's Famous Inc. Common Stock", "NATI": "National Instruments Corporation Common Stock", "NATR": "Nature's Sunshine Products Inc. Common Stock", "NAUT": "Nautilus Biotechnolgy Inc. Common Stock", "NAVB": "Navidea Biopharmaceuticals Inc. Common Stock", "NAVI": "Navient Corporation Common Stock", "NAZ": "Nuveen Arizona Quality Municipal Income Fund Common Stock", "NBB": "Nuveen Taxable Municipal Income Fund Common Shares of Beneficial Interest", "NBEV": "NewAge Inc. Common Stock (Delaware)", "NBH": "Neuberger Berman Municipal Fund Inc. Common Stock", "NBHC": "National Bank Holdings Corporation Common Stock", "NBIX": "Neurocrine Biosciences Inc. Common Stock", "NBN": "Northeast Bank Common Stock", "NBO": "Neuberger Berman New York Municipal Fund Inc. Common Stock", "NBR": "Nabors Industries Ltd.", "NBRV": "Nabriva Therapeutics plc Ordinary Shares Ireland", "NBSE": "NeuBase Therapeutics Inc. Common Stock", "NBST": "Newbury Street Acquisition Corporation Common Stock", "NBSTW": "Newbury Street Acquisition Corporation Warrants", "NBTB": "NBT Bancorp Inc. Common Stock", "NBTX": "Nanobiotix S.A. American Depositary Shares", "NBW": "Neuberger Berman California Municipal Fund Inc Common Stock", "NBXG": "Neuberger Berman Next Generation Connectivity Fund Inc. Common Stock", "NBY": "NovaBay Pharmaceuticals Inc. Common Stock", "NC": "NACCO Industries Inc. Common Stock", "NCA": "Nuveen California Municipal Value Fund", "NCBS": "Nicolet Bankshares Inc. Common Stock", "NCLH": "Norwegian Cruise Line Holdings Ltd. Ordinary Shares", "NCMI": "National CineMedia Inc. Common Stock", "NCNA": "NuCana plc American Depositary Share", "NCNO": "nCino Inc. Common Stock", "NCR": "NCR Corporation Common Stock", "NCSM": "NCS Multistage Holdings Inc. Common Stock", "NCTY": "The9 Limited American Depository Shares", "NCV": "Virtus AllianzGI Convertible & Income Fund Common Shares of Beneficial Interest", "NCV^A": "Virtus AllianzGI Convertible & Income Fund 5.625% Series A Cumulative Preferred Shares", "NCZ": "Virtus AllianzGI Convertible & Income Fund II Common Shares of Beneficial Interest", "NCZ^A": "Virtus AllianzGI Convertible & Income Fund II 5.50% Series A Cumulative Preferred Shares", "NDAC": "NightDragon Acquisition Corp. Class A Common stock", "NDACU": "NightDragon Acquisition Corp. SCALE Units", "NDACW": "NightDragon Acquisition Corp. Warrants to purchase Class A common stock", "NDAQ": "Nasdaq Inc. Common Stock", "NDLS": "Noodles & Company Class A Common Stock", "NDMO": "Nuveen Dynamic Municipal Opportunities Fund Common Shares of Beneficial Interest", "NDP": "Tortoise Energy Independence Fund Inc. Common Stock", "NDRA": "ENDRA Life Sciences Inc. Common Stock", "NDRAW": "ENDRA Life Sciences Inc. Warrants", "NDSN": "Nordson Corporation Common Stock", "NE": "Noble Corporation plc Ordinary Shares", "NEA": "Nuveen AMT-Free Quality Municipal Income Fund Common Shares of Beneficial Interest Par Value $.01", "NECB": "NorthEast Community Bancorp Inc. Common Stock", "NEE": "NextEra Energy Inc. Common Stock", "NEE^K": "NextEra Energy Inc. Series K Junior Subordinated Debentures due June 1 2076", "NEE^N": "NextEra Energy Inc. Series N Junior Subordinated Debentures due March 1 2079", "NEE^O": "NextEra Energy Inc. 4.872% Corporate Units", "NEE^P": "NextEra Energy Inc. 5.279% Corporate Units", "NEE^Q": "NextEra Energy Inc. 6.219% Corporate Units", "NEGG": "Newegg Commerce Inc. 
Common Shares", "NEM": "Newmont Corporation", "NEN": "New England Realty Associates Limited Partnership Class A Depositary Receipts Evidencing Units of Limited Partnership", "NEO": "NeoGenomics Inc. Common Stock", "NEOG": "Neogen Corporation Common Stock", "NEON": "Neonode Inc. Common Stock", "NEP": "NextEra Energy Partners LP Common Units representing limited partner interests", "NEPH": "Nephros Inc. Common Stock", "NEPT": "Neptune Wellness Solutions Inc. Ordinary Shares", "NERV": "Minerva Neurosciences Inc Common Stock", "NES": "Nuverra Environmental Solutions Inc. Common Stock", "NESR": "National Energy Services Reunited Corp. Ordinary Shares", "NESRW": "National Energy Services Reunited Corp. Warrant", "NET": "Cloudflare Inc. Class A Common Stock", "NETE": "Net Element Inc. Common Stock", "NETI": "Eneti Inc. Common Stock", "NEU": "NewMarket Corp Common Stock", "NEV": "Nuveen Enhanced Municipal Value Fund Common Shares of Beneficial Interest", "NEW": "Puxin Limited American Depositary Shares each representing two Ordinary Shares", "NEWP": "New Pacific Metals Corp. Common Shares", "NEWR": "New Relic Inc. Common Stock", "NEWT": "Newtek Business Services Corp. Common Stock (Maryland)", "NEWTL": "Newtek Business Services Corp. 5.75% Notes due 2024", "NEWTZ": "Newtek Business Services Corp. 5.50% Notes Due 2026", "NEX": "NexTier Oilfield Solutions Inc. Common Stock", "NEXA": "Nexa Resources S.A. Common Shares", "NEXI": "NexImmune Inc. Common Stock", "NEXT": "NextDecade Corporation Common Stock", "NFBK": "Northfield Bancorp Inc. Common Stock (Delaware)", "NFE": "New Fortress Energy Inc. Class A Common Stock", "NFG": "National Fuel Gas Company Common Stock", "NFGC": "New Found Gold Corp Common Shares", "NFH": "New Frontier Health Corporation Ordinary Shares", "NFJ": "Virtus Dividend Interest & Premium Strategy Fund Common Shares of Beneficial Interest", "NFLX": "Netflix Inc. Common Stock", "NG": "Novagold Resources Inc.", "NGAB": "Northern Genesis Acquisition Corp. II Common Stock", "NGC": "Northern Genesis Acquisition Corp. III Common Stock", "NGCA": "NextGen Acquisition Corp. II Class A Ordinary Shares", "NGCAU": "NextGen Acquisition Corp. II Units", "NGCAW": "NextGen Acquisition Corp. II Warrant", "NGD": "New Gold Inc.", "NGG": "National Grid Transco PLC National Grid PLC (NEW) American Depositary Shares", "NGL": "NGL ENERGY PARTNERS LP Common Units representing Limited Partner Interests", "NGL^B": "NGL ENERGY PARTNERS LP 9.00% Class B Fixed-to-Floating Rate Cumulative Redeemable Perpetual Preferred Units representing limited partnership interests", "NGL^C": "NGL ENERGY PARTNERS LP 9.625% Class C Fixed-to-Floating Rate Cumulative Redeemable Perpetual Preferred Units representing limited partner interests", "NGM": "NGM Biopharmaceuticals Inc. Common Stock", "NGMS": "NeoGames S.A. Ordinary Shares", "NGS": "Natural Gas Services Group Inc. Common Stock", "NGVC": "Natural Grocers by Vitamin Cottage Inc. Common Stock", "NGVT": "Ingevity Corporation Common Stock ", "NH": "NantHealth Inc. Common Stock", "NHC": "National HealthCare Corporation Common Stock", "NHF": "NexPoint Strategic Opportunities Fund", "NHF^A": "NexPoint Strategic Opportunities Fund 5.50% Series A Cumulative Preferred Shares", "NHI": "National Health Investors Inc. Common Stock", "NHS": "Neuberger Berman High Yield Strategies Fund", "NHTC": "Natural Health Trends Corp. 
Common Stock", "NI": "NiSource Inc Common Stock", "NI^B": "NiSource Inc Depositary Shares representing 1/1000th ownership interest in a share of 6.50% Series B Preferred Stock and 1/1000th ownership interest in a share of Series B-1 Preferred Stock", "NICE": "NICE Ltd American Depositary Shares", "NICK": "Nicholas Financial Inc. Common Stock", "NID": "Nuveen Intermediate Duration Municipal Term Fund Common Shares of Beneficial Interest", "NIE": "Virtus AllianzGI Equity & Convertible Income Fund Common Shares of Beneficial Interest", "NIM": "Nuveen Select Maturities Municipal Fund Common Stock", "NIMC": "NiSource Inc Series A Corporate Units", "NINE": "Nine Energy Service Inc. Common Stock", "NIO": "NIO Inc. American depositary shares each representing one Class A ordinary share", "NIQ": "Nuveenn Intermediate Duration Quality Municipal Term Fund Common Shares of Beneficial Interest", "NISN": "NiSun International Enterprise Development Group Co. Ltd. Class A Common Shares", "NIU": "Niu Technologies American Depositary Shares", "NJR": "NewJersey Resources Corporation Common Stock", "NKE": "Nike Inc. Common Stock", "NKG": "Nuveen Georgia Quality Municipal Income Fund ", "NKLA": "Nikola Corporation Common Stock", "NKSH": "National Bankshares Inc. Common Stock", "NKTR": "Nektar Therapeutics Common Stock", "NKTX": "Nkarta Inc. Common Stock", "NKX": "Nuveen California AMT-Free Quality Municipal Income Fund", "NL": "NL Industries Inc. Common Stock", "NLIT": "Northern Lights Acquisition Corp. Class A Common Stock", "NLITU": "Northern Lights Acquisition Corp. Units", "NLITW": "Northern Lights Acquisition Corp. Warrants", "NLOK": "NortonLifeLock Inc. Common Stock", "NLS": "Nautilus Inc. Common Stock", "NLSN": "Nielsen N.V. Ordinary Shares", "NLSP": "NLS Pharmaceutics Ltd. Ordinary Shares", "NLSPW": "NLS Pharmaceutics Ltd. Warrant", "NLTX": "Neoleukin Therapeutics Inc. Common Stock", "NLY": "Annaly Capital Management Inc Common Stock", "NLY^F": "Annaly Capital Management Inc 6.95% Series F", "NLY^G": "Annaly Capital Management Inc 6.50% Series G Fixed-to-Floating Rate Cumulative Redeemable Preferred Stock", "NLY^I": "Annaly Capital Management Inc 6.750% Series I Fixed-to-Floating Rate Cumulative Redeemable Preferred Stock", "NM": "Navios Maritime Holdings Inc. Common Stock", "NM^G": "Navios Maritime Holdings Inc. Sponsored ADR Representing 1/100th Perpetual Preferred Series G (Marshall Islands)", "NM^H": "Navios Maritime Holdings Inc. Sponsored ADR Representing 1/100th Perp. Preferred Series H%", "NMCO": "Nuveen Municipal Credit Opportunities Fund Common Shares", "NMFC": "New Mountain Finance Corporation Common Stock", "NMG": "Nouveau Monde Graphite Inc. Common Shares", "NMI": "Nuveen Municipal Income Fund Inc. Common Stock", "NMIH": "NMI Holdings Inc. Class A Common Stock", "NMK^B": "Niagara Mohawk Holdings Inc. Preferred Stock", "NMK^C": "Niagara Mohawk Holdings Inc. Preferred Stock", "NML": "Neuberger Berman MLP and Energy Income Fund Inc. Common Stock", "NMM": "Navios Maritime Partners LP Common Units Representing Limited Partner Interests", "NMMC": "North Mountain Merger Corp. Class A Common Stock", "NMMCU": "North Mountain Merger Corp. Unit", "NMMCW": "North Mountain Merger Corp. Warrant", "NMR": "Nomura Holdings Inc ADR American Depositary Shares", "NMRD": "Nemaura Medical Inc. Common Stock", "NMRK": "Newmark Group Inc. 
Class A Common Stock", "NMS": "Nuveen Minnesota Quality Municipal Income Fund ", "NMT": "Nuveen Massachusetts Quality Municipal Income Fund Common Stock", "NMTC": "NeuroOne Medical Technologies Corporation Common Stock", "NMTR": "9 Meters Biopharma Inc. Common Stock", "NMZ": "Nuveen Municipal High Income Opportunity Fund Common Stock $0.01 par value per share", "NNA": "Navios Maritime Acquisition Corporation Common stock", "NNBR": "NN Inc. Common Stock", "NNDM": "Nano Dimension Ltd. American Depositary Shares", "NNI": "Nelnet Inc. Common Stock", "NNN": "National Retail Properties Common Stock", "NNN^F": "National Retail Properties Depositary Shares each representing a 1/100th interest in a share of 5.20% Series F Cumulative Redeemable Preferred Stock", "NNOX": "NANO-X IMAGING LTD Ordinary Shares", "NNVC": "NanoViricides Inc. Common Stock", "NNY": "Nuveen New York Municipal Value Fund Common Stock", "NOA": "North American Construction Group Ltd. Common Shares (no par)", "NOAC": "Natural Order Acquisition Corp. Common Stock", "NOACU": "Natural Order Acquisition Corp. Unit", "NOACW": "Natural Order Acquisition Corp. Warrant", "NOAH": "Noah Holdings Limited", "NOC": "Northrop Grumman Corporation Common Stock", "NODK": "NI Holdings Inc. Common Stock", "NOG": "Northern Oil and Gas Inc. Common Stock", "NOK": "Nokia Corporation Sponsored American Depositary Shares", "NOM": "Nuveen Missouri Quality Municipal Income Fund ", "NOMD": "Nomad Foods Limited Ordinary Shares", "NOTV": "Inotiv Inc. Common Stock", "NOV": "NOV Inc. Common Stock", "NOVA": "Sunnova Energy International Inc. Common Stock", "NOVN": "Novan Inc. Common Stock", "NOVT": "Novanta Inc. Common Stock", "NOVV": "Nova Vision Acquisition Corp. Ordinary share", "NOVVR": "Nova Vision Acquisition Corp. Rights", "NOVVU": "Nova Vision Acquisition Corp. Unit", "NOVVW": "Nova Vision Acquisition Corp. Warrant", "NOW": "ServiceNow Inc. Common Stock", "NP": "Neenah Inc. Common Stock", "NPCE": "Neuropace Inc. Common Stock", "NPCT": "Nuveen Core Plus Impact Fund Common Shares of Beneficial Interest", "NPK": "National Presto Industries Inc. Common Stock", "NPO": "EnPro Industries Inc", "NPTN": "NeoPhotonics Corporation Common Stock", "NPV": "Nuveen Virginia Quality Municipal Income Fund Common Stock", "NQP": "Nuveen Pennsylvania Quality Municipal Income Fund Common Stock", "NR": "Newpark Resources Inc. Common Stock", "NRAC": "Noble Rock Acquisition Corporation Class A Ordinary Share", "NRACU": "Noble Rock Acquisition Corporation Unit", "NRACW": "Noble Rock Acquisition Corporation Warrant", "NRBO": "NeuroBo Pharmaceuticals Inc. Common Stock", "NRC": "National Research Corporation Common Stock (Delaware)", "NRDY": "Nerdy Inc. Class A Common Stock", "NREF": "NexPoint Real Estate Finance Inc. Common Stock", "NREF^A": "NexPoint Real Estate Finance Inc. 8.50% Series A Cumulative Redeemable Preferred Stock", "NRG": "NRG Energy Inc. Common Stock", "NRGX": "PIMCO Energy and Tactical Credit Opportunities Fund Common Shares of Beneficial Interest", "NRIM": "Northrim BanCorp Inc Common Stock", "NRIX": "Nurix Therapeutics Inc. Common stock", "NRK": "Nuveen New York AMT-Free Quality Municipal Income Fund ", "NRO": "Neuberger Berman Real Estate Securities Income Fund Inc. 
Neuberger Berman Real Estate Securities Income Fund Inc.", "NRP": "Natural Resource Partners LP Limited Partnership", "NRT": "North European Oil Royality Trust Common Stock", "NRUC": "National Rural Utilities Cooperative Finance Corporation 5.500% Subordinated Notes due 2064 (Subordinated Deferrable Interest Notes)", "NRXP": "NRX Pharmaceuticals Inc. Common Stock", "NRXPW": "NRX Pharmaceuticals Inc. Warrant", "NRZ": "New Residential Investment Corp. Common Stock", "NRZ^A": "New Residential Investment Corp. 7.50% Series A Fixed-to-Floating Rate Cumulative Redeemable Preferred Stock", "NRZ^B": "New Residential Investment Corp. 7.125% Series B Fixed-to-Floating Rate Cumulative Redeemable Preferred Stock", "NRZ^C": "New Residential Investment Corp. 6.375% Series C Fixed-to-Floating Rate Cumulative Redeemable Preferred Stock", "NRZ^D": "New Residential Investment Corp. 7.00% Fixed-Rate Reset Series D Cumulative Redeemable Preferred Stock", "NS": "Nustar Energy L.P. Common Units", "NS^A": "Nustar Energy L.P. 8.50% Series A Fixed-to-Floating Rate Cumulative Redeemable Perpetual Preferred Units", "NS^B": "Nustar Energy L.P. 7.625% Series B Fixed-to-Floating Rate Cumulative Redeemable Perpetual Preferred Units representing limited partner interests", "NS^C": "Nustar Energy L.P. 9.00% Series C Fixed-to-Floating Rate Cumulative Redeemable Perpetual Preferred Units", "NSA": "National Storage Affiliates Trust Common Shares of Beneficial Interest", "NSA^A": "National Storage Affiliates Trust 6.000% Series A Cumulative Redeemable Preferred Shares of Beneficial Interest (Liquidation Preference $25.00 per share)", "NSC": "Norfolk Southern Corporation Common Stock", "NSEC": "National Security Group Inc. Common Stock", "NSIT": "Insight Enterprises Inc. Common Stock", "NSL": "Nuveen Senior Income Fund Common Stock", "NSP": "Insperity Inc. Common Stock", "NSPR": "InspireMD Inc. Common Stock", "NSPRZ": "InspireMD Inc. Series B Warrants", "NSR": "Nomad Royalty Company Ltd. Common Shares", "NSS": "NuStar Logistics L.P. 7.625% Fixed-to-Floating Rate Subordinated Notes due 2043", "NSSC": "NAPCO Security Technologies Inc. Common Stock", "NSTB": "Northern Star Investment Corp. II Class A Common stock", "NSTC": "Northern Star Investment Corp. III Class A Common Stock", "NSTD": "Northern Star Investment Corp. IV Class A Common Stock", "NSTG": "NanoString Technologies Inc. Common Stock", "NSYS": "Nortech Systems Incorporated Common Stock", "NTAP": "NetApp Inc. Common Stock", "NTB": "Bank of N.T. Butterfield & Son Limited (The) Voting Ordinary Shares", "NTCO": "Natura &Co Holding S.A. American Depositary Shares ", "NTCT": "NetScout Systems Inc. Common Stock", "NTES": "NetEase Inc. American Depositary Shares", "NTG": "Tortoise Midstream Energy Fund Inc. Common Stock", "NTGR": "NETGEAR Inc. Common Stock", "NTIC": "Northern Technologies International Corporation Common Stock", "NTIP": "Network-1 Technologies Inc. Common Stock", "NTLA": "Intellia Therapeutics Inc. Common Stock", "NTNX": "Nutanix Inc. Class A Common Stock", "NTP": "Nam Tai Property Inc. Common Stock", "NTR": "Nutrien Ltd. Common Shares", "NTRA": "Natera Inc. Common Stock", "NTRB": "Nutriband Inc. Common Stock", "NTRBW": "Nutriband Inc. Warrant", "NTRS": "Northern Trust Corporation Common Stock", "NTRSO": "Northern Trust Corporation Depositary Shares Each Representing a 1/1000th Interest in a Share of Series E Non-Cumulative Perpetual Preferred Stock", "NTST": "NetSTREIT Corp. 
Common Stock", "NTUS": "Natus Medical Incorporated Common Stock", "NTWK": "NetSol Technologies Inc. Common Stock", "NTZ": "Natuzzi S.p.A.", "NUAN": "Nuance Communications Inc. Common Stock", "NUE": "Nucor Corporation Common Stock", "NUO": "Nuveen Ohio Quality Municipal Income Fund Common Stock", "NURO": "NeuroMetrix Inc. Common Stock", "NUS": "Nu Skin Enterprises Inc. Common Stock", "NUV": "Nuveen Municipal Value Fund Inc. Common Stock", "NUVA": "NuVasive Inc. Common Stock", "NUVB": "Nuvation Bio Inc. Class A Common Stock", "NUVL": "Nuvalent Inc. Class A Common Stock", "NUW": "Nuveen AMT-Free Municipal Value Fund", "NUWE": "Nuwellis Inc. Common Stock", "NUZE": "NuZee Inc. Common Stock", "NVAX": "Novavax Inc. Common Stock", "NVCN": "Neovasc Inc. Common Shares", "NVCR": "NovoCure Limited Ordinary Shares", "NVDA": "NVIDIA Corporation Common Stock", "NVEC": "NVE Corporation Common Stock", "NVEE": "NV5 Global Inc. Common Stock", "NVFY": "Nova Lifestyle Inc. Common Stock", "NVG": "Nuveen AMT-Free Municipal Credit Income Fund ", "NVGS": "Navigator Holdings Ltd. Ordinary Shares (Marshall Islands)", "NVIV": "InVivo Therapeutics Holdings Corp Common Stock", "NVMI": "Nova Ltd. Ordinary Shares", "NVNO": "enVVeno Medical Corporation Common Stock", "NVNOW": "enVVeno Medical Corporation Warrants", "NVO": "Novo Nordisk A/S Common Stock", "NVOS": "Novo Integrated Sciences Inc. Common Stock", "NVR": "NVR Inc. Common Stock", "NVRO": "Nevro Corp. Common Stock", "NVS": "Novartis AG Common Stock", "NVSA": "New Vista Acquisition Corp Class A Ordinary Shares", "NVSAU": "New Vista Acquisition Corp Unit", "NVSAW": "New Vista Acquisition Corp. Warrant ", "NVST": "Envista Holdings Corporation Common Stock", "NVT": "nVent Electric plc Ordinary Shares ", "NVTA": "Invitae Corporation Common Stock", "NVVE": "Nuvve Holding Corp. Common Stock", "NVVEW": "Nuvve Holding Corp. Warrant", "NWBI": "Northwest Bancshares Inc. Common Stock", "NWE": "NorthWestern Corporation Common Stock", "NWFL": "Norwood Financial Corp. Common Stock", "NWG": "NatWest Group plc American Depositary Shares", "NWL": "Newell Brands Inc. Common Stock", "NWLI": "National Western Life Group Inc. Class A Common Stock", "NWN": "Northwest Natural Holding Company Common Stock", "NWPX": "Northwest Pipe Company Common Stock", "NWS": "News Corporation Class B Common Stock", "NWSA": "News Corporation Class A Common Stock", "NX": "Quanex Building Products Corporation Common Stock", "NXC": "Nuveen California Select Tax-Free Income Portfolio Common Stock", "NXE": "Nexgen Energy Ltd. Common Shares", "NXGN": "NextGen Healthcare Inc. Common Stock", "NXJ": "Nuveen New Jersey Qualified Municipal Fund ", "NXN": "Nuveen New York Select Tax-Free Income Portfolio Common Stock", "NXP": "Nuveen Select Tax Free Income Portfolio Common Stock", "NXPI": "NXP Semiconductors N.V. Common Stock", "NXQ": "Nuveen Select Tax Free Income Portfolio II Common Stock", "NXR": "Nuveen Select Tax Free Income Portfolio III Common Stock", "NXRT": "NexPoint Residential Trust Inc. Common Stock", "NXST": "Nexstar Media Group Inc. Class A Common Stock", "NXTC": "NextCure Inc. Common Stock", "NXTD": "Nxt-ID Inc. Common Stock", "NXTP": "NextPlay Technologies Inc. Common Stock", "NXU": "Novus Capital Corporation II Class A Common Stock", "NYC": "New York City REIT Inc. Class A Common Stock", "NYCB": "New York Community Bancorp Inc. Common Stock", "NYCB^A": "New York Community Bancorp Inc. 
Depositary shares each representing a 1/40th interest in a share of Fixed-to-Floating Rate Series A Noncumulative Perpetual Preferred Stock", "NYCB^U": "New York Community Bancorp Inc. New York Community Capital Tr V (BONUSES)", "NYMT": "New York Mortgage Trust Inc. Common Stock", "NYMTL": "New York Mortgage Trust Inc. 6.875% Series F Fixed-to-Floating Rate Cumulative Redeemable Preferred Stock $0.01 par value per share", "NYMTM": "New York Mortgage Trust Inc. 7.875% Series E Fixed-to-Floating Rate Cumulative Redeemable Preferred Stock", "NYMTN": "New York Mortgage Trust Inc. 8.00% Series D Fixed-to-Floating Rate Cumulative Redeemable Preferred Stock", "NYMTP": "New York Mortgage Trust Inc. 7.75% Series B Cumulative Redeemable Preferred Stock", "NYMX": "Nymox Pharmaceutical Corporation Common Stock (Bahamas)", "NYT": "New York Times Company (The) Common Stock", "NYXH": "Nyxoah SA Ordinary Shares", "NZF": "Nuveen Municipal Credit Income Fund ", "O": "Realty Income Corporation Common Stock", "OACB": "Oaktree Acquisition Corp. II Class A Ordinary Shares", "OAK^A": "Oaktree Capital Group LLC 6.625% Series A Preferred units", "OAK^B": "Oaktree Capital Group LLC 6.550% Series B Preferred Units", "OAS": "Oasis Petroleum Inc. Common Stock", "OB": "Outbrain Inc. Common Stock", "OBAS": "Optibase Ltd. Ordinary Shares", "OBCI": "Ocean Bio-Chem Inc. Common Stock", "OBLG": "Oblong Inc. Common Stock", "OBNK": "Origin Bancorp Inc. Common Stock", "OBSV": "ObsEva SA Ordinary Shares", "OBT": "Orange County Bancorp Inc. Common Stock", "OC": "Owens Corning Inc Common Stock New", "OCA": "Omnichannel Acquisition Corp. Class A Common Stock", "OCAX": "OCA Acquisition Corp. Class A Common Stock", "OCAXW": "OCA Acquisition Corp. Warrant", "OCC": "Optical Cable Corporation Common Stock", "OCCI": "OFS Credit Company Inc. Common Stock", "OCCIO": "OFS Credit Company Inc. 6.125% Series C Term Preferred Stock", "OCCIP": "OFS Credit Company Inc. 6.875% Series A Term Preferred Stock", "OCDX": "Ortho Clinical Diagnostics Holdings plc Ordinary Shares", "OCFC": "OceanFirst Financial Corp. Common Stock", "OCFCP": "OceanFirst Financial Corp. Depositary Shares", "OCFT": "OneConnect Financial Technology Co. Ltd. American Depositary Shares each representing three ordinary shares", "OCG": "Oriental Culture Holding LTD Ordinary Shares", "OCGN": "Ocugen Inc. Common Stock", "OCN": "Ocwen Financial Corporation NEW Common Stock", "OCSL": "Oaktree Specialty Lending Corporation Common Stock", "OCUL": "Ocular Therapeutix Inc. Common Stock", "OCUP": "Ocuphire Pharma Inc. Common Stock", "OCX": "Oncocyte Corporation Common Stock", "ODC": "Oil-Dri Corporation Of America Common Stock", "ODFL": "Old Dominion Freight Line Inc. Common Stock", "ODP": "The ODP Corporation Common Stock", "ODT": "Odonate Therapeutics Inc. Common Stock", "OEC": "Orion Engineered Carbons S.A Common Shares", "OEG": "Orbital Energy Group Inc. Common Stock", "OEPW": "One Equity Partners Open Water I Corp. Class A Common Stock", "OEPWU": "One Equity Partners Open Water I Corp. Unit", "OEPWW": "One Equity Partners Open Water I Corp. Warrant", "OESX": "Orion Energy Systems Inc. Common Stock", "OFC": "Corporate Office Properties Trust Common Stock", "OFED": "Oconee Federal Financial Corp. Common Stock", "OFG": "OFG Bancorp Common Stock", "OFIX": "Orthofix Medical Inc. Common Stock (DE)", "OFLX": "Omega Flex Inc. 
Common Stock", "OFS": "OFS Capital Corporation Common Stock", "OFSSG": "OFS Capital Corporation 6.25% Notes Due 2023", "OFSSI": "OFS Capital Corporation 5.95% Notes due 2026", "OG": "Onion Global Limited American Depositary Shares (each ten (10) ADSs representing one (1) Class A Ordinary Share)", "OGE": "OGE Energy Corp Common Stock", "OGEN": "Oragenics Inc. Common Stock", "OGI": "Organigram Holdings Inc. Common Shares", "OGN": "Organon & Co. Common Stock ", "OGS": "ONE Gas Inc. Common Stock", "OHI": "Omega Healthcare Investors Inc. Common Stock", "OHPA": "Orion Acquisition Corp. Class A common stock", "OHPAU": "Orion Acquisition Corp. Unit", "OHPAW": "Orion Acquisition Corp. Warrant", "OI": "O-I Glass Inc. Common Stock", "OIA": "Invesco Municipal Income Opportunities Trust Common Stock", "OII": "Oceaneering International Inc. Common Stock", "OIIM": "O2Micro International Limited American Depositary Shares", "OIS": "Oil States International Inc. Common Stock", "OKE": "ONEOK Inc. Common Stock", "OKTA": "Okta Inc. Class A Common Stock", "OLB": "The OLB Group Inc. Common Stock", "OLED": "Universal Display Corporation Common Stock", "OLK": "Olink Holding AB (publ) American Depositary Shares", "OLLI": "Ollie's Bargain Outlet Holdings Inc. Common Stock", "OLMA": "Olema Pharmaceuticals Inc. Common Stock", "OLN": "Olin Corporation Common Stock", "OLO": "Olo Inc. Class A Common Stock", "OLP": "One Liberty Properties Inc. Common Stock", "OLPX": "Olaplex Holdings Inc. Common Stock", "OM": "Outset Medical Inc. Common Stock", "OMAB": "Grupo Aeroportuario del Centro Norte S.A.B. de C.V. ADS", "OMC": "Omnicom Group Inc. Common Stock", "OMCL": "Omnicell Inc. Common Stock ($0.001 par value)", "OMEG": "Omega Alpha SPAC Class A Ordinary Shares", "OMER": "Omeros Corporation Common Stock", "OMEX": "Odyssey Marine Exploration Inc. Common Stock", "OMF": "OneMain Holdings Inc. Common Stock", "OMGA": "Omega Therapeutics Inc. Common Stock", "OMI": "Owens & Minor Inc. Common Stock", "OMIC": "Singular Genomics Systems Inc. Common Stock", "OMP": "Oasis Midstream Partners LP Common Units Representing Limited Partner Interests", "OMQS": "OMNIQ Corp. Common Stock", "ON": "ON Semiconductor Corporation Common Stock", "ONB": "Old National Bancorp Common Stock", "ONCR": "Oncorus Inc. Common Stock", "ONCS": "OncoSec Medical Incorporated Common Stock", "ONCT": "Oncternal Therapeutics Inc. Common Stock", "ONCY": "Oncolytics Biotech Inc. Common Shares", "ONDS": "Ondas Holdings Inc. Common Stock", "ONE": "OneSmart International Education Group Limited ADS", "ONEM": "1Life Healthcare Inc. Common Stock", "ONEW": "OneWater Marine Inc. Class A Common Stock", "ONON": "On Holding AG Class A Ordinary Shares", "ONTF": "ON24 Inc. Common Stock", "ONTO": "Onto Innovation Inc. Common Stock", "ONTX": "Onconova Therapeutics Inc. Common Stock", "ONVO": "Organovo Holdings Inc. Common Stock", "OOMA": "Ooma Inc. Common Stock", "OPA": "Magnum Opus Acquisition Limited Class A Ordinary Shares", "OPAD": "Offerpad Solutions Inc. Class A Common Stock", "OPBK": "OP Bancorp Common Stock", "OPCH": "Option Care Health Inc. Common Stock", "OPEN": "Opendoor Technologies Inc Common Stock", "OPFI": "OppFi Inc. Class A Common Stock", "OPGN": "OpGen Inc. Common Stock", "OPHC": "OptimumBank Holdings Inc. Common Stock", "OPI": "Office Properties Income Trust Common Shares of Beneficial Interest", "OPINL": "Office Properties Income Trust 6.375% Senior Notes due 2050", "OPK": "OPKO Health Inc. Common Stock", "OPNT": "Opiant Pharmaceuticals Inc. 
Common Stock", "OPOF": "Old Point Financial Corporation Common Stock", "OPP": "RiverNorth/DoubleLine Strategic Opportunity Fund Inc. Common Stock", "OPP^A": "RiverNorth/DoubleLine Strategic Opportunity Fund Inc. 4.375% Series A Cumulative Preferred Stock", "OPRA": "Opera Limited American Depositary Shares", "OPRT": "Oportun Financial Corporation Common Stock", "OPRX": "OptimizeRx Corporation Common Stock", "OPT": "Opthea Limited American Depositary Shares", "OPTN": "OptiNose Inc. Common Stock", "OPTT": "Ocean Power Technologies Inc. Common Stock", "OPY": "Oppenheimer Holdings Inc. Class A Common Stock (DE)", "OR": "Osisko Gold Royalties Ltd Common Shares", "ORA": "Ormat Technologies Inc. Common Stock", "ORAN": "Orange", "ORC": "Orchid Island Capital Inc. Common Stock", "ORCC": "Owl Rock Capital Corporation Common Stock", "ORCL": "Oracle Corporation Common Stock", "ORGN": "Origin Materials Inc. Common Stock", "ORGNW": "Origin Materials Inc. Warrants", "ORGO": "Organogenesis Holdings Inc. Class A Common Stock", "ORGS": "Orgenesis Inc. Common Stock", "ORI": "Old Republic International Corporation Common Stock", "ORIA": "Orion Biotech Opportunities Corp. Class A Ordinary Share", "ORIAW": "Orion Biotech Opportunities Corp. Warrant", "ORIC": "Oric Pharmaceuticals Inc. Common Stock", "ORLA": "Orla Mining Ltd. Common Shares", "ORLY": "O'Reilly Automotive Inc. Common Stock", "ORMP": "Oramed Pharmaceuticals Inc. Common Stock", "ORN": "Orion Group Holdings Inc. Common", "ORPH": "Orphazyme A/S American Depositary Shares", "ORRF": "Orrstown Financial Services Inc Common Stock", "ORTX": "Orchard Therapeutics plc American Depositary Shares", "OSAT": "Orbsat Corp Common Stock", "OSATW": "Orbsat Corp Warrants", "OSBC": "Old Second Bancorp Inc. Common Stock", "OSCR": "Oscar Health Inc. Class A Common Stock", "OSG": "Overseas Shipholding Group Inc. Class A Common Stock", "OSH": "Oak Street Health Inc. Common Stock", "OSI": "Osiris Acquisition Corp. Class A Common Stock", "OSIS": "OSI Systems Inc. Common Stock (DE)", "OSK": "Oshkosh Corporation (Holding Company)Common Stock", "OSMT": "Osmotica Pharmaceuticals plc Ordinary Shares", "OSPN": "OneSpan Inc. Common Stock", "OSS": "One Stop Systems Inc. Common Stock", "OSTK": "Overstock.com Inc. Common Stock", "OSTR": "Oyster Enterprises Acquisition Corp. Class A Common Stock", "OSTRU": "Oyster Enterprises Acquisition Corp. Unit", "OSTRW": "Oyster Enterprises Acquisition Corp. Warrant", "OSUR": "OraSure Technologies Inc. Common Stock", "OSW": "OneSpaWorld Holdings Limited Common Shares", "OTEC": "OceanTech Acquisitions I Corp. Class A Common Stock", "OTECU": "OceanTech Acquisitions I Corp. Units", "OTECW": "OceanTech Acquisitions I Corp. Warrant", "OTEX": "Open Text Corporation Common Shares", "OTIC": "Otonomy Inc. Common Stock", "OTIS": "Otis Worldwide Corporation Common Stock ", "OTLK": "Outlook Therapeutics Inc. Common Stock", "OTLKW": "Outlook Therapeutics Inc. Series A Warrant Expiring 02/18/2022", "OTLY": "Oatly Group AB American Depositary Shares", "OTMO": "Otonomo Technologies Ltd. Ordinary shares", "OTMOW": "Otonomo Technologies Ltd. Warrant", "OTRAU": "OTR Acquisition Corp. Unit", "OTRAW": "OTR Acquisition Corp. Warrant", "OTRK": "Ontrak Inc. Common Stock", "OTRKP": "Ontrak Inc. 9.50% Series A Cumulative Perpetual Preferred Stock", "OTTR": "Otter Tail Corporation Common Stock", "OUST": "Ouster Inc. Common Stock", "OUT": "OUTFRONT Media Inc. Common Stock", "OVBC": "Ohio Valley Banc Corp. Common Stock", "OVID": "Ovid Therapeutics Inc. 
Common Stock", "OVLY": "Oak Valley Bancorp (CA) Common Stock", "OVV": "Ovintiv Inc. (DE)", "OWL": "Blue Owl Capital Inc. Class A Common Stock", "OWLT": "Owlet Inc. Class A Common Stock", "OXAC": "Oxbridge Acquisition Corp. Class A Ordinary Shares", "OXACU": "Oxbridge Acquisition Corp. Unit", "OXACW": "Oxbridge Acquisition Corp. Warrant", "OXBR": "Oxbridge Re Holdings Limited Ordinary Shares", "OXBRW": "Oxbridge Re Holdings Limited Warrant expiring 3/26/2024", "OXLC": "Oxford Lane Capital Corp. Common Stock", "OXLCL": "Oxford Lane Capital Corp. 6.75% Notes due 2031", "OXLCM": "Oxford Lane Capital Corp. 6.75% Series 2024 Term Preferred Stock", "OXLCO": "Oxford Lane Capital Corp. Preferred Stock Shares 6.00% Series 2029", "OXLCP": "Oxford Lane Capital Corp. 6.25% Series 2027 Term Preferred Shares", "OXM": "Oxford Industries Inc. Common Stock", "OXSQ": "Oxford Square Capital Corp. Common Stock", "OXSQG": "Oxford Square Capital Corp. 5.50% Notes due 2028", "OXSQL": "Oxford Square Capital Corp. 6.50% Notes due 2024", "OXSQZ": "Oxford Square Capital Corp. 6.25% Notes due 2026", "OXUSU": "Oxus Acquisition Corp. Unit", "OXY": "Occidental Petroleum Corporation Common Stock", "OYST": "Oyster Point Pharma Inc. Common Stock", "OZK": "Bank OZK Common Stock", "OZON": "Ozon Holdings PLC American Depositary Shares each ADS representing one ordinary share", "PAA": "Plains All American Pipeline L.P. Common Units representing Limited Partner Interests", "PAAS": "Pan American Silver Corp. Common Stock", "PAC": "Grupo Aeroportuario Del Pacifico S.A. B. de C.V. Grupo Aeroportuario Del Pacifico S.A. de C.V. (each representing 10 Series B shares)", "PACB": "Pacific Biosciences of California Inc. Common Stock", "PACK": "Ranpak Holdings Corp Class A Common Stock", "PACW": "PacWest Bancorp Common Stock", "PACX": "Pioneer Merger Corp. Class A Ordinary Share", "PACXU": "Pioneer Merger Corp. Unit", "PACXW": "Pioneer Merger Corp. Warrant", "PAE": "PAE Incorporated Class A Common Stock", "PAEWW": "PAE Incorporated Warrants", "PAFO": "Pacifico Acquisition Corp. Common Stock", "PAFOR": "Pacifico Acquisition Corp. Rights", "PAFOU": "Pacifico Acquisition Corp. Units", "PAG": "Penske Automotive Group Inc. Common Stock", "PAGP": "Plains GP Holdings L.P. Class A Units representing Limited Partner Interests", "PAGS": "PagSeguro Digital Ltd. Class A Common Shares", "PAHC": "Phibro Animal Health Corporation Class A Common Stock", "PAI": "Western Asset Investment Grade Income Fund Inc.", "PAIC": "Petra Acquisition Inc. Common Stock", "PAICU": "Petra Acquisition Inc. Units", "PAICW": "Petra Acquisition Inc. Warrant", "PALI": "Palisade Bio Inc. Common Stock", "PALT": "Paltalk Inc. Common Stock", "PAM": "Pampa Energia S.A. Pampa Energia S.A.", "PANA": "Panacea Acquisition Corp. II Class A Ordinary Shares", "PANL": "Pangaea Logistics Solutions Ltd. Common Shares", "PANW": "Palo Alto Networks Inc. Common Stock", "PAQC": "Provident Acquisition Corp. Class A Ordinary Shares", "PAQCU": "Provident Acquisition Corp. Units", "PAQCW": "Provident Acquisition Corp. Warrant", "PAR": "PAR Technology Corporation Common Stock", "PARR": "Par Pacific Holdings Inc. Common Stock", "PASG": "Passage Bio Inc. Common Stock", "PATH": "UiPath Inc. Class A Common Stock", "PATI": "Patriot Transportation Holding Inc. Common Stock", "PATK": "Patrick Industries Inc. Common Stock", "PAVM": "PAVmed Inc. Common Stock", "PAVMW": "PAVmed Inc. Warrant", "PAVMZ": "PAVmed Inc. 
Series Z Warrant", "PAX": "Patria Investments Limited Class A Common Shares", "PAY": "Paymentus Holdings Inc. Class A Common Stock", "PAYA": "Paya Holdings Inc. Class A Common Stock", "PAYC": "Paycom Software Inc. Common Stock", "PAYO": "Payoneer Global Inc. Common Stock", "PAYOW": "Payoneer Global Inc. Warrant", "PAYS": "Paysign Inc. Common Stock", "PAYX": "Paychex Inc. Common Stock", "PB": "Prosperity Bancshares Inc. Common Stock", "PBA": "Pembina Pipeline Corp. Ordinary Shares (Canada)", "PBBK": "PB Bankshares Inc. Common Stock", "PBC": "Prospect Capital Corporation 6.875% Notes due 2029", "PBCT": "People's United Financial Inc. Common Stock", "PBCTP": "People's United Financial Inc. Perpetual Preferred Series A Fixed-to-floating Rate", "PBF": "PBF Energy Inc. Class A Common Stock", "PBFS": "Pioneer Bancorp Inc. Common Stock", "PBFX": "PBF Logistics LP Common Units representing limited partner interests", "PBH": "Prestige Consumer Healthcare Inc. Common Stock", "PBHC": "Pathfinder Bancorp Inc. Common Stock (MD)", "PBI": "Pitney Bowes Inc. Common Stock", "PBI^B": "Pitney Bowes Inc 6.70% Notes Due 2043", "PBIP": "Prudential Bancorp Inc. Common Stock", "PBLA": "Panbela Therapeutics Inc. Common Stock", "PBPB": "Potbelly Corporation Common Stock", "PBR": "Petroleo Brasileiro S.A.- Petrobras Common Stock", "PBT": "Permian Basin Royalty Trust Common Stock", "PBTS": "Powerbridge Technologies Co. Ltd. Ordinary Shares", "PBYI": "Puma Biotechnology Inc Common Stock", "PCAR": "PACCAR Inc. Common Stock", "PCB": "PCB Bancorp Common Stock", "PCF": "High Income Securities Fund Common Stock", "PCG": "Pacific Gas & Electric Co. Common Stock", "PCG^A": "Pacific Gas & Electric Co. 6% Preferred Stock", "PCG^B": "Pacific Gas & Electric Co. 5 1/2% Preferred Stock", "PCG^C": "Pacific Gas & Electric Co. 5% 1st Preferred Stock", "PCG^D": "Pacific Gas & Electric Co. 5% 1st Red. Preferred Stock", "PCG^E": "Pacific Gas & Electric Co. 5% 1st A Preferred Stock", "PCG^G": "Pacific Gas & Electric Co. 4.80% 1st Preferred Stock", "PCG^H": "Pacific Gas & Electric Co. 4.50% 1st Preferred Stock", "PCG^I": "Pacific Gas & Electric Co. 4.36% 1st Preferred Stock", "PCGU": "Pacific Gas & Electric Co. Equity Unit", "PCH": "PotlatchDeltic Corporation Common Stock", "PCI": "PIMCO Dynamic Credit and Mortgage Income Fund Common Shares of Beneficial Interest", "PCK": "Pimco California Municipal Income Fund II Common Shares of Beneficial Interest", "PCM": "PCM Fund Inc. Common Stock", "PCN": "Pimco Corporate & Income Strategy Fund Common Stock", "PCOM": "Points International Ltd. Common Shares", "PCOR": "Procore Technologies Inc. Common Stock", "PCPC": "Periphas Capital Partnering Corporation Class A Common Stock", "PCQ": "PIMCO California Municipal Income Fund Common Stock", "PCRX": "Pacira BioSciences Inc. Common Stock", "PCSA": "Processa Pharmaceuticals Inc. Common Stock", "PCSB": "PCSB Financial Corporation Common Stock", "PCT": "PureCycle Technologies Inc. Common stock", "PCTI": "PCTEL Inc. Common Stock", "PCTTW": "PureCycle Technologies Inc. Warrant", "PCTY": "Paylocity Holding Corporation Common Stock", "PCVX": "Vaxcyte Inc. Common Stock", "PCYG": "Park City Group Inc. Common Stock", "PCYO": "Pure Cycle Corporation Common Stock", "PD": "PagerDuty Inc. Common Stock", "PDCE": "PDC Energy Inc. Common Stock (Delaware)", "PDCO": "Patterson Companies Inc. Common Stock", "PDD": "Pinduoduo Inc. American Depositary Shares", "PDEX": "Pro-Dex Inc. Common Stock", "PDFS": "PDF Solutions Inc. 
Common Stock", "PDI": "PIMCO Dynamic Income Fund Common Stock", "PDLB": "PDL Community Bancorp Common Stock", "PDM": "Piedmont Office Realty Trust Inc. Class A Common Stock", "PDO": "PIMCO Dynamic Income Opportunities Fund Common Shares of Beneficial Interest", "PDOT": "Peridot Acquisition Corp. II Class A Ordinary Shares", "PDS": "Precision Drilling Corporation Common Stock", "PDSB": "PDS Biotechnology Corporation Common Stock", "PDT": "John Hancock Premium Dividend Fund", "PEAK": "Healthpeak Properties Inc. Common Stock", "PEB": "Pebblebrook Hotel Trust Common Shares of Beneficial Interest", "PEB^E": "Pebblebrook Hotel Trust 6.375% Series E Cumulative Redeemable Preferred Shares of Beneficial Interest", "PEB^F": "Pebblebrook Hotel Trust 6.3% Series F Cumulative Redeemable Preferred Shares of Beneficial Interest", "PEB^G": "Pebblebrook Hotel Trust 6.375% Series G Cumulative Redeemable Preferred Shares of Beneficial Interest", "PEB^H": "Pebblebrook Hotel Trust 5.700% Series H Cumulative Redeemable Preferred Shares of Beneficial Interest", "PEBK": "Peoples Bancorp of North Carolina Inc. Common Stock", "PEBO": "Peoples Bancorp Inc. Common Stock", "PECO": "Phillips Edison & Company Inc. Common Stock", "PED": "Pedevco Corp. Common Stock", "PEG": "Public Service Enterprise Group Incorporated Common Stock", "PEGA": "Pegasystems Inc. Common Stock", "PEI": "Pennsylvania Real Estate Investment Trust Common Stock", "PEI^B": "Pennsylvania Real Estate Investment Trust Cumulative Redeemable Perpetual Preferred Shares Series B", "PEI^C": "Pennsylvania Real Estate Investment Trust 7.20% Series C Cumulative Redeemable Perpetual Preferred Shares", "PEI^D": "Pennsylvania Real Estate Investment Trust 6.875% Series D Cumulative Redeemable Perpetual Preferred Shares", "PEN": "Penumbra Inc. Common Stock", "PENN": "Penn National Gaming Inc. Common Stock", "PEO": "Adams Natural Resources Fund Inc. Common Stock", "PEP": "PepsiCo Inc. Common Stock", "PERI": "Perion Network Ltd. Ordinary Shares", "PESI": "Perma-Fix Environmental Services Inc. Common Stock", "PETQ": "PetIQ Inc. Class A Common Stock", "PETS": "PetMed Express Inc. Common Stock", "PETV": "PetVivo Holdings Inc. Common Stock", "PETVW": "PetVivo Holdings Inc. Warrant", "PETZ": "TDH Holdings Inc. Common Shares", "PFBC": "Preferred Bank Common Stock", "PFC": "Premier Financial Corp. Common Stock", "PFD": "Flaherty & Crumrine Preferred and Income Fund Incorporated", "PFDR": "Pathfinder Acquisition Corporation Class A Ordinary Shares", "PFDRU": "Pathfinder Acquisition Corporation Unit", "PFDRW": "Pathfinder Acquisition Corporation Warrant", "PFE": "Pfizer Inc. Common Stock", "PFG": "Principal Financial Group Inc Common Stock", "PFGC": "Performance Food Group Company Common Stock", "PFH": "Prudential Financial Inc. 4.125% Junior Subordinated Notes due 2060", "PFHD": "Professional Holding Corp. Class A Common stock", "PFIE": "Profire Energy Inc. Common Stock", "PFIN": "P & F Industries Inc. Class A Common Stock", "PFIS": "Peoples Financial Services Corp. Common Stock", "PFL": "PIMCO Income Strategy Fund Shares of Beneficial Interest", "PFLT": "PennantPark Floating Rate Capital Ltd. Common Stock", "PFMT": "Performant Financial Corporation Common Stock", "PFN": "PIMCO Income Strategy Fund II", "PFO": "Flaherty & Crumrine Preferred and Income Opportunity Fund Incorporated", "PFS": "Provident Financial Services Inc Common Stock", "PFSI": "PennyMac Financial Services Inc. Common Stock", "PFSW": "PFSweb Inc. 
Common Stock", "PFTA": "Portage Fintech Acquisition Corporation Class A Ordinary Share", "PFTAU": "Portage Fintech Acquisition Corporation Unit", "PFTAW": "Portage Fintech Acquisition Corporation Warrant", "PFX": "PhenixFIN Corporation Common Stock", "PFXNL": "PhenixFIN Corporation 6.125% Senior Notes due 2023", "PG": "Procter & Gamble Company (The) Common Stock", "PGC": "Peapack-Gladstone Financial Corporation Common Stock", "PGEN": "Precigen Inc. Common Stock", "PGNY": "Progyny Inc. Common Stock", "PGP": "Pimco Global Stocksplus & Income Fund Pimco Global StocksPlus & Income Fund Common Shares of Beneficial Interest", "PGR": "Progressive Corporation (The) Common Stock", "PGRE": "Paramount Group Inc. Common Stock", "PGRW": "Progress Acquisition Corp. Class A Common Stock", "PGRWU": "Progress Acquisition Corp. Units", "PGRWW": "Progress Acquisition Corp. Warrant", "PGTI": "PGT Innovations Inc.", "PGZ": "Principal Real Estate Income Fund Common Shares of Beneficial Interest", "PH": "Parker-Hannifin Corporation Common Stock", "PHAR": "Pharming Group N.V. ADS each representing 10 ordinary shares", "PHAS": "PhaseBio Pharmaceuticals Inc. Common Stock", "PHAT": "Phathom Pharmaceuticals Inc. Common Stock", "PHCF": "Puhui Wealth Investment Management Co. Ltd. Ordinary Shares", "PHD": "Pioneer Floating Rate Fund Inc.", "PHG": "Koninklijke Philips N.V. NY Registry Shares", "PHGE": "BiomX Inc. COmmon Stock", "PHI": "PLDT Inc. Sponsored ADR", "PHIC": "Population Health Investment Co. Inc. Class A Ordinary Share", "PHICW": "Population Health Investment Co. Inc. Warrant", "PHIO": "Phio Pharmaceuticals Corp. Common Stock", "PHIOW": "Phio Pharmaceuticals Corp. Warrants expiring 12/21/2021", "PHK": "Pimco High Income Fund Pimco High Income Fund", "PHM": "PulteGroup Inc. Common Stock", "PHR": "Phreesia Inc. Common Stock", "PHT": "Pioneer High Income Fund Inc.", "PHUN": "Phunware Inc. Common Stock", "PHUNW": "Phunware Inc. Warrants", "PHVS": "Pharvaris N.V. Ordinary Shares", "PHX": "PHX Minerals Inc. Common Stock", "PI": "Impinj Inc. Common Stock", "PIAI": "Prime Impact Acquisition I Class A Ordinary Shares", "PICC": "Pivotal Investment Corporation III Class A Common Stock", "PII": "Polaris Inc. Common Stock", "PIM": "Putnam Master Intermediate Income Trust Common Stock", "PINC": "Premier Inc. Class A Common Stock", "PINE": "Alpine Income Property Trust Inc. Common Stock", "PING": "Ping Identity Holding Corp. Common Stock", "PINS": "Pinterest Inc. Class A Common Stock", "PIPP": "Pine Island Acquisition Corp. Class A Common Stock", "PIPR": "Piper Sandler Companies Common Stock", "PIRS": "Pieris Pharmaceuticals Inc. Common Stock", "PIXY": "ShiftPixy Inc. Common Stock", "PJT": "PJT Partners Inc. Class A Common Stock", "PK": "Park Hotels & Resorts Inc. Common Stock ", "PKBK": "Parke Bancorp Inc. Common Stock", "PKE": "Park Aerospace Corp. Common Stock", "PKG": "Packaging Corporation of America Common Stock", "PKI": "PerkinElmer Inc. Common Stock", "PKO": "Pimco Income Opportunity Fund Common Shares of Beneficial Interest", "PKOH": "Park-Ohio Holdings Corp. Common Stock", "PKX": "POSCO Common Stock", "PLAB": "Photronics Inc. Common Stock", "PLAG": "Planet Green Holdings Corp. Common Stock", "PLAN": "Anaplan Inc. Common Stock", "PLAY": "Dave & Buster's Entertainment Inc. Common Stock", "PLBC": "Plumas Bancorp", "PLBY": "PLBY Group Inc. Common Stock", "PLCE": "Children's Place Inc. (The) Common Stock", "PLD": "Prologis Inc. Common Stock", "PLG": "Platinum Group Metals Ltd. 
Ordinary Shares (Canada)", "PLIN": "China Xiangtai Food Co. Ltd. Ordinary Shares", "PLL": "Piedmont Lithium Inc. Common Stock", "PLM": "Polymet Mining Corporation Ordinary Shares (Canada)", "PLMI": "Plum Acquisition Corp. I Class A Ordinary Share", "PLMIU": "Plum Acquisition Corp. I Units", "PLMIW": "Plum Acquisition Corp. I Warrant", "PLMR": "Palomar Holdings Inc. Common stock", "PLNT": "Planet Fitness Inc. Common Stock", "PLOW": "Douglas Dynamics Inc. Common Stock", "PLPC": "Preformed Line Products Company Common Stock", "PLRX": "Pliant Therapeutics Inc. Common Stock", "PLSE": "Pulse Biosciences Inc Common Stock (DE)", "PLTK": "Playtika Holding Corp. Common Stock", "PLTR": "Palantir Technologies Inc. Class A Common Stock", "PLUG": "Plug Power Inc. Common Stock", "PLUS": "ePlus inc. Common Stock", "PLX": "Protalix BioTherapeutics Inc. (DE) Common Stock", "PLXP": "PLx Pharma Inc. Common Stock", "PLXS": "Plexus Corp. Common Stock", "PLYA": "Playa Hotels & Resorts N.V. Ordinary Shares", "PLYM": "Plymouth Industrial REIT Inc. Common Stock", "PLYM^A": "Plymouth Industrial REIT Inc. 7.50% Series A Cumulative Redeemable Preferred Stock", "PM": "Philip Morris International Inc Common Stock", "PMBC": "Pacific Mercantile Bancorp Common Stock", "PMCB": "PharmaCyte Biotech Inc. Common Stock", "PMD": "Psychemedics Corporation", "PME": "Pingtan Marine Enterprise Ltd.", "PMF": "PIMCO Municipal Income Fund Common Stock", "PMGM": "Priveterra Acquisition Corp. Class A Common Stock", "PMGMU": "Priveterra Acquisition Corp. Units", "PMGMW": "Priveterra Acquisition Corp. Warrant", "PML": "Pimco Municipal Income Fund II Common Shares of Beneficial Interest", "PMM": "Putnam Managed Municipal Income Trust Common Stock", "PMO": "Putnam Municipal Opportunities Trust Common Stock", "PMT": "PennyMac Mortgage Investment Trust Common Shares of Beneficial Interest", "PMT^A": "PennyMac Mortgage Investment Trust 8.125% Series A Fixed-to-Floating Rate Cumulative Redeemable Preferred Shares of Beneficial Interest", "PMT^B": "PennyMac Mortgage Investment Trust 8.00% Series B Fixed-to-Floating Rate Cumulative Redeemable Preferred Shares of Beneficial Interest", "PMT^C": "PennyMac Mortgage Investment Trust 6.75% Series C Cumulative Redeemable Preferred Shares of Beneficial Interest", "PMTS": "CPI Card Group Inc. Common Stock", "PMVC": "PMV Consumer Acquisition Corp. Class A Common Stock", "PMVP": "PMV Pharmaceuticals Inc. Common Stock", "PMX": "PIMCO Municipal Income Fund III Common Shares of Beneficial Interest", "PNBK": "Patriot National Bancorp Inc. Common Stock", "PNC": "PNC Financial Services Group Inc. (The) Common Stock", "PNC^P": "PNC Financial Services Group Inc. (The) Depositary Shares Representing 1/4000th Perpetual Preferred Series P", "PNF": "PIMCO New York Municipal Income Fund Common Stock", "PNFP": "Pinnacle Financial Partners Inc. Common Stock", "PNFPP": "Pinnacle Financial Partners Inc. Depositary shares of Pinnacle Financial Partners Inc. each representing a 1/40th Interest in a share of its 6.75% Fixed-Rate Non-Cumulative Perpetual Preferred Stock Series B", "PNI": "Pimco New York Municipal Income Fund II Common Shares of Beneficial Interest", "PNM": "PNM Resources Inc. (Holding Co.) Common Stock", "PNNT": "PennantPark Investment Corporation Common Stock", "PNNTG": "PennantPark Investment Corporation 5.50% Notes Due 2024", "PNR": "Pentair plc. Ordinary Share", "PNRG": "PrimeEnergy Resources Corporation Common Stock", "PNT": "POINT Biopharma Global Inc. Common Stock", "PNTG": "The Pennant Group Inc. 
Common Stock ", "PNTM": "Pontem Corporation Class A Ordinary Shares", "PNW": "Pinnacle West Capital Corporation Common Stock", "POAI": "Predictive Oncology Inc. Common Stock", "PODD": "Insulet Corporation Common Stock", "POLA": "Polar Power Inc. Common Stock", "POLY": "Plantronics Inc. Common Stock", "POND": "Angel Pond Holdings Corporation Class A Ordinary Shares", "PONOU": "Pono Capital Corp Unit", "POOL": "Pool Corporation Common Stock", "POR": "Portland General Electric Co Common Stock", "POSH": "Poshmark Inc. Class A Common Stock", "POST": "Post Holdings Inc. Common Stock", "POW": "Powered Brands Class A Ordinary Shares", "POWI": "Power Integrations Inc. Common Stock", "POWL": "Powell Industries Inc. Common Stock", "POWRU": "Powered Brands Units", "POWRW": "Powered Brands Warrants", "POWW": "AMMO Inc. Common Stock", "POWWP": "AMMO Inc. 8.75% Series A Cumulative Redeemable Perpetual Preferred Stock", "PPBI": "Pacific Premier Bancorp Inc", "PPBT": "Purple Biotech Ltd. American Depositary Shares", "PPC": "Pilgrim's Pride Corporation Common Stock", "PPD": "PPD Inc. Common Stock", "PPG": "PPG Industries Inc. Common Stock", "PPGH": "Poema Global Holdings Corp. Class A Ordinary Share", "PPGHU": "Poema Global Holdings Corp. Unit", "PPGHW": "Poema Global Holdings Corp. Warrant", "PPHP": "PHP Ventures Acquisition Corp. Class A Common Stock", "PPHPR": "PHP Ventures Acquisition Corp. Rights", "PPHPU": "PHP Ventures Acquisition Corp. Units", "PPHPW": "PHP Ventures Acquisition Corp. Warrants", "PPIH": "Perma-Pipe International Holdings Inc. Common Stock", "PPL": "PPL Corporation Common Stock", "PPSI": "Pioneer Power Solutions Inc. Common Stock", "PPT": "Putnam Premier Income Trust Common Stock", "PPTA": "Perpetua Resources Corp. Common Shares", "PRA": "ProAssurance Corporation Common Stock", "PRAA": "PRA Group Inc. Common Stock", "PRAX": "Praxis Precision Medicines Inc. Common Stock", "PRCH": "Porch Group Inc. Common Stock", "PRCT": "PROCEPT BioRobotics Corporation Common Stock", "PRDO": "Perdoceo Education Corporation Common Stock", "PRE^J": "PartnerRe Ltd. 4.875% Fixed Rate Non-Cumulative Redeemable Preferred Shares Series J", "PRFT": "Perficient Inc. Common Stock", "PRFX": "PainReform Ltd. Ordinary Shares", "PRG": "PROG Holdings Inc. Common Stock", "PRGO": "Perrigo Company plc Ordinary Shares", "PRGS": "Progress Software Corporation Common Stock (DE)", "PRI": "Primerica Inc. Common Stock", "PRIF^D": "Priority Income Fund Inc. 7.00% Series D Term Preferred Stock due 2029", "PRIF^E": "Priority Income Fund Inc. 6.375% Series E Preferred Stock Due 2024", "PRIF^F": "Priority Income Fund Inc. 6.625% Series F Term Preferred Stock due 2027", "PRIF^G": "Priority Income Fund Inc. 6.25% Series G Preferred Stock Due 2026", "PRIF^H": "Priority Income Fund Inc. 6.00% Series H Term Preferred Stock due 2026", "PRIF^I": "Priority Income Fund Inc. 6.125% Series I Term Preferred Stock due 2028", "PRIF^J": "Priority Income Fund Inc. 6.000% Series J Term Preferred Stock due 2028", "PRIM": "Primoris Services Corporation Common Stock", "PRK": "Park National Corporation Common Stock", "PRLB": "Proto Labs Inc. Common stock", "PRLD": "Prelude Therapeutics Incorporated Common Stock", "PRMW": "Primo Water Corporation Common Stock", "PRO": "PROS Holdings Inc. Common Stock", "PROC": "Procaps Group S.A. Ordinary Shares", "PROCW": "Procaps Group S.A. Warrants", "PROF": "Profound Medical Corp. Common Stock", "PROG": "Progenity Inc. Common Stock", "PROV": "Provident Financial Holdings Inc. 
Common Stock", "PRPB": "CC Neuberger Principal Holdings II Class A Ordinary Shares", "PRPC": "CC Neuberger Principal Holdings III Class A Ordinary Shares", "PRPH": "ProPhase Labs Inc. Common Stock (DE)", "PRPL": "Purple Innovation Inc. Common Stock", "PRPO": "Precipio Inc. Common Stock", "PRQR": "ProQR Therapeutics N.V. Ordinary Shares", "PRS": "Prudential Financial Inc. 5.625% Junior Subordinated Notes due 2058", "PRSR": "Prospector Capital Corp. Class A Ordinary Shares", "PRSRU": "Prospector Capital Corp. Unit", "PRSRW": "Prospector Capital Corp. Warrants", "PRT": "PermRock Royalty Trust Trust Units", "PRTA": "Prothena Corporation plc Ordinary Shares", "PRTC": "PureTech Health plc American Depositary Shares", "PRTG": "Portage Biotech Inc. Common Stock", "PRTH": "Priority Technology Holdings Inc. Common Stock", "PRTK": "Paratek Pharmaceuticals Inc. Common Stock", "PRTS": "CarParts.com Inc. Common Stock", "PRTY": "Party City Holdco Inc. Common Stock", "PRU": "Prudential Financial Inc. Common Stock", "PRVA": "Privia Health Group Inc. Common Stock", "PRVB": "Provention Bio Inc. Common Stock", "PSA": "Public Storage Common Stock", "PSA^E": "Public Storage Depositary Shares Each Representing 1/1000 of a 4.90% Cumulative Preferred Share of Beneficial Interest Series E", "PSA^F": "Public Storage Depositary Shares Each Representing 1/1000 of a 5.15% Cumulative Preferred Share of Beneficial Interest Series F par value $0.01 per share", "PSA^G": "Public Storage Depositary Shares Each Representing 1/1000 of a 5.05% Cumulative Preferred Share of Beneficial Interest Series G", "PSA^H": "Public Storage Depositary Shares Each Representing 1/1000 of a 5.60% Cumulative Preferred Share of Beneficial Interest Series H", "PSA^I": "Public Storage Depositary Shares Each Representing 1/1000 of a 4.875% Cumulative Preferred Share of Beneficial Interest Series I par value $0.01 per share", "PSA^J": "Public Storage Depositary Shares Each Representing 1/1000 of a 4.700% Cumulative Preferred Share of Beneficial Interest Series J par value $0.01 per share", "PSA^K": "Public Storage Depositary Shares Each Representing 1/1000 of a 4.75% Cumulative Preferred Share of Beneficial Interest Series K", "PSA^L": "Public Storage Depositary Shares Each Representing 1/1000 of a 4.625% Cumulative Preferred Share of Beneficial Interest Series L par value $0.01 per share", "PSA^M": "Public Storage Depositary Shares Each Representing 1/1000 of a 4.125% Cumulative Preferred Share of Beneficial Interest Series M", "PSA^N": "Public Storage Depositary Shares Each Representing 1/1000 of a 3.875% Cumulative Preferred Share of Beneficial Interest Series N", "PSA^O": "Public Storage Depositary Shares Each Representing 1/1000 of a 3.900% Cumulative Preferred Share of Beneficial Interest Series O", "PSA^P": "Public Storage Depositary Shares Each Representing 1/1000 of a 4.000% Cumulative Preferred Share of Bene cial Interest Series P", "PSA^Q": "Public Storage Depositary Shares Each Representing 1/1000 of a 3.950% Cumulative Preferred Share of Beneficial Interest Series Q par value $0.01 per share", "PSAG": "Property Solutions Acquisition Corporation II Class A Common Stock", "PSAGU": "Property Solutions Acquisition Corporation II Units", "PSAGW": "Property Solutions Acquisition Corporation II Warrant", "PSB": "PS Business Parks Inc. (MD) Common Stock", "PSB^W": "PS Business Parks Inc. Depositary Shares Each Representing 1/1000 of a Share of 5.20% Cumulative Preferred Stock Series W", "PSB^X": "PS Business Parks Inc. 
Depositary Shares Each Representing 1/1000 of a Share of 5.25% Cumulative Preferred Stock Series X", "PSB^Y": "PS Business Parks Inc. 5.20% Cumulative Preferred Stock Series Y", "PSB^Z": "PS Business Parks Inc. Depositary Shares Each Representing 1/1000 of a Share of 4.875% Cumulative Preferred Stock Series Z par value $0.01 per share", "PSEC": "Prospect Capital Corporation Common Stock", "PSEC^A": "Prospect Capital Corporation 5.35% Series A Fixed Rate Cumulative Perpetual Preferred Stock", "PSF": "Cohen & Steers Select Preferred and Income Fund Inc. Common Stock", "PSFE": "Paysafe Limited Common Shares", "PSHG": "Performance Shipping Inc. Common Shares", "PSMT": "PriceSmart Inc. Common Stock", "PSN": "Parsons Corporation Common Stock", "PSNL": "Personalis Inc. Common Stock", "PSO": "Pearson Plc Common Stock", "PSPC": "Post Holdings Partnering Corporation Series A Common Stock", "PSTG": "Pure Storage Inc. Class A Common Stock", "PSTH": "Pershing Square Tontine Holdings Ltd. Class A Common Stock", "PSTI": "Pluristem Therapeutics Inc. Common Stock", "PSTL": "Postal Realty Trust Inc. Class A Common Stock", "PSTV": "PLUS THERAPEUTICS Inc. Common Stock ", "PSTX": "Poseida Therapeutics Inc. Common Stock", "PSX": "Phillips 66 Common Stock", "PSXP": "Phillips 66 Partners LP Common Units representing limited partner interest in the Partnership", "PT": "Pintec Technology Holdings Limited American Depositary Shares", "PTA": "Cohen & Steers Tax-Advantaged Preferred Securities and Income Fund Common Shares of Beneficial Interest", "PTC": "PTC Inc. Common Stock", "PTCT": "PTC Therapeutics Inc. Common Stock", "PTE": "PolarityTE Inc. Common Stock", "PTEN": "Patterson-UTI Energy Inc. Common Stock", "PTGX": "Protagonist Therapeutics Inc. Common Stock", "PTIC": "PropTech Investment Corporation II Class A Common Stock", "PTICU": "PropTech Investment Corporation II Unit", "PTICW": "PropTech Investment Corporation II Warrant", "PTIX": "Protagenic Therapeutics Inc. Common Stock", "PTIXW": "Protagenic Therapeutics Inc. Warrant", "PTMN": "Portman Ridge Finance Corporation Common Stock", "PTN": "Palatin Technologies Inc. Common Stock", "PTNR": "Partner Communications Company Ltd. American Depositary Shares", "PTOC": "Pine Technology Acquisition Corp. Class A Common Stock", "PTOCU": "Pine Technology Acquisition Corp. Unit", "PTOCW": "Pine Technology Acquisition Corp. Warrant", "PTON": "Peloton Interactive Inc. Class A Common Stock", "PTPI": "Petros Pharmaceuticals Inc. Common Stock", "PTR": "PetroChina Company Limited Common Stock", "PTRA": "Proterra Inc. Common Stock", "PTRAW": "Proterra Inc. Warrant", "PTRS": "Partners Bancorp Common Stock", "PTSI": "P.A.M. Transportation Services Inc. Common Stock", "PTVE": "Pactiv Evergreen Inc. Common stock", "PTY": "Pimco Corporate & Income Opportunity Fund", "PUBM": "PubMatic Inc. Class A Common Stock", "PUCK": "Goal Acquisitions Corp. Common Stock", "PUCKU": "Goal Acquisitions Corp. Unit", "PUCKW": "Goal Acquisitions Corp. Warrant", "PUK": "Prudential Public Limited Company Common Stock", "PUK^": "Prudential Public Limited Company 6.75% Perpetual Subordinated Captial Security", "PUK^A": "Prudential Public Limited Company 6.50% Perpetual Subordinated Capital Securities Exchangeable at the Issuer's Option Into Non-Cumulative Dollar Denominated Preference Shares", "PULM": "Pulmatrix Inc. Common Stock", "PUMP": "ProPetro Holding Corp. Common Stock", "PUYI": "Puyi Inc. 
American Depository Shares", "PV": "Primavera Capital Acquisition Corporation Class A Ordinary Shares", "PVAC": "Penn Virginia Corporation Common Stock", "PVBC": "Provident Bancorp Inc. (MD) Common Stock", "PVG": "Pretium Resources Inc. Ordinary Shares (Canada)", "PVH": "PVH Corp. Common Stock", "PVL": "Permianville Royalty Trust Trust Units ", "PW": "Power REIT (MD) Common Stock", "PW^A": "Power REIT 7.75% Series A Cumulative Perpetual Preferred Stock", "PWFL": "PowerFleet Inc. Common Stock", "PWOD": "Penns Woods Bancorp Inc. Common Stock", "PWP": "Perella Weinberg Partners Class A Common Stock", "PWPPW": "Perella Weinberg Partners Warrant", "PWR": "Quanta Services Inc. Common Stock", "PWSC": "PowerSchool Holdings Inc. Class A Common Stock", "PXD": "Pioneer Natural Resources Company Common Stock", "PXLW": "Pixelworks Inc. Common Stock", "PXS": "Pyxis Tankers Inc. Common Stock", "PXSAP": "Pyxis Tankers Inc. 7.75% Series A Cumulative Convertible Preferred Shares", "PXSAW": "Pyxis Tankers Inc. Warrant", "PYCR": "Paycor HCM Inc. Common Stock", "PYN": "PIMCO New York Municipal Income Fund III Common Shares of Beneficial Interest", "PYPD": "PolyPid Ltd. Ordinary Shares", "PYPL": "PayPal Holdings Inc. Common Stock", "PYR": "PyroGenesis Canada Inc. Common Shares", "PYS": "Merrill Lynch Depositor Inc PPlus Tr Ser RRD-1 Tr Ctf Cl A", "PYT": "PPlus Tr GSC-2 Tr Ctf Fltg Rate", "PZC": "PIMCO California Municipal Income Fund III Common Shares of Beneficial Interest", "PZG": "Paramount Gold Nevada Corp. Common Stock", "PZN": "Pzena Investment Management Inc Class A Common Stock", "PZZA": "Papa John's International Inc. Common Stock", "QADA": "QAD Inc. Class A Common Stock", "QADB": "QAD Inc. Class B Common Stock", "QCOM": "QUALCOMM Incorporated Common Stock", "QCRH": "QCR Holdings Inc. Common Stock", "QD": "Qudian Inc. American Depositary Shares each representing one Class A Ordinary Share", "QDEL": "Quidel Corporation Common Stock", "QFIN": "360 DigiTech Inc. American Depositary Shares", "QFTA": "Quantum FinTech Acquisition Corporation Common Stock", "QGEN": "Qiagen N.V. Common Shares ", "QH": "Quhuo Limited American Depository Shares", "QIPT": "Quipt Home Medical Corp. Common Shares", "QIWI": "QIWI plc American Depositary Shares", "QK": "Q&K International Group Limited American Depositary Shares", "QLGN": "Qualigen Therapeutics Inc. Common Stock", "QLI": "Qilian International Holding Group Ltd. Ordinary Shares", "QLYS": "Qualys Inc. Common Stock", "QMCO": "Quantum Corporation Common Stock", "QNST": "QuinStreet Inc. Common Stock", "QQQX": "Nuveen NASDAQ 100 Dynamic Overwrite Fund Shares of Beneficial Interest", "QRHC": "Quest Resource Holding Corporation Common Stock", "QRTEA": "Qurate Retail Inc. Series A Common Stock ", "QRTEB": "Qurate Retail Inc. Series B Common Stock ", "QRTEP": "Qurate Retail Inc. 8.0% Fixed Rate Cumulative Redeemable Preferred Stock", "QRVO": "Qorvo Inc. Common Stock", "QS": "QuantumScape Corporation Class A Common Stock", "QSI": "Quantum-Si Incorporated Class A Common Stock", "QSIAW": "Quantum-Si Incorporated Warrant", "QSR": "Restaurant Brands International Inc. Common Shares", "QTNT": "Quotient Limited Ordinary Shares", "QTRX": "Quanterix Corporation Common Stock", "QTT": "Qutoutiao Inc. American Depositary Shares", "QTWO": "Q2 Holdings Inc. Common Stock", "QUAD": "Quad Graphics Inc Class A Common Stock", "QUBT": "Quantum Computing Inc. Common Stock", "QUIK": "QuickLogic Corporation Common Stock", "QUMU": "Qumu Corporation Common Stock", "QUOT": "Quotient Technology Inc. 
Common Stock", "QURE": "uniQure N.V. Ordinary Shares", "QVCC": "QVC Inc. 6.250% Senior Secured Notes due 2068", "QVCD": "QVC Inc. 6.375% Senior Secured Notes due 2067", "R": "Ryder System Inc. Common Stock", "RA": "Brookfield Real Assets Income Fund Inc. Common Stock", "RAAS": "Cloopen Group Holding Limited American Depositary Shares each representing two Class A Ordinary Shares", "RACB": "Research Alliance Corp. II Class A Common Stock", "RACE": "Ferrari N.V. Common Shares", "RAD": "Rite Aid Corporation Common Stock", "RADA": "RADA Electronic Industries Ltd. Ordinary Shares", "RADI": "Radius Global Infrastructure Inc. Class A Common Stock", "RAIL": "FreightCar America Inc. Common Stock", "RAIN": "Rain Therapeutics Inc. Common Stock", "RAM": "Aries I Acquisition Corporation Class A Ordinary Share", "RAMMU": "Aries I Acquisition Corporation Unit", "RAMMW": "Aries I Acquisition Corporation Warrant", "RAMP": "LiveRamp Holdings Inc. Common Stock", "RAND": "Rand Capital Corporation Common Stock", "RANI": "Rani Therapeutics Holdings Inc. Class A Common Stock", "RAPT": "RAPT Therapeutics Inc. Common Stock", "RARE": "Ultragenyx Pharmaceutical Inc. Common Stock", "RAVE": "Rave Restaurant Group Inc. Common Stock", "RAVN": "Raven Industries Inc. Common Stock", "RBA": "Ritchie Bros. Auctioneers Incorporated Common Stock", "RBAC": "RedBall Acquisition Corp. Class A Ordinary Shares", "RBB": "RBB Bancorp Common Stock", "RBBN": "Ribbon Communications Inc. Common Stock", "RBC": "Regal Beloit Corporation Common Stock", "RBCAA": "Republic Bancorp Inc. Class A Common Stock", "RBCN": "Rubicon Technology Inc. Common Stock", "RBKB": "Rhinebeck Bancorp Inc. Common Stock", "RBLX": "Roblox Corporation Class A Common Stock", "RBNC": "Reliant Bancorp Inc. Common Stock", "RBOT": "Vicarious Surgical Inc. Class A Common Stock", "RC": "Ready Capital Corproation Common Stock", "RC^C": "Ready Capital Corporation 6.25% Series C Cumulative Convertible Preferred Stock", "RC^E": "Ready Capital Corporation 6.50% Series E Cumulative Redeemable Preferred Stock", "RCA": "Ready Capital Corporation 7.00% Convertible Senior Notes due 2023", "RCAT": "Red Cat Holdings Inc. Common Stock", "RCB": "Ready Capital Corporation 6.20% Senior Notes due 2026", "RCC": "Ready Capital Corporation 5.75% Senior Notes due 2026", "RCEL": "Avita Therapeutics Inc. Common Stock", "RCG": "RENN Fund Inc Common Stock", "RCHG": "Recharge Acquisition Corp. Class A Common Stock", "RCHGU": "Recharge Acquisition Corp. Unit", "RCHGW": "Recharge Acquisition Corp. Warrant", "RCI": "Rogers Communication Inc. Common Stock", "RCII": "Rent-A-Center Inc. Common Stock", "RCKT": "Rocket Pharmaceuticals Inc. Common Stock", "RCKY": "Rocky Brands Inc. Common Stock", "RCL": "D/B/A Royal Caribbean Cruises Ltd. Common Stock", "RCLF": "Rosecliff Acquisition Corp I Class A Common Stock", "RCLFU": "Rosecliff Acquisition Corp I Unit", "RCLFW": "Rosecliff Acquisition Corp I Warrants", "RCM": "R1 RCM Inc. Common Stock", "RCMT": "RCM Technologies Inc. Common Stock", "RCON": "Recon Technology Ltd. Class A Ordinary Shares", "RCOR": "Renovacor Inc. Common Stock", "RCRT": "Recruiter.com Group Inc. Common Stock", "RCRTW": "Recruiter.com Group Inc. Warrant", "RCS": "PIMCO Strategic Income Fund Inc.", "RCUS": "Arcus Biosciences Inc. Common Stock", "RDCM": "Radcom Ltd. Ordinary Shares", "RDFN": "Redfin Corporation Common Stock", "RDHL": "Redhill Biopharma Ltd. 
American Depositary Shares", "RDI": "Reading International Inc Class A Common Stock", "RDIB": "Reading International Inc Class B Common Stock", "RDN": "Radian Group Inc. Common Stock", "RDNT": "RadNet Inc. Common Stock", "RDS/B": "Royal Dutch Shell PLC", "RDUS": "Radius Health Inc. Common Stock", "RDVT": "Red Violet Inc. Common Stock ", "RDW": "Redwire Corporation Common Stock", "RDWR": "Radware Ltd. Ordinary Shares", "RDY": "Dr. Reddy's Laboratories Ltd Common Stock", "RE": "Everest Re Group Ltd. Common Stock", "REAL": "The RealReal Inc. Common Stock", "REAX": "The Real Brokerage Inc. Common Shares", "REDU": "RISE Education Cayman Ltd American Depositary Shares", "REE": "REE Automotive Ltd. Class A Ordinary Shares", "REEAW": "REE Automotive Ltd. Warrant", "REED": "Reeds Inc. Common Stock", "REFR": "Research Frontiers Incorporated Common Stock", "REG": "Regency Centers Corporation Common Stock", "REGI": "Renewable Energy Group Inc. Common Stock", "REGN": "Regeneron Pharmaceuticals Inc. Common Stock", "REI": "Ring Energy Inc. Common Stock", "REKR": "Rekor Systems Inc. Common Stock", "RELI": "Reliance Global Group Inc. Common Stock", "RELL": "Richardson Electronics Ltd. Common Stock", "RELX": "RELX PLC PLC American Depositary Shares (Each representing One Ordinary Share)", "RELY": "Remitly Global Inc. Common Stock", "RENN": "Renren Inc. American Depositary Shares each representing fifteen Class A ordinary shares", "REPH": "Recro Pharma Inc. Common Stock", "REPL": "Replimune Group Inc. Common Stock", "REPX": "Riley Exploration Permian Inc. Common Stock", "RERE": "AiHuiShou International Co. Ltd. American Depositary Shares (every three of which representing two Class A ordinary shares)", "RES": "RPC Inc. Common Stock", "RESN": "Resonant Inc. Common Stock", "RETA": "Reata Pharmaceuticals Inc. Class A Common Stock", "RETO": "ReTo Eco-Solutions Inc. Common Shares", "REV": "Revlon Inc. New Common Stock", "REVEU": "Alpine Acquisition Corporation Unit", "REVG": "REV Group Inc. Common Stock", "REVH": "Revolution Healthcare Acquisition Corp. SAIL Class A Common Stock", "REVHU": "Revolution Healthcare Acquisition Corp. SAIL Units", "REVHW": "Revolution Healthcare Acquisition Corp. SAIL Warrant.", "REX": "REX American Resources Corporation", "REXR": "Rexford Industrial Realty Inc. Common Stock", "REXR^B": "Rexford Industrial Realty Inc. 5.875% Series B Cumulative Redeemable Preferred Stock", "REXR^C": "Rexford Industrial Realty Inc. 5.625% Series C Cumulative Redeemable Preferred Stock par value $0.01 per share", "REYN": "Reynolds Consumer Products Inc. Common Stock", "REZI": "Resideo Technologies Inc. Common Stock ", "RF": "Regions Financial Corporation Common Stock", "RF^B": "Regions Financial Corporation Depositary Shares Representing 1/40th Perpetual Preferred Series B", "RF^C": "Regions Financial Corporation Depositary Shares each Representing a 1/40th Interest in a Share of 5.700% Fixed-to-Floating Rate Non-Cumulative Perpetual Preferred Stock Series C", "RF^E": "Regions Financial Corporation Depositary Shares Each Representing a 1/40th Interest in a Share of 4.45% Non-Cumulative Perpetual Preferred Stock Series E", "RFI": "Cohen & Steers Total Return Realty Fund Inc. Common Stock", "RFIL": "RF Industries Ltd. Common Stock", "RFL": "Rafael Holdings Inc. Class B Common Stock", "RFM": "RiverNorth Flexible Municipal Income Fund Inc. Common Stock", "RFMZ": "RiverNorth Flexible Municipal Income Fund II Inc. Common Stock", "RFP": "Resolute Forest Products Inc. 
Common Stock", "RGA": "Reinsurance Group of America Incorporated Common Stock", "RGC": "Regencell Bioscience Holdings Limited Ordinary Shares", "RGCO": "RGC Resources Inc. Common Stock", "RGEN": "Repligen Corporation Common Stock", "RGLD": "Royal Gold Inc. Common Stock", "RGLS": "Regulus Therapeutics Inc. Common Stock", "RGNX": "REGENXBIO Inc. Common Stock", "RGP": "Resources Connection Inc. Common Stock", "RGR": "Sturm Ruger & Company Inc. Common Stock", "RGS": "Regis Corporation Common Stock", "RGT": "Royce Global Value Trust Inc. Common Stock", "RH": "RH Common Stock", "RHE": "Regional Health Properties Inc. Common Stock", "RHE^A": "Regional Health Properties Inc. 10.875% Series A Cumulative Redeemable Preferred Stock", "RHI": "Robert Half International Inc. Common Stock", "RHP": "Ryman Hospitality Properties Inc. (REIT)", "RIBT": "RiceBran Technologies Common Stock", "RICK": "RCI Hospitality Holdings Inc. Common Stock", "RICO": "Agrico Acquisition Corp. Class A Ordinary Shares", "RICOU": "Agrico Acquisition Corp. Unit", "RICOW": "Agrico Acquisition Corp. Warrant", "RIDE": "Lordstown Motors Corp. Class A Common Stock", "RIG": "Transocean Ltd (Switzerland) Common Stock", "RIGL": "Rigel Pharmaceuticals Inc. Common Stock", "RILY": "B. Riley Financial Inc. Common Stock", "RILYI": "B. Riley Financial Inc. 6.875% Senior Notes due 2023", "RILYK": "B. Riley Financial Inc. 5.50% Senior Notes Due 2026", "RILYL": "B. Riley Financial Inc. Depositary Shares each representing 1/1000th in a share of 7.375% Series B Cumulative Perpetual Preferred Stock par value $0.0001", "RILYM": "B. Riley Financial Inc. 6.375% Senior Notes due 2025", "RILYN": "B. Riley Financial Inc. 6.50% Senior Notes Due 2026", "RILYO": "B. Riley Financial Inc. 6.75% Senior Notes due 2024", "RILYP": "B. Riley Financial Inc. Depositary Shares each representing a 1/1000th fractional interest in a share of Series A Cumulative Perpetual Preferred Stock", "RILYT": "B. Riley Financial Inc. 6.00% Senior Notes Due 2028", "RILYZ": "B. Riley Financial Inc. 5.25% Senior Notes due 2028", "RIO": "Rio Tinto Plc Common Stock", "RIOT": "Riot Blockchain Inc. Common Stock ", "RIV": "RiverNorth Opportunities Fund Inc. Common Stock", "RIVE": "Riverview Financial Corporation Common Stock", "RJF": "Raymond James Financial Inc. Common Stock", "RKDA": "Arcadia Biosciences Inc. Common Stock", "RKLB": "Rocket Lab USA Inc. Common Stock", "RKLBW": "Rocket Lab USA Inc. Warrant", "RKLY": "Rockley Photonics Holdings Limited Ordinary Shares", "RKT": "Rocket Companies Inc. Class A Common Stock", "RKTA": "Rocket Internet Growth Opportunities Corp. Class A Ordinary Shares", "RL": "Ralph Lauren Corporation Common Stock", "RLAY": "Relay Therapeutics Inc. Common Stock", "RLGT": "Radiant Logistics Inc. Common Stock", "RLGY": "Realogy Holdings Corp. Common Stock", "RLI": "RLI Corp. Common Stock (DE)", "RLJ": "RLJ Lodging Trust Common Shares of Beneficial Interest $0.01 par value", "RLJ^A": "RLJ Lodging Trust $1.95 Series A Cumulative Convertible Preferred Shares", "RLMD": "Relmada Therapeutics Inc. Common Stock", "RLX": "RLX Technology Inc. American Depositary Shares each representing the right to receive one (1) Class A ordinary share", "RLYB": "Rallybio Corporation Common Stock", "RM": "Regional Management Corp. Common Stock", "RMAX": "RE/MAX Holdings Inc. Class A Common Stock", "RMBI": "Richmond Mutual Bancorporation Inc. Common Stock", "RMBL": "RumbleOn Inc. Class B Common Stock", "RMBS": "Rambus Inc. Common Stock", "RMCF": "Rocky Mountain Chocolate Factory Inc. 
Common Stock", "RMD": "ResMed Inc. Common Stock", "RMED": "Ra Medical Systems Inc. Common Stock", "RMGC": "RMG Acquisition Corp. III Class A Ordinary Shares", "RMGCU": "RMG Acquisition Corp. III Unit", "RMGCW": "RMG Acquisition Corp. III Warrant", "RMI": "RiverNorth Opportunistic Municipal Income Fund Inc. Common Stock", "RMM": "RiverNorth Managed Duration Municipal Income Fund Inc. Common Stock", "RMNI": "Rimini Street Inc. (DE) Common Stock", "RMO": "Romeo Power Inc. Class A Common Stock", "RMPL^": "RiverNorth Specialty Finance Corporation 5.875% ", "RMR": "The RMR Group Inc. Class A Common Stock", "RMT": "Royce Micro-Cap Trust Inc. Common Stock", "RMTI": "Rockwell Medical Inc. (DE) Common Stock", "RNA": "Avidity Biosciences Inc. Common Stock", "RNAZ": "TransCode Therapeutics Inc. Common Stock", "RNDB": "Randolph Bancorp Inc. Common Stock", "RNG": "RingCentral Inc. Class A Common Stock", "RNGR": "Ranger Energy Services Inc. Class A Common Stock", "RNLX": "Renalytix plc American Depositary Shares", "RNP": "Cohen & Steers REIT and Preferred and Income Fund Inc. Common Shares", "RNR": "RenaissanceRe Holdings Ltd. Common Stock", "RNR^F": "RenaissanceRe Holdings Ltd. Depositary Shares each Representing a 1/1000th Interest in a 5.750% Series F Preference Share", "RNR^G": "RenaissanceRe Holdings Ltd. Depositary Shares each representing a 1/1000th interest in a share of 4.20% Series G Preference Shares", "RNST": "Renasant Corporation Common Stock", "RNW": "ReNew Energy Global plc Class A Ordinary Shares", "RNWK": "RealNetworks Inc. Common Stock", "RNWWW": "ReNew Energy Global plc Warrant", "RNXT": "RenovoRx Inc. Common Stock", "ROAD": "Construction Partners Inc. Class A Common Stock", "ROCG": "Roth CH Acquisition IV Co. Common Stock", "ROCGU": "Roth CH Acquisition IV Co. Unit", "ROCGW": "Roth CH Acquisition IV Co. Warrant", "ROCK": "Gibraltar Industries Inc. Common Stock", "ROCR": "Roth CH Acquisition III Co. Common stock", "ROCRU": "Roth CH Acquisition III Co. Unit", "ROCRW": "Roth CH Acquisition III Co. Warrant", "ROG": "Rogers Corporation Common Stock", "ROIC": "Retail Opportunity Investments Corp. Common Stock (MD)", "ROIV": "Roivant Sciences Ltd. Common Shares", "ROIVW": "Roivant Sciences Ltd. Warrant", "ROK": "Rockwell Automation Inc. Common Stock", "ROKU": "Roku Inc. Class A Common Stock", "ROL": "Rollins Inc. Common Stock", "ROLL": "RBC Bearings Incorporated Common Stock", "ROLLP": "RBC Bearings Incorporated 5.00% Series A Mandatory Convertible Preferred Stock", "RONI": "Rice Acquisition Corp. II Class A Ordinary Shares", "ROOT": "Root Inc. Class A Common Stock", "ROP": "Roper Technologies Inc. Common Stock", "ROSS": "Ross Acquisition Corp II Class A Ordinary Shares", "ROST": "Ross Stores Inc. Common Stock", "ROVR": "Rover Group Inc. Class A Common Stock", "ROVRW": "Rover Group Inc. Warrant", "RPAI": "Retail Properties of America Inc. Class A Common Stock", "RPAY": "Repay Holdings Corporation Class A Common Stock", "RPD": "Rapid7 Inc. Common Stock", "RPHM": "Reneo Pharmaceuticals Inc. Common Stock", "RPID": "Rapid Micro Biosystems Inc. Class A Common Stock", "RPM": "RPM International Inc. Common Stock", "RPRX": "Royalty Pharma plc Class A Ordinary Shares ", "RPT": "RPT Realty Common Stock", "RPT^D": "RPT Realty 7.25% ", "RPTX": "Repare Therapeutics Inc. Common Shares", "RQI": "Cohen & Steers Quality Income Realty Fund Inc Common Shares", "RRBI": "Red River Bancshares Inc. Common Stock", "RRC": "Range Resources Corporation Common Stock", "RRD": "R.R. 
Donnelley & Sons Company Common Stock", "RRGB": "Red Robin Gourmet Burgers Inc. Common Stock", "RRR": "Red Rock Resorts Inc. Class A Common Stock", "RS": "Reliance Steel & Aluminum Co. Common Stock (DE)", "RSF": "RiverNorth Specialty Finance Corporation", "RSG": "Republic Services Inc. Common Stock", "RSI": "Rush Street Interactive Inc. Class A Common Stock", "RSKD": "Riskified Ltd. Class A Ordinary Shares", "RSLS": "ReShape Lifesciences Inc. Common Stock", "RSSS": "Research Solutions Inc Common Stock", "RSVR": "Reservoir Media Inc. Common Stock", "RSVRW": "Reservoir Media Inc. Warrant", "RTLR": "Rattler Midstream LP Common Units", "RTPY": "Reinvent Technology Partners Y Class A Ordinary Shares", "RTPYU": "Reinvent Technology Partners Y Unit", "RTPYW": "Reinvent Technology Partners Y Warrant", "RTX": "Raytheon Technologies Corporation Common Stock", "RUBY": "Rubius Therapeutics Inc. Common Stock", "RUN": "Sunrun Inc. Common Stock", "RUSHA": "Rush Enterprises Inc. Common Stock Cl A", "RUSHB": "Rush Enterprises Inc. Class B", "RUTH": "Ruth's Hospitality Group Inc. Common Stock", "RVACU": "Riverview Acquisition Corp. Unit", "RVI": "Retail Value Inc. Common Stock ", "RVLV": "Revolve Group Inc. Class A Common Stock", "RVMD": "Revolution Medicines Inc. Common Stock", "RVNC": "Revance Therapeutics Inc. Common Stock", "RVP": "Retractable Technologies Inc. Common Stock", "RVPH": "Reviva Pharmaceuticals Holdings Inc. Common Stock ", "RVPHW": "Reviva Pharmaceuticals Holdings Inc. Warrants", "RVSB": "Riverview Bancorp Inc Common Stock", "RVT": "Royce Value Trust Inc. Common Stock", "RWLK": "ReWalk Robotics Ltd. Ordinary Shares", "RWT": "Redwood Trust Inc. Common Stock", "RXDX": "Prometheus Biosciences Inc. Common Stock", "RXN": "Rexnord Corporation Common Stock", "RXRA": "RXR Acquisition Corp. Class A Common Stock", "RXRAU": "RXR Acquisition Corp. Units", "RXRAW": "RXR Acquisition Corp. Warrants to purchase Class A common stock", "RXRX": "Recursion Pharmaceuticals Inc. Class A Common Stock", "RXST": "RxSight Inc. Common Stock", "RXT": "Rackspace Technology Inc. Common Stock", "RY": "Royal Bank Of Canada Common Stock", "RY^T": "Royal Bank Of Canada 6.750% Fixed Rate/Floating Rate Noncumulative First Preferred Shares Series C-2", "RYAAY": "Ryanair Holdings plc American Depositary Shares", "RYAM": "Rayonier Advanced Materials Inc. Common Stock", "RYAN": "Ryan Specialty Group Holdings Inc. Class A Common Stock", "RYB": "RYB Education Inc. American depositary shares each representing one Class A ordinary share", "RYI": "Ryerson Holding Corporation Common Stock", "RYN": "Rayonier Inc. REIT Common Stock", "RYTM": "Rhythm Pharmaceuticals Inc. Common Stock", "RZA": "Reinsurance Group of America Incorporated 6.20% Fixed-to-Floating Rate Subordinated Debentures due 2042", "RZB": "Reinsurance Group of America Incorporated 5.75% Fixed-To-Floating Rate Subordinated Debentures due 2056", "RZLT": "Rezolute Inc. Common Stock (NV)", "S": "SentinelOne Inc. Class A Common Stock", "SA": "Seabridge Gold Inc. Ordinary Shares (Canada)", "SABR": "Sabre Corporation Common Stock", "SABRP": "Sabre Corporation 6.50% Series A Mandatory Convertible Preferred Stock", "SACC": "Sachem Capital Corp. 6.875% Notes due 2024", "SACH": "Sachem Capital Corp. Common Shares", "SACH^A": "Sachem Capital Corp. 7.75% Series A Cumulative Redeemable Preferred Stock", "SAFE": "Safehold Inc. Common Stock", "SAFM": "Sanderson Farms Inc. Common Stock", "SAFT": "Safety Insurance Group Inc. Common Stock", "SAGE": "Sage Therapeutics Inc. 
Common Stock", "SAH": "Sonic Automotive Inc. Common Stock", "SAIA": "Saia Inc. Common Stock", "SAIC": "SCIENCE APPLICATIONS INTERNATIONAL CORPORATION Common Stock", "SAIL": "SailPoint Technologies Holdings Inc. Common Stock", "SAL": "Salisbury Bancorp Inc. Common Stock", "SALM": "Salem Media Group Inc. Class A Common Stock", "SAM": "Boston Beer Company Inc. (The) Common Stock", "SAMG": "Silvercrest Asset Management Group Inc. Class A Common Stock", "SAN": "Banco Santander S.A. Sponsored ADR (Spain)", "SANA": "Sana Biotechnology Inc. Common Stock", "SAND ": "Sandstorm Gold Ltd. Ordinary Shares (Canada)", "SANM": "Sanmina Corporation Common Stock", "SANW": "S&W Seed Company Common Stock (NV)", "SAP": "SAP SE ADS", "SAR": "Saratoga Investment Corp New", "SASR": "Sandy Spring Bancorp Inc. Common Stock", "SATS": "EchoStar Corporation Common Stock", "SAVA": "Cassava Sciences Inc. Common Stock", "SAVE": "Spirit Airlines Inc. Common Stock", "SB": "Safe Bulkers Inc Common Stock ($0.001 par value)", "SB^C": "Safe Bulkers Inc Cumulative Redeemable Perpetual Preferred Series C (Marshall Islands)", "SB^D": "Safe Bulkers Inc Perpetual Preferred Series D (Marshall Islands)", "SBAC": "SBA Communications Corporation Class A Common Stock", "SBBA": "Scorpio Tankers Inc. 7.00% Senior Notes due 2025", "SBBP": "Strongbridge Biopharma plc Ordinary Shares", "SBCF": "Seacoast Banking Corporation of Florida Common Stock", "SBEA": "SilverBox Engaged Merger Corp I Class A Common Stock", "SBEAU": "SilverBox Engaged Merger Corp I Units", "SBEAW": "SilverBox Engaged Merger Corp I Warrant", "SBET": "SharpLink Gaming Ltd. Ordinary Shares", "SBEV": "Splash Beverage Group Inc. Common Stock", "SBFG": "SB Financial Group Inc. Common Stock", "SBGI": "Sinclair Broadcast Group Inc. Class A Common Stock", "SBH": "Sally Beauty Holdings Inc. (Name to be changed from Sally Holdings Inc.) Common Stock", "SBI": "Western Asset Intermediate Muni Fund Inc Common Stock", "SBII": "Sandbridge X2 Corp. Class A Common Stock", "SBLK": "Star Bulk Carriers Corp. Common Shares", "SBNY": "Signature Bank Common Stock", "SBNYP": "Signature Bank Depositary shares each representing a 1/40th ownership interest in a share of 5.000% Noncumulative Perpetual Series A Preferred Stock", "SBOW": "SilverBow Resorces Inc. Common Stock", "SBR": "Sabine Royalty Trust Common Stock", "SBRA": "Sabra Health Care REIT Inc. Common Stock", "SBS": "Companhia de saneamento Basico Do Estado De Sao Paulo - Sabesp American Depositary Shares (Each repstg 250 Common Shares)", "SBSI": "Southside Bancshares Inc. Common Stock", "SBSW": "D/B/A Sibanye-Stillwater Limited ADS", "SBT": "Sterling Bancorp Inc. Common Stock", "SBTX": "Silverback Therapeutics Inc. Common Stock", "SBUX": "Starbucks Corporation Common Stock", "SC": "Santander Consumer USA Holdings Inc. Common Stock", "SCAQ": "Stratim Cloud Acquisition Corp. Class A Common Stock", "SCAQU": "Stratim Cloud Acquisition Corp. Unit", "SCAQW": "Stratim Cloud Acquisition Corp. Warrant", "SCCB": "Sachem Capital Corp. 7.125% Notes due 2024", "SCCC": "Sachem Capital Corp. 7.75% Notes due 2025", "SCCO": "Southern Copper Corporation Common Stock", "SCD": "LMP Capital and Income Fund Inc. 
Common Stock", "SCE^G": "SCE Trust II Trust Preferred Securities", "SCE^H": "SCE Trust III Fixed/Floating Rate Trust Preference Securities", "SCE^J": "Southern California Edison Company 5.375% Fixed-to-Floating Rate Trust Preference Securities", "SCE^K": "Southern California Edison Company 5.45% Fixed-to-Floating Rate Trust Preference Securities", "SCE^L": "SCE TRUST VI", "SCHL": "Scholastic Corporation Common Stock", "SCHN": "Schnitzer Steel Industries Inc. Class A Common Stock", "SCHW": "Charles Schwab Corporation (The) Common Stock", "SCHW^D": "The Charles Schwab Corporation Depositary Shares each representing 1/40th interest in a share of 5.95% Non-Cumulative Perpetual Preferred Stock Series D", "SCHW^J": "The Charles Schwab Corporation Depositary Shares Each Representing a 1/40th Interest in a Share of 4.450% Non-Cumulative Perpetual Preferred Stock Series J", "SCI": "Service Corporation International Common Stock", "SCKT": "Socket Mobile Inc. Common Stock", "SCL": "Stepan Company Common Stock", "SCLE": "Broadscale Acquisition Corp. Class A common stock", "SCLEU": "Broadscale Acquisition Corp. Units", "SCLEW": "Broadscale Acquisition Corp. Warrant", "SCM": "Stellus Capital Investment Corporation Common Stock", "SCOA": "ScION Tech Growth I Class A Ordinary Shares", "SCOAU": "ScION Tech Growth I Unit", "SCOAW": "ScION Tech Growth I Warrant", "SCOB": "ScION Tech Growth II Class A Ordinary Shares", "SCOBU": "ScION Tech Growth II Units", "SCOBW": "ScION Tech Growth II Warrants", "SCOR": "comScore Inc. Common Stock", "SCPH": "scPharmaceuticals Inc. Common Stock", "SCPL": "SciPlay Corporation Class A Common Stock", "SCPS": "Scopus BioPharma Inc. Common Stock", "SCR": "Score Media and Gaming Inc. Class A Subordinate Voting Shares", "SCS": "Steelcase Inc. Common Stock", "SCSC": "ScanSource Inc. Common Stock", "SCU": "Sculptor Capital Management Inc. Class A Common Stock", "SCVL": "Shoe Carnival Inc. Common Stock", "SCVX": "SCVX Corp. Class A Ordinary Shares", "SCWX": "SecureWorks Corp. Class A Common Stock", "SCX": "L.S. Starrett Company (The) Common Stock", "SCYX": "SCYNEXIS Inc. Common Stock", "SD": "SandRidge Energy Inc. Common Stock", "SDAC": "Sustainable Development Acquisition I Corp. Class A Common Stock", "SDACU": "Sustainable Development Acquisition I Corp. Unit", "SDACW": "Sustainable Development Acquisition I Corp. Warrant", "SDC": "SmileDirectClub Inc. Class A Common Stock", "SDGR": "Schrodinger Inc. Common Stock", "SDH": "Global Internet of People Inc. Ordinary Shares", "SDHY": "PGIM Short Duration High Yield Opportunities Fund Common Shares", "SDPI": "Superior Drilling Products Inc. Common Stock", "SE": "Sea Limited American Depositary Shares each representing one Class A Ordinary Share", "SEAC": "SeaChange International Inc. Common Stock", "SEAH": "Sports Entertainment Acquisition Corp. Class A Common Stock", "SEAS": "SeaWorld Entertainment Inc. Common Stock", "SEB": "Seaboard Corporation Common Stock", "SECO": "Secoo Holding Limited ADR", "SEDG": "SolarEdge Technologies Inc. Common Stock", "SEE": "Sealed Air Corporation Common Stock", "SEED": "Origin Agritech Limited Common Stock", "SEEL": "Seelos Therapeutics Inc. Common Stock", "SEER": "Seer Inc. Class A Common Stock", "SEIC": "SEI Investments Company Common Stock", "SELB": "Selecta Biosciences Inc. Common Stock", "SELF": "Global Self Storage Inc. Common Stock", "SEM": "Select Medical Holdings Corporation Common Stock", "SEMR": "SEMrush Holdings Inc. Class A Common Stock", "SENEA": "Seneca Foods Corp. 
Class A Common Stock", "SENEB": "Seneca Foods Corp. Class B Common Stock", "SENS": "Senseonics Holdings Inc. Common Stock", "SERA": "Sera Prognostics Inc. Class A Common Stock", "SESN": "Sesen Bio Inc. Common Stock", "SEVN": "Seven Hills Realty Trust Common Stock", "SF": "Stifel Financial Corporation Common Stock", "SF^B": "Stifel Financial Corporation Depositary Shares Each Representing 1/1000th Interest in a Share of 6.25% Non-Cumulative Preferred Stock Series B", "SF^C": "Stifel Financial Corporation Depositary Shares Each Representing 1/1000th Interest in a Share of 6.125% Non Cumulative Preferred Stock Series C", "SF^D": "Stifel Financial Corporation Depositary Shares Each Representing 1/1000th Interest in a Share of 4.50% Non-Cumulative Preferred Stock Series D", "SFB": "Stifel Financial Corporation 5.20% Senior Notes due 2047", "SFBC": "Sound Financial Bancorp Inc. Common Stock", "SFBS": "ServisFirst Bancshares Inc. Common Stock", "SFE": "Safeguard Scientifics Inc. New Common Stock", "SFET": "Safe-T Group Ltd. American Depositary Share", "SFIX": "Stitch Fix Inc. Class A Common Stock", "SFL": "SFL Corporation Ltd", "SFM": "Sprouts Farmers Market Inc. Common Stock", "SFNC": "Simmons First National Corporation Class A Common Stock", "SFST": "Southern First Bancshares Inc. Common Stock", "SFT": "Shift Technologies Inc. Class A Common Stock", "SFUN": "Fang Holdings Limited American Depositary Shares (Each representing Four Class A Ordinary Shares HK$1.00 par value)", "SGA": "Saga Communications Inc. Class A Common Stock (FL)", "SGAM": "Seaport Global Acquisition Corp. Class A Common Stock", "SGAMU": "Seaport Global Acquisition Corp. Unit", "SGAMW": "Seaport Global Acquisition Corp. Warrant", "SGBX": "SG Blocks Inc. Common Stock", "SGC": "Superior Group of Companies Inc. Common Stock", "SGEN": "Seagen Inc. Common Stock", "SGFY": "Signify Health Inc. Class A Common Stock", "SGH": "SMART Global Holdings Inc. Ordinary Shares", "SGHT": "Sight Sciences Inc. Common Stock", "SGLB": "Sigma Labs Inc. Common Stock", "SGLBW": "Sigma Labs Inc. Warrant", "SGMA": "SigmaTron International Inc. Common Stock", "SGML": "Sigma Lithium Corporation Common Shares", "SGMO": "Sangamo Therapeutics Inc. Common Stock", "SGMS": "Scientific Games Corp Common Stock", "SGOC": "SGOCO Group Ltd Ordinary Shares (Cayman Islands)", "SGRP": "SPAR Group Inc. Common Stock", "SGRY": "Surgery Partners Inc. Common Stock", "SGTX": "Sigilon Therapeutics Inc. Common Stock", "SGU": "Star Group L.P. Common Stock", "SHAC": "SCP & CO Healthcare Acquisition Company Class A Common Stock", "SHACU": "SCP & CO Healthcare Acquisition Company Unit", "SHACW": "SCP & CO Healthcare Acquisition Company Warrant", "SHAK": "Shake Shack Inc. Class A Common Stock", "SHBI": "Shore Bancshares Inc Common Stock", "SHC": "Sotera Health Company Common Stock", "SHCR": "Sharecare Inc. Class A Common Stock", "SHCRW": "Sharecare Inc. Warrant", "SHEN": "Shenandoah Telecommunications Co Common Stock", "SHG": "Shinhan Financial Group Co Ltd American Depositary Shares", "SHI": "SINOPEC Shangai Petrochemical Company Ltd. Common Stock", "SHIP": "Seanergy Maritime Holdings Corp Common Stock", "SHIPW": "Seanergy Maritime Holdings Corp Class A Warrants", "SHIPZ": "Seanergy Maritime Holdings Corp Class B Warrant", "SHLS": "Shoals Technologies Group Inc. Class A Common Stock", "SHLX": "Shell Midstream Partners L.P. Common Units representing Limited Partner Interests", "SHO": "Sunstone Hotel Investors Inc. Sunstone Hotel Investors Inc. 
Common Shares", "SHO^H": "Sunstone Hotel Investors Inc. 6.125% Series H Cumulative Redeemable Preferred Stock", "SHO^I": "Sunstone Hotel Investors Inc. 5.70% Series I Cumulative Redeemable Preferred Stock", "SHOO": "Steven Madden Ltd. Common Stock", "SHOP": "Shopify Inc. Class A Subordinate Voting Shares", "SHPW": "Shapeways Holdings Inc. Common Stock", "SHQA": "Shelter Acquisition Corporation I Class A Common Stock", "SHQAU": "Shelter Acquisition Corporation I Units", "SHQAW": "Shelter Acquisition Corporation I Warrants", "SHW": "Sherwin-Williams Company (The) Common Stock", "SHYF": "The Shyft Group Inc. Common Stock", "SI": "Silvergate Capital Corporation Class A Common Stock", "SI^A": "Silvergate Capital Corporation Depositary Shares Each Representing a 1/40th Interest in a Share of 5.375% Fixed Rate Non-Cumulative Perpetual Preferred Stock Series A", "SIBN": "SI-BONE Inc. Common Stock", "SIC": "Select Interior Concepts Inc. Class A Common Stock", "SID": "Companhia Siderurgica Nacional S.A. Common Stock", "SIEB": "Siebert Financial Corp. Common Stock", "SIEN": "Sientra Inc. Common Stock", "SIERU": "Sierra Lake Acquisition Corp. Unit", "SIF": "SIFCO Industries Inc. Common Stock", "SIFY": "Sify Technologies Limited American Depositary Shares", "SIG": "Signet Jewelers Limited Common Shares", "SIGA": "SIGA Technologies Inc. Common Stock", "SIGI": "Selective Insurance Group Inc. Common Stock", "SIGIP": "Selective Insurance Group Inc. Depositary Shares each representing a 1/1000th interest in a share of 4.60% Non-Cumulative Preferred Stock Series B", "SII": "Sprott Inc. Common Shares", "SILC": "Silicom Ltd Ordinary Shares", "SILK": "Silk Road Medical Inc. Common Stock", "SILV": "SilverCrest Metals Inc. Common Shares", "SIM": "Grupo Simec S.A.B. de C.V. American Depositary Shares", "SIMO": "Silicon Motion Technology Corporation American Depositary Shares", "SINO": "Sino-Global Shipping America Ltd. Common Stock", "SINT": "SiNtx Technologies Inc. Common Stock", "SIOX": "Sio Gene Therapies Inc. Common Stock", "SIRI": "Sirius XM Holdings Inc. Common Stock", "SITC": "SITE Centers Corp. Common Stock", "SITC^A": "SITE Centers Corp. 6.375% Class A Preferred Shares", "SITE": "SiteOne Landscape Supply Inc. Common Stock", "SITM": "SiTime Corporation Common Stock", "SIVB": "SVB Financial Group Common Stock", "SIVBP": "SVB Financial Group Depositary Shs each representing a 1/40th interest in a share of 5.25% Fixed-Rate Non-Cumulative Perpetual Preferred Stock Series A", "SIX": "Six Flags Entertainment Corporation New Common Stock", "SJ": "Scienjoy Holding Corporation Ordinary Shares", "SJI": "South Jersey Industries Inc. Common Stock", "SJIJ": "South Jersey Industries Inc. 5.625% Junior Subordinated Notes due 2079", "SJIV": "South Jersey Industries Inc. Corporate Units", "SJM": "J.M. Smucker Company (The) New Common Stock", "SJR": "Shaw Communications Inc. Common Stock", "SJT": "San Juan Basin Royalty Trust Common Stock", "SJW": "SJW Group Common Stock (DE)", "SKIL": "Skillsoft Corp. Class A Common Stock", "SKIN": "The Beauty Health Company Class A Common Stock", "SKINW": "The Beauty Health Company Warrant expiring 5/4/2026", "SKLZ": "Skillz Inc. Class A Common Stock", "SKM": "SK Telecom Co. Ltd. Common Stock", "SKT": "Tanger Factory Outlet Centers Inc. Common Stock", "SKX": "Skechers U.S.A. Inc. Common Stock", "SKY": "Skyline Champion Corporation Common Stock", "SKYA": "Skydeck Acquisition Corp. Class A Ordinary Shares", "SKYAU": "Skydeck Acquisition Corp. Units", "SKYAW": "Skydeck Acquisition Corp. 
Common Stock", "WTM": "White Mountains Insurance Group Ltd. Common Stock", "WTRG": "Essential Utilities Inc. Common Stock", "WTRH": "Waitr Holdings Inc. Common Stock", "WTRU": "Essential Utilities Inc. 6.00% TEU", "WTS": "Watts Water Technologies Inc. Class A Common Stock", "WTT": "Wireless Telecom Group Inc. Common Stock", "WTTR": "Select Energy Services Inc. Class A Common Stock", "WU": "Western Union Company (The) Common Stock", "WVE": "Wave Life Sciences Ltd. Ordinary Shares", "WVFC": "WVS Financial Corp. Common Stock", "WVVI": "Willamette Valley Vineyards Inc. Common Stock", "WVVIP": "Willamette Valley Vineyards Inc. Series A Redeemable Preferred Stock", "WW": "WW International Inc. Common Stock", "WWD": "Woodward Inc. Common Stock", "WWE": "World Wrestling Entertainment Inc. Class A Common Stock", "WWR": "Westwater Resources Inc. Common Stock", "WWW": "Wolverine World Wide Inc. Common Stock", "WY": "Weyerhaeuser Company Common Stock", "WYNN": "Wynn Resorts Limited Common stock", "WYY": "WidePoint Corporation Common Stock", "X": "United States Steel Corporation Common Stock", "XAIR": "Beyond Air Inc. Common Stock", "XBIO": "Xenetic Biosciences Inc. Common Stock", "XBIT": "XBiotech Inc. Common Stock", "XCUR": "Exicure Inc. Common Stock", "XEL": "Xcel Energy Inc. Common Stock", "XELA": "Exela Technologies Inc. Common Stock", "XELB": "Xcel Brands Inc. Common Stock", "XENE": "Xenon Pharmaceuticals Inc. Common Shares", "XENT": "Intersect ENT Inc. Common Stock", "XERS": "Xeris Pharmaceuticals Inc. Common Stock", "XFLT": "XAI Octagon Floating Rate & Alternative Income Term Trust Common Shares of Beneficial Interest", "XFLT^A": "XAI Octagon Floating Rate & Alternative Income Term Trust 6.50% Series 2026 Term Preferred Shares (Liquidation Preference $25.00)", "XFOR": "X4 Pharmaceuticals Inc. Common Stock", "XGN": "Exagen Inc. Common Stock", "XHR": "Xenia Hotels & Resorts Inc. Common Stock", "XIN": "Xinyuan Real Estate Co Ltd American Depositary Shares", "XL": "XL Fleet Corp. Class A Common Stock", "XLNX": "Xilinx Inc. Common Stock", "XLRN": "Acceleron Pharma Inc. Common Stock", "XM": "Qualtrics International Inc. Class A Common Stock", "XMTR": "Xometry Inc. Class A Common Stock", "XNCR": "Xencor Inc. Common Stock", "XNET": "Xunlei Limited American Depositary Receipts", "XOG": "Extraction Oil & Gas Inc. Common Stock", "XOM": "Exxon Mobil Corporation Common Stock", "XOMA": "XOMA Corporation Common Stock", "XOMAO": "XOMA Corporation Depositary Shares Rep Series B 8.375% Cumulative Preferred Stock", "XOMAP": "XOMA Corporation 8.625% Series A Cumulative Perpetual Preferred Stock", "XONE": "The ExOne Company Common Stock", "XOS": "Xos Inc. Common Stock", "XOSWW": "Xos Inc. Warrants", "XP": "XP Inc. Class A Common Stock", "XPAX": "XPAC Acquisition Corp. Class A Ordinary Shares", "XPAXU": "XPAC Acquisition Corp. Unit", "XPAXW": "XPAC Acquisition Corp. Warrant", "XPDI": "Power & Digital Infrastructure Acquisition Corp. Class A Common Stock", "XPDIU": "Power & Digital Infrastructure Acquisition Corp. Unit", "XPDIW": "Power & Digital Infrastructure Acquisition Corp. Warrant", "XPEL": "XPEL Inc. Common Stock", "XPER": "Xperi Holding Corporation Common Stock", "XPEV": "XPeng Inc. American depositary shares each representing two Class A ordinary shares", "XPL": "Solitario Zinc Corp. Common Stock", "XPO": "XPO Logistics Inc.", "XPOA": "DPCM Capital Inc. Class A Common Stock", "XPOF": "Xponential Fitness Inc. Class A Common Stock", "XPRO": "Expro Group Holdings N.V.", "XPVVV": "XP Inc. 
Class A Common Stock When-Issued", "XRAY": "DENTSPLY SIRONA Inc. Common Stock", "XRX": "Xerox Holdings Corporation Common Stock", "XSPA": "XpresSpa Group Inc. Common Stock", "XTLB": "XTL Biopharmaceuticals Ltd. American Depositary Shares", "XTNT": "Xtant Medical Holdings Inc. Common Stock", "XXII": "22nd Century Group Inc. Common Stock", "XYF": "X Financial American Depositary Shares each representing six Class A Ordinary Shares", "XYL": "Xylem Inc. Common Stock New", "Y": "Alleghany Corporation Common Stock", "YAC": "Yucaipa Acquisition Corporation Class A Ordinary Shares", "YALA": "Yalla Group Limited American Depositary Shares each representing one Class A Ordinary Share", "YCBD": "cbdMD Inc. Common Stock", "YCBD^A": "cbdMD Inc. 8.0% Series A Cumulative Convertible Preferred Stock", "YELL": "Yellow Corporation Common Stock", "YELP": "Yelp Inc. Common Stock", "YETI": "YETI Holdings Inc. Common Stock", "YEXT": "Yext Inc. Common Stock", "YGMZ": "MingZhu Logistics Holdings Limited Ordinary Shares", "YI": "111 Inc. American Depositary Shares", "YJ": "Yunji Inc. American Depository Shares", "YMAB": "Y-mAbs Therapeutics Inc. Common Stock", "YMM": "Full Truck Alliance Co. Ltd. American Depositary Shares (each representing 20 Class A Ordinary Shares)", "YMTX": "Yumanity Therapeutics Inc. Common Stock", "YNDX": "Yandex N.V. Class A Ordinary Shares", "YORW": "York Water Company (The) Common Stock", "YOU": "Clear Secure Inc. Class A Common Stock", "YPF": "YPF Sociedad Anonima Common Stock", "YQ": "17 Education & Technology Group Inc. American Depositary Shares", "YRD": "Yiren Digital Ltd. American Depositary Shares each representing two ordinary shares", "YSAC": "Yellowstone Acquisition Company Class A Common Stock", "YSACU": "Yellowstone Acquisition Company Units", "YSACW": "Yellowstone Acquisition Company Warrants to purchase Class A common stock", "YSG": "Yatsen Holding Limited American Depositary Shares each representing four Class A ordinary shares", "YTEN": "Yield10 Bioscience Inc. Common Stock", "YTPG": "TPG Pace Beneficial II Corp. Class A Ordinary Shares", "YTRA": "Yatra Online Inc. Ordinary Shares", "YUM": "Yum! Brands Inc.", "YUMC": "Yum China Holdings Inc. Common Stock", "YVR": "Liquid Media Group Ltd. Common Shares", "YY": "JOYY Inc. American Depositary Shares", "Z": "Zillow Group Inc. Class C Capital Stock", "ZBH": "Zimmer Biomet Holdings Inc. Common Stock", "ZBRA": "Zebra Technologies Corporation Class A Common Stock", "ZCMD": "Zhongchao Inc. Class A Ordinary Shares", "ZDGE": "Zedge Inc. Class B Common Stock ", "ZDVSV": "Ziff Davis Inc. Common Stock Ex-Distribution When Issued", "ZEAL": "Zealand Pharma A/S American Depositary Shares", "ZEN": "Zendesk Inc. Common Stock", "ZENV": "Zenvia Inc. Class A Common Stock", "ZEPP": "Zepp Health Corporation American Depositary Shares", "ZEST": "Ecoark Holdings Inc. Common Stock", "ZETA": "Zeta Global Holdings Corp. Class A Common Stock", "ZEUS": "Olympic Steel Inc. Common Stock", "ZEV": "Lightning eMotors Inc Common Stock", "ZG": "Zillow Group Inc. Class A Common Stock", "ZGNX": "Zogenix Inc. Common Stock", "ZGYH": "Yunhong International Class A Ordinary Shares", "ZGYHR": "Yunhong International Right", "ZGYHW": "Yunhong International Warrant", "ZH": "Zhihu Inc. American Depositary Shares (every two of each representing one Class A ordinary share)", "ZI": "ZoomInfo Technologies Inc. Class A Common Stock", "ZIM": "ZIM Integrated Shipping Services Ltd. Ordinary Shares", "ZION": "Zions Bancorporation N.A. 
Common Stock", "ZIONL": "Zions Bancorporation 6.95% Fixed-to-Floating Rate Subordinated Notes", "ZIONO": "Zions Bancorporation N.A. Dep Shs Repstg 1/40th Perp Pfd Ser G", "ZIONP": "Zions Bancorporation N.A. Depositary Shares (Each representing 1/40th Interest in a Share of Series A Floating-Rate Non-Cumulative Perpetual Preferred Stock)", "ZIOP": "ZIOPHARM Oncology Inc Common Stock", "ZIP": "ZipRecruiter Inc. Class A Common Stock", "ZIVO": "Zivo Bioscience Inc. Common Stock", "ZIVOW": "Zivo Bioscience Inc. Warrants", "ZIXI": "Zix Corporation Common Stock", "ZKIN": "ZK International Group Co. Ltd Ordinary Share", "ZLAB": "Zai Lab Limited American Depositary Shares", "ZM": "Zoom Video Communications Inc. Class A Common Stock", "ZME": "Zhangmen Education Inc. American Depositary Shares each representing nine (9) Class A ordinary shares", "ZNGA": "Zynga Inc. Class A Common Stock", "ZNH": "China Southern Airlines Company Limited Common Stock", "ZNTE": "Zanite Acquisition Corp. Class A Common Stock", "ZNTEU": "Zanite Acquisition Corp. Unit", "ZNTEW": "Zanite Acquisition Corp. Warrant", "ZNTL": "Zentalis Pharmaceuticals Inc. Common Stock", "ZOM": "Zomedica Corp. Common Shares", "ZS": "Zscaler Inc. Common Stock", "ZSAN": "Zosano Pharma Corporation Common Stock", "ZT": "Zimmer Energy Transition Acquisition Corp. Class A Common Stock", "ZTAQU": "Zimmer Energy Transition Acquisition Corp. Units", "ZTAQW": "Zimmer Energy Transition Acquisition Corp. Warrants", "ZTO": "ZTO Express (Cayman) Inc. American Depositary Shares each representing one Class A ordinary share.", "ZTR": "Virtus Total Return Fund Inc.", "ZTS": "Zoetis Inc. Class A Common Stock", "ZUMZ": "Zumiez Inc. Common Stock", "ZUO": "Zuora Inc. Class A Common Stock", "ZVIA": "Zevia PBC Class A Common Stock", "ZVO": "Zovio Inc. Common Stock", "ZWRK": "Z-Work Acquisition Corp. Class A Common Stock", "ZWRKU": "Z-Work Acquisition Corp. Units", "ZWRKW": "Z-Work Acquisition Corp. Warrant", "ZY": "Zymergen Inc. Common Stock", "ZYME": "Zymeworks Inc. Common Shares", "ZYNE": "Zynerba Pharmaceuticals Inc. Common Stock", "ZYXI": "Zynex Inc. Common Stock"}
\ No newline at end of file
diff --git a/spaces/CikeyQI/Yunzai/Yunzai/renderers/puppeteer/lib/puppeteer.js b/spaces/CikeyQI/Yunzai/Yunzai/renderers/puppeteer/lib/puppeteer.js
deleted file mode 100644
index 9daf954b99dcf88cd60a8e3ea718e8f18495b7c6..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/Yunzai/Yunzai/renderers/puppeteer/lib/puppeteer.js
+++ /dev/null
@@ -1,321 +0,0 @@
-import Renderer from '../../../lib/renderer/Renderer.js'
-import os from 'node:os'
-import lodash from 'lodash'
-import puppeteer from 'puppeteer'
-// Temporarily keep compatibility with the original config
-import cfg from '../../../lib/config/config.js'
-import { Data } from '#miao'
-
-const _path = process.cwd()
-// MAC address
-let mac = ''
-// Timeout timers
-let overtimeList = []
-
-export default class Puppeteer extends Renderer {
- constructor (config) {
- super({
- id: 'puppeteer',
- type: 'image',
- render: 'screenshot'
- })
- this.browser = false
- this.lock = false
- this.shoting = []
-    /** Restart the browser once this many screenshots have been taken, to keep rendering from getting slower over time */
-    this.restartNum = 100
-    /** Screenshot count */
- this.renderNum = 0
- this.config = {
- headless: Data.def(config.headless, 'new'),
- args: Data.def(config.args, [
- '--disable-gpu',
- '--disable-setuid-sandbox',
- '--no-sandbox',
- '--no-zygote'
- ])
- }
- if (config.chromiumPath || cfg?.bot?.chromium_path) {
-      /** Custom Chromium executable path */
- this.config.executablePath = config.chromiumPath || cfg?.bot?.chromium_path
- }
- if (config.puppeteerWS || cfg?.bot?.puppeteer_ws) {
-      /** Puppeteer WebSocket endpoint to connect to */
- this.config.wsEndpoint = config.puppeteerWS || cfg?.bot?.puppeteer_ws
- }
-    /** Puppeteer screenshot timeout (ms) */
- this.puppeteerTimeout = config.puppeteerTimeout || cfg?.bot?.puppeteer_timeout || 0
- }
-
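-  /*
-   * Config sketch (illustrative values for the options read above; the path is
-   * an assumption, not a default):
-   *   new Puppeteer({ chromiumPath: '/usr/bin/chromium-browser', puppeteerTimeout: 30000 })
-   */
-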
- /**
-   * Initialize Chromium
- */
- async browserInit () {
- if (this.browser) return this.browser
- if (this.lock) return false
- this.lock = true
-
-    logger.info('puppeteer Chromium starting...')
-
- let connectFlag = false
- try {
-      // Get the MAC address
- if (!mac) {
- mac = await this.getMac()
- this.browserMacKey = `Yz:chromium:browserWSEndpoint:${mac}`
- }
-      // Check for an existing browser instance
- const browserUrl = (await redis.get(this.browserMacKey)) || this.config.wsEndpoint
- if (browserUrl) {
- logger.info(`puppeteer Chromium from ${browserUrl}`)
- const browserWSEndpoint = await puppeteer.connect({ browserWSEndpoint: browserUrl }).catch(() => {
-          logger.error('puppeteer Chromium cached instance is already closed')
- redis.del(this.browserMacKey)
- })
-        // If an instance exists, use it directly
- if (browserWSEndpoint) {
- this.browser = browserWSEndpoint
- if (this.browser) {
- connectFlag = true
- }
- }
- }
- } catch (e) {
-      logger.info('puppeteer Chromium found no existing instance')
- }
-
- if (!this.browser || !connectFlag) {
-      // No existing instance, launch Puppeteer
- this.browser = await puppeteer.launch(this.config).catch((err, trace) => {
- let errMsg = err.toString() + (trace ? trace.toString() : '')
- if (typeof err == 'object') {
- logger.error(JSON.stringify(err))
- } else {
- logger.error(err.toString())
- if (errMsg.includes('Could not find Chromium')) {
-            logger.error('Chromium is not installed correctly; try running: node node_modules/puppeteer/install.js')
-          } else if (errMsg.includes('cannot open shared object file')) {
-            logger.error('Chromium runtime libraries are not installed correctly')
- }
- }
- logger.error(err, trace)
- })
- }
-
- this.lock = false
-
- if (!this.browser) {
-      logger.error('puppeteer Chromium failed to start')
- return false
- }
- if (connectFlag) {
-      logger.info('puppeteer Chromium connected to a running instance')
- } else {
- logger.info(`[Chromium] ${this.browser.wsEndpoint()}`)
- if (process.env.pm_id && this.browserMacKey) {
-        // Cache the instance endpoint for 30 days
- const expireTime = 60 * 60 * 24 * 30
- await redis.set(this.browserMacKey, this.browser.wsEndpoint(), { EX: expireTime })
- }
-      logger.info('puppeteer Chromium started successfully')
- }
-
-    /** Watch for the Chromium instance disconnecting */
-    this.browser.on('disconnected', () => {
-      logger.error('Chromium instance closed or crashed!')
- this.browser = false
- })
-
- return this.browser
- }
-
-  // Get the MAC address
- getMac () {
- let mac = '00:00:00:00:00:00'
- try {
- const network = os.networkInterfaces()
- let macFlag = false
- for (const a in network) {
- for (const i of network[a]) {
- if (i.mac && i.mac !== mac) {
- macFlag = true
- mac = i.mac
- break
- }
- }
- if (macFlag) {
- break
- }
- }
- } catch (e) {
- }
- mac = mac.replace(/:/g, '')
- return mac
- }
-
- /**
-   * Take a screenshot with `chromium`
-   * @param name
-   * @param data template parameters
-   * @param data.tplFile template path, required
-   * @param data.saveId name of the generated HTML file; falls back to name when empty
-   * @param data.imgType screenshot option, output image type: jpeg, png
-   * @param data.quality screenshot option, image quality 0-100, jpeg only, default 90
-   * @param data.omitBackground screenshot option, hide the default white background for a transparent result; opaque by default
-   * @param data.path screenshot option, path to save the screenshot. The image type is inferred from the file extension. Relative paths resolve against the current directory. If no path is given, the image is not saved to disk.
-   * @param data.multiPage whether to take paginated screenshots, default false
-   * @param data.multiPageHeight page height when paginating, default 4000
-   * @param data.pageGotoParams options passed to page.goto
-   * @return img, not wrapped in a segment
- */
- async screenshot (name, data = {}) {
- if (!await this.browserInit()) {
- return false
- }
- const pageHeight = data.multiPageHeight || 4000
-
- let savePath = this.dealTpl(name, data)
- if (!savePath) {
- return false
- }
-
- let buff = ''
- let start = Date.now()
-
- let ret = []
- this.shoting.push(name)
-
- const puppeteerTimeout = this.puppeteerTimeout
- let overtime
- let overtimeFlag = false
- if (puppeteerTimeout > 0) {
-      // TODO handle screenshot timeouts
- overtime = setTimeout(() => {
- if (!overtimeFlag) {
-          logger.error(`[image generation][${name}] screenshot timed out, current queue: ${this.shoting.join(',')}`)
- this.restart(true)
- this.shoting = []
- overtimeList.forEach(item => {
- clearTimeout(item)
- })
- }
- }, puppeteerTimeout)
- }
-
- try {
- const page = await this.browser.newPage()
- let pageGotoParams = lodash.extend({ timeout: 120000 }, data.pageGotoParams || {})
- await page.goto(`file://${_path}${lodash.trim(savePath, '.')}`, pageGotoParams)
- let body = await page.$('#container') || await page.$('body')
-
-      // Compute the page height
-      const boundingBox = await body.boundingBox()
-      // Number of pages
- let num = 1
-
- let randData = {
- type: data.imgType || 'jpeg',
- omitBackground: data.omitBackground || false,
- quality: data.quality || 90,
- path: data.path || ''
- }
-
- if (data.multiPage) {
- randData.type = 'jpeg'
- num = Math.round(boundingBox.height / pageHeight) || 1
- }
-
- if (data.imgType === 'png') {
- delete randData.quality
- }
-
- if (!data.multiPage) {
- buff = await body.screenshot(randData)
-        /** Compute the image size */
-        const kb = (buff.length / 1024).toFixed(2) + 'KB'
-        logger.mark(`[image generation][${name}][render ${this.renderNum}] ${kb} ${logger.green(`${Date.now() - start}ms`)}`)
- this.renderNum++
- ret.push(buff)
- } else {
-        // Paginated screenshots
- if (num > 1) {
- await page.setViewport({
- width: boundingBox.width,
- height: pageHeight + 100
- })
- }
- for (let i = 1; i <= num; i++) {
- if (i !== 1 && i === num) {
- await page.setViewport({
- width: boundingBox.width,
- height: parseInt(boundingBox.height) - pageHeight * (num - 1)
- })
- }
- if (i !== 1 && i <= num) {
- await page.evaluate(pageHeight => window.scrollBy(0, pageHeight), pageHeight)
- }
- if (num === 1) {
- buff = await body.screenshot(randData)
- } else {
- buff = await page.screenshot(randData)
- }
- if (num > 2) {
- await Data.sleep(200)
- }
- this.renderNum++
-
-          /** Compute the image size */
-          const kb = (buff.length / 1024).toFixed(2) + 'KB'
-          logger.mark(`[image generation][${name}][${i}/${num}] ${kb}`)
- ret.push(buff)
- }
- if (num > 1) {
-          logger.mark(`[image generation][${name}] finished`)
- }
- }
- page.close().catch((err) => logger.error(err))
- } catch (error) {
-      logger.error(`[image generation][${name}] failed to generate image: ${error}`)
-      /** Close the browser */
- if (this.browser) {
- await this.browser.close().catch((err) => logger.error(err))
- }
- this.browser = false
- ret = []
- return false
- } finally {
- if (overtime) {
- overtimeFlag = true
- clearTimeout(overtime)
- overtimeList = []
- }
- }
-
- this.shoting.pop()
-
- if (ret.length === 0 || !ret[0]) {
-      logger.error(`[image generation][${name}] generated an empty image`)
- return false
- }
-
- this.restart(false)
-
- return data.multiPage ? ret : ret[0]
- }
-
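-  /*
-   * Usage sketch (hypothetical template path; the options mirror the JSDoc above):
-   *   const img = await renderer.screenshot('profile', {
-   *     tplFile: './resources/profile/index.html',
-   *     imgType: 'jpeg',
-   *     quality: 90
-   *   })
-   */
-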
-  /** Restart */
-  restart (force = false) {
-    /** When the screenshot count reaches the restart threshold, automatically close and restart the browser to keep rendering from getting slower over time */
- if (this.renderNum % this.restartNum === 0 || force) {
- if (this.shoting.length <= 0 || force) {
- setTimeout(async () => {
- if (this.browser) {
- await this.browser.close().catch((err) => logger.error(err))
- }
- this.browser = false
-          logger.info(`puppeteer Chromium ${force ? 'force-' : ''}restarting...`)
- }, 100)
- }
- }
- }
-}
diff --git a/spaces/Cletrason/cloudqi-cqi_text_to_image_pt_v0/README.md b/spaces/Cletrason/cloudqi-cqi_text_to_image_pt_v0/README.md
deleted file mode 100644
index 18834ce05f98ee45ac628025184d6b74bc8d58c8..0000000000000000000000000000000000000000
--- a/spaces/Cletrason/cloudqi-cqi_text_to_image_pt_v0/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Cloudqi-cqi Text To Image Pt V0
-emoji: 😻
-colorFrom: indigo
-colorTo: green
-sdk: gradio
-sdk_version: 3.33.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/CofAI/chat.b4/client/css/dropdown.css b/spaces/CofAI/chat.b4/client/css/dropdown.css
deleted file mode 100644
index 302e911e84d171c55384732f759a79ce195abca5..0000000000000000000000000000000000000000
--- a/spaces/CofAI/chat.b4/client/css/dropdown.css
+++ /dev/null
@@ -1,10 +0,0 @@
-.dropdown {
- border: 1px solid var(--conversations);
-}
-
-@media screen and (max-width: 990px) {
- .dropdown {
- padding: 4px 8px;
- font-size: 0.75rem;
- }
-}
diff --git a/spaces/CorvaeOboro/gen_ability_icon/README.md b/spaces/CorvaeOboro/gen_ability_icon/README.md
deleted file mode 100644
index 9a7a54f4842d50cf7264a6d4bee35154da6fe5b1..0000000000000000000000000000000000000000
--- a/spaces/CorvaeOboro/gen_ability_icon/README.md
+++ /dev/null
@@ -1,17 +0,0 @@
----
-title: gen_ability_icon stylegan2ada
-emoji: 🔵🔥🌀
-colorFrom: purple
-colorTo: green
-sdk: gradio
-sdk_version: 3.0.5
-app_file: app.py
-tags:
-- stylegan2
-license: cc0-1.0
-models:
-- "CorvaeOboro/gen_ability_icon"
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/exceptions.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/exceptions.py
deleted file mode 100644
index c1692f396127a4cb5ffa38568be70ad67192fd59..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/exceptions.py
+++ /dev/null
@@ -1,49 +0,0 @@
-from typing import Any, Dict, Optional, Sequence, Type
-
-from pydantic import BaseModel, create_model
-from starlette.exceptions import HTTPException as StarletteHTTPException
-from starlette.exceptions import WebSocketException as WebSocketException # noqa: F401
-
-
-class HTTPException(StarletteHTTPException):
- def __init__(
- self,
- status_code: int,
- detail: Any = None,
- headers: Optional[Dict[str, str]] = None,
- ) -> None:
- super().__init__(status_code=status_code, detail=detail, headers=headers)
-
-
-RequestErrorModel: Type[BaseModel] = create_model("Request")
-WebSocketErrorModel: Type[BaseModel] = create_model("WebSocket")
-
-
-class FastAPIError(RuntimeError):
- """
- A generic, FastAPI-specific error.
- """
-
-
-class ValidationException(Exception):
- def __init__(self, errors: Sequence[Any]) -> None:
- self._errors = errors
-
- def errors(self) -> Sequence[Any]:
- return self._errors
-
-
-class RequestValidationError(ValidationException):
- def __init__(self, errors: Sequence[Any], *, body: Any = None) -> None:
- super().__init__(errors)
- self.body = body
-
-
-class WebSocketRequestValidationError(ValidationException):
- pass
-
-
-class ResponseValidationError(ValidationException):
- def __init__(self, errors: Sequence[Any], *, body: Any = None) -> None:
- super().__init__(errors)
- self.body = body
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-322e8a8e.css b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-322e8a8e.css
deleted file mode 100644
index aa7186b19dcf31452295d0d5d4dbb3b5aadb3dea..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-322e8a8e.css
+++ /dev/null
@@ -1 +0,0 @@
-.gallery.svelte-1ayixqk,.gallery.svelte-1viwdyg{padding:var(--size-1) var(--size-2)}div.svelte-1viwdyg{overflow:hidden;min-width:var(--local-text-width);white-space:nowrap}video.svelte-1tntsc1{flex:none;border:2px solid var(--border-color-primary);border-radius:var(--radius-lg);max-width:none}video.svelte-1tntsc1:hover,video.selected.svelte-1tntsc1{border-color:var(--border-color-accent)}.table.svelte-1tntsc1{margin:0 auto;width:var(--size-20);height:var(--size-20);object-fit:cover}.gallery.svelte-1tntsc1{max-height:var(--size-20);object-fit:cover}div.svelte-rgtszb{overflow:hidden;text-overflow:ellipsis;white-space:nowrap}.gallery.svelte-rgtszb{display:flex;align-items:center;cursor:pointer;padding:var(--size-1) var(--size-2);text-align:left}table.svelte-1cib1xd.svelte-1cib1xd{position:relative}td.svelte-1cib1xd.svelte-1cib1xd{border:1px solid var(--table-border-color);padding:var(--size-2);font-size:var(--text-sm);font-family:var(--font-mono)}.selected.svelte-1cib1xd td.svelte-1cib1xd{border-color:var(--border-color-accent)}.table.svelte-1cib1xd.svelte-1cib1xd{display:inline-block;margin:0 auto}.gallery.svelte-1cib1xd td.svelte-1cib1xd:first-child{border-left:none}.gallery.svelte-1cib1xd tr:first-child td.svelte-1cib1xd{border-top:none}.gallery.svelte-1cib1xd td.svelte-1cib1xd:last-child{border-right:none}.gallery.svelte-1cib1xd tr:last-child td.svelte-1cib1xd{border-bottom:none}.overlay.svelte-1cib1xd.svelte-1cib1xd{--gradient-to:transparent;position:absolute;bottom:0;background:linear-gradient(to bottom,transparent,var(--gradient-to));width:var(--size-full);height:50%}.odd.svelte-1cib1xd.svelte-1cib1xd{--gradient-to:var(--table-even-background-fill)}.even.svelte-1cib1xd.svelte-1cib1xd{--gradient-to:var(--table-odd-background-fill)}.button.svelte-1cib1xd.svelte-1cib1xd{--gradient-to:var(--background-fill-primary)}div.svelte-h6ogpl{width:var(--size-10);height:var(--size-10)}.table.svelte-h6ogpl{margin:0 auto}.gallery.svelte-1ayixqk{padding:var(--size-1) var(--size-2)}.gallery.svelte-zvfedn{padding:var(--size-2)}pre.svelte-agpzo2{text-align:left}.gallery.svelte-agpzo2{padding:var(--size-1) var(--size-2)}.wrap.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno{display:inline-block;width:var(--size-full);max-width:var(--size-full);color:var(--body-text-color)}.hide.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno{display:none}.label.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno{display:flex;align-items:center;margin-bottom:var(--size-2);color:var(--block-label-text-color);font-weight:var(--block-label-text-weight);font-size:var(--block-label-text-size);line-height:var(--line-sm)}svg.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno{margin-right:var(--size-1)}.gallery.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno{display:flex;flex-wrap:wrap;gap:var(--spacing-lg)}.gallery-item.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno{border:1px solid var(--border-color-primary);border-radius:var(--button-large-radius);overflow:hidden}.gallery-item.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno:hover{border-color:var(--border-color-accent);background:var(--table-row-focus)}.table-wrap.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno{border:1px solid var(--border-color-primary);border-radius:var(--table-radius);width:var(--size-full);table-layout:auto;overflow-x:auto;line-height:var(--line-sm)}table.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno{width:var(--size-full)}.tr-head.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno{box-shadow:var(--shadow-drop-lg);border-bottom:1px solid 
var(--border-color-primary)}.tr-head.svelte-13hsdno>.svelte-13hsdno+.svelte-13hsdno{border-right-width:0px;border-left-width:1px;border-color:var(--border-color-primary)}th.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno{padding:var(--size-2);white-space:nowrap}.tr-body.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno{cursor:pointer;border-bottom:1px solid var(--border-color-primary);background:var(--table-even-background-fill)}.tr-body.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno:last-child{border:none}.tr-body.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno:nth-child(odd){background:var(--table-odd-background-fill)}.tr-body.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno:hover{background:var(--table-row-focus)}.tr-body.svelte-13hsdno>.svelte-13hsdno+.svelte-13hsdno{border-right-width:0px;border-left-width:1px;border-color:var(--border-color-primary)}.tr-body.svelte-13hsdno:hover>.svelte-13hsdno+.svelte-13hsdno{border-color:var(--border-color-accent)}td.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno{padding:var(--size-2);text-align:center}.paginate.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno{display:flex;justify-content:center;align-items:center;gap:var(--spacing-sm);margin-top:var(--size-2);color:var(--block-label-text-color);font-size:var(--text-sm)}button.current-page.svelte-13hsdno.svelte-13hsdno.svelte-13hsdno{font-weight:var(--weight-bold)}
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-aa3a045c.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-aa3a045c.js
deleted file mode 100644
index fe368a00237d1e796f58aab8a8d5dbae14270cdf..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-aa3a045c.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{S as m,e as u,s as r,k as d,o as b,z as c,v as f,x as g,a9 as v,ab as k,ac as B,ad as h}from"./index-1d65707a.js";import{B as p}from"./Button-f155035a.js";function C(a){let t;const l=a[3].default,e=v(l,a,a[4],null);return{c(){e&&e.c()},m(s,n){e&&e.m(s,n),t=!0},p(s,n){e&&e.p&&(!t||n&16)&&k(e,l,s,s[4],t?h(l,s[4],n,null):B(s[4]),null)},i(s){t||(c(e,s),t=!0)},o(s){f(e,s),t=!1},d(s){e&&e.d(s)}}}function S(a){let t,l;return t=new p({props:{elem_id:a[0],elem_classes:a[1],visible:a[2],explicit_call:!0,$$slots:{default:[C]},$$scope:{ctx:a}}}),{c(){d(t.$$.fragment)},m(e,s){b(t,e,s),l=!0},p(e,[s]){const n={};s&1&&(n.elem_id=e[0]),s&2&&(n.elem_classes=e[1]),s&4&&(n.visible=e[2]),s&16&&(n.$$scope={dirty:s,ctx:e}),t.$set(n)},i(e){l||(c(t.$$.fragment,e),l=!0)},o(e){f(t.$$.fragment,e),l=!1},d(e){g(t,e)}}}function q(a,t,l){let{$$slots:e={},$$scope:s}=t,{elem_id:n}=t,{elem_classes:i}=t,{visible:_=!0}=t;return a.$$set=o=>{"elem_id"in o&&l(0,n=o.elem_id),"elem_classes"in o&&l(1,i=o.elem_classes),"visible"in o&&l(2,_=o.visible),"$$scope"in o&&l(4,s=o.$$scope)},[n,i,_,e,s]}class w extends m{constructor(t){super(),u(this,t,q,S,r,{elem_id:0,elem_classes:1,visible:2})}}const A=w,D=["static"];export{A as Component,D as modes};
-//# sourceMappingURL=index-aa3a045c.js.map
diff --git a/spaces/Dagfinn1962/prodia2/theme_dropdown.py b/spaces/Dagfinn1962/prodia2/theme_dropdown.py
deleted file mode 100644
index 6235388fd00549553df44028f3ccf03e946994ea..0000000000000000000000000000000000000000
--- a/spaces/Dagfinn1962/prodia2/theme_dropdown.py
+++ /dev/null
@@ -1,57 +0,0 @@
-import os
-import pathlib
-
-from gradio.themes.utils import ThemeAsset
-
-
-def create_theme_dropdown():
- import gradio as gr
-
- asset_path = pathlib.Path(__file__).parent / "themes"
- themes = []
- for theme_asset in os.listdir(str(asset_path)):
- themes.append(
- (ThemeAsset(theme_asset), gr.Theme.load(str(asset_path / theme_asset)))
- )
-
- def make_else_if(theme_asset):
- return f"""
- else if (theme == '{str(theme_asset[0].version)}') {{
- var theme_css = `{theme_asset[1]._get_theme_css()}`
- }}"""
-
- head, tail = themes[0], themes[1:]
- if_statement = f"""
- if (theme == "{str(head[0].version)}") {{
- var theme_css = `{head[1]._get_theme_css()}`
- }} {" ".join(make_else_if(t) for t in tail)}
- """
-
- latest_to_oldest = sorted([t[0] for t in themes], key=lambda asset: asset.version)[
- ::-1
- ]
- latest_to_oldest = [str(t.version) for t in latest_to_oldest]
-
- component = gr.Dropdown(
- choices=latest_to_oldest,
- value=latest_to_oldest[0],
- render=False,
- label="Select Version",
- ).style(container=False)
-
- return (
- component,
- f"""
- (theme) => {{
- if (!document.querySelector('.theme-css')) {{
- var theme_elem = document.createElement('style');
- theme_elem.classList.add('theme-css');
- document.head.appendChild(theme_elem);
- }} else {{
- var theme_elem = document.querySelector('.theme-css');
- }}
- {if_statement}
- theme_elem.innerHTML = theme_css;
- }}
- """,
- )
diff --git a/spaces/DaleChen/AutoGPT/autogpt/memory/milvus.py b/spaces/DaleChen/AutoGPT/autogpt/memory/milvus.py
deleted file mode 100644
index 44aa72b956224fa4c2a16d5f40b0eaeb35e98581..0000000000000000000000000000000000000000
--- a/spaces/DaleChen/AutoGPT/autogpt/memory/milvus.py
+++ /dev/null
@@ -1,115 +0,0 @@
-""" Milvus memory storage provider."""
-from pymilvus import Collection, CollectionSchema, DataType, FieldSchema, connections
-
-from autogpt.memory.base import MemoryProviderSingleton, get_ada_embedding
-
-
-class MilvusMemory(MemoryProviderSingleton):
- """Milvus memory storage provider."""
-
- def __init__(self, cfg) -> None:
- """Construct a milvus memory storage connection.
-
- Args:
- cfg (Config): Auto-GPT global config.
- """
- # connect to milvus server.
- connections.connect(address=cfg.milvus_addr)
- fields = [
- FieldSchema(name="pk", dtype=DataType.INT64, is_primary=True, auto_id=True),
- FieldSchema(name="embeddings", dtype=DataType.FLOAT_VECTOR, dim=1536),
- FieldSchema(name="raw_text", dtype=DataType.VARCHAR, max_length=65535),
- ]
-
- # create collection if not exist and load it.
- self.milvus_collection = cfg.milvus_collection
- self.schema = CollectionSchema(fields, "auto-gpt memory storage")
- self.collection = Collection(self.milvus_collection, self.schema)
- # create index if not exist.
- if not self.collection.has_index():
- self.collection.release()
- self.collection.create_index(
- "embeddings",
- {
- "metric_type": "IP",
- "index_type": "HNSW",
- "params": {"M": 8, "efConstruction": 64},
- },
- index_name="embeddings",
- )
- self.collection.load()
-
- def add(self, data) -> str:
- """Add an embedding of data into memory.
-
- Args:
- data (str): The raw text to construct embedding index.
-
- Returns:
- str: log.
- """
- embedding = get_ada_embedding(data)
- result = self.collection.insert([[embedding], [data]])
- _text = (
- "Inserting data into memory at primary key: "
- f"{result.primary_keys[0]}:\n data: {data}"
- )
- return _text
-
- def get(self, data):
- """Return the most relevant data in memory.
- Args:
- data: The data to compare to.
- """
- return self.get_relevant(data, 1)
-
- def clear(self) -> str:
- """Drop the index in memory.
-
- Returns:
- str: log.
- """
- self.collection.drop()
- self.collection = Collection(self.milvus_collection, self.schema)
- self.collection.create_index(
- "embeddings",
- {
- "metric_type": "IP",
- "index_type": "HNSW",
- "params": {"M": 8, "efConstruction": 64},
- },
- index_name="embeddings",
- )
- self.collection.load()
- return "Obliviated"
-
- def get_relevant(self, data: str, num_relevant: int = 5):
- """Return the top-k relevant data in memory.
- Args:
- data: The data to compare to.
- num_relevant (int, optional): The max number of relevant data.
- Defaults to 5.
-
- Returns:
- list: The top-k relevant data.
- """
- # search the embedding and return the most relevant text.
- embedding = get_ada_embedding(data)
- search_params = {
-            "metric_type": "IP",  # pymilvus expects the key "metric_type"
- "params": {"nprobe": 8},
- }
- result = self.collection.search(
- [embedding],
- "embeddings",
- search_params,
- num_relevant,
- output_fields=["raw_text"],
- )
- return [item.entity.value_of_field("raw_text") for item in result[0]]
-
- def get_stats(self) -> str:
- """
- Returns: The stats of the milvus cache.
- """
- return f"Entities num: {self.collection.num_entities}"
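-
-# Usage sketch (hypothetical Milvus address; the cfg attributes mirror those
-# read in __init__):
-#   import types
-#   cfg = types.SimpleNamespace(milvus_addr="localhost:19530", milvus_collection="autogpt")
-#   memory = MilvusMemory(cfg)
-#   memory.add("The quick brown fox")
-#   print(memory.get_relevant("fast fox", num_relevant=1))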
diff --git a/spaces/DemoLou/moe-tts/run.py b/spaces/DemoLou/moe-tts/run.py
deleted file mode 100644
index 5486d17a6dfc07b6cd3bfa53051046c60dd74fe2..0000000000000000000000000000000000000000
--- a/spaces/DemoLou/moe-tts/run.py
+++ /dev/null
@@ -1,9 +0,0 @@
-import gradio as gr
-
-def greet(name):
- return "ping-moe " + name + "!"
-
-demo = gr.Interface(fn=greet, inputs="text", outputs="text")
-
-if __name__ == "__main__":
- demo.launch()
\ No newline at end of file
diff --git a/spaces/Dewa/Text-Summurisation/Makefile b/spaces/Dewa/Text-Summurisation/Makefile
deleted file mode 100644
index be10e92af7540f2ff6a59eae6f538dcc61c87a8f..0000000000000000000000000000000000000000
--- a/spaces/Dewa/Text-Summurisation/Makefile
+++ /dev/null
@@ -1,12 +0,0 @@
-install:
- pip install --upgrade pip &&\
- pip install -r requirements.txt
-test:
- python -m pytest -vvv --cov=hello --cov=greeting \
-	--cov=smath --cov=web tests
- python -m pytest --nbval notebook.ipynb #test jupyter notebooks
- # python -m pytest -v tests/test_web.py # if just want to test the web
-debug:
- python -m pytest -vv --pdb #Debugger is invoked
-one-test:
- python -m pytest -vv tests/test_greeting.py::test_my_name4
\ No newline at end of file
diff --git a/spaces/DinoPiteko/youtube-whisper-04/README.md b/spaces/DinoPiteko/youtube-whisper-04/README.md
deleted file mode 100644
index b6d10c84225d77f4369627a7ed9ac55bb293571d..0000000000000000000000000000000000000000
--- a/spaces/DinoPiteko/youtube-whisper-04/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Youtube Whisper
-emoji: ⚡
-colorFrom: green
-colorTo: red
-sdk: gradio
-sdk_version: 3.16.2
-app_file: app.py
-pinned: false
-license: unknown
-duplicated_from: kazuk/youtube-whisper-04
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/utils/visualizer.py b/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/utils/visualizer.py
deleted file mode 100644
index 8c4a1fba06bf6bc680aa59bf645f796283f6f1c6..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/utils/visualizer.py
+++ /dev/null
@@ -1,605 +0,0 @@
-# python 3.7
-"""Utility functions for visualizing results on html page."""
-
-import base64
-import os.path
-import cv2
-import numpy as np
-
-__all__ = [
- 'get_grid_shape', 'get_blank_image', 'load_image', 'save_image',
- 'resize_image', 'add_text_to_image', 'fuse_images', 'HtmlPageVisualizer',
- 'VideoReader', 'VideoWriter', 'adjust_pixel_range'
-]
-
-
-def adjust_pixel_range(images, min_val=-1.0, max_val=1.0, channel_order='NCHW'):
- """Adjusts the pixel range of the input images.
-
- This function assumes the input array (image batch) is with shape [batch_size,
- channel, height, width] if `channel_order = NCHW`, or with shape [batch_size,
- height, width] if `channel_order = NHWC`. The returned images are with shape
- [batch_size, height, width, channel] and pixel range [0, 255].
-
- NOTE: The channel order of output images will remain the same as the input.
-
- Args:
- images: Input images to adjust pixel range.
- min_val: Min value of the input images. (default: -1.0)
- max_val: Max value of the input images. (default: 1.0)
- channel_order: Channel order of the input array. (default: NCHW)
-
- Returns:
- The postprocessed images with dtype `numpy.uint8` and range [0, 255].
-
- Raises:
- ValueError: If the input `images` are not with type `numpy.ndarray` or the
- shape is invalid according to `channel_order`.
- """
- if not isinstance(images, np.ndarray):
- raise ValueError(f'Images should be with type `numpy.ndarray`!')
-
- channel_order = channel_order.upper()
- if channel_order not in ['NCHW', 'NHWC']:
- raise ValueError(f'Invalid channel order `{channel_order}`!')
-
- if images.ndim != 4:
- raise ValueError(f'Input images are expected to be with shape `NCHW` or '
- f'`NHWC`, but `{images.shape}` is received!')
- if channel_order == 'NCHW' and images.shape[1] not in [1, 3]:
- raise ValueError(f'Input images should have 1 or 3 channels under `NCHW` '
- f'channel order!')
- if channel_order == 'NHWC' and images.shape[3] not in [1, 3]:
- raise ValueError(f'Input images should have 1 or 3 channels under `NHWC` '
- f'channel order!')
-
- images = images.astype(np.float32)
- images = (images - min_val) * 255 / (max_val - min_val)
- images = np.clip(images + 0.5, 0, 255).astype(np.uint8)
- if channel_order == 'NCHW':
- images = images.transpose(0, 2, 3, 1)
-
- return images
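-
-# Minimal check (assumed values): a [-1, 1] float batch in NCHW order maps to
-# uint8 NHWC in [0, 255]:
-#   out = adjust_pixel_range(np.zeros((2, 3, 4, 4), dtype=np.float32))
-#   assert out.shape == (2, 4, 4, 3) and out.dtype == np.uint8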
-
-
-def get_grid_shape(size, row=0, col=0, is_portrait=False):
- """Gets the shape of a grid based on the size.
-
- This function makes greatest effort on making the output grid square if
- neither `row` nor `col` is set. If `is_portrait` is set as `False`, the height
- will always be equal to or smaller than the width. For example, if input
- `size = 16`, output shape will be `(4, 4)`; if input `size = 15`, output shape
- will be (3, 5). Otherwise, the height will always be equal to or larger than
- the width.
-
- Args:
-    size: Size (height * width) of the target grid.
-    row: Pre-specified number of rows, or 0 to infer it. (default: 0)
-    col: Pre-specified number of columns, or 0 to infer it. (default: 0)
-    is_portrait: Whether to return a portrait size or a landscape size.
-      (default: False)
-
- Returns:
- A two-element tuple, representing height and width respectively.
- """
- assert isinstance(size, int)
- assert isinstance(row, int)
- assert isinstance(col, int)
- if size == 0:
- return (0, 0)
-
- if row > 0 and col > 0 and row * col != size:
- row = 0
- col = 0
-
- if row > 0 and size % row == 0:
- return (row, size // row)
- if col > 0 and size % col == 0:
- return (size // col, col)
-
- row = int(np.sqrt(size))
- while row > 0:
- if size % row == 0:
- col = size // row
- break
- row = row - 1
-
- return (col, row) if is_portrait else (row, col)
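-
-# e.g. get_grid_shape(15) returns (3, 5), while get_grid_shape(15, is_portrait=True)
-# returns (5, 3); get_grid_shape(16, row=2) returns (2, 8).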
-
-
-def get_blank_image(height, width, channels=3, is_black=True):
-  """Gets a blank image, either white or black.
-
- NOTE: This function will always return an image with `RGB` channel order for
- color image and pixel range [0, 255].
-
- Args:
- height: Height of the returned image.
- width: Width of the returned image.
- channels: Number of channels. (default: 3)
- is_black: Whether to return a black image or white image. (default: True)
- """
- shape = (height, width, channels)
- if is_black:
- return np.zeros(shape, dtype=np.uint8)
- return np.ones(shape, dtype=np.uint8) * 255
-
-
-def load_image(path):
- """Loads an image from disk.
-
- NOTE: This function will always return an image with `RGB` channel order for
- color image and pixel range [0, 255].
-
- Args:
- path: Path to load the image from.
-
- Returns:
-    An image of type `numpy.ndarray`, or `None` if input `path` does not exist.
- """
- if not os.path.isfile(path):
- return None
-
- image = cv2.imread(path)
- return image[:, :, ::-1]
-
-
-def save_image(path, image):
- """Saves an image to disk.
-
- NOTE: The input image (if colorful) is assumed to be with `RGB` channel order
- and pixel range [0, 255].
-
- Args:
- path: Path to save the image to.
- image: Image to save.
- """
- if image is None:
- return
-
- assert len(image.shape) == 3 and image.shape[2] in [1, 3]
- cv2.imwrite(path, image[:, :, ::-1])
-
-
-def resize_image(image, *args, **kwargs):
- """Resizes image.
-
-  This is a wrapper around `cv2.resize()`.
-
-  NOTE: The channel order of the input image will not be changed.
-
- Args:
- image: Image to resize.
- """
- if image is None:
- return None
-
- assert image.ndim == 3 and image.shape[2] in [1, 3]
- image = cv2.resize(image, *args, **kwargs)
- if image.ndim == 2:
- return image[:, :, np.newaxis]
- return image
-
-
-def add_text_to_image(image,
- text='',
- position=None,
- font=cv2.FONT_HERSHEY_TRIPLEX,
- font_size=1.0,
- line_type=cv2.LINE_8,
- line_width=1,
- color=(255, 255, 255)):
- """Overlays text on given image.
-
- NOTE: The input image is assumed to be with `RGB` channel order.
-
- Args:
- image: The image to overlay text on.
- text: Text content to overlay on the image. (default: '')
- position: Target position (bottom-left corner) to add text. If not set,
- center of the image will be used by default. (default: None)
- font: Font of the text added. (default: cv2.FONT_HERSHEY_TRIPLEX)
- font_size: Font size of the text added. (default: 1.0)
- line_type: Line type used to depict the text. (default: cv2.LINE_8)
- line_width: Line width used to depict the text. (default: 1)
- color: Color of the text added in `RGB` channel order. (default:
- (255, 255, 255))
-
- Returns:
- An image with target text overlayed on.
- """
- if image is None or not text:
- return image
-
- cv2.putText(img=image,
- text=text,
- org=position,
- fontFace=font,
- fontScale=font_size,
- color=color,
- thickness=line_width,
- lineType=line_type,
- bottomLeftOrigin=False)
-
- return image
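-
-# Usage sketch (illustrative values): annotate a frame in place; position is
-# the bottom-left corner of the text in pixels:
-#   add_text_to_image(frame, text='step 3', position=(10, 30))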
-
-
-def fuse_images(images,
- image_size=None,
- row=0,
- col=0,
- is_row_major=True,
- is_portrait=False,
- row_spacing=0,
- col_spacing=0,
- border_left=0,
- border_right=0,
- border_top=0,
- border_bottom=0,
- black_background=True):
- """Fuses a collection of images into an entire image.
-
- Args:
- images: A collection of images to fuse. Should be with shape [num, height,
- width, channels].
- image_size: Int or two-element tuple. This field is used to resize the image
- before fusing. `None` disables resizing. (default: None)
-    row: Number of rows used for image fusion. If not set, this field will be
-      automatically assigned based on `col` and total number of images.
-      (default: 0)
-    col: Number of columns used for image fusion. If not set, this field will be
-      automatically assigned based on `row` and total number of images.
-      (default: 0)
- is_row_major: Whether the input images should be arranged row-major or
- column-major. (default: True)
- is_portrait: Only active when both `row` and `col` should be assigned
- automatically. (default: False)
- row_spacing: Space between rows. (default: 0)
- col_spacing: Space between columns. (default: 0)
- border_left: Width of left border. (default: 0)
- border_right: Width of right border. (default: 0)
- border_top: Width of top border. (default: 0)
-    border_bottom: Width of bottom border. (default: 0)
-    black_background: Whether to pad with a black background instead of a white
-      one. (default: True)
-
- Returns:
- The fused image.
-
- Raises:
-    ValueError: If the input `images` is not with shape [num, height, width,
-      channels].
- """
- if images is None:
- return images
-
- if not images.ndim == 4:
- raise ValueError(f'Input `images` should be with shape [num, height, '
- f'width, channels], but {images.shape} is received!')
-
- num, image_height, image_width, channels = images.shape
- if image_size is not None:
- if isinstance(image_size, int):
- image_size = (image_size, image_size)
- assert isinstance(image_size, (list, tuple)) and len(image_size) == 2
- width, height = image_size
- else:
- height, width = image_height, image_width
- row, col = get_grid_shape(num, row=row, col=col, is_portrait=is_portrait)
- fused_height = (
- height * row + row_spacing * (row - 1) + border_top + border_bottom)
- fused_width = (
- width * col + col_spacing * (col - 1) + border_left + border_right)
- fused_image = get_blank_image(
- fused_height, fused_width, channels=channels, is_black=black_background)
- images = images.reshape(row, col, image_height, image_width, channels)
- if not is_row_major:
- images = images.transpose(1, 0, 2, 3, 4)
-
- for i in range(row):
- y = border_top + i * (height + row_spacing)
- for j in range(col):
- x = border_left + j * (width + col_spacing)
- if image_size is not None:
- image = cv2.resize(images[i, j], image_size)
- else:
- image = images[i, j]
- fused_image[y:y + height, x:x + width] = image
-
- return fused_image
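-
-# e.g. fusing a uint8 batch of shape [6, 64, 64, 3] with row=2, col=3 yields a
-# single 128x192x3 grid image (with no spacing or borders).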
-
-
-def get_sortable_html_header(column_name_list, sort_by_ascending=False):
- """Gets header for sortable html page.
-
- Basically, the html page contains a sortable table, where user can sort the
- rows by a particular column by clicking the column head.
-
- Example:
-
- column_name_list = [name_1, name_2, name_3]
- header = get_sortable_html_header(column_name_list)
- footer = get_sortable_html_footer()
- sortable_table = ...
- html_page = header + sortable_table + footer
-
- Args:
- column_name_list: List of column header names.
- sort_by_ascending: Default sorting order. If set as `True`, the html page
- will be sorted by ascending order when the header is clicked for the first
- time.
-
- Returns:
- A string, which represents for the header for a sortable html page.
- """
- header = '\n'.join([
- '',
- '',
- '',
- '',
- '',
- '',
- '',
- '',
- '',
- '',
-      '',
-      '',
-      '',
-      ''])
- for idx, column_name in enumerate(column_name_list):
-    header += f'{column_name}\n'
-  header += '\n'
- header += '\n'
- header += '\n'
-
- return header
-
-
-def get_sortable_html_footer():
- """Gets footer for sortable html page.
-
- Check function `get_sortable_html_header()` for more details.
- """
-  return '</tbody>\n</table>\n\n</body>\n</html>\n'
-
-
-def encode_image_to_html_str(image, image_size=None):
- """Encodes an image to html language.
-
- Args:
- image: The input image to encode. Should be with `RGB` channel order.
- image_size: Int or two-element tuple. This field is used to resize the image
- before encoding. `None` disables resizing. (default: None)
-
- Returns:
- A string which represents the encoded image.
- """
- if image is None:
- return ''
-
- assert len(image.shape) == 3 and image.shape[2] in [1, 3]
-
- # Change channel order to `BGR`, which is opencv-friendly.
- image = image[:, :, ::-1]
-
- # Resize the image if needed.
- if image_size is not None:
- if isinstance(image_size, int):
- image_size = (image_size, image_size)
- assert isinstance(image_size, (list, tuple)) and len(image_size) == 2
- image = cv2.resize(image, image_size)
-
- # Encode the image to html-format string.
- encoded_image = cv2.imencode(".jpg", image)[1].tostring()
- encoded_image_base64 = base64.b64encode(encoded_image).decode('utf-8')
-  html_str = f'<img src="data:image/jpeg;base64, {encoded_image_base64}"/>'
-
- return html_str
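-
-# e.g. cell_html = encode_image_to_html_str(image, image_size=128) returns the
-# image as a base64-encoded jpeg <img> tag, resized to 128x128.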
-
-
-class HtmlPageVisualizer(object):
- """Defines the html page visualizer.
-
- This class can be used to visualize image results as html page. Basically, it
- is based on an html-format sorted table with helper functions
- `get_sortable_html_header()`, `get_sortable_html_footer()`, and
- `encode_image_to_html_str()`. To simplify the usage, specifying the following
- fields is enough to create a visualization page:
-
- (1) num_rows: Number of rows of the table (header-row exclusive).
- (2) num_cols: Number of columns of the table.
- (3) header contents (optional): Title of each column.
-
- NOTE: `grid_size` can be used to assign `num_rows` and `num_cols`
- automatically.
-
- Example:
-
- html = HtmlPageVisualizer(num_rows, num_cols)
- html.set_headers([...])
- for i in range(num_rows):
- for j in range(num_cols):
- html.set_cell(i, j, text=..., image=...)
- html.save('visualize.html')
- """
-
- def __init__(self,
- num_rows=0,
- num_cols=0,
- grid_size=0,
- is_portrait=False,
- viz_size=None):
- if grid_size > 0:
- num_rows, num_cols = get_grid_shape(
- grid_size, row=num_rows, col=num_cols, is_portrait=is_portrait)
- assert num_rows > 0 and num_cols > 0
-
- self.num_rows = num_rows
- self.num_cols = num_cols
- self.viz_size = viz_size
- self.headers = ['' for _ in range(self.num_cols)]
- self.cells = [[{
- 'text': '',
- 'image': '',
- } for _ in range(self.num_cols)] for _ in range(self.num_rows)]
-
- def set_header(self, column_idx, content):
- """Sets the content of a particular header by column index."""
- self.headers[column_idx] = content
-
- def set_headers(self, contents):
- """Sets the contents of all headers."""
- if isinstance(contents, str):
- contents = [contents]
- assert isinstance(contents, (list, tuple))
- assert len(contents) == self.num_cols
- for column_idx, content in enumerate(contents):
- self.set_header(column_idx, content)
-
- def set_cell(self, row_idx, column_idx, text='', image=None):
- """Sets the content of a particular cell.
-
- Basically, a cell contains some text as well as an image. Both text and
- image can be empty.
-
- Args:
- row_idx: Row index of the cell to edit.
- column_idx: Column index of the cell to edit.
- text: Text to add into the target cell.
- image: Image to show in the target cell. Should be with `RGB` channel
- order.
- """
- self.cells[row_idx][column_idx]['text'] = text
- self.cells[row_idx][column_idx]['image'] = encode_image_to_html_str(
- image, self.viz_size)
-
- def save(self, save_path):
- """Saves the html page."""
- html = ''
- for i in range(self.num_rows):
-      html += f'<tr>\n'
-      for j in range(self.num_cols):
-        text = self.cells[i][j]['text']
-        image = self.cells[i][j]['image']
-        if text:
-          html += f'<td>{text}<br><br>{image}</td>\n'
-        else:
-          html += f'<td>{image}</td>\n'
-      html += f'</tr>\n'
-
- header = get_sortable_html_header(self.headers)
- footer = get_sortable_html_footer()
-
- with open(save_path, 'w') as f:
- f.write(header + html + footer)
-
-
-class VideoReader(object):
- """Defines the video reader.
-
- This class can be used to read frames from a given video.
- """
-
- def __init__(self, path):
- """Initializes the video reader by loading the video from disk."""
- if not os.path.isfile(path):
- raise ValueError(f'Video `{path}` does not exist!')
-
- self.path = path
- self.video = cv2.VideoCapture(path)
- assert self.video.isOpened()
- self.position = 0
-
- self.length = int(self.video.get(cv2.CAP_PROP_FRAME_COUNT))
- self.frame_height = int(self.video.get(cv2.CAP_PROP_FRAME_HEIGHT))
- self.frame_width = int(self.video.get(cv2.CAP_PROP_FRAME_WIDTH))
- self.fps = self.video.get(cv2.CAP_PROP_FPS)
-
- def __del__(self):
- """Releases the opened video."""
- self.video.release()
-
- def read(self, position=None):
- """Reads a certain frame.
-
- NOTE: The returned frame is assumed to be with `RGB` channel order.
-
- Args:
- position: Optional. If set, the reader will read the frame at that exact
- position. Otherwise, the reader reads the next frame. (default: None)
- """
- if position is not None and position < self.length:
- self.video.set(cv2.CAP_PROP_POS_FRAMES, position)
- self.position = position
-
- success, frame = self.video.read()
- self.position = self.position + 1
-
- return frame[:, :, ::-1] if success else None
-
-
-class VideoWriter(object):
- """Defines the video writer.
-
- This class can be used to create a video.
-
- NOTE: `.avi` with the `DIVX` codec is the recommended combination, since it
- does not rely on extra dependencies.
- """
-
- def __init__(self, path, frame_height, frame_width, fps=24, codec='DIVX'):
- """Creates the video writer."""
- self.path = path
- self.frame_height = frame_height
- self.frame_width = frame_width
- self.fps = fps
- self.codec = codec
-
- self.video = cv2.VideoWriter(filename=path,
- fourcc=cv2.VideoWriter_fourcc(*codec),
- fps=fps,
- frameSize=(frame_width, frame_height))
-
- def __del__(self):
- """Releases the opened video."""
- self.video.release()
-
- def write(self, frame):
- """Writes a target frame.
-
- NOTE: The input frame is assumed to be with `RGB` channel order.
- """
- self.video.write(frame[:, :, ::-1])
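Since both classes expose RGB frames at the Python boundary, they compose directly. A round-trip sketch with hypothetical paths:

```python
reader = VideoReader('input.mp4')
writer = VideoWriter('output.avi',
                     frame_height=reader.frame_height,
                     frame_width=reader.frame_width,
                     fps=reader.fps)
for _ in range(min(100, reader.length)):   # copy at most the first 100 frames
    frame = reader.read()                  # RGB frame, or None once the stream ends
    if frame is None:
        break
    writer.write(frame)                    # converted back to BGR internally
```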
diff --git a/spaces/DragGan/DragGan/stylegan_human/interpolation.py b/spaces/DragGan/DragGan/stylegan_human/interpolation.py
deleted file mode 100644
index f95050479c5e085b0793f84fdcc09a8678af1379..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan/stylegan_human/interpolation.py
+++ /dev/null
@@ -1,146 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-## interpolate between two z code
-## score all middle latent code
-# https://www.aiuai.cn/aifarm1929.html
-
-import os
-import re
-import random
-from typing import List, Optional
-
-from tqdm import tqdm
-import click
-import dnnlib
-import numpy as np
-import PIL.Image
-import torch
-import legacy
-
-
-def lerp(code1, code2, alpha):
- return code1 * alpha + code2 * (1 - alpha)
-
-# Taken and adapted from wikipedia's slerp article
-# https://en.wikipedia.org/wiki/Slerp
-def slerp(code1, code2, alpha, DOT_THRESHOLD=0.9995): # Spherical linear interpolation
- code1_copy = np.copy(code1)
- code2_copy = np.copy(code2)
-
- code1 = code1 / np.linalg.norm(code1)
- code2 = code2 / np.linalg.norm(code2)
- dot = np.sum(code1 * code2)
- if np.abs(dot) > DOT_THRESHOLD:
-     # Nearly collinear codes: fall back to lerp. Arguments are swapped so
-     # that alpha=0 yields code1 and alpha=1 yields code2, matching the
-     # spherical branch below.
-     return lerp(code2_copy, code1_copy, alpha)
-
- # Calculate initial angle between v0 and v1
- theta_0 = np.arccos(dot)
- sin_theta_0 = np.sin(theta_0)
- # Angle at timestep t
- theta_t = theta_0 * alpha
- sin_theta_t = np.sin(theta_t)
-
- s0 = np.sin(theta_0 - theta_t) / sin_theta_0
- s1 = sin_theta_t / sin_theta_0
- code3 = s0 * code1_copy + s1 * code2_copy
- return code3
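A quick way to see how the two interpolants differ (a sketch; random vectors stand in for latent codes): the spherical midpoint keeps roughly the endpoints' norm, while the straight-line midpoint shrinks toward the origin.

```python
import numpy as np

rng = np.random.RandomState(0)
code1, code2 = rng.randn(512), rng.randn(512)     # stand-ins for z codes
print(np.linalg.norm(lerp(code1, code2, 0.5)))    # ~ endpoint norm / sqrt(2) for near-orthogonal codes
print(np.linalg.norm(slerp(code1, code2, 0.5)))   # ~ endpoint norm
```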
-
-def generate_image_from_z(G, z, noise_mode, truncation_psi, device):
- label = torch.zeros([1, G.c_dim], device=device)
- w = G.mapping(z, label,truncation_psi=truncation_psi)
- img = G.synthesis(w, noise_mode=noise_mode,force_fp32 = True)
- img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
- img = PIL.Image.fromarray(img[0].cpu().numpy(), 'RGB')
- return img
-
-
-def get_concat_h(im1, im2):
- dst = PIL.Image.new('RGB', (im1.width + im2.width, im1.height))
- dst.paste(im1, (0, 0))
- dst.paste(im2, (im1.width, 0))
- return dst
-
-
-def make_latent_interp_animation(G, code1, code2, img1, img2, num_interps, noise_mode, save_mid_image, truncation_psi,device, outdir,fps):
- step_size = 1.0/num_interps
-
- all_imgs = []
- amounts = np.arange(0, 1, step_size)
- for seed_idx, alpha in enumerate(tqdm(amounts)):
- interpolated_latent_code = lerp(code1, code2, alpha)
- image = generate_image_from_z(G,interpolated_latent_code, noise_mode, truncation_psi, device)
- interp_latent_image = image.resize((512, 1024))
- if not os.path.exists(os.path.join(outdir,'img')): os.makedirs(os.path.join(outdir,'img'), exist_ok=True)
- if save_mid_image:
- interp_latent_image.save(f'{outdir}/img/seed{seed_idx:04d}.png')
-
- frame = get_concat_h(img2, interp_latent_image)
- frame = get_concat_h(frame, img1)
- all_imgs.append(frame)
-
- save_name = os.path.join(outdir,'latent_space_traversal.gif')
- all_imgs[0].save(save_name, save_all=True, append_images=all_imgs[1:], duration=1000/fps, loop=0)
-
-
-"""
-Create interpolated images between two given seeds using pretrained network pickle.
-
-Examples:
-
-\b
-python interpolation.py --network=pretrained_models/stylegan_human_v2_1024.pkl --seeds=85,100 --outdir=outputs/inter_gifs
-
-"""
-
-@click.command()
-@click.pass_context
-@click.option('--network', 'network_pkl', help='Network pickle filename', required=True)
-@click.option('--seeds', type=legacy.num_range, help='List of 2 random seeds, e.g. 1,2')
-@click.option('--trunc', 'truncation_psi', type=float, help='Truncation psi', default=0.8, show_default=True)
-@click.option('--noise-mode', 'noise_mode', help='Noise mode', type=click.Choice(['const', 'random', 'none']), default='const', show_default=True)
-@click.option('--outdir', default= 'outputs/inter_gifs', help='Where to save the output images', type=str, required=True, metavar='DIR')
-@click.option('--save_mid_image', default=True, type=bool, help='select True if you want to save all interpolated images')
-@click.option('--fps', default= 15, help='FPS for GIF', type=int)
-@click.option('--num_interps', default= 100, help='Number of interpolation images', type=int)
-def main(
- ctx: click.Context,
- network_pkl: str,
- seeds: Optional[List[int]],
- truncation_psi: float,
- noise_mode: str,
- outdir: str,
- save_mid_image: bool,
- fps:int,
- num_interps:int
-):
-
-
- device = torch.device('cuda')
- with dnnlib.util.open_url(network_pkl) as f:
- G = legacy.load_network_pkl(f)['G_ema'].to(device) # type: ignore
-
- outdir = os.path.join(outdir)
- if not os.path.exists(outdir):
- os.makedirs(outdir,exist_ok=True)
- os.makedirs(os.path.join(outdir,'img'),exist_ok=True)
-
- if len(seeds) > 2:
-     print("Received more than two seeds; only the first two will be used.")
-     seeds = seeds[0:2]
- elif len(seeds) == 1:
-     print('Two seeds are required; randomly generating a second one.')
-     seeds = [seeds[0], random.randint(0, 10000)]
-
- z1 = torch.from_numpy(np.random.RandomState(seeds[0]).randn(1, G.z_dim)).to(device)
- z2 = torch.from_numpy(np.random.RandomState(seeds[1]).randn(1, G.z_dim)).to(device)
- img1 = generate_image_from_z(G, z1, noise_mode, truncation_psi, device)
- img2 = generate_image_from_z(G, z2, noise_mode, truncation_psi, device)
- img1.save(f'{outdir}/seed{seeds[0]:04d}.png')
- img2.save(f'{outdir}/seed{seeds[1]:04d}.png')
-
- make_latent_interp_animation(G, z1, z2, img1, img2, num_interps, noise_mode, save_mid_image, truncation_psi, device, outdir, fps)
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/Dusan/clickbaitonator/fudge/evaluate_clickbait.py b/spaces/Dusan/clickbaitonator/fudge/evaluate_clickbait.py
deleted file mode 100644
index 476955aba7ea6ade2c9eaca9fcd959d92b0ae948..0000000000000000000000000000000000000000
--- a/spaces/Dusan/clickbaitonator/fudge/evaluate_clickbait.py
+++ /dev/null
@@ -1,200 +0,0 @@
-import os
-import random
-import time
-import pickle
-import math
-from argparse import ArgumentParser
-
-from typing import Iterable, List, Optional, Tuple
-
-from tqdm import tqdm
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from transformers import AutoTokenizer, AutoModelWithLMHead
-from torch import Tensor
-
-from fudge.data import Dataset
-from fudge.model import Model
-from fudge.util import num_params
-from fudge.constants import *
-
-
-
-tokenizer = AutoTokenizer.from_pretrained('google/pegasus-xsum')
-classifier_tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-mpnet-base-v2')
-
-
-def main(args):
- with open(args.dataset_info, 'rb') as rf:
- dataset_info = pickle.load(rf)
-
- article_content = """Australian actor Guy Pearce will return for the iconic soap Neighbours finale on August 1 to reprise his role as Mike Young.
- Guy, 54, played the troubled Mike from 1986 to 1989, and is now set to make a comeback on the show after 33 years, Metro.co.uk reports.
- The star's character arcs explored the implications of domestic abuse, student-teacher relationships and dealing with loss of loved ones.
- Speaking to Metro.co.uk, Guy said: 'It is very exciting and surreal at the same time being back on set again, however it feels like coming home.
- 'It's where it all started for me professionally. I've been asked to come back on occasions over the years and wondered if it was the right thing
- to do, but once I knew the show was finishing, I knew I had to do it.'He added that there is 'nothing like being here all together again'
- , even though he's had a chance to catch-up with other cast members."""
-
- tokenizer.add_special_tokens({'pad_token': PAD_TOKEN})
- pad_id = tokenizer.encode(PAD_TOKEN)[0]
-
- #For loading Clickbait summarizer
- model = AutoModelWithLMHead.from_pretrained(args.model_string, return_dict=True).to(args.device)
-
- model.eval()
-
- checkpoint = torch.load(args.ckpt, map_location=args.device)
- model_args = checkpoint['args']
- conditioning_model = Model(model_args, pad_id, len(dataset_info.index2word)) # no need to get the glove embeddings when reloading since they're saved in model ckpt anyway
- conditioning_model.load_state_dict(checkpoint['state_dict'])
- conditioning_model = conditioning_model.to(args.device)
- conditioning_model.eval()
- print("=> loaded checkpoint '{}' (epoch {})"
- .format(args.ckpt, checkpoint['epoch']))
- print('num params', num_params(conditioning_model))
-
- while True:
- results = generate_clickbait(model,
- tokenizer,
- conditioning_model,
- [args.input_text],
- dataset_info,
- precondition_topk=args.precondition_topk,
- do_sample=args.do_sample,
- length_cutoff=args.length_cutoff,
- condition_lambda=args.condition_lambda,
- article_content=article_content,
- device=args.device)
- # print(results)
- import pdb; pdb.set_trace()
-
-
-def generate_clickbait(model,
- tokenizer,
- conditioning_model,
- input_text,
- dataset_info,
- precondition_topk,
- length_cutoff,
- do_sample=False, # accepted for compatibility with the --do_sample flag; decoding below always samples
- condition_lambda=1.0,
- article_content=None,
- device='cuda'):
- with torch.no_grad():
- batch_size = len(input_text)
- # encoded_input_article = [tokenizer.encode(article_content, return_tensors='pt',add_special_tokens=False).to(device)] # batch x seq
- encoded_input_article = tokenizer(article_content, return_tensors='pt',add_special_tokens=False, max_length=512).to(device) # batch x seq
- # encoded_input_article = torch.cat(encoded_input_article, dim=0)
- # attention_mask = encoded_input_article.new_ones(encoded_input_article.shape).to(device)
-
- # CHANGE=ko
- encoded_input = tokenizer('<pad>', return_tensors='pt',add_special_tokens=False).to(device) # batch x seq
- # encoded_input = tokenizer('<pad>' + input_text[0], return_tensors='pt',add_special_tokens=False).to(device) # batch x seq
- # encoded_input = torch.cat(encoded_input, dim=0)
- encoded_input = encoded_input['input_ids']
-
-
- lengths = torch.LongTensor([encoded_input.shape[1]]).to(device)
- # lengths = 1
-
- past = None
- use_cache = True
-
- # CHANGE
- # model_kwargs = {'encoder_outputs': model.get_encoder()(encoded_input_article, attention_mask=attention_mask)}
- # print(encoded_input_article)
- # print(encoded_input_article['input_ids'].shape, encoded_input_article['attention_mask'].shape)
- model_kwargs = {'encoder_outputs': model.get_encoder()(input_ids=encoded_input_article['input_ids'],
- attention_mask=encoded_input_article['attention_mask'],
- return_dict=True,
- output_attentions=False,
- output_hidden_states=False),
- }
-
- while lengths.max() < length_cutoff:
- model_inputs = model.prepare_inputs_for_generation(
- input_ids = encoded_input_article['input_ids'],
- decoder_input_ids=encoded_input,
- # past=past,
- attention_mask=encoded_input_article['attention_mask'],
- use_cache=use_cache,
- **model_kwargs
- )
-
- outputs = model(**model_inputs, return_dict=True)
- logits = outputs.logits[:, -1, :]
-
- if "past_key_values" in outputs:
- model_kwargs["past"] = outputs.past_key_values
-
- # logits = model(encoded_input)[0][:, -1, :] # batch x vocab
- top_logits, top_indices = logits.topk(precondition_topk, dim=1) # batch x topk
- new_input_candidates = torch.cat([encoded_input.unsqueeze(1).expand(-1, precondition_topk, -1), top_indices.unsqueeze(2)], dim=2) # batch x topk x seq+1
- expanded_lengths = (lengths + 1).unsqueeze(1).expand(batch_size, precondition_topk) # batch x topk
-
- if condition_lambda == 0:
- condition_logits = torch.zeros_like(top_logits).float()
- condition_logits = condition_logits.view(batch_size, precondition_topk, -1) # batch x topk x N
- else:
- decoded_outputs = tokenizer.batch_decode(new_input_candidates.view(-1, new_input_candidates.size(-1)), clean_up_tokenization_spaces=False)
- resulting_tokenization = classifier_tokenizer(decoded_outputs, add_special_tokens=False, padding='longest')
- encoded_with_classifier = resulting_tokenization['input_ids']
- attention_mask = torch.tensor(resulting_tokenization['attention_mask']).to(model.device)
- tplus1_candidates_classifier = torch.tensor(encoded_with_classifier).view(batch_size, precondition_topk, -1).to(model.device)
-
- condition_logits = conditioning_model(tplus1_candidates_classifier.flatten(0, 1), # batch*topk x seq+1
- expanded_lengths.flatten(0, 1), # batch*topk
- None,
- None,
- None,
- attention_mask=attention_mask
- )
- condition_logits = condition_logits.view(batch_size, precondition_topk, -1) # batch x topk x N
- condition_logits = condition_logits - torch.log(1 + torch.exp(condition_logits)) # get correct log probs
-
- condition_logits = torch.mean(condition_logits, dim=2)
- full_logits = top_logits + condition_logits * condition_lambda # batch x topk
- post_logits, post_indices = full_logits.topk(precondition_topk, dim=1)
- post_probs = F.softmax(post_logits, dim=1)
- # index_into_top_indices = post_indices[torch.arange(batch_size).to(post_indices.device), torch.multinomial(post_probs, 1).flatten()] # batch
- index_into_top_indices = post_indices[:, torch.multinomial(post_probs, 1).flatten()] # batch
-
- # next_indices = top_indices[torch.arange(batch_size).to(top_indices.device), index_into_top_indices] # batch
- next_indices = top_indices[:, index_into_top_indices] # batch
-
- # encoded_input = torch.cat([encoded_input, next_indices.unsqueeze(1)], dim=1) # batch x seq+1
- encoded_input = torch.cat([encoded_input, next_indices.squeeze(1)], dim=1)
- lengths = lengths + 1 # batch
-
-# print(tokenizer.decode(encoded_input[0], add_special_tokens=False))
- return [tokenizer.decode(s) for s in encoded_input]
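The loop above is the FUDGE recipe: truncate the generator's next-token distribution to the top k candidates, score each candidate with the conditioning classifier, and resample from the lambda-weighted combination. A standalone sketch of that single step, with random tensors standing in for both models:

```python
import torch
import torch.nn.functional as F

batch, vocab, k, cond_lambda = 2, 96, 8, 1.0
lm_logits = torch.randn(batch, vocab)               # stand-in for generator logits at step t
top_logits, top_indices = lm_logits.topk(k, dim=1)  # restrict to top-k candidate tokens
cond_scores = torch.randn(batch, k)                 # stand-in for classifier scores per candidate
cond_logprobs = F.logsigmoid(cond_scores)           # same as x - log(1 + exp(x)) used above
full_logits = top_logits + cond_lambda * cond_logprobs
choice = torch.multinomial(F.softmax(full_logits, dim=1), 1)  # sample within the top-k
next_token = top_indices.gather(1, choice)          # map back to vocabulary ids
```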
-
-
-if __name__=='__main__':
- parser = ArgumentParser()
-
- # DATA
- parser.add_argument('--ckpt', type=str, required=True)
- parser.add_argument('--dataset_info', type=str, required=True, help='saved dataset info')
- parser.add_argument('--model_string', type=str, default='Helsinki-NLP/opus-mt-es-en')
-
- parser.add_argument('--input_text', type=str, default=None, required=True, help='text to run pred on')
-
- parser.add_argument('--precondition_topk', type=int, default=200, help='consider top k outputs from text generation at each step before conditioning and re-pruning')
- parser.add_argument('--do_sample', action='store_true', default=False, help='sample instead of greedy')
- parser.add_argument('--condition_lambda', type=float, default=1.0, help='lambda weight on conditioning model')
- parser.add_argument('--length_cutoff', type=int, default=512, help='max length')
-
- parser.add_argument('--seed', type=int, default=1, help='random seed')
- parser.add_argument('--device', type=str, default='cuda', choices=['cpu', 'cuda'])
- parser.add_argument('--debug', action='store_true', default=False)
-
- args = parser.parse_args()
-
- random.seed(args.seed)
- np.random.seed(args.seed)
- torch.manual_seed(args.seed)
-
- main(args)
diff --git a/spaces/ECCV2022/bytetrack/tools/trt.py b/spaces/ECCV2022/bytetrack/tools/trt.py
deleted file mode 100644
index f4673e9b961cb051229fad92a32641af22e05dc9..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/bytetrack/tools/trt.py
+++ /dev/null
@@ -1,74 +0,0 @@
-from loguru import logger
-
-import tensorrt as trt
-import torch
-from torch2trt import torch2trt
-
-from yolox.exp import get_exp
-
-import argparse
-import os
-import shutil
-
-
-def make_parser():
- parser = argparse.ArgumentParser("YOLOX ncnn deploy")
- parser.add_argument("-expn", "--experiment-name", type=str, default=None)
- parser.add_argument("-n", "--name", type=str, default=None, help="model name")
-
- parser.add_argument(
- "-f",
- "--exp_file",
- default=None,
- type=str,
- help="pls input your expriment description file",
- )
- parser.add_argument("-c", "--ckpt", default=None, type=str, help="ckpt path")
- return parser
-
-
-@logger.catch
-def main():
- args = make_parser().parse_args()
- exp = get_exp(args.exp_file, args.name)
- if not args.experiment_name:
- args.experiment_name = exp.exp_name
-
- model = exp.get_model()
- file_name = os.path.join(exp.output_dir, args.experiment_name)
- os.makedirs(file_name, exist_ok=True)
- if args.ckpt is None:
- ckpt_file = os.path.join(file_name, "best_ckpt.pth.tar")
- else:
- ckpt_file = args.ckpt
-
- ckpt = torch.load(ckpt_file, map_location="cpu")
- # load the model state dict
-
- model.load_state_dict(ckpt["model"])
- logger.info("loaded checkpoint done.")
- model.eval()
- model.cuda()
- model.head.decode_in_inference = False
- x = torch.ones(1, 3, exp.test_size[0], exp.test_size[1]).cuda()
- model_trt = torch2trt(
- model,
- [x],
- fp16_mode=True,
- log_level=trt.Logger.INFO,
- max_workspace_size=(1 << 32),
- )
- torch.save(model_trt.state_dict(), os.path.join(file_name, "model_trt.pth"))
- logger.info("Converted TensorRT model done.")
- engine_file = os.path.join(file_name, "model_trt.engine")
- engine_file_demo = os.path.join("deploy", "TensorRT", "cpp", "model_trt.engine")
- with open(engine_file, "wb") as f:
- f.write(model_trt.engine.serialize())
-
- shutil.copyfile(engine_file, engine_file_demo)
-
- logger.info("Converted TensorRT model engine file is saved for C++ inference.")
-
-
-if __name__ == "__main__":
- main()
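The saved state dict can be reloaded without redoing the conversion via torch2trt's `TRTModule` wrapper. A sketch (the path and input shape are placeholders; the shape must match the `exp.test_size` used at conversion time):

```python
import torch
from torch2trt import TRTModule

model_trt = TRTModule()
model_trt.load_state_dict(torch.load("YOLOX_outputs/<experiment-name>/model_trt.pth"))
x = torch.ones(1, 3, 608, 1088).cuda()   # placeholder shape; use your exp.test_size
y = model_trt(x)
```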
diff --git a/spaces/ElainaFanBoy/MusicGen/audiocraft/utils/autocast.py b/spaces/ElainaFanBoy/MusicGen/audiocraft/utils/autocast.py
deleted file mode 100644
index ed644843bb37cf8a92a20fbd51d6cebaa43b9a08..0000000000000000000000000000000000000000
--- a/spaces/ElainaFanBoy/MusicGen/audiocraft/utils/autocast.py
+++ /dev/null
@@ -1,40 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch
-
-
-class TorchAutocast:
- """TorchAutocast utility class.
- Allows you to enable and disable autocast. This is especially useful
- when dealing with different architectures and clusters with different
- levels of support.
-
- Args:
- enabled (bool): Whether to enable torch.autocast or not.
- args: Additional args for torch.autocast.
- kwargs: Additional kwargs for torch.autocast
- """
- def __init__(self, enabled: bool, *args, **kwargs):
- self.autocast = torch.autocast(*args, **kwargs) if enabled else None
-
- def __enter__(self):
- if self.autocast is None:
- return
- try:
- self.autocast.__enter__()
- except RuntimeError:
- device = self.autocast.device
- dtype = self.autocast.fast_dtype
- raise RuntimeError(
- f"There was an error autocasting with dtype={dtype} device={device}\n"
- "If you are on the FAIR Cluster, you might need to use autocast_dtype=float16"
- )
-
- def __exit__(self, *args, **kwargs):
- if self.autocast is None:
- return
- self.autocast.__exit__(*args, **kwargs)
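Because the wrapper forwards its remaining arguments to `torch.autocast`, enabling it looks like the stock context manager. A usage sketch (`model` and `x` are placeholders):

```python
import torch

with TorchAutocast(enabled=True, device_type='cuda', dtype=torch.float16):
    y = model(x)   # ops run under float16 autocast where supported

with TorchAutocast(enabled=False):
    y = model(x)   # no-op context: full precision as usual
```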
diff --git a/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_models/panet_r18_fpem_ffm.py b/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_models/panet_r18_fpem_ffm.py
deleted file mode 100644
index a69a4d87603275bc1f89b5f58c722d79274e4fd7..0000000000000000000000000000000000000000
--- a/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_models/panet_r18_fpem_ffm.py
+++ /dev/null
@@ -1,43 +0,0 @@
-model_poly = dict(
- type='PANet',
- backbone=dict(
- type='mmdet.ResNet',
- depth=18,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=-1,
- norm_cfg=dict(type='SyncBN', requires_grad=True),
- init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet18'),
- norm_eval=True,
- style='caffe'),
- neck=dict(type='FPEM_FFM', in_channels=[64, 128, 256, 512]),
- bbox_head=dict(
- type='PANHead',
- in_channels=[128, 128, 128, 128],
- out_channels=6,
- loss=dict(type='PANLoss'),
- postprocessor=dict(type='PANPostprocessor', text_repr_type='poly')),
- train_cfg=None,
- test_cfg=None)
-
-model_quad = dict(
- type='PANet',
- backbone=dict(
- type='mmdet.ResNet',
- depth=18,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=-1,
- norm_cfg=dict(type='SyncBN', requires_grad=True),
- init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet18'),
- norm_eval=True,
- style='caffe'),
- neck=dict(type='FPEM_FFM', in_channels=[64, 128, 256, 512]),
- bbox_head=dict(
- type='PANHead',
- in_channels=[128, 128, 128, 128],
- out_channels=6,
- loss=dict(type='PANLoss'),
- postprocessor=dict(type='PANPostprocessor', text_repr_type='quad')),
- train_cfg=None,
- test_cfg=None)
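Downstream MMOCR configs select one of the two variants through mmcv's base-variable interpolation (the `{{...}}` placeholders are resolved by the config loader, not by plain Python). A sketch of a derived training config; the dataset/schedule bases are placeholders:

```python
_base_ = [
    '../../_base_/det_models/panet_r18_fpem_ffm.py',
    # ... dataset, schedule and runtime bases go here ...
]
model = {{_base_.model_quad}}  # or {{_base_.model_poly}} for polygon outputs
```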
diff --git a/spaces/FantasticGNU/AnomalyGPT/app.py b/spaces/FantasticGNU/AnomalyGPT/app.py
deleted file mode 100644
index 2307d3c4f6094a40fb52428d3ec3d10064e66bb7..0000000000000000000000000000000000000000
--- a/spaces/FantasticGNU/AnomalyGPT/app.py
+++ /dev/null
@@ -1,245 +0,0 @@
-import os
-
-os.system("cp /home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda118.so /home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so")
-
-
-import gradio as gr
-import mdtex2html
-from model.openllama import OpenLLAMAPEFTModel
-import torch
-from io import BytesIO
-from PIL import Image as PILImage
-import cv2
-import numpy as np
-from matplotlib import pyplot as plt
-from torchvision import transforms
-
-# init the model
-args = {
- 'model': 'openllama_peft',
- 'imagebind_ckpt_path': './pretrained_ckpt/imagebind_ckpt/imagebind_huge.pth',
- 'vicuna_ckpt_path': './pretrained_ckpt/vicuna_ckpt/7b_v0',
- 'anomalygpt_ckpt_path': './ckpt/train_supervised/pytorch_model.pt',
- 'delta_ckpt_path': './pretrained_ckpt/pandagpt_ckpt/7b/pytorch_model.pt',
- 'stage': 2,
- 'max_tgt_len': 128,
- 'lora_r': 32,
- 'lora_alpha': 32,
- 'lora_dropout': 0.1
-}
-
-model = OpenLLAMAPEFTModel(**args)
-delta_ckpt = torch.load(args['delta_ckpt_path'], map_location=torch.device('cpu'))
-model.load_state_dict(delta_ckpt, strict=False)
-delta_ckpt = torch.load(args['anomalygpt_ckpt_path'], map_location=torch.device('cpu'))
-model.load_state_dict(delta_ckpt, strict=False)
-model = model.eval()#.half()#.cuda()
-# model.image_decoder = model.image_decoder.cuda()
-# model.prompt_learner = model.prompt_learner.cuda()
-
-"""Override Chatbot.postprocess"""
-def postprocess(self, y):
- if y is None:
- return []
- for i, (message, response) in enumerate(y):
- y[i] = (
- None if message is None else mdtex2html.convert((message)),
- None if response is None else mdtex2html.convert(response),
- )
- return y
-
-
-gr.Chatbot.postprocess = postprocess
-
-
-def parse_text(text):
- """copy from https://github.com/GaiZhenbiao/ChuanhuChatGPT/"""
- lines = text.split("\n")
- lines = [line for line in lines if line != ""]
- count = 0
- for i, line in enumerate(lines):
- if "```" in line:
- count += 1
- items = line.split('`')
-         if count % 2 == 1:
-             lines[i] = f'<pre><code class="language-{items[-1]}">'
-         else:
-             lines[i] = f'<br></code></pre>'
-     else:
-         if i > 0:
-             if count % 2 == 1:
-                 line = line.replace("`", "\`")
-                 line = line.replace("<", "&lt;")
-                 line = line.replace(">", "&gt;")
-                 line = line.replace(" ", "&nbsp;")
-                 line = line.replace("*", "&ast;")
-                 line = line.replace("_", "&lowbar;")
-                 line = line.replace("-", "&#45;")
-                 line = line.replace(".", "&#46;")
-                 line = line.replace("!", "&#33;")
-                 line = line.replace("(", "&#40;")
-                 line = line.replace(")", "&#41;")
-                 line = line.replace("$", "&#36;")
-             lines[i] = "<br>"+line
- text = "".join(lines)
- return text
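A small check of the escaping behaviour (sketch; the fence string is built programmatically only to keep this example self-contained):

```python
fence = "`" * 3
sample = f"Here is code:\n{fence}python\nprint('hi')\n{fence}"
print(parse_text(sample))
# -> Here is code:<pre><code class="language-python"><br>print&#40;'hi'&#41;<br></code></pre>
```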
-
-
-def predict(
- input,
- image_path,
- normal_img_path,
- chatbot,
- max_length,
- top_p,
- temperature,
- history,
- modality_cache,
-):
-
- if image_path is None and normal_img_path is None:
- return [(input, "There is no input data provided! Please upload your data and start the conversation.")]
- else:
- print(f'[!] image path: {image_path}\n[!] normal image path: {normal_img_path}\n')
-
- # prepare the prompt
- prompt_text = ''
- for idx, (q, a) in enumerate(history):
- if idx == 0:
- prompt_text += f'{q}\n### Assistant: {a}\n###'
- else:
- prompt_text += f' Human: {q}\n### Assistant: {a}\n###'
- if len(history) == 0:
- prompt_text += f'{input}'
- else:
- prompt_text += f' Human: {input}'
-
- response, pixel_output = model.generate({
- 'prompt': prompt_text,
- 'image_paths': [image_path] if image_path else [],
- 'normal_img_paths': [normal_img_path] if normal_img_path else [],
- 'audio_paths': [],
- 'video_paths': [],
- 'thermal_paths': [],
- 'top_p': top_p,
- 'temperature': temperature,
- 'max_tgt_len': max_length,
- 'modality_embeds': modality_cache
- },web_demo=True)
- chatbot.append((parse_text(input), parse_text(response)))
- history.append((input, response))
-
-
- plt.imshow(pixel_output.to(torch.float16).reshape(224,224).detach().cpu(), cmap='binary_r')
- plt.axis('off')
- plt.savefig('output.png',bbox_inches='tight',pad_inches = 0)
-
- target_size = 435
- original_width, original_height = PILImage.open(image_path).size
- if original_width > original_height:
- new_width = target_size
- new_height = int(target_size * (original_height / original_width))
- else:
- new_height = target_size
- new_width = int(target_size * (original_width / original_height))
-
- new_image = PILImage.new('L', (target_size, target_size), 255) # 'L' mode for grayscale
-
- paste_x = (target_size - new_width) // 2
- paste_y = (target_size - new_height) // 2
-
- pixel_output = PILImage.open('output.png').resize((new_width, new_height), PILImage.LANCZOS)
-
- new_image.paste(pixel_output, (paste_x, paste_y))
-
- new_image.save('output.png')
-
- image = cv2.imread('output.png', cv2.IMREAD_GRAYSCALE)
- kernel = np.ones((3, 3), np.uint8)
- eroded_image = cv2.erode(image, kernel, iterations=1)
- cv2.imwrite('output.png', eroded_image)
-
- output = PILImage.open('output.png').convert('L')
-
-
- return chatbot, history, modality_cache, output
-
-
-
-def reset_user_input():
- return gr.update(value='')
-
-
-def reset_state():
- return gr.update(value=''), None, None, [], [], [], PILImage.open('ffffff.png')
-
-examples = ['hazelnut_cut.png','capsule_crack.png','carpet_normal.jpg']
-
-with gr.Blocks() as demo:
- gr.HTML("""
Demo of AnomalyGPT
""")
-
- with gr.Row():
- with gr.Column(scale=1):
- with gr.Row():
- image_path = gr.Image(type="filepath", label="Query Image", value=examples[0])
- with gr.Row():
- normal_img_path = gr.Image(type="filepath", label="Normal Image (optional)", value=None)
- with gr.Row():
- gr.Examples(examples=examples, inputs=[image_path])
- with gr.Row():
- max_length = gr.Slider(0, 512, value=512, step=1.0, label="Max length", interactive=True)
- top_p = gr.Slider(0, 1, value=0.01, step=0.01, label="Top P", interactive=True)
- temperature = gr.Slider(0, 1, value=1.0, step=0.01, label="Temperature", interactive=True)
-
-
- with gr.Column(scale=3):
- with gr.Row():
- with gr.Column(scale=6):
- chatbot = gr.Chatbot().style(height=440)
- with gr.Column(scale=4):
- # gr.Image(output)
- image_output = gr.Image(interactive=False, label="Localization Output", type='pil',value=PILImage.open('ffffff.png'))
- with gr.Row():
- user_input = gr.Textbox(show_label=False, placeholder="Input...", lines=12).style(container=False)
- with gr.Row():
- with gr.Column(scale=2):
- submitBtn = gr.Button("Submit", variant="primary")
- with gr.Column(scale=1):
- emptyBtn = gr.Button("Clear History")
-
- history = gr.State([])
- modality_cache = gr.State([])
-
- submitBtn.click(
- predict, [
- user_input,
- image_path,
- normal_img_path,
- chatbot,
- max_length,
- top_p,
- temperature,
- history,
- modality_cache,
- ], [
- chatbot,
- history,
- modality_cache,
- image_output
- ],
- show_progress=True
- )
-
- submitBtn.click(reset_user_input, [], [user_input])
- emptyBtn.click(reset_state, outputs=[
- user_input,
- image_path,
- normal_img_path,
- chatbot,
- history,
- modality_cache,
- image_output
- ], show_progress=True)
-
-
-demo.queue().launch()
diff --git a/spaces/FantasticGNU/AnomalyGPT/model/openllama.py b/spaces/FantasticGNU/AnomalyGPT/model/openllama.py
deleted file mode 100644
index 5914fcac3d914e34a798085f4dab776e0b43b185..0000000000000000000000000000000000000000
--- a/spaces/FantasticGNU/AnomalyGPT/model/openllama.py
+++ /dev/null
@@ -1,755 +0,0 @@
-from header import *
-import torch.nn.functional as F
-from .ImageBind import *
-from .ImageBind import data
-from .modeling_llama import LlamaForCausalLM
-from .AnomalyGPT_models import LinearLayer, PromptLearner
-from transformers import StoppingCriteria, StoppingCriteriaList
-from utils.loss import FocalLoss, BinaryDiceLoss
-import kornia as K
-
-import torch
-from torch.nn.utils import rnn
-from transformers import AutoConfig, AutoModelForCausalLM
-from accelerate import init_empty_weights, load_checkpoint_and_dispatch, infer_auto_device_map
-
-CLASS_NAMES = ['bottle', 'cable', 'capsule', 'carpet', 'grid', 'hazelnut', 'leather', 'metal nut', 'pill', 'screw', 'tile', 'toothbrush', 'transistor', 'wood', 'zipper', 'object',
- 'candle', 'cashew', 'chewinggum', 'fryum', 'macaroni', 'pcb', 'pipe fryum']
-
-prompt_normal = ['{}', 'flawless {}', 'perfect {}', 'unblemished {}', '{} without flaw', '{} without defect', '{} without damage']
-prompt_abnormal = ['damaged {}', 'broken {}', '{} with flaw', '{} with defect', '{} with damage']
-
-prompt_state = [prompt_normal, prompt_abnormal]
-prompt_templates = ['a photo of a {}.', 'a photo of the {}.']
-# prompt_templates = [
-# 'a cropped photo of the {}.', 'a cropped photo of a {}.', 'a close-up photo of a {}.', 'a close-up photo of the {}.',
-# 'a bright photo of the {}.', 'a bright photo of a {}.', 'a dark photo of a {}.', 'a dark photo of the {}.',
-# 'a dark photo of the {}.', 'a dark photo of a {}.', 'a jpeg corrupted photo of a {}.', 'a jpeg corrupted photo of the {}.',
-# 'a blurry photo of the {}.', 'a blurry photo of a {}.', 'a photo of a {}.', 'a photo of the {}.',
-# 'a photo of the small {}.', 'a photo of a small {}.', 'a photo of the large {}.', 'a photo of a large {}.',
-# 'a photo of the {} for visual inspection.', 'a photo of a {} for visual inspection.',
-# 'a photo of the {} for anomaly detection.', 'a photo of a {} for anomaly detection.'
-# ]
-objs = ['bottle', 'cable', 'capsule', 'carpet', 'grid', 'hazelnut', 'leather', 'metal nut', 'pill', 'screw', 'tile', 'toothbrush', 'transistor', 'wood', 'zipper', 'object',
- 'candle', 'cashew', 'chewinggum', 'fryum', 'macaroni', 'pcb', 'pipe fryum', 'macaroni1', 'macaroni2','pcb1', 'pcb2', 'pcb3', 'pcb4', 'capsules']
-
-prompt_sentences = {}
-
-for obj in objs:
- prompt_sentence_obj = []
- for i in range(len(prompt_state)):
- prompted_state = [state.format(obj) for state in prompt_state[i]]
- prompted_sentence = []
- for s in prompted_state:
- for template in prompt_templates:
- prompted_sentence.append(template.format(s))
- prompted_sentence = data.load_and_transform_text(prompted_sentence, torch.cuda.current_device())
- prompt_sentence_obj.append(prompted_sentence)
- prompt_sentences[obj] = prompt_sentence_obj
-
-
-
-def encode_text_with_prompt_ensemble(model, obj, device):
-
- global prompt_sentences
- normal_sentences = []
- abnormal_sentences = []
- for idx in range(len(obj)):
- sentence = prompt_sentences[obj[idx].replace('_', ' ')]
- normal_sentences.append(sentence[0])
- abnormal_sentences.append(sentence[1])
-
- normal_sentences = torch.cat(normal_sentences).to(device)
- abnormal_sentences = torch.cat(abnormal_sentences).to(device)
-
- class_embeddings_normal = model({ModalityType.TEXT: normal_sentences})[ModalityType.TEXT][0]
- class_embeddings_abnormal = model({ModalityType.TEXT: abnormal_sentences})[ModalityType.TEXT][0]
- # class_embeddings /= class_embeddings.norm(dim=-1, keepdim=True)
-
- class_embeddings_normal = class_embeddings_normal.reshape((len(obj), len(prompt_templates) * len(prompt_normal), 1024))
- class_embeddings_normal = class_embeddings_normal.mean(dim=1, keepdim=True)
- class_embeddings_normal = class_embeddings_normal / class_embeddings_normal.norm(dim=-1, keepdim=True)
-
- class_embeddings_abnormal = class_embeddings_abnormal.reshape((len(obj), len(prompt_templates) * len(prompt_abnormal), 1024))
- class_embeddings_abnormal = class_embeddings_abnormal.mean(dim=1, keepdim=True)
- class_embeddings_abnormal = class_embeddings_abnormal / class_embeddings_abnormal.norm(dim=-1, keepdim=True)
-
- text_features = torch.cat([class_embeddings_normal, class_embeddings_abnormal], dim=1)
-
- return text_features
-
-
-
-class StoppingCriteriaSub(StoppingCriteria):
-
- def __init__(self, stops = [], encounters=1):
- super().__init__()
- self.stops = stops
- self.ENCOUNTERS = encounters
-
- def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor):
-     for stop in self.stops:
-         # stop once any stop token has appeared at least ENCOUNTERS times
-         if (stop == input_ids[0]).sum().item() >= self.ENCOUNTERS:
-             return True
-     return False
-
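A usage sketch for the criterion above with Hugging Face `generate()` (the stop id is illustrative, not a real token id from this model):

```python
import torch
from transformers import StoppingCriteriaList

stop_ids = [torch.tensor(835)]   # hypothetical id for the '###' turn separator
stopping = StoppingCriteriaList([StoppingCriteriaSub(stops=stop_ids, encounters=1)])
# outputs = llama_model.generate(input_ids=..., stopping_criteria=stopping, max_new_tokens=128)
```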
-def build_one_instance(tokenizer, conversation):
- text_list = []
- turn_num = len(conversation)
- input_ids, target_ids = [], []
- for i in range(turn_num):
- turn = conversation[i]
- role = turn['from']
- if i == 0: # the first human turn
- assert role == 'human'
- text = turn['value'] + '\n### Assistant:'
- one_input_id = tokenizer(text, add_special_tokens=False).input_ids
- input_ids += one_input_id
- target_ids += [-100]*len(one_input_id) # do not perform loss regression on human prompt
- else:
- if role == 'human':
- text = 'Human: ' + turn['value'] + '\n### Assistant:'
- one_input_id = tokenizer(text, add_special_tokens=False).input_ids
- input_ids += one_input_id
- target_ids += [-100]*len(one_input_id)
- elif role == 'gpt':
- text = turn['value'] + '\n###'
- one_input_id = tokenizer(text, add_special_tokens=False).input_ids
- input_ids += one_input_id
- target_ids += one_input_id
- else:
- raise Exception('Wrong Role!!!')
- text_list.append(text)
- assert len(input_ids) == len(target_ids)
- return text_list, input_ids, target_ids
-
-def process_batch_instance(tokenizer, batch_of_conversations, max_tgt_len):
- batch_input_ids, batch_target_ids = [], []
- for conversation in batch_of_conversations:
- _, one_input_ids, one_target_ids = build_one_instance(tokenizer, conversation)
- batch_input_ids.append(torch.LongTensor(one_input_ids))
- batch_target_ids.append(torch.LongTensor(one_target_ids))
- input_ids = rnn.pad_sequence(batch_input_ids, batch_first=True, padding_value=tokenizer.pad_token_id)
- target_ids = rnn.pad_sequence(batch_target_ids, batch_first=True, padding_value=-100)
- assert input_ids.size() == target_ids.size()
- input_ids = input_ids[:,:max_tgt_len]
- target_ids = target_ids[:,:max_tgt_len]
- attention_mask = input_ids.ne(tokenizer.pad_token_id)
- assert attention_mask.size() == input_ids.size()
- return input_ids, target_ids, attention_mask.long()
-
-def find_first_file_in_directory(directory_path):
- try:
- file_list = os.listdir(directory_path)
- for item in file_list:
- item_path = os.path.join(directory_path, item)
- if os.path.isfile(item_path):
- return item_path
- return None
-
- except OSError as e:
- print(f"Error while accessing directory: {e}")
- return None
-
-
-PROMPT_START = '### Human: <Img>'
-class OpenLLAMAPEFTModel(nn.Module):
-
- '''LoRA for LLaMa model'''
-
- def __init__(self, **args):
- super(OpenLLAMAPEFTModel, self).__init__()
- self.args = args
- imagebind_ckpt_path = args['imagebind_ckpt_path']
- vicuna_ckpt_path = args['vicuna_ckpt_path']
- max_tgt_len = args['max_tgt_len']
- stage = args['stage']
-
- self.device = torch.cuda.current_device()
-
- print (f'Initializing visual encoder from {imagebind_ckpt_path} ...')
-
- self.visual_encoder, self.visual_hidden_size = imagebind_model.imagebind_huge(args)
- self.visual_encoder.to(torch.float16).to(self.device)
- imagebind_ckpt = torch.load(imagebind_ckpt_path, map_location=torch.device('cpu'))
- self.visual_encoder.load_state_dict(imagebind_ckpt, strict=True)
-
-
- self.iter = 0
-
- self.image_decoder = LinearLayer(1280, 1024, 4).to(torch.float16).to(self.device)
-
- self.prompt_learner = PromptLearner(1, 4096).to(torch.float16).to(self.device)
-
- self.loss_focal = FocalLoss()
- self.loss_dice = BinaryDiceLoss()
-
-
- # free vision encoder
- for name, param in self.visual_encoder.named_parameters():
- param.requires_grad = False
- self.visual_encoder.eval()
- print ('Visual encoder initialized.')
-
- print (f'Initializing language decoder from {vicuna_ckpt_path} ...')
-
- # add the lora module
- peft_config = LoraConfig(
- task_type=TaskType.CAUSAL_LM,
- inference_mode=False,
- r=self.args['lora_r'],
- lora_alpha=self.args['lora_alpha'],
- lora_dropout=self.args['lora_dropout'],
- target_modules=['q_proj', 'k_proj', 'v_proj', 'o_proj']
- )
-
- # config = AutoConfig.from_pretrained(vicuna_ckpt_path)
- # with init_empty_weights():
- # self.llama_model = AutoModelForCausalLM.from_config(config)
-
- # # device_map = infer_auto_device_map(self.llama_model, no_split_module_classes=["OPTDecoderLayer"], dtype="float16")
- # # print(device_map)
- device_map = {'model.embed_tokens': 0, 'model.layers.0': 0, 'model.layers.1': 0, 'model.layers.2': 0, 'model.layers.3': 0, 'model.layers.4': 0, 'model.layers.5': 0, 'model.layers.6': 0, 'model.layers.7': 0, 'model.layers.8': 0, 'model.layers.9': 0, 'model.layers.10.self_attn': 0, 'model.layers.10.mlp.gate_proj': 0, 'model.layers.10.mlp.down_proj': 'cpu', 'model.layers.10.mlp.up_proj': 'cpu', 'model.layers.10.mlp.act_fn': 'cpu', 'model.layers.10.input_layernorm': 'cpu', 'model.layers.10.post_attention_layernorm': 'cpu', 'model.layers.11': 'cpu', 'model.layers.12': 'cpu', 'model.layers.13': 'cpu', 'model.layers.14': 'cpu', 'model.layers.15': 'cpu', 'model.layers.16': 'cpu', 'model.layers.17': 'cpu', 'model.layers.18': 'cpu', 'model.layers.19': 'cpu', 'model.layers.20': 'cpu', 'model.layers.21': 'cpu', 'model.layers.22': 'cpu', 'model.layers.23': 'cpu', 'model.layers.24': 'disk', 'model.layers.25': 'disk', 'model.layers.26': 'disk', 'model.layers.27': 'disk', 'model.layers.28': 'disk', 'model.layers.29': 'disk', 'model.layers.30': 'disk', 'model.layers.31.self_attn': 'disk', 'model.layers.31.mlp.gate_proj': 'disk', 'model.layers.31.mlp.down_proj': 'disk', 'model.layers.31.mlp.up_proj': 'disk', 'model.layers.31.mlp.act_fn': 'disk', 'model.layers.31.input_layernorm': 'disk', 'model.layers.31.post_attention_layernorm': 'disk', 'model.norm': 'disk', 'lm_head': 'disk'}
- # # self.llama_model = load_checkpoint_and_dispatch(self.llama_model, vicuna_ckpt_path, device_map=device_map, offload_folder="offload", offload_state_dict = True)
- # # self.llama_model.to(torch.float16)
- # # try:
- self.llama_model = AutoModelForCausalLM.from_pretrained(vicuna_ckpt_path, torch_dtype=torch.float16, device_map='auto', load_in_8bit=True)
- # # except:
- # pass
- # finally:
- # print(self.llama_model.hf_device_map)
- self.llama_model = get_peft_model(self.llama_model, peft_config)
- # delta_ckpt = torch.load(args['delta_ckpt_path'], map_location=torch.device('cpu'))
- # self.llama_model.load_state_dict(delta_ckpt, strict=False)
- self.llama_model.print_trainable_parameters()
-
- self.llama_tokenizer = LlamaTokenizer.from_pretrained(vicuna_ckpt_path, use_fast=False, torch_dtype=torch.float16)
- self.llama_tokenizer.pad_token = self.llama_tokenizer.eos_token
- self.llama_tokenizer.padding_side = "right"
- print ('Language decoder initialized.')
-
- self.llama_proj = nn.Linear(
- self.visual_hidden_size, self.llama_model.config.hidden_size
- ).to(torch.float16).to(self.device)
-
- self.max_tgt_len = max_tgt_len
-
-
-
- def rot90_img(self,x,k):
- # k is 0,1,2,3
- degreesarr = [0., 90., 180., 270., 360]
- degrees = torch.tensor(degreesarr[k]).to(self.llama_model.dtype).to(self.device)
- x = K.geometry.transform.rotate(x, angle = degrees, padding_mode='reflection')
- return x
-
- def encode_video(self, video_paths):
- inputs = {ModalityType.VISION: data.load_and_transform_video_data(video_paths, self.device)}
- # convert into visual dtype
- inputs = {key: inputs[key].to(self.llama_model.dtype) for key in inputs}
- with torch.no_grad():
- embeddings = self.visual_encoder(inputs)
- video_embeds = embeddings[ModalityType.VISION][0] # bsz x 1024
- inputs_llama = self.llama_proj(video_embeds).unsqueeze(1) # bsz x 1 x llama_size
- atts_llama = torch.ones(inputs_llama.size()[:-1], dtype=torch.long).to(self.device) # bsz x 1
- return inputs_llama, atts_llama
-
- def encode_audio(self, audio_paths):
- inputs = {ModalityType.AUDIO: data.load_and_transform_audio_data(audio_paths, self.device)}
- # convert into visual dtype
- inputs = {key: inputs[key].to(self.llama_model.dtype) for key in inputs}
- with torch.no_grad():
- embeddings = self.visual_encoder(inputs)
- audio_embeds = embeddings[ModalityType.AUDIO][0] # bsz x 1024
- inputs_llama = self.llama_proj(audio_embeds).unsqueeze(1) # bsz x 1 x llama_size
- atts_llama = torch.ones(inputs_llama.size()[:-1], dtype=torch.long).to(self.device) # bsz x 1
- return inputs_llama, atts_llama
-
- def encode_thermal(self, thermal_paths):
- inputs = {ModalityType.THERMAL: data.load_and_transform_thermal_data(thermal_paths, self.device)}
- # convert into visual dtype
- inputs = {key: inputs[key].to(self.llama_model.dtype) for key in inputs}
- with torch.no_grad():
- embeddings = self.visual_encoder(inputs)
- image_embeds = embeddings['thermal'][0] # bsz x 1024
- inputs_llama = self.llama_proj(image_embeds).unsqueeze(1) # bsz x 1 x llama_size
- atts_llama = torch.ones(inputs_llama.size()[:-1], dtype=torch.long).to(self.device) # bsz x 1
- return inputs_llama, atts_llama
-
- def encode_image(self, image_paths):
- inputs = {ModalityType.VISION: data.load_and_transform_vision_data(image_paths, self.device)}
- # convert into visual dtype
- inputs = {key: inputs[key].to(self.llama_model.dtype) for key in inputs}
- with torch.no_grad():
- embeddings = self.visual_encoder(inputs)
- image_embeds = embeddings['vision'][0] # bsz x 1024
- patch_features = embeddings['vision'][1] # bsz x h*w x 1280
- patch_tokens = self.image_decoder(patch_features) # bsz x h*w x 1024
-
- inputs_llama = self.llama_proj(image_embeds).unsqueeze(1) # bsz x 1 x llama_size
- atts_llama = torch.ones(inputs_llama.size()[:-1], dtype=torch.long).to(self.device) # bsz x 1
- return inputs_llama, atts_llama, patch_tokens
-
- def encode_image_for_web_demo(self, image_paths):
- inputs = {ModalityType.VISION: data.load_and_transform_vision_data_for_web_demo(image_paths, self.device)}
- # convert into visual dtype
- inputs = {key: inputs[key].to(self.llama_model.dtype) for key in inputs}
- with torch.no_grad():
- embeddings = self.visual_encoder(inputs)
- image_embeds = embeddings['vision'][0] # bsz x 1024
- patch_features = embeddings['vision'][1] # bsz x h*w x 1280
- patch_tokens = self.image_decoder(patch_features) # bsz x h*w x 1024
-
- inputs_llama = self.llama_proj(image_embeds).unsqueeze(1) # bsz x 1 x llama_size
- atts_llama = torch.ones(inputs_llama.size()[:-1], dtype=torch.long).to(self.device) # bsz x 1
- return inputs_llama, atts_llama, patch_tokens
-
- def encode_image_for_one_shot(self, image_paths):
- inputs = {ModalityType.VISION: data.load_and_transform_vision_data(image_paths, self.device)}
- # convert into visual dtype
- inputs = {key: inputs[key].to(self.llama_model.dtype) for key in inputs}
- with torch.no_grad():
- embeddings = self.visual_encoder(inputs)
- patch_features = embeddings['vision'][1] # bsz x h*w x 1280
- for i in range(len(patch_features)):
- patch_features[i] = patch_features[i].transpose(0, 1)[:, 1:, :]
-
- return patch_features
-
- def encode_image_for_one_shot_from_tensor(self, image_tensors):
- if not isinstance(image_tensors, list):
- image_tensors = [image_tensors]
- inputs = {ModalityType.VISION: torch.stack(image_tensors, dim=0).to(self.device)}
- # convert into visual dtype
- inputs = {key: inputs[key].to(self.llama_model.dtype) for key in inputs}
- with torch.no_grad():
- embeddings = self.visual_encoder(inputs)
- patch_features = embeddings['vision'][1] # bsz x h*w x 1280
- for i in range(len(patch_features)):
- patch_features[i] = patch_features[i].transpose(0, 1)[:, 1:, :]
-
- return patch_features
-
- def encode_image_for_one_shot_with_aug(self, image_paths):
- image_tensors = data.load_and_transform_vision_data(image_paths, self.device).to(self.llama_model.dtype)
- B,C,H,W = image_tensors.shape
- # print(B,C,H,W)
-
- rotated_images = torch.zeros((4, B, C, H, W)).to(self.llama_model.dtype).to(self.device)
-
-
- for j, degree in enumerate([0, 1, 2, 3]):
- rotated_img = self.rot90_img(image_tensors, degree)
- # store the rotated image for this rotation angle
- rotated_images[j] = rotated_img
-
- image_tensors = rotated_images.transpose(0,1).reshape(B * 4, C, H, W)
-
- inputs = {ModalityType.VISION: image_tensors}
- # convert into visual dtype
- inputs = {key: inputs[key] for key in inputs}
- with torch.no_grad():
- embeddings = self.visual_encoder(inputs)
- patch_features = embeddings['vision'][1] # bsz x h*w x 1280
- for i in range(len(patch_features)):
- patch_features[i] = patch_features[i].transpose(0, 1)[:, 1:, :].reshape(B,4,256,1280).reshape(B, 4 * 256, 1280)
-
- return patch_features
-
- def encode_image_from_tensor(self, image_tensors):
- if not isinstance(image_tensors, list):
- image_tensors = [image_tensors]
- inputs = {ModalityType.VISION: torch.stack(image_tensors, dim=0).to(self.device)}
- # convert into visual dtype
- inputs = {key: inputs[key].to(self.llama_model.dtype) for key in inputs}
- with torch.no_grad():
- embeddings = self.visual_encoder(inputs)
- image_embeds = embeddings['vision'][0] # bsz x 1024
- patch_features = embeddings['vision'][1] # bsz x h*w x 1280
- patch_tokens = self.image_decoder(patch_features)
-
-
- inputs_llama = self.llama_proj(image_embeds).unsqueeze(1) # bsz x 1 x llama_size
- atts_llama = torch.ones(inputs_llama.size()[:-1], dtype=torch.long).to(self.device) # bsz x 1
- return inputs_llama, atts_llama, patch_tokens
-
- def encode_image_from_tensor_no_patch(self, image_tensors):
- if not isinstance(image_tensors, list):
- image_tensors = [image_tensors]
- inputs = {ModalityType.VISION: torch.stack(image_tensors, dim=0).to(self.device)}
- # convert into visual dtype
- inputs = {key: inputs[key].to(self.llama_model.dtype) for key in inputs}
- with torch.no_grad():
- embeddings = self.visual_encoder(inputs)
- image_embeds = embeddings['vision'][0] # bsz x 1024
-
- inputs_llama = self.llama_proj(image_embeds).unsqueeze(1) # bsz x 1 x llama_size
- atts_llama = torch.ones(inputs_llama.size()[:-1], dtype=torch.long).to(self.device) # bsz x 1
- return inputs_llama, atts_llama
-
-
-
- def prompt_wrap(self, img_embeds, input_ids, target_ids, attention_mask, anomaly_embedding = None):
- '''
- input_ids, target_ids, attention_mask: bsz x s2
- '''
- input_ids = input_ids.to(self.device) # bsz x s2
- target_ids = target_ids.to(self.device) # bsz x s2
- attention_mask = attention_mask.to(self.device) # bsz x s2
-
- batch_size = img_embeds.shape[0]
- p_before = PROMPT_START
- p_before_tokens = self.llama_tokenizer(p_before,
- return_tensors="pt", add_special_tokens=False).to(self.device)
- # peft model need deeper call
- p_before_embeds = self.llama_model.model.model.embed_tokens(p_before_tokens.input_ids).expand(batch_size, -1, -1) # bsz x s1 x embed_dim
-
- p_middle = '</Img> '
- p_middle_tokens = self.llama_tokenizer(p_middle,
- return_tensors="pt", add_special_tokens=False).to(self.device)
- # peft model need deeper call
- p_middle_embeds = self.llama_model.model.model.embed_tokens(p_middle_tokens.input_ids).expand(batch_size, -1, -1) # bsz x s1 x embed_dim
-
-
- p_after_embeds = self.llama_model.model.model.embed_tokens(input_ids).expand(batch_size, -1, -1) # bsz x s2 x embed_dim
- bos = torch.ones([batch_size, 1],
- dtype=p_before_tokens.input_ids.dtype,
- device=p_before_tokens.input_ids.device) * self.llama_tokenizer.bos_token_id # bsz x 1
- bos_embeds = self.llama_model.model.model.embed_tokens(bos) # bsz x 1 x embed_dim
-
-
-
- if anomaly_embedding != None:
- inputs_embeds = torch.cat([bos_embeds, p_before_embeds, img_embeds, p_middle_embeds, anomaly_embedding, p_after_embeds], dim=1) # bsz x (1+s1+1+s2) x embed_dim
- # create targets
- empty_targets = (
- torch.ones([batch_size, 1+p_before_embeds.size()[1]+1+p_middle_embeds.size()[1] + anomaly_embedding.size()[1]], # 1 (bos) + s1 + 1 (image vector)
- dtype=torch.long).to(self.device).fill_(-100)
- ) # bsz x (1 + s1 + 1)
- targets = torch.cat([empty_targets, target_ids], dim=1) # bsz x (1 + s1 + 1 + s2)
- assert inputs_embeds.size()[1] == targets.size()[1]
-
- atts_prefix = torch.ones([batch_size, 1+p_before_embeds.size()[1]+1+p_middle_embeds.size()[1] + anomaly_embedding.size()[1]], dtype=torch.long).to(self.device) # bsz x (1 + s1 +1)
- attention_mask = torch.cat([atts_prefix, attention_mask], dim=1)
- assert attention_mask.size() == targets.size() # bsz x (1 + s1 + 1 + s2)
- return inputs_embeds, targets, attention_mask
- else:
- inputs_embeds = torch.cat([bos_embeds, p_before_embeds, img_embeds, p_middle_embeds, p_after_embeds], dim=1) # bsz x (1+s1+1+s2) x embed_dim
- # create targets
- empty_targets = (
- torch.ones([batch_size, 1+p_before_embeds.size()[1]+1+p_middle_embeds.size()[1]], # 1 (bos) + s1 + 1 (image vector)
- dtype=torch.long).to(self.device).fill_(-100)
- ) # bsz x (1 + s1 + 1)
- targets = torch.cat([empty_targets, target_ids], dim=1) # bsz x (1 + s1 + 1 + s2)
- assert inputs_embeds.size()[1] == targets.size()[1]
-
- atts_prefix = torch.ones([batch_size, 1+p_before_embeds.size()[1]+1+p_middle_embeds.size()[1]], dtype=torch.long).to(self.device) # bsz x (1 + s1 +1)
- attention_mask = torch.cat([atts_prefix, attention_mask], dim=1)
- assert attention_mask.size() == targets.size() # bsz x (1 + s1 + 1 + s2)
- return inputs_embeds, targets, attention_mask
-
-
- def forward(self, inputs):
-
- if 'masks' in inputs:
-
- image_paths = inputs['images']
- img_embeds, _, patch_tokens = self.encode_image_from_tensor(image_paths)
- class_name = inputs['class_names']
-
- loss_pixel = 0
- feats_text_tensor = encode_text_with_prompt_ensemble(self.visual_encoder, ['object' for _ in class_name], self.device)
-
- anomaly_maps = []
- for layer in range(len(patch_tokens)):
- patch_tokens[layer] = patch_tokens[layer] / patch_tokens[layer].norm(dim=-1, keepdim=True)
- # print(patch_tokens[layer].shape)
- # anomaly_map = torch.bmm(patch_tokens[layer], feats_text_tensor.transpose(-2,-1))
- anomaly_map = (100.0 * patch_tokens[layer] @ feats_text_tensor.transpose(-2,-1))
- B, L, C = anomaly_map.shape
- H = int(np.sqrt(L))
- anomaly_map = F.interpolate(anomaly_map.permute(0, 2, 1).view(B, 2, H, H),
- size=224, mode='bilinear', align_corners=True)
- # anomaly_map_no_softmax = anomaly_map
- anomaly_map = torch.softmax(anomaly_map, dim=1)
- anomaly_maps.append(anomaly_map)
- # anomaly_maps_ns.append(anomaly_map_no_softmax)
-
- gt = inputs['masks']
- gt = torch.stack(gt, dim=0).to(self.device)
- gt = gt.squeeze()
- # print(gt.max(), gt.min())
- gt[gt > 0.3], gt[gt <= 0.3] = 1, 0
-
-
- for num in range(len(anomaly_maps)):
- f_loss = self.loss_focal(anomaly_maps[num], gt)
- d_loss = self.loss_dice(anomaly_maps[num][:, 1, :, :], gt)
- loss_pixel = loss_pixel + f_loss + d_loss
-
- for num in range(len(anomaly_maps)):
- anomaly_maps[num] = anomaly_maps[num][:,1,:,:]
-
- anomaly_map_all = torch.mean(torch.stack(anomaly_maps, dim=0), dim=0).unsqueeze(1)
-
- if random.randint(0,1) == 0 and len(inputs['img_paths']) == len(image_paths):
-
- normal_paths = []
- for path in inputs['img_paths']:
- normal_path = path.replace('test', 'train')
- normal_path = find_first_file_in_directory("/".join(normal_path.split('/')[:-2])+'/good')
- normal_paths.append(normal_path)
-
- print(normal_paths)
- query_patch_tokens = self.encode_image_for_one_shot_from_tensor(image_paths)
- normal_patch_tokens = self.encode_image_for_one_shot_with_aug(normal_paths)
- sims = []
- B = len(image_paths)
-
- for i in range(len(query_patch_tokens)):
- query_patch_tokens_reshaped = query_patch_tokens[i].view(B,256,1,1280)
- normal_tokens_reshaped = normal_patch_tokens[i].reshape(B,1,-1,1280)
- cosine_similarity_matrix = F.cosine_similarity(query_patch_tokens_reshaped, normal_tokens_reshaped, dim=-1)
- sim_max, _ = torch.max(cosine_similarity_matrix, dim=-1)
- sims.append(sim_max)
-
- sim = torch.mean(torch.stack(sims,dim=0), dim=0).reshape(B,1,16,16)
- sim = F.interpolate(sim,size=224, mode='bilinear', align_corners=True)
- anomaly_map_all = 1 - sim # (anomaly_map_all + 1 - sim) / 2
-
- anomaly_map_prompts = self.prompt_learner(anomaly_map_all)
-
- # img_embeds = img_embeds + anomaly_map_prompts
-
- output_texts = inputs['texts']
- input_ids, target_ids, attention_mask = process_batch_instance(self.llama_tokenizer, output_texts, self.max_tgt_len)
- inputs_embeds, targets, attention_mask = self.prompt_wrap(img_embeds, input_ids, target_ids, attention_mask, anomaly_map_prompts)
-
- outputs = self.llama_model(
- inputs_embeds=inputs_embeds,
- attention_mask=attention_mask,
- return_dict=True,
- labels=targets,
- )
- loss = outputs.loss
-
- # loss_l2 = torch.norm(anomaly_map_prompts / 2 , p=2)
- # loss_l2 = nn.MSELoss()(img_embeds_origin, img_embeds)
- # calculate the token accuracy
- chosen_tokens = torch.max(outputs.logits, dim=-1)[1][:, 1:-1] # [B, S-1]
- # print(self.llama_tokenizer.decode(chosen_tokens[0], skip_special_tokens=True))
- labels = targets[:, 2:]
- gen_acc = (chosen_tokens.reshape(-1) == labels.reshape(-1)).to(torch.long) # [B*S]
- valid_mask = (labels != -100).reshape(-1)
- # print(self.llama_tokenizer.decode(chosen_tokens.reshape(-1)[valid_mask], skip_special_tokens=True))
- valid_tokens = gen_acc & valid_mask # [B*S]
- gen_acc = valid_tokens.sum().item() / valid_mask.sum().item()
-
- return loss + loss_pixel, gen_acc
-
- else:
-
- image_paths = inputs['image_paths']
- img_embeds, _, patch_tokens = self.encode_image_from_tensor(image_paths)
-
- output_texts = inputs['output_texts']
-
- c_name = 'object'
- for name in CLASS_NAMES:
- if name in output_texts:
- c_name = name
- break
-
- feats_text_tensor = encode_text_with_prompt_ensemble(self.visual_encoder, ['object'] * len(image_paths), self.device)
-
- anomaly_maps = []
- for layer in range(len(patch_tokens)):
- patch_tokens[layer] = patch_tokens[layer] / patch_tokens[layer].norm(dim=-1, keepdim=True)
- # print(patch_tokens[layer].shape)
- # anomaly_map = torch.bmm(patch_tokens[layer], feats_text_tensor.transpose(-2,-1))
- anomaly_map = (100.0 * patch_tokens[layer] @ feats_text_tensor.transpose(-2,-1))
- B, L, C = anomaly_map.shape
- H = int(np.sqrt(L))
- anomaly_map = F.interpolate(anomaly_map.permute(0, 2, 1).view(B, 2, H, H),
- size=224, mode='bilinear', align_corners=True)
- # anomaly_map_no_softmax = anomaly_map
- anomaly_map = torch.softmax(anomaly_map, dim=1)
- anomaly_maps.append(anomaly_map)
-
- for num in range(len(anomaly_maps)):
- anomaly_maps[num] = anomaly_maps[num][:,1,:,:]
-
- anomaly_map_all = torch.mean(torch.stack(anomaly_maps, dim=0), dim=0).unsqueeze(1)
-
- anomaly_map_prompts = self.prompt_learner(anomaly_map_all)
-
- # img_embeds = img_embeds + anomaly_map_prompts
-
- input_ids, target_ids, attention_mask = process_batch_instance(self.llama_tokenizer, output_texts, self.max_tgt_len)
- inputs_embeds, targets, attention_mask = self.prompt_wrap(img_embeds, input_ids, target_ids, attention_mask, anomaly_map_prompts)
-
- outputs = self.llama_model(
- inputs_embeds=inputs_embeds,
- attention_mask=attention_mask,
- return_dict=True,
- labels=targets,
- )
- loss = outputs.loss
-        # calculate the token accuracy
- chosen_tokens = torch.max(outputs.logits, dim=-1)[1][:, 1:-1] # [B, S-1]
- labels = targets[:, 2:]
- gen_acc = (chosen_tokens.reshape(-1) == labels.reshape(-1)).to(torch.long) # [B*S]
- valid_mask = (labels != -100).reshape(-1)
- valid_tokens = gen_acc & valid_mask # [B*S]
- gen_acc = valid_tokens.sum().item() / valid_mask.sum().item()
-
- return loss, gen_acc
-
-
- def extract_multimodal_feature(self, inputs, web_demo):
- features = []
- if inputs['image_paths']:
-
- prompt = inputs['prompt']
- c_name = 'object'
- for name in CLASS_NAMES:
- if name in prompt:
- c_name = name
- break
-
- if not web_demo:
- image_embeds, _, patch_tokens = self.encode_image(inputs['image_paths'])
- feats_text_tensor = encode_text_with_prompt_ensemble(self.visual_encoder, [c_name], self.device)
- else:
- image_embeds, _, patch_tokens = self.encode_image_for_web_demo(inputs['image_paths'])
- feats_text_tensor = encode_text_with_prompt_ensemble(self.visual_encoder, ['object'], self.device)
-
- anomaly_maps = []
- for layer in range(len(patch_tokens)):
- patch_tokens[layer] = patch_tokens[layer] / patch_tokens[layer].norm(dim=-1, keepdim=True)
- # print(patch_tokens[layer].shape)
- # anomaly_map = torch.bmm(patch_tokens[layer], feats_text_tensor.transpose(-2,-1))
- anomaly_map = (100.0 * patch_tokens[layer] @ feats_text_tensor.transpose(-2,-1))
- B, L, C = anomaly_map.shape
- H = int(np.sqrt(L))
- # anomaly_map = anomaly_map.to(torch.float16)
- anomaly_map = F.interpolate(anomaly_map.permute(0, 2, 1).view(B, 2, H, H),
- size=224, mode='bilinear', align_corners=True)
- # anomaly_map = anomaly_map.to(torch.bfloat16)
- anomaly_map = torch.softmax(anomaly_map, dim=1)
- anomaly_maps.append(anomaly_map[:,1,:,:])
-
- anomaly_map_ret = torch.mean(torch.stack(anomaly_maps, dim=0), dim=0).unsqueeze(1)
- # anomaly_map_all = anomaly_map_ret.unsqueeze(1).repeat((1,3,1,1))
- # anomaly_map_feature, _, _ = self.encode_image_from_tensor(anomaly_map_all)
- # image_embeds = anomaly_map_feature + image_embeds
- if inputs['normal_img_paths']:
- query_patch_tokens = self.encode_image_for_one_shot(inputs['image_paths'])
-                # Use the augmented one-shot encoder for MVTec normal references.
-                # (The original line compared against the literal string
-                # 'normal_img_paths', which is always False.)
-                if any('mvtec' in p for p in inputs['normal_img_paths']):
- normal_patch_tokens = self.encode_image_for_one_shot_with_aug(inputs['normal_img_paths'])
- else:
- normal_patch_tokens = self.encode_image_for_one_shot(inputs['normal_img_paths'])
- sims = []
-
- for i in range(len(query_patch_tokens)):
- query_patch_tokens_reshaped = query_patch_tokens[i].view(256,1,1280)
- normal_tokens_reshaped = normal_patch_tokens[i].reshape(1,-1,1280)
- cosine_similarity_matrix = F.cosine_similarity(query_patch_tokens_reshaped, normal_tokens_reshaped, dim=2)
- sim_max, _ = torch.max(cosine_similarity_matrix, dim=1)
- sims.append(sim_max)
-
- sim = torch.mean(torch.stack(sims,dim=0), dim=0).reshape(1,1,16,16)
- # anomaly_map = anomaly_map.to(torch.float16)
- sim = F.interpolate(sim,size=224, mode='bilinear', align_corners=True)
- # anomaly_map = anomaly_map.to(torch.bfloat16)
- anomaly_map_ret = 1 - sim # (anomaly_map_ret + 1 - sim) / 2
-
-
- features.append(image_embeds)
- if inputs['audio_paths']:
- audio_embeds, _ = self.encode_audio(inputs['audio_paths'])
- features.append(audio_embeds)
- if inputs['video_paths']:
- video_embeds, _ = self.encode_video(inputs['video_paths'])
- features.append(video_embeds)
- if inputs['thermal_paths']:
- thermal_embeds, _ = self.encode_thermal(inputs['thermal_paths'])
- features.append(thermal_embeds)
-
- feature_embeds = torch.cat(features).sum(dim=0).unsqueeze(0)
- return feature_embeds, anomaly_map_ret
-
- def prepare_generation_embedding(self, inputs, web_demo):
- prompt = inputs['prompt']
- # if len(inputs['modality_embeds']) == 1:
- # feature_embeds = inputs['modality_embeds'][0]
- # else:
- feature_embeds, anomaly_map = self.extract_multimodal_feature(inputs, web_demo)
- # print(anomaly_map.shape)
- inputs['modality_embeds'].append(feature_embeds)
-
- batch_size = feature_embeds.shape[0]
- p_before = PROMPT_START
- p_before_tokens = self.llama_tokenizer(p_before,
- return_tensors="pt", add_special_tokens=False).to(self.device)
- p_before_embeds = self.llama_model.model.model.embed_tokens(p_before_tokens.input_ids).expand(batch_size, -1, -1) # bsz x s1 x embed_dim
-
- p_middle = ' '
- p_middle_tokens = self.llama_tokenizer(p_middle,
- return_tensors="pt", add_special_tokens=False).to(self.device)
- # peft model need deeper call
- p_middle_embeds = self.llama_model.model.model.embed_tokens(p_middle_tokens.input_ids).expand(batch_size, -1, -1) # bsz x s1 x embed_dim
-
- # self.prompt_learner.eval()
- anomaly_map_prompts = self.prompt_learner(anomaly_map)
-
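-        # Final prompt layout fed to the LLM:
-        # <bos> <before-text> <image embeds> ' ' <anomaly-map prompts> <prompt + '### Assistant:'>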
- text = prompt + '\n### Assistant:'
- p_after_tokens = self.llama_tokenizer(text, add_special_tokens=False, return_tensors='pt').to(self.device)
- p_after_embeds = self.llama_model.model.model.embed_tokens(p_after_tokens.input_ids).expand(batch_size, -1, -1) # bsz x s2 x embed_dim
- bos = torch.ones([batch_size, 1],
- dtype=p_before_tokens.input_ids.dtype,
- device=p_before_tokens.input_ids.device) * self.llama_tokenizer.bos_token_id # bsz x 1
- bos_embeds = self.llama_model.model.model.embed_tokens(bos) # bsz x 1 x embed_dim
- inputs_embeds = torch.cat([bos_embeds, p_before_embeds, feature_embeds, p_middle_embeds, anomaly_map_prompts, p_after_embeds], dim=1) # bsz x (1+s1+1+s2) x embed_dim
-
- return inputs_embeds, anomaly_map
-
- def generate(self, inputs, web_demo=False):
- '''
- inputs = {
- 'image_paths': optional,
- 'audio_paths': optional
- 'video_paths': optional
- 'thermal_paths': optional
- 'mode': generation mode,
- 'prompt': human input prompt,
- 'max_tgt_len': generation length,
- 'top_p': top_p,
- 'temperature': temperature
- 'modality_embeds': None or torch.tensor
- 'modality_cache': save the image cache
- }
- '''
- # self.prompt_learner.eval()
- # self.llama_model.eval()
- # self.llama_proj.eval()
- # self.image_decoder.eval()
- # self.llama_tokenizer.eval()
- input_embeds, pixel_output = self.prepare_generation_embedding(inputs, web_demo)
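-        # Token id 2277 is used as the stop marker here; it appears to correspond
-        # to the start of the '###' turn separator in the LLaMA vocabulary.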
- stopping_criteria = StoppingCriteriaList([StoppingCriteriaSub(stops=[2277], encounters=1)])
- outputs = self.llama_model.generate(
- inputs_embeds=input_embeds,
- max_new_tokens=inputs['max_tgt_len'],
- top_p=inputs['top_p'],
- temperature=inputs['temperature'],
- do_sample=True,
- use_cache=True,
- stopping_criteria=stopping_criteria,
- )
- output_text = self.llama_tokenizer.decode(outputs[0][:-2], skip_special_tokens=True)
- return output_text, pixel_output
\ No newline at end of file
diff --git a/spaces/Fazzie/Pokemon-GAI/static/js/dom-manipulation.js b/spaces/Fazzie/Pokemon-GAI/static/js/dom-manipulation.js
deleted file mode 100644
index 1ee588fd04ad7c8d671f5c7de523b02bfbc3d513..0000000000000000000000000000000000000000
--- a/spaces/Fazzie/Pokemon-GAI/static/js/dom-manipulation.js
+++ /dev/null
@@ -1,93 +0,0 @@
-import { toPng } from 'https://cdn.jsdelivr.net/npm/html-to-image@~1.9/es/index.js/+esm';
-
-const updateCardName = (trainerName, pokeName, useTrainerName) => {
- const cardName = document.querySelector('.pokecard .name');
-
- if (!cardName) {
- return;
- }
-
- let trainerString = '';
-
- if (trainerName && useTrainerName) {
- trainerName = [...trainerName].filter((char) => char.match(/[\wÀ-ÿ '".,@&+#!?:/\\()_-]/g)?.length).join('');
- trainerString = `${trainerName}${trainerName.match(/[sSzZ]$/g)?.length ? "' " : "'s "}`;
- }
-
- const fullName = `${trainerString}${pokeName}`;
- cardName.innerText = fullName;
-
- let nameWidth;
- let cardWidth = document.querySelector('.pokecard').getBoundingClientRect().width;
-
- let scale = 1.01;
-
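-  // Shrink the name horizontally in 1% steps until it takes up at most
-  // 62% of the card width.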
- do {
- scale -= 0.01;
- cardName.style.transform = `scaleX(${scale})`;
- nameWidth = cardName.getBoundingClientRect().width;
- } while (nameWidth / cardWidth > 0.62);
-
- return fullName;
-};
-
-const rotateCard = () => {
- const RANGE = 0.1;
- const INTERVAL = 13; // ~75 per second
- let previousTime = 0;
-
- // Throttle closure
- return (card, containerMouseEvent) => {
- const currentTime = performance.now();
-
- if (currentTime - previousTime > INTERVAL) {
- previousTime = currentTime;
-
- const rect = card.getBoundingClientRect();
-
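-      // Tilt is proportional to the cursor's offset from the card centre:
-      // the vertical offset rotates around X, the horizontal offset (negated) around Y.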
- const rotateX = (containerMouseEvent.clientY - rect.y - rect.height / 2) * RANGE;
- const rotateY = -(containerMouseEvent.clientX - rect.x - rect.width / 2) * RANGE;
-
- card.style.setProperty('--card-rx', rotateX + 'deg');
- card.style.setProperty('--card-ry', rotateY + 'deg');
- }
- };
-};
-
-const initialiseCardRotation = (scene) => {
- const card = document.querySelector('.pokecard');
-
- const mousemoveHandler = rotateCard().bind(null, card);
-
- scene.addEventListener('mousemove', mousemoveHandler, true);
-
- return mousemoveHandler;
-};
-
-const setOutput = (mode, state) => {
- const output = document.querySelector('.output');
-
- output.dataset.mode = mode;
- output.dataset.state = state;
-};
-
-const screenshotCard = async () => {
- const card = document.querySelector('.pokecard');
-
- /* Load twice for Safari bug */
-
- let imageUrl = await toPng(card);
-
- imageUrl = await toPng(card, {
- width: 400,
- height: 558,
- backgroundColor: 'transparent',
- style: {
- transform: 'none',
- },
- });
-
- return imageUrl;
-};
-
-export { updateCardName, initialiseCardRotation, setOutput, screenshotCard };
diff --git a/spaces/FelixLuoX/codeformer/CodeFormer/inference_codeformer.py b/spaces/FelixLuoX/codeformer/CodeFormer/inference_codeformer.py
deleted file mode 100644
index fdfe8b301cc7c20c2fb653618e379d243603a108..0000000000000000000000000000000000000000
--- a/spaces/FelixLuoX/codeformer/CodeFormer/inference_codeformer.py
+++ /dev/null
@@ -1,189 +0,0 @@
-# Modified by Shangchen Zhou from: https://github.com/TencentARC/GFPGAN/blob/master/inference_gfpgan.py
-import os
-import cv2
-import argparse
-import glob
-import torch
-from torchvision.transforms.functional import normalize
-from basicsr.utils import imwrite, img2tensor, tensor2img
-from basicsr.utils.download_util import load_file_from_url
-from facelib.utils.face_restoration_helper import FaceRestoreHelper
-import torch.nn.functional as F
-
-from basicsr.utils.registry import ARCH_REGISTRY
-
-pretrain_model_url = {
- 'restoration': 'https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth',
-}
-
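-# Note: this helper reads the global `args` parsed in __main__ (args.bg_tile),
-# so it must only be called after argument parsing.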
-def set_realesrgan():
- if not torch.cuda.is_available(): # CPU
- import warnings
- warnings.warn('The unoptimized RealESRGAN is slow on CPU. We do not use it. '
- 'If you really want to use it, please modify the corresponding codes.',
- category=RuntimeWarning)
- bg_upsampler = None
- else:
- from basicsr.archs.rrdbnet_arch import RRDBNet
- from basicsr.utils.realesrgan_utils import RealESRGANer
- model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=2)
- bg_upsampler = RealESRGANer(
- scale=2,
- model_path='https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth',
- model=model,
- tile=args.bg_tile,
- tile_pad=40,
- pre_pad=0,
- half=True) # need to set False in CPU mode
- return bg_upsampler
-
-if __name__ == '__main__':
- device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
- parser = argparse.ArgumentParser()
-
- parser.add_argument('--w', type=float, default=0.5, help='Balance the quality and fidelity')
- parser.add_argument('--upscale', type=int, default=2, help='The final upsampling scale of the image. Default: 2')
- parser.add_argument('--test_path', type=str, default='./inputs/cropped_faces')
- parser.add_argument('--has_aligned', action='store_true', help='Input are cropped and aligned faces')
- parser.add_argument('--only_center_face', action='store_true', help='Only restore the center face')
- # large det_model: 'YOLOv5l', 'retinaface_resnet50'
- # small det_model: 'YOLOv5n', 'retinaface_mobile0.25'
- parser.add_argument('--detection_model', type=str, default='retinaface_resnet50')
- parser.add_argument('--draw_box', action='store_true')
- parser.add_argument('--bg_upsampler', type=str, default='None', help='background upsampler. Optional: realesrgan')
- parser.add_argument('--face_upsample', action='store_true', help='face upsampler after enhancement.')
-    parser.add_argument('--bg_tile', type=int, default=400, help='Tile size for background upsampler. Default: 400')
-
- args = parser.parse_args()
-
- # ------------------------ input & output ------------------------
- if args.test_path.endswith('/'): # solve when path ends with /
- args.test_path = args.test_path[:-1]
-
- w = args.w
- result_root = f'results/{os.path.basename(args.test_path)}_{w}'
-
- # ------------------ set up background upsampler ------------------
- if args.bg_upsampler == 'realesrgan':
- bg_upsampler = set_realesrgan()
- else:
- bg_upsampler = None
-
- # ------------------ set up face upsampler ------------------
- if args.face_upsample:
- if bg_upsampler is not None:
- face_upsampler = bg_upsampler
- else:
- face_upsampler = set_realesrgan()
- else:
- face_upsampler = None
-
- # ------------------ set up CodeFormer restorer -------------------
- net = ARCH_REGISTRY.get('CodeFormer')(dim_embd=512, codebook_size=1024, n_head=8, n_layers=9,
- connect_list=['32', '64', '128', '256']).to(device)
-
- # ckpt_path = 'weights/CodeFormer/codeformer.pth'
- ckpt_path = load_file_from_url(url=pretrain_model_url['restoration'],
- model_dir='weights/CodeFormer', progress=True, file_name=None)
- checkpoint = torch.load(ckpt_path)['params_ema']
- net.load_state_dict(checkpoint)
- net.eval()
-
- # ------------------ set up FaceRestoreHelper -------------------
- # large det_model: 'YOLOv5l', 'retinaface_resnet50'
- # small det_model: 'YOLOv5n', 'retinaface_mobile0.25'
- if not args.has_aligned:
- print(f'Face detection model: {args.detection_model}')
- if bg_upsampler is not None:
- print(f'Background upsampling: True, Face upsampling: {args.face_upsample}')
- else:
- print(f'Background upsampling: False, Face upsampling: {args.face_upsample}')
-
- face_helper = FaceRestoreHelper(
- args.upscale,
- face_size=512,
- crop_ratio=(1, 1),
- det_model = args.detection_model,
- save_ext='png',
- use_parse=True,
- device=device)
-
-    # -------------------- start processing ---------------------
- # scan all the jpg and png images
- for img_path in sorted(glob.glob(os.path.join(args.test_path, '*.[jp][pn]g'))):
- # clean all the intermediate results to process the next image
- face_helper.clean_all()
-
- img_name = os.path.basename(img_path)
- print(f'Processing: {img_name}')
- basename, ext = os.path.splitext(img_name)
- img = cv2.imread(img_path, cv2.IMREAD_COLOR)
-
- if args.has_aligned:
- # the input faces are already cropped and aligned
- img = cv2.resize(img, (512, 512), interpolation=cv2.INTER_LINEAR)
- face_helper.cropped_faces = [img]
- else:
- face_helper.read_image(img)
- # get face landmarks for each face
- num_det_faces = face_helper.get_face_landmarks_5(
- only_center_face=args.only_center_face, resize=640, eye_dist_threshold=5)
- print(f'\tdetect {num_det_faces} faces')
- # align and warp each face
- face_helper.align_warp_face()
-
- # face restoration for each cropped face
- for idx, cropped_face in enumerate(face_helper.cropped_faces):
- # prepare data
- cropped_face_t = img2tensor(cropped_face / 255., bgr2rgb=True, float32=True)
- normalize(cropped_face_t, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True)
- cropped_face_t = cropped_face_t.unsqueeze(0).to(device)
-
- try:
- with torch.no_grad():
- output = net(cropped_face_t, w=w, adain=True)[0]
- restored_face = tensor2img(output, rgb2bgr=True, min_max=(-1, 1))
- del output
- torch.cuda.empty_cache()
- except Exception as error:
- print(f'\tFailed inference for CodeFormer: {error}')
- restored_face = tensor2img(cropped_face_t, rgb2bgr=True, min_max=(-1, 1))
-
- restored_face = restored_face.astype('uint8')
- face_helper.add_restored_face(restored_face)
-
- # paste_back
- if not args.has_aligned:
- # upsample the background
- if bg_upsampler is not None:
- # Now only support RealESRGAN for upsampling background
- bg_img = bg_upsampler.enhance(img, outscale=args.upscale)[0]
- else:
- bg_img = None
- face_helper.get_inverse_affine(None)
- # paste each restored face to the input image
- if args.face_upsample and face_upsampler is not None:
- restored_img = face_helper.paste_faces_to_input_image(upsample_img=bg_img, draw_box=args.draw_box, face_upsampler=face_upsampler)
- else:
- restored_img = face_helper.paste_faces_to_input_image(upsample_img=bg_img, draw_box=args.draw_box)
-
- # save faces
- for idx, (cropped_face, restored_face) in enumerate(zip(face_helper.cropped_faces, face_helper.restored_faces)):
- # save cropped face
- if not args.has_aligned:
- save_crop_path = os.path.join(result_root, 'cropped_faces', f'{basename}_{idx:02d}.png')
- imwrite(cropped_face, save_crop_path)
- # save restored face
- if args.has_aligned:
- save_face_name = f'{basename}.png'
- else:
- save_face_name = f'{basename}_{idx:02d}.png'
- save_restore_path = os.path.join(result_root, 'restored_faces', save_face_name)
- imwrite(restored_face, save_restore_path)
-
- # save restored img
- if not args.has_aligned and restored_img is not None:
- save_restore_path = os.path.join(result_root, 'final_results', f'{basename}.png')
- imwrite(restored_img, save_restore_path)
-
- print(f'\nAll results are saved in {result_root}')
diff --git a/spaces/FrankAst/image_mixer/README.md b/spaces/FrankAst/image_mixer/README.md
deleted file mode 100644
index 28c732e10d57c57a78c86b6ff82b90ba3655d2cc..0000000000000000000000000000000000000000
--- a/spaces/FrankAst/image_mixer/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Image_mixer
-emoji: ⚡
-colorFrom: purple
-colorTo: pink
-sdk: gradio
-sdk_version: 2.9.4
-app_file: app.py
-pinned: false
----
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/FridaZuley/RVC_HFKawaii/infer/lib/uvr5_pack/lib_v5/layers.py b/spaces/FridaZuley/RVC_HFKawaii/infer/lib/uvr5_pack/lib_v5/layers.py
deleted file mode 100644
index 4fc1b5cb85a3327f60cbb9f5deffbeeaaac516ad..0000000000000000000000000000000000000000
--- a/spaces/FridaZuley/RVC_HFKawaii/infer/lib/uvr5_pack/lib_v5/layers.py
+++ /dev/null
@@ -1,118 +0,0 @@
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from . import spec_utils
-
-
-class Conv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(Conv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nout,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- bias=False,
- ),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class SeperableConv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(SeperableConv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nin,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- groups=nin,
- bias=False,
- ),
- nn.Conv2d(nin, nout, kernel_size=1, bias=False),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class Encoder(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU):
- super(Encoder, self).__init__()
- self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ)
-
- def __call__(self, x):
- skip = self.conv1(x)
- h = self.conv2(skip)
-
- return h, skip
-
-
-class Decoder(nn.Module):
- def __init__(
- self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False
- ):
- super(Decoder, self).__init__()
- self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.dropout = nn.Dropout2d(0.1) if dropout else None
-
- def __call__(self, x, skip=None):
- x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
- if skip is not None:
- skip = spec_utils.crop_center(skip, x)
- x = torch.cat([x, skip], dim=1)
- h = self.conv(x)
-
- if self.dropout is not None:
- h = self.dropout(h)
-
- return h
-
-
-class ASPPModule(nn.Module):
- def __init__(self, nin, nout, dilations=(4, 8, 16), activ=nn.ReLU):
- super(ASPPModule, self).__init__()
- self.conv1 = nn.Sequential(
- nn.AdaptiveAvgPool2d((1, None)),
- Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ),
- )
- self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ)
- self.conv3 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[0], dilations[0], activ=activ
- )
- self.conv4 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[1], dilations[1], activ=activ
- )
- self.conv5 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.bottleneck = nn.Sequential(
- Conv2DBNActiv(nin * 5, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1)
- )
-
- def forward(self, x):
- _, _, h, w = x.size()
- feat1 = F.interpolate(
- self.conv1(x), size=(h, w), mode="bilinear", align_corners=True
- )
- feat2 = self.conv2(x)
- feat3 = self.conv3(x)
- feat4 = self.conv4(x)
- feat5 = self.conv5(x)
- out = torch.cat((feat1, feat2, feat3, feat4, feat5), dim=1)
- bottle = self.bottleneck(out)
- return bottle
diff --git a/spaces/GT4SD/multitask-text-and-chemistry-t5/README.md b/spaces/GT4SD/multitask-text-and-chemistry-t5/README.md
deleted file mode 100644
index 008a44510a71390b66673901c7b908a3dcbefca8..0000000000000000000000000000000000000000
--- a/spaces/GT4SD/multitask-text-and-chemistry-t5/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: Multitask Text and Chemistry T5
-emoji: 💡
-colorFrom: green
-colorTo: blue
-sdk: gradio
-sdk_version: 3.46.0
-app_file: app.py
-pinned: false
-python_version: 3.8.13
-pypi_version: 20.2.4
-duplicated_from: GT4SD/hf-transformers
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/GXSA/bingo/src/components/chat-attachments.tsx b/spaces/GXSA/bingo/src/components/chat-attachments.tsx
deleted file mode 100644
index ef43d4e262935d263b6099138c56f7daade5299d..0000000000000000000000000000000000000000
--- a/spaces/GXSA/bingo/src/components/chat-attachments.tsx
+++ /dev/null
@@ -1,37 +0,0 @@
-import Image from 'next/image'
-import ClearIcon from '@/assets/images/clear.svg'
-import RefreshIcon from '@/assets/images/refresh.svg'
-import { FileItem } from '@/lib/bots/bing/types'
-import { cn } from '@/lib/utils'
-import { useBing } from '@/lib/hooks/use-bing'
-
-type ChatAttachmentsProps = Pick<ReturnType<typeof useBing>, 'attachmentList' | 'setAttachmentList' | 'uploadImage'>
-
-export function ChatAttachments({ attachmentList = [], setAttachmentList, uploadImage }: ChatAttachmentsProps) {
- return attachmentList.length ? (
-
", unsafe_allow_html=True)
\ No newline at end of file
diff --git a/spaces/Kangarroar/ApplioRVC-Inference/demucs/tasnet.py b/spaces/Kangarroar/ApplioRVC-Inference/demucs/tasnet.py
deleted file mode 100644
index ecc1257925ea8f4fbe389ddd6d73ce9fdf45f6d4..0000000000000000000000000000000000000000
--- a/spaces/Kangarroar/ApplioRVC-Inference/demucs/tasnet.py
+++ /dev/null
@@ -1,452 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-#
-# Created on 2018/12
-# Author: Kaituo XU
-# Modified on 2019/11 by Alexandre Defossez, added support for multiple output channels
-# Here is the original license:
-# The MIT License (MIT)
-#
-# Copyright (c) 2018 Kaituo XU
-#
-# Permission is hereby granted, free of charge, to any person obtaining a copy
-# of this software and associated documentation files (the "Software"), to deal
-# in the Software without restriction, including without limitation the rights
-# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-# copies of the Software, and to permit persons to whom the Software is
-# furnished to do so, subject to the following conditions:
-#
-# The above copyright notice and this permission notice shall be included in all
-# copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-
-import math
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from .utils import capture_init
-
-EPS = 1e-8
-
-
-def overlap_and_add(signal, frame_step):
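-    """Overlap-add reconstruction: frames of length `frame_length`, spaced
-    `frame_step` samples apart, are summed into a signal of length
-    frame_step * (frames - 1) + frame_length (the inverse of framing).
-    """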
- outer_dimensions = signal.size()[:-2]
- frames, frame_length = signal.size()[-2:]
-
- subframe_length = math.gcd(frame_length, frame_step) # gcd=Greatest Common Divisor
- subframe_step = frame_step // subframe_length
- subframes_per_frame = frame_length // subframe_length
- output_size = frame_step * (frames - 1) + frame_length
- output_subframes = output_size // subframe_length
-
- subframe_signal = signal.view(*outer_dimensions, -1, subframe_length)
-
- frame = torch.arange(0, output_subframes,
- device=signal.device).unfold(0, subframes_per_frame, subframe_step)
-    frame = frame.long()  # signal may be on GPU or CPU
- frame = frame.contiguous().view(-1)
-
- result = signal.new_zeros(*outer_dimensions, output_subframes, subframe_length)
- result.index_add_(-2, frame, subframe_signal)
- result = result.view(*outer_dimensions, -1)
- return result
-
-
-class ConvTasNet(nn.Module):
- @capture_init
- def __init__(self,
- sources,
- N=256,
- L=20,
- B=256,
- H=512,
- P=3,
- X=8,
- R=4,
- audio_channels=2,
- norm_type="gLN",
- causal=False,
- mask_nonlinear='relu',
- samplerate=44100,
- segment_length=44100 * 2 * 4):
- """
- Args:
- sources: list of sources
- N: Number of filters in autoencoder
- L: Length of the filters (in samples)
- B: Number of channels in bottleneck 1 × 1-conv block
- H: Number of channels in convolutional blocks
- P: Kernel size in convolutional blocks
- X: Number of convolutional blocks in each repeat
- R: Number of repeats
- norm_type: BN, gLN, cLN
- causal: causal or non-causal
- mask_nonlinear: use which non-linear function to generate mask
- """
- super(ConvTasNet, self).__init__()
- # Hyper-parameter
- self.sources = sources
- self.C = len(sources)
- self.N, self.L, self.B, self.H, self.P, self.X, self.R = N, L, B, H, P, X, R
- self.norm_type = norm_type
- self.causal = causal
- self.mask_nonlinear = mask_nonlinear
- self.audio_channels = audio_channels
- self.samplerate = samplerate
- self.segment_length = segment_length
- # Components
- self.encoder = Encoder(L, N, audio_channels)
- self.separator = TemporalConvNet(
- N, B, H, P, X, R, self.C, norm_type, causal, mask_nonlinear)
- self.decoder = Decoder(N, L, audio_channels)
- # init
- for p in self.parameters():
- if p.dim() > 1:
- nn.init.xavier_normal_(p)
-
- def valid_length(self, length):
- return length
-
- def forward(self, mixture):
- """
- Args:
- mixture: [M, T], M is batch size, T is #samples
- Returns:
- est_source: [M, C, T]
- """
- mixture_w = self.encoder(mixture)
- est_mask = self.separator(mixture_w)
- est_source = self.decoder(mixture_w, est_mask)
-
- # T changed after conv1d in encoder, fix it here
- T_origin = mixture.size(-1)
- T_conv = est_source.size(-1)
- est_source = F.pad(est_source, (0, T_origin - T_conv))
- return est_source
-
-
-class Encoder(nn.Module):
- """Estimation of the nonnegative mixture weight by a 1-D conv layer.
- """
- def __init__(self, L, N, audio_channels):
- super(Encoder, self).__init__()
- # Hyper-parameter
- self.L, self.N = L, N
- # Components
- # 50% overlap
- self.conv1d_U = nn.Conv1d(audio_channels, N, kernel_size=L, stride=L // 2, bias=False)
-
- def forward(self, mixture):
- """
- Args:
- mixture: [M, T], M is batch size, T is #samples
- Returns:
- mixture_w: [M, N, K], where K = (T-L)/(L/2)+1 = 2T/L-1
- """
- mixture_w = F.relu(self.conv1d_U(mixture)) # [M, N, K]
- return mixture_w
-
-
-class Decoder(nn.Module):
- def __init__(self, N, L, audio_channels):
- super(Decoder, self).__init__()
- # Hyper-parameter
- self.N, self.L = N, L
- self.audio_channels = audio_channels
- # Components
- self.basis_signals = nn.Linear(N, audio_channels * L, bias=False)
-
- def forward(self, mixture_w, est_mask):
- """
- Args:
- mixture_w: [M, N, K]
- est_mask: [M, C, N, K]
- Returns:
- est_source: [M, C, T]
- """
- # D = W * M
- source_w = torch.unsqueeze(mixture_w, 1) * est_mask # [M, C, N, K]
- source_w = torch.transpose(source_w, 2, 3) # [M, C, K, N]
- # S = DV
- est_source = self.basis_signals(source_w) # [M, C, K, ac * L]
- m, c, k, _ = est_source.size()
- est_source = est_source.view(m, c, k, self.audio_channels, -1).transpose(2, 3).contiguous()
- est_source = overlap_and_add(est_source, self.L // 2) # M x C x ac x T
- return est_source
-
-
-class TemporalConvNet(nn.Module):
- def __init__(self, N, B, H, P, X, R, C, norm_type="gLN", causal=False, mask_nonlinear='relu'):
- """
- Args:
- N: Number of filters in autoencoder
- B: Number of channels in bottleneck 1 × 1-conv block
- H: Number of channels in convolutional blocks
- P: Kernel size in convolutional blocks
- X: Number of convolutional blocks in each repeat
- R: Number of repeats
- C: Number of speakers
- norm_type: BN, gLN, cLN
- causal: causal or non-causal
- mask_nonlinear: use which non-linear function to generate mask
- """
- super(TemporalConvNet, self).__init__()
- # Hyper-parameter
- self.C = C
- self.mask_nonlinear = mask_nonlinear
- # Components
- # [M, N, K] -> [M, N, K]
- layer_norm = ChannelwiseLayerNorm(N)
- # [M, N, K] -> [M, B, K]
- bottleneck_conv1x1 = nn.Conv1d(N, B, 1, bias=False)
- # [M, B, K] -> [M, B, K]
- repeats = []
- for r in range(R):
- blocks = []
- for x in range(X):
- dilation = 2**x
- padding = (P - 1) * dilation if causal else (P - 1) * dilation // 2
- blocks += [
- TemporalBlock(B,
- H,
- P,
- stride=1,
- padding=padding,
- dilation=dilation,
- norm_type=norm_type,
- causal=causal)
- ]
- repeats += [nn.Sequential(*blocks)]
- temporal_conv_net = nn.Sequential(*repeats)
- # [M, B, K] -> [M, C*N, K]
- mask_conv1x1 = nn.Conv1d(B, C * N, 1, bias=False)
- # Put together
- self.network = nn.Sequential(layer_norm, bottleneck_conv1x1, temporal_conv_net,
- mask_conv1x1)
-
- def forward(self, mixture_w):
- """
- Keep this API same with TasNet
- Args:
- mixture_w: [M, N, K], M is batch size
- returns:
- est_mask: [M, C, N, K]
- """
- M, N, K = mixture_w.size()
- score = self.network(mixture_w) # [M, N, K] -> [M, C*N, K]
- score = score.view(M, self.C, N, K) # [M, C*N, K] -> [M, C, N, K]
- if self.mask_nonlinear == 'softmax':
- est_mask = F.softmax(score, dim=1)
- elif self.mask_nonlinear == 'relu':
- est_mask = F.relu(score)
- else:
- raise ValueError("Unsupported mask non-linear function")
- return est_mask
-
-
-class TemporalBlock(nn.Module):
- def __init__(self,
- in_channels,
- out_channels,
- kernel_size,
- stride,
- padding,
- dilation,
- norm_type="gLN",
- causal=False):
- super(TemporalBlock, self).__init__()
- # [M, B, K] -> [M, H, K]
- conv1x1 = nn.Conv1d(in_channels, out_channels, 1, bias=False)
- prelu = nn.PReLU()
- norm = chose_norm(norm_type, out_channels)
- # [M, H, K] -> [M, B, K]
- dsconv = DepthwiseSeparableConv(out_channels, in_channels, kernel_size, stride, padding,
- dilation, norm_type, causal)
- # Put together
- self.net = nn.Sequential(conv1x1, prelu, norm, dsconv)
-
- def forward(self, x):
- """
- Args:
- x: [M, B, K]
- Returns:
- [M, B, K]
- """
- residual = x
- out = self.net(x)
-        # TODO: P = 3 works fine here, but P = 2 may need extra padding.
-        return out + residual  # empirically better without F.relu on the sum
- # return F.relu(out + residual)
-
-
-class DepthwiseSeparableConv(nn.Module):
- def __init__(self,
- in_channels,
- out_channels,
- kernel_size,
- stride,
- padding,
- dilation,
- norm_type="gLN",
- causal=False):
- super(DepthwiseSeparableConv, self).__init__()
- # Use `groups` option to implement depthwise convolution
- # [M, H, K] -> [M, H, K]
- depthwise_conv = nn.Conv1d(in_channels,
- in_channels,
- kernel_size,
- stride=stride,
- padding=padding,
- dilation=dilation,
- groups=in_channels,
- bias=False)
- if causal:
- chomp = Chomp1d(padding)
- prelu = nn.PReLU()
- norm = chose_norm(norm_type, in_channels)
- # [M, H, K] -> [M, B, K]
- pointwise_conv = nn.Conv1d(in_channels, out_channels, 1, bias=False)
- # Put together
- if causal:
- self.net = nn.Sequential(depthwise_conv, chomp, prelu, norm, pointwise_conv)
- else:
- self.net = nn.Sequential(depthwise_conv, prelu, norm, pointwise_conv)
-
- def forward(self, x):
- """
- Args:
- x: [M, H, K]
- Returns:
- result: [M, B, K]
- """
- return self.net(x)
-
-
-class Chomp1d(nn.Module):
- """To ensure the output length is the same as the input.
- """
- def __init__(self, chomp_size):
- super(Chomp1d, self).__init__()
- self.chomp_size = chomp_size
-
- def forward(self, x):
- """
- Args:
- x: [M, H, Kpad]
- Returns:
- [M, H, K]
- """
- return x[:, :, :-self.chomp_size].contiguous()
-
-
-def chose_norm(norm_type, channel_size):
- """The input of normlization will be (M, C, K), where M is batch size,
- C is channel size and K is sequence length.
- """
- if norm_type == "gLN":
- return GlobalLayerNorm(channel_size)
- elif norm_type == "cLN":
- return ChannelwiseLayerNorm(channel_size)
- elif norm_type == "id":
- return nn.Identity()
- else: # norm_type == "BN":
-        # Given input (M, C, K), nn.BatchNorm1d(C) accumulates statistics
-        # along M and K, so this BN usage is correct.
- return nn.BatchNorm1d(channel_size)
-
-
-# TODO: Use nn.LayerNorm to impl cLN to speed up
-class ChannelwiseLayerNorm(nn.Module):
- """Channel-wise Layer Normalization (cLN)"""
- def __init__(self, channel_size):
- super(ChannelwiseLayerNorm, self).__init__()
- self.gamma = nn.Parameter(torch.Tensor(1, channel_size, 1)) # [1, N, 1]
- self.beta = nn.Parameter(torch.Tensor(1, channel_size, 1)) # [1, N, 1]
- self.reset_parameters()
-
- def reset_parameters(self):
- self.gamma.data.fill_(1)
- self.beta.data.zero_()
-
- def forward(self, y):
- """
- Args:
- y: [M, N, K], M is batch size, N is channel size, K is length
- Returns:
- cLN_y: [M, N, K]
- """
- mean = torch.mean(y, dim=1, keepdim=True) # [M, 1, K]
- var = torch.var(y, dim=1, keepdim=True, unbiased=False) # [M, 1, K]
- cLN_y = self.gamma * (y - mean) / torch.pow(var + EPS, 0.5) + self.beta
- return cLN_y
-
-
-class GlobalLayerNorm(nn.Module):
- """Global Layer Normalization (gLN)"""
- def __init__(self, channel_size):
- super(GlobalLayerNorm, self).__init__()
- self.gamma = nn.Parameter(torch.Tensor(1, channel_size, 1)) # [1, N, 1]
- self.beta = nn.Parameter(torch.Tensor(1, channel_size, 1)) # [1, N, 1]
- self.reset_parameters()
-
- def reset_parameters(self):
- self.gamma.data.fill_(1)
- self.beta.data.zero_()
-
- def forward(self, y):
- """
- Args:
- y: [M, N, K], M is batch size, N is channel size, K is length
- Returns:
- gLN_y: [M, N, K]
- """
- # TODO: in torch 1.0, torch.mean() support dim list
- mean = y.mean(dim=1, keepdim=True).mean(dim=2, keepdim=True) # [M, 1, 1]
- var = (torch.pow(y - mean, 2)).mean(dim=1, keepdim=True).mean(dim=2, keepdim=True)
- gLN_y = self.gamma * (y - mean) / torch.pow(var + EPS, 0.5) + self.beta
- return gLN_y
-
-
-if __name__ == "__main__":
- torch.manual_seed(123)
- M, N, L, T = 2, 3, 4, 12
- K = 2 * T // L - 1
- B, H, P, X, R, C, norm_type, causal = 2, 3, 3, 3, 2, 2, "gLN", False
- mixture = torch.randint(3, (M, T))
- # test Encoder
- encoder = Encoder(L, N)
- encoder.conv1d_U.weight.data = torch.randint(2, encoder.conv1d_U.weight.size())
- mixture_w = encoder(mixture)
- print('mixture', mixture)
- print('U', encoder.conv1d_U.weight)
- print('mixture_w', mixture_w)
- print('mixture_w size', mixture_w.size())
-
- # test TemporalConvNet
- separator = TemporalConvNet(N, B, H, P, X, R, C, norm_type=norm_type, causal=causal)
- est_mask = separator(mixture_w)
- print('est_mask', est_mask)
-
- # test Decoder
- decoder = Decoder(N, L)
- est_mask = torch.randint(2, (B, K, C, N))
- est_source = decoder(mixture_w, est_mask)
- print('est_source', est_source)
-
- # test Conv-TasNet
- conv_tasnet = ConvTasNet(N, L, B, H, P, X, R, C, norm_type=norm_type)
- est_source = conv_tasnet(mixture)
- print('est_source', est_source)
- print('est_source size', est_source.size())
diff --git a/spaces/Kedreamix/YoloGesture/predict.py b/spaces/Kedreamix/YoloGesture/predict.py
deleted file mode 100644
index 21e213ae9167eca28d738ba581790f791194663a..0000000000000000000000000000000000000000
--- a/spaces/Kedreamix/YoloGesture/predict.py
+++ /dev/null
@@ -1,194 +0,0 @@
-#-----------------------------------------------------------------------#
-#   predict.py integrates single-image prediction, webcam/video detection,
-#   FPS testing and folder-sweep detection in one script; switch between
-#   them with the mode argument.
-#-----------------------------------------------------------------------#
-import time
-import yaml
-import cv2
-import numpy as np
-from PIL import Image
-from get_yaml import get_config
-from yolo import YOLO
-import argparse
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument('--weights',type=str,default='model_data/yolotiny_SE_ep100.pth',help='initial weights path')
-    parser.add_argument('--tiny',action='store_true',help='use the YOLOv4-tiny model')
-    parser.add_argument('--phi',type=int,default=1,help='attention module type for YOLOv4-tiny')
-    parser.add_argument('--mode',type=str,choices=['dir_predict', 'video', 'fps','predict','heatmap','export_onnx'],default="dir_predict",help='prediction mode')
-    parser.add_argument('--cuda',action='store_true',help='whether to use the GPU')
-    parser.add_argument('--shape',type=int,default=416,help='input image shape')
-    parser.add_argument('--video',type=str,default='',help='video file to run detection on')
-    parser.add_argument('--save-video',type=str,default='',help='where to save the processed video')
-    parser.add_argument('--confidence',type=float,default=0.5,help='only boxes scoring above this confidence are kept')
-    parser.add_argument('--nms_iou',type=float,default=0.3,help='IoU threshold used for non-maximum suppression')
- opt = parser.parse_args()
- print(opt)
-
-    # load the configuration file
- config = get_config()
- yolo = YOLO(opt)
-
-    #----------------------------------------------------------------------------------------------------------#
-    #   mode selects what the script does:
-    #   'predict'       single-image prediction; see the detailed comments below to customise it,
-    #                   e.g. to save the result or crop out detected objects
-    #   'video'         video detection from a webcam or a video file; see the comments below
-    #   'fps'           FPS test on img/street.jpg; see the comments below
-    #   'dir_predict'   detect every image in a folder and save the results; by default it reads
-    #                   the img folder and writes to img_out
-    #   'heatmap'       heatmap visualisation of the prediction; see the comments below
-    #   'export_onnx'   export the model to ONNX; requires pytorch 1.7.1 or later
-    #----------------------------------------------------------------------------------------------------------#
- mode = opt.mode
-    #-------------------------------------------------------------------------#
-    #   crop    whether to crop out each detected object after single-image prediction
-    #   count   whether to count the detected objects
-    #   crop and count only take effect when mode='predict'
-    #-------------------------------------------------------------------------#
- crop = False
- count = False
-    #----------------------------------------------------------------------------------------------------------#
-    #   video_path        path of the video; video_path=0 uses the webcam.
-    #                     To detect a video, set e.g. video_path = "xxx.mp4" to read xxx.mp4 from the root directory.
-    #   video_save_path   where to save the video; '' means do not save.
-    #                     To save, set e.g. video_save_path = "yyy.mp4" to write yyy.mp4 to the root directory.
-    #   video_fps         fps of the saved video
-    #
-    #   video_path, video_save_path and video_fps only take effect when mode='video'.
-    #   When saving, quit with ctrl+c or let the video run to the last frame to complete the save.
-    #----------------------------------------------------------------------------------------------------------#
- video_path = 0 if opt.video == '' else opt.video
- video_save_path = opt.save_video
- video_fps = 25.0
-    #----------------------------------------------------------------------------------------------------------#
-    #   test_interval     how many detections to run when measuring fps;
-    #                     in theory, a larger test_interval gives a more accurate fps.
-    #   fps_image_path    image used for the fps test
-    #
-    #   test_interval and fps_image_path only take effect when mode='fps'.
-    #----------------------------------------------------------------------------------------------------------#
- test_interval = 100
- fps_image_path = "img/up.jpg"
-    #-------------------------------------------------------------------------#
-    #   dir_origin_path   folder containing the images to detect
-    #   dir_save_path     folder where the detected images are saved
-    #
-    #   dir_origin_path and dir_save_path only take effect when mode='dir_predict'.
-    #-------------------------------------------------------------------------#
- dir_origin_path = "img/"
- dir_save_path = "img_out/"
-    #-------------------------------------------------------------------------#
-    #   heatmap_save_path   where to save the heatmap, under model_data by default
-    #
-    #   heatmap_save_path only takes effect when mode='heatmap'.
-    #-------------------------------------------------------------------------#
- heatmap_save_path = "model_data/heatmap_vision.png"
-    #-------------------------------------------------------------------------#
-    #   simplify          simplify the exported ONNX graph
-    #   onnx_save_path    where to save the ONNX model
-    #-------------------------------------------------------------------------#
- simplify = True
- onnx_save_path = "model_data/models.onnx"
-
- if mode == "predict":
-        '''
-        1. To save the detected image, call r_image.save("img.jpg") and edit predict.py accordingly.
-        2. To get the box coordinates, go into yolo.detect_image and read top, left, bottom and right in the drawing section.
-        3. To crop out each detected object, go into yolo.detect_image and slice the original image with those four values.
-        4. To write extra text on the result, such as the number of detections of a particular class, go into
-           yolo.detect_image, test predicted_class in the drawing section (e.g. if predicted_class == 'car':),
-           count the matches and write with draw.text.
-        '''
- while True:
- img = input('Input image filename:')
- try:
- image = Image.open(img)
- except:
- print('Open Error! Try again!')
- continue
- else:
- r_image = yolo.detect_image(image, crop = crop, count=count)
- r_image.show()
- r_image.save(dir_save_path + 'img_result.jpg')
-
- elif mode == "video":
- capture = cv2.VideoCapture(video_path)
- if video_save_path != '':
- fourcc = cv2.VideoWriter_fourcc(*'XVID')
- size = (int(capture.get(cv2.CAP_PROP_FRAME_WIDTH)), int(capture.get(cv2.CAP_PROP_FRAME_HEIGHT)))
- out = cv2.VideoWriter(video_save_path, fourcc, video_fps, size)
-
- ref, frame = capture.read()
- if not ref:
- raise ValueError("未能正确读取摄像头(视频),请注意是否正确安装摄像头(是否正确填写视频路径)。")
-
- fps = 0.0
- while(True):
- t1 = time.time()
-            # read one frame
- ref, frame = capture.read()
- if not ref:
- break
-            # convert BGR to RGB
- frame = cv2.cvtColor(frame,cv2.COLOR_BGR2RGB)
-            # convert to a PIL Image
- frame = Image.fromarray(np.uint8(frame))
-            # run detection
- frame = np.array(yolo.detect_image(frame))
-            # convert RGB back to BGR for OpenCV display
- frame = cv2.cvtColor(frame,cv2.COLOR_RGB2BGR)
-
- fps = ( fps + (1./(time.time()-t1)) ) / 2
- print("fps= %.2f"%(fps))
- frame = cv2.putText(frame, "fps= %.2f"%(fps), (0, 40), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
-
- cv2.imshow("video",frame)
- c= cv2.waitKey(1) & 0xff
- if video_save_path != '':
- out.write(frame)
-
- if c==27:
- capture.release()
- break
-
- print("Video Detection Done!")
- capture.release()
- if video_save_path != '':
- print("Save processed video to the path :" + video_save_path)
- out.release()
- cv2.destroyAllWindows()
-
- elif mode == "fps":
- img = Image.open(fps_image_path)
- tact_time = yolo.get_FPS(img, test_interval)
-        print(str(tact_time) + ' seconds, ' + str(1 / tact_time) + ' FPS, @batch_size 1')
-
- elif mode == "dir_predict":
- import os
-
- from tqdm import tqdm
-
- img_names = os.listdir(dir_origin_path)
- for img_name in tqdm(img_names):
- if img_name.lower().endswith(('.bmp', '.dib', '.png', '.jpg', '.jpeg', '.pbm', '.pgm', '.ppm', '.tif', '.tiff')):
- image_path = os.path.join(dir_origin_path, img_name)
- image = Image.open(image_path)
- r_image = yolo.detect_image(image)
- if not os.path.exists(dir_save_path):
- os.makedirs(dir_save_path)
- r_image.save(os.path.join(dir_save_path, img_name.replace(".jpg", ".png")), quality=95, subsampling=0)
-
- elif mode == "heatmap":
- while True:
- img = input('Input image filename:')
- try:
- image = Image.open(img)
- except:
- print('Open Error! Try again!')
- continue
- else:
- yolo.detect_heatmap(image, heatmap_save_path)
-
- elif mode == "export_onnx":
- yolo.convert_to_onnx(simplify, onnx_save_path)
-
- else:
- raise AssertionError("Please specify the correct mode: 'predict', 'video', 'fps', 'heatmap', 'export_onnx', 'dir_predict'.")
diff --git a/spaces/Kevin676/AutoGPT/autogpt/config/singleton.py b/spaces/Kevin676/AutoGPT/autogpt/config/singleton.py
deleted file mode 100644
index 55b2aeea120bbe51ca837265fcb7fbff467e55f2..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/AutoGPT/autogpt/config/singleton.py
+++ /dev/null
@@ -1,24 +0,0 @@
-"""The singleton metaclass for ensuring only one instance of a class."""
-import abc
-
-
-class Singleton(abc.ABCMeta, type):
- """
- Singleton metaclass for ensuring only one instance of a class.
- """
-
- _instances = {}
-
- def __call__(cls, *args, **kwargs):
- """Call method for the singleton metaclass."""
- if cls not in cls._instances:
- cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
- return cls._instances[cls]
-
-
-class AbstractSingleton(abc.ABC, metaclass=Singleton):
- """
- Abstract singleton class for ensuring only one instance of a class.
- """
-
- pass
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder/fregan/discriminator.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder/fregan/discriminator.py
deleted file mode 100644
index 5f94092634db21102b977c0347e756993edbc2bc..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder/fregan/discriminator.py
+++ /dev/null
@@ -1,303 +0,0 @@
-import torch
-import torch.nn.functional as F
-import torch.nn as nn
-from torch.nn import Conv1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, spectral_norm
-from vocoder.fregan.utils import get_padding
-from vocoder.fregan.stft_loss import stft
-from vocoder.fregan.dwt import DWT_1D
-LRELU_SLOPE = 0.1
-
-
-
-class SpecDiscriminator(nn.Module):
- """docstring for Discriminator."""
-
- def __init__(self, fft_size=1024, shift_size=120, win_length=600, window="hann_window", use_spectral_norm=False):
- super(SpecDiscriminator, self).__init__()
-        norm_f = weight_norm if not use_spectral_norm else spectral_norm
- self.fft_size = fft_size
- self.shift_size = shift_size
- self.win_length = win_length
- self.window = getattr(torch, window)(win_length)
- self.discriminators = nn.ModuleList([
- norm_f(nn.Conv2d(1, 32, kernel_size=(3, 9), padding=(1, 4))),
- norm_f(nn.Conv2d(32, 32, kernel_size=(3, 9), stride=(1,2), padding=(1, 4))),
- norm_f(nn.Conv2d(32, 32, kernel_size=(3, 9), stride=(1,2), padding=(1, 4))),
- norm_f(nn.Conv2d(32, 32, kernel_size=(3, 9), stride=(1,2), padding=(1, 4))),
- norm_f(nn.Conv2d(32, 32, kernel_size=(3, 3), stride=(1,1), padding=(1, 1))),
- ])
-
- self.out = norm_f(nn.Conv2d(32, 1, 3, 1, 1))
-
- def forward(self, y):
-
- fmap = []
- with torch.no_grad():
- y = y.squeeze(1)
-            y = stft(y, self.fft_size, self.shift_size, self.win_length, self.window.to(y.device))  # y.device also works on CPU
- y = y.unsqueeze(1)
- for i, d in enumerate(self.discriminators):
- y = d(y)
- y = F.leaky_relu(y, LRELU_SLOPE)
- fmap.append(y)
-
- y = self.out(y)
- fmap.append(y)
-
- return torch.flatten(y, 1, -1), fmap
-
-class MultiResSpecDiscriminator(torch.nn.Module):
-
- def __init__(self,
- fft_sizes=[1024, 2048, 512],
- hop_sizes=[120, 240, 50],
- win_lengths=[600, 1200, 240],
- window="hann_window"):
-
- super(MultiResSpecDiscriminator, self).__init__()
- self.discriminators = nn.ModuleList([
- SpecDiscriminator(fft_sizes[0], hop_sizes[0], win_lengths[0], window),
- SpecDiscriminator(fft_sizes[1], hop_sizes[1], win_lengths[1], window),
- SpecDiscriminator(fft_sizes[2], hop_sizes[2], win_lengths[2], window)
- ])
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- fmap_rs.append(fmap_r)
- y_d_gs.append(y_d_g)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
-        norm_f = weight_norm if not use_spectral_norm else spectral_norm
- self.dwt1d = DWT_1D()
- self.dwt_conv1 = norm_f(Conv1d(2, 1, 1))
- self.dwt_proj1 = norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0)))
- self.dwt_conv2 = norm_f(Conv1d(4, 1, 1))
- self.dwt_proj2 = norm_f(Conv2d(1, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0)))
- self.dwt_conv3 = norm_f(Conv1d(8, 1, 1))
- self.dwt_proj3 = norm_f(Conv2d(1, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0)))
- self.convs = nn.ModuleList([
- norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
- norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))),
- ])
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
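-        # Three levels of 1-D discrete wavelet decomposition give progressively
-        # downsampled views of the waveform; each level is projected and later
-        # concatenated with the matching convolutional stage.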
- # DWT 1
- x_d1_high1, x_d1_low1 = self.dwt1d(x)
- x_d1 = self.dwt_conv1(torch.cat([x_d1_high1, x_d1_low1], dim=1))
- # 1d to 2d
- b, c, t = x_d1.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x_d1 = F.pad(x_d1, (0, n_pad), "reflect")
- t = t + n_pad
- x_d1 = x_d1.view(b, c, t // self.period, self.period)
-
- x_d1 = self.dwt_proj1(x_d1)
-
- # DWT 2
- x_d2_high1, x_d2_low1 = self.dwt1d(x_d1_high1)
- x_d2_high2, x_d2_low2 = self.dwt1d(x_d1_low1)
- x_d2 = self.dwt_conv2(torch.cat([x_d2_high1, x_d2_low1, x_d2_high2, x_d2_low2], dim=1))
- # 1d to 2d
- b, c, t = x_d2.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x_d2 = F.pad(x_d2, (0, n_pad), "reflect")
- t = t + n_pad
- x_d2 = x_d2.view(b, c, t // self.period, self.period)
-
- x_d2 = self.dwt_proj2(x_d2)
-
- # DWT 3
-
- x_d3_high1, x_d3_low1 = self.dwt1d(x_d2_high1)
- x_d3_high2, x_d3_low2 = self.dwt1d(x_d2_low1)
- x_d3_high3, x_d3_low3 = self.dwt1d(x_d2_high2)
- x_d3_high4, x_d3_low4 = self.dwt1d(x_d2_low2)
- x_d3 = self.dwt_conv3(
- torch.cat([x_d3_high1, x_d3_low1, x_d3_high2, x_d3_low2, x_d3_high3, x_d3_low3, x_d3_high4, x_d3_low4],
- dim=1))
- # 1d to 2d
- b, c, t = x_d3.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x_d3 = F.pad(x_d3, (0, n_pad), "reflect")
- t = t + n_pad
- x_d3 = x_d3.view(b, c, t // self.period, self.period)
-
- x_d3 = self.dwt_proj3(x_d3)
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-        for i, l in enumerate(self.convs):
-            x = l(x)
-            x = F.leaky_relu(x, LRELU_SLOPE)
-            fmap.append(x)
-            # concatenate the matching DWT branch after the first three stages
-            if i == 0:
-                x = torch.cat([x, x_d1], dim=2)
-            elif i == 1:
-                x = torch.cat([x, x_d2], dim=2)
-            elif i == 2:
-                x = torch.cat([x, x_d3], dim=2)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class ResWiseMultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self):
- super(ResWiseMultiPeriodDiscriminator, self).__init__()
- self.discriminators = nn.ModuleList([
- DiscriminatorP(2),
- DiscriminatorP(3),
- DiscriminatorP(5),
- DiscriminatorP(7),
- DiscriminatorP(11),
- ])
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- fmap_rs.append(fmap_r)
- y_d_gs.append(y_d_g)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
-        norm_f = weight_norm if not use_spectral_norm else spectral_norm
- self.dwt1d = DWT_1D()
- self.dwt_conv1 = norm_f(Conv1d(2, 128, 15, 1, padding=7))
- self.dwt_conv2 = norm_f(Conv1d(4, 128, 41, 2, padding=20))
- self.convs = nn.ModuleList([
- norm_f(Conv1d(1, 128, 15, 1, padding=7)),
- norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)),
- norm_f(Conv1d(128, 256, 41, 2, groups=16, padding=20)),
- norm_f(Conv1d(256, 512, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ])
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- # DWT 1
- x_d1_high1, x_d1_low1 = self.dwt1d(x)
- x_d1 = self.dwt_conv1(torch.cat([x_d1_high1, x_d1_low1], dim=1))
-
- # DWT 2
- x_d2_high1, x_d2_low1 = self.dwt1d(x_d1_high1)
- x_d2_high2, x_d2_low2 = self.dwt1d(x_d1_low1)
- x_d2 = self.dwt_conv2(torch.cat([x_d2_high1, x_d2_low1, x_d2_high2, x_d2_low2], dim=1))
-
-        for i, l in enumerate(self.convs):
-            x = l(x)
-            x = F.leaky_relu(x, LRELU_SLOPE)
-            fmap.append(x)
-            # concatenate the matching DWT branch after the first two stages
-            if i == 0:
-                x = torch.cat([x, x_d1], dim=2)
-            elif i == 1:
-                x = torch.cat([x, x_d2], dim=2)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class ResWiseMultiScaleDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(ResWiseMultiScaleDiscriminator, self).__init__()
-        norm_f = weight_norm if not use_spectral_norm else spectral_norm
- self.dwt1d = DWT_1D()
- self.dwt_conv1 = norm_f(Conv1d(2, 1, 1))
- self.dwt_conv2 = norm_f(Conv1d(4, 1, 1))
- self.discriminators = nn.ModuleList([
- DiscriminatorS(use_spectral_norm=True),
- DiscriminatorS(),
- DiscriminatorS(),
- ])
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- # DWT 1
- y_hi, y_lo = self.dwt1d(y)
- y_1 = self.dwt_conv1(torch.cat([y_hi, y_lo], dim=1))
- x_d1_high1, x_d1_low1 = self.dwt1d(y_hat)
- y_hat_1 = self.dwt_conv1(torch.cat([x_d1_high1, x_d1_low1], dim=1))
-
- # DWT 2
- x_d2_high1, x_d2_low1 = self.dwt1d(y_hi)
- x_d2_high2, x_d2_low2 = self.dwt1d(y_lo)
- y_2 = self.dwt_conv2(torch.cat([x_d2_high1, x_d2_low1, x_d2_high2, x_d2_low2], dim=1))
-
- x_d2_high1, x_d2_low1 = self.dwt1d(x_d1_high1)
- x_d2_high2, x_d2_low2 = self.dwt1d(x_d1_low1)
- y_hat_2 = self.dwt_conv2(torch.cat([x_d2_high1, x_d2_low1, x_d2_high2, x_d2_low2], dim=1))
-
- for i, d in enumerate(self.discriminators):
-
- if i == 1:
- y = y_1
- y_hat = y_hat_1
- if i == 2:
- y = y_2
- y_hat = y_hat_2
-
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- fmap_rs.append(fmap_r)
- y_d_gs.append(y_d_g)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
\ No newline at end of file
diff --git a/spaces/KyanChen/RSPrompter/mmpl/utils/labelme_utils.py b/spaces/KyanChen/RSPrompter/mmpl/utils/labelme_utils.py
deleted file mode 100644
index 0981919771a617ca79b29c3ddf96ea14c82fccc6..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmpl/utils/labelme_utils.py
+++ /dev/null
@@ -1,92 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import json
-import os.path
-
-from mmengine.structures import InstanceData
-
-
-class LabelmeFormat:
- """Predict results save into labelme file.
-
- Base on https://github.com/wkentaro/labelme/blob/main/labelme/label_file.py
-
- Args:
- classes (tuple): Model classes name.
- """
-
- def __init__(self, classes: tuple):
- super().__init__()
- self.classes = classes
-
- def __call__(self, pred_instances: InstanceData, metainfo: dict,
- output_path: str, selected_classes: list):
- """Get image data field for labelme.
-
- Args:
- pred_instances (InstanceData): Candidate prediction info.
- metainfo (dict): Meta info of prediction.
- output_path (str): Image file path.
- selected_classes (list): Selected class name.
-
- Labelme file eg.
- {
- "version": "5.1.1",
- "flags": {},
- "imagePath": "/data/cat/1.jpg",
- "imageData": null,
- "imageHeight": 3000,
- "imageWidth": 4000,
- "shapes": [
- {
- "label": "cat",
- "points": [
- [
- 1148.076923076923,
- 1188.4615384615383
- ],
- [
- 2471.1538461538457,
- 2176.923076923077
- ]
- ],
- "group_id": null,
- "shape_type": "rectangle",
- "flags": {}
- },
- {...}
- ]
- }
- """
-
- image_path = os.path.abspath(metainfo['img_path'])
-
- json_info = {
- 'version': '5.1.1',
- 'flags': {},
- 'imagePath': image_path,
- 'imageData': None,
- 'imageHeight': metainfo['ori_shape'][0],
- 'imageWidth': metainfo['ori_shape'][1],
- 'shapes': []
- }
-
- for pred_instance in pred_instances:
- pred_bbox = pred_instance.bboxes.cpu().numpy().tolist()[0]
- pred_label = self.classes[pred_instance.labels]
-
- if selected_classes is not None and \
- pred_label not in selected_classes:
- # filter class name
- continue
-
- sub_dict = {
- 'label': pred_label,
- 'points': [pred_bbox[:2], pred_bbox[2:]],
- 'group_id': None,
- 'shape_type': 'rectangle',
- 'flags': {}
- }
- json_info['shapes'].append(sub_dict)
-
- with open(output_path, 'w', encoding='utf-8') as f_json:
- json.dump(json_info, f_json, ensure_ascii=False, indent=2)
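-
-
-# A minimal usage sketch (not part of the original file); the instance values
-# below are illustrative only.
-if __name__ == '__main__':
-    import torch
-    pred = InstanceData(
-        bboxes=torch.tensor([[10., 20., 200., 300.]]),
-        labels=torch.tensor([0]))
-    formatter = LabelmeFormat(classes=('cat', 'dog'))
-    formatter(pred, metainfo={'img_path': '1.jpg', 'ori_shape': (3000, 4000)},
-              output_path='1.json', selected_classes=None)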
diff --git a/spaces/LaynzKunz/AI-Cover-Gen-Web-Ui/src/infer_pack/commons.py b/spaces/LaynzKunz/AI-Cover-Gen-Web-Ui/src/infer_pack/commons.py
deleted file mode 100644
index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000
--- a/spaces/LaynzKunz/AI-Cover-Gen-Web-Ui/src/infer_pack/commons.py
+++ /dev/null
@@ -1,166 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size * dilation - dilation) / 2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += (
- 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q)
- )
- return kl
-
-
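-# Quick sanity check (sketch): the KL between two identical diagonal Gaussians
-# is zero, since (logs_q - logs_p) - 0.5 + 0.5 * (e^0 + 0) * e^0 = 0:
-# m = torch.zeros(2, 3); logs = torch.zeros(2, 3)
-# print(kl_divergence(m, logs, m, logs))  # tensor of zeros
-
-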
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def slice_segments2(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / (
- num_timescales - 1
- )
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment
- )
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2, 3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1.0 / norm_type)
- return total_norm
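-
-
-# A minimal shape-check sketch (not part of the original file): sequence_mask
-# turns lengths into boolean masks, and generate_path expands per-token
-# durations into a monotonic alignment of shape [b, 1, t_y, t_x].
-if __name__ == "__main__":
-    lengths = torch.tensor([3, 5])
-    print(sequence_mask(lengths))                 # [[T, T, T, F, F], [T, T, T, T, T]]
-    duration = torch.tensor([[[1.0, 2.0, 3.0]]])  # [b=1, 1, t_x=3]
-    attn_mask = torch.ones(1, 1, 6, 3)            # [b, 1, t_y=6, t_x=3]
-    path = generate_path(duration, attn_mask)
-    print(path[0, 0])  # each frame row selects exactly one token column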
diff --git a/spaces/Lewislou/Lewislou-cell-seg-sribd/models/flexible_unet.py b/spaces/Lewislou/Lewislou-cell-seg-sribd/models/flexible_unet.py
deleted file mode 100644
index 3f894bbde499f7c16a5c74fbf74ffe83aecdd914..0000000000000000000000000000000000000000
--- a/spaces/Lewislou/Lewislou-cell-seg-sribd/models/flexible_unet.py
+++ /dev/null
@@ -1,312 +0,0 @@
-# Copyright (c) MONAI Consortium
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-# http://www.apache.org/licenses/LICENSE-2.0
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from typing import List, Optional, Sequence, Tuple, Union
-
-import torch
-from torch import nn
-
-from monai.networks.blocks import UpSample
-from monai.networks.layers.factories import Conv
-from monai.networks.layers.utils import get_act_layer
-from monai.networks.nets import EfficientNetBNFeatures
-from monai.networks.nets.basic_unet import UpCat
-from monai.utils import InterpolateMode
-
-__all__ = ["FlexibleUNet"]
-
-encoder_feature_channel = {
- "efficientnet-b0": (16, 24, 40, 112, 320),
- "efficientnet-b1": (16, 24, 40, 112, 320),
- "efficientnet-b2": (16, 24, 48, 120, 352),
- "efficientnet-b3": (24, 32, 48, 136, 384),
- "efficientnet-b4": (24, 32, 56, 160, 448),
- "efficientnet-b5": (24, 40, 64, 176, 512),
- "efficientnet-b6": (32, 40, 72, 200, 576),
- "efficientnet-b7": (32, 48, 80, 224, 640),
- "efficientnet-b8": (32, 56, 88, 248, 704),
- "efficientnet-l2": (72, 104, 176, 480, 1376),
-}
-
-
-def _get_encoder_channels_by_backbone(backbone: str, in_channels: int = 3) -> tuple:
- """
- Get the encoder output channels by given backbone name.
-
- Args:
-        backbone: name of backbone to generate features, can be from [efficientnet-b0, ..., efficientnet-b8, efficientnet-l2].
- in_channels: channel of input tensor, default to 3.
-
- Returns:
-        A tuple of output feature map channels.
- """
- encoder_channel_tuple = encoder_feature_channel[backbone]
- encoder_channel_list = [in_channels] + list(encoder_channel_tuple)
- encoder_channel = tuple(encoder_channel_list)
- return encoder_channel
-
-
-class UNetDecoder(nn.Module):
- """
- UNet Decoder.
-    This class refers to `segmentation_models.pytorch
-    <https://github.com/qubvel/segmentation_models.pytorch>`_.
-
- Args:
- spatial_dims: number of spatial dimensions.
- encoder_channels: number of output channels for all feature maps in encoder.
- `len(encoder_channels)` should be no less than 2.
- decoder_channels: number of output channels for all feature maps in decoder.
-            `len(decoder_channels)` should equal `len(encoder_channels) - 1`.
- act: activation type and arguments.
- norm: feature normalization type and arguments.
- dropout: dropout ratio.
- bias: whether to have a bias term in convolution blocks in this decoder.
- upsample: upsampling mode, available options are
- ``"deconv"``, ``"pixelshuffle"``, ``"nontrainable"``.
- pre_conv: a conv block applied before upsampling.
- Only used in the "nontrainable" or "pixelshuffle" mode.
- interp_mode: {``"nearest"``, ``"linear"``, ``"bilinear"``, ``"bicubic"``, ``"trilinear"``}
- Only used in the "nontrainable" mode.
- align_corners: set the align_corners parameter for upsample. Defaults to True.
- Only used in the "nontrainable" mode.
- is_pad: whether to pad upsampling features to fit the encoder spatial dims.
-
- """
-
- def __init__(
- self,
- spatial_dims: int,
- encoder_channels: Sequence[int],
- decoder_channels: Sequence[int],
- act: Union[str, tuple],
- norm: Union[str, tuple],
- dropout: Union[float, tuple],
- bias: bool,
- upsample: str,
- pre_conv: Optional[str],
- interp_mode: str,
- align_corners: Optional[bool],
- is_pad: bool,
- ):
-
- super().__init__()
- if len(encoder_channels) < 2:
- raise ValueError("the length of `encoder_channels` should be no less than 2.")
- if len(decoder_channels) != len(encoder_channels) - 1:
-            raise ValueError("`len(decoder_channels)` should equal `len(encoder_channels) - 1`.")
-
- in_channels = [encoder_channels[-1]] + list(decoder_channels[:-1])
- skip_channels = list(encoder_channels[1:-1][::-1]) + [0]
- halves = [True] * (len(skip_channels) - 1)
- halves.append(False)
- blocks = []
- for in_chn, skip_chn, out_chn, halve in zip(in_channels, skip_channels, decoder_channels, halves):
- blocks.append(
- UpCat(
- spatial_dims=spatial_dims,
- in_chns=in_chn,
- cat_chns=skip_chn,
- out_chns=out_chn,
- act=act,
- norm=norm,
- dropout=dropout,
- bias=bias,
- upsample=upsample,
- pre_conv=pre_conv,
- interp_mode=interp_mode,
- align_corners=align_corners,
- halves=halve,
- is_pad=is_pad,
- )
- )
- self.blocks = nn.ModuleList(blocks)
-
- def forward(self, features: List[torch.Tensor], skip_connect: int = 4):
- skips = features[:-1][::-1]
- features = features[1:][::-1]
-
- x = features[0]
- for i, block in enumerate(self.blocks):
- if i < skip_connect:
- skip = skips[i]
- else:
- skip = None
- x = block(x, skip)
-
- return x
-
-
-class SegmentationHead(nn.Sequential):
- """
- Segmentation head.
-    This class refers to `segmentation_models.pytorch
-    <https://github.com/qubvel/segmentation_models.pytorch>`_.
-
- Args:
- spatial_dims: number of spatial dimensions.
- in_channels: number of input channels for the block.
- out_channels: number of output channels for the block.
- kernel_size: kernel size for the conv layer.
- act: activation type and arguments.
- scale_factor: multiplier for spatial size. Has to match input size if it is a tuple.
-
- """
-
- def __init__(
- self,
- spatial_dims: int,
- in_channels: int,
- out_channels: int,
- kernel_size: int = 3,
- act: Optional[Union[Tuple, str]] = None,
- scale_factor: float = 1.0,
- ):
-
- conv_layer = Conv[Conv.CONV, spatial_dims](
- in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, padding=kernel_size // 2
- )
- up_layer: nn.Module = nn.Identity()
- if scale_factor > 1.0:
- up_layer = UpSample(
- spatial_dims=spatial_dims,
- scale_factor=scale_factor,
- mode="nontrainable",
- pre_conv=None,
- interp_mode=InterpolateMode.LINEAR,
- )
- if act is not None:
- act_layer = get_act_layer(act)
- else:
- act_layer = nn.Identity()
- super().__init__(conv_layer, up_layer, act_layer)
-
-
-class FlexibleUNet(nn.Module):
- """
- A flexible implementation of UNet-like encoder-decoder architecture.
- """
-
- def __init__(
- self,
- in_channels: int,
- out_channels: int,
- backbone: str,
- pretrained: bool = False,
- decoder_channels: Tuple = (256, 128, 64, 32, 16),
- spatial_dims: int = 2,
- norm: Union[str, tuple] = ("batch", {"eps": 1e-3, "momentum": 0.1}),
- act: Union[str, tuple] = ("relu", {"inplace": True}),
- dropout: Union[float, tuple] = 0.0,
- decoder_bias: bool = False,
- upsample: str = "nontrainable",
- interp_mode: str = "nearest",
- is_pad: bool = True,
- ) -> None:
- """
-        A flexible implementation of UNet, in which the backbone/encoder can be replaced
-        with any efficient network. Currently the input must have 2 or 3 spatial
-        dimensions, and the spatial size of each dimension must be a multiple of 32
-        if the `is_pad` parameter is False.
-
- Args:
- in_channels: number of input channels.
- out_channels: number of output channels.
- backbone: name of backbones to initialize, only support efficientnet right now,
- can be from [efficientnet-b0,..., efficientnet-b8, efficientnet-l2].
- pretrained: whether to initialize pretrained ImageNet weights, only available
- for spatial_dims=2 and batch norm is used, default to False.
- decoder_channels: number of output channels for all feature maps in decoder.
-            `len(decoder_channels)` should equal `len(encoder_channels) - 1`, default
-            to (256, 128, 64, 32, 16).
- spatial_dims: number of spatial dimensions, default to 2.
- norm: normalization type and arguments, default to ("batch", {"eps": 1e-3,
- "momentum": 0.1}).
- act: activation type and arguments, default to ("relu", {"inplace": True}).
- dropout: dropout ratio, default to 0.0.
- decoder_bias: whether to have a bias term in decoder's convolution blocks.
- upsample: upsampling mode, available options are``"deconv"``, ``"pixelshuffle"``,
- ``"nontrainable"``.
- interp_mode: {``"nearest"``, ``"linear"``, ``"bilinear"``, ``"bicubic"``, ``"trilinear"``}
- Only used in the "nontrainable" mode.
- is_pad: whether to pad upsampling features to fit features from encoder. Default to True.
-            If this parameter is set to "True", the spatial dims of the network input can be
-            arbitrary sizes, which is not supported by TensorRT. Otherwise, each must be a multiple of 32.
- """
- super().__init__()
-
- if backbone not in encoder_feature_channel:
- raise ValueError(f"invalid model_name {backbone} found, must be one of {encoder_feature_channel.keys()}.")
-
- if spatial_dims not in (2, 3):
- raise ValueError("spatial_dims can only be 2 or 3.")
-
- adv_prop = "ap" in backbone
-
- self.backbone = backbone
- self.spatial_dims = spatial_dims
- model_name = backbone
- encoder_channels = _get_encoder_channels_by_backbone(backbone, in_channels)
- self.encoder = EfficientNetBNFeatures(
- model_name=model_name,
- pretrained=pretrained,
- in_channels=in_channels,
- spatial_dims=spatial_dims,
- norm=norm,
- adv_prop=adv_prop,
- )
- self.decoder = UNetDecoder(
- spatial_dims=spatial_dims,
- encoder_channels=encoder_channels,
- decoder_channels=decoder_channels,
- act=act,
- norm=norm,
- dropout=dropout,
- bias=decoder_bias,
- upsample=upsample,
- interp_mode=interp_mode,
- pre_conv=None,
- align_corners=None,
- is_pad=is_pad,
- )
- self.dist_head = SegmentationHead(
- spatial_dims=spatial_dims,
- in_channels=decoder_channels[-1],
- out_channels=32,
- kernel_size=1,
- act='relu',
- )
- self.prob_head = SegmentationHead(
- spatial_dims=spatial_dims,
- in_channels=decoder_channels[-1],
- out_channels=1,
- kernel_size=1,
- act='sigmoid',
- )
-
- def forward(self, inputs: torch.Tensor):
- """
- Do a typical encoder-decoder-header inference.
-
- Args:
- inputs: input should have spatially N dimensions ``(Batch, in_channels, dim_0[, dim_1, ..., dim_N])``,
- N is defined by `dimensions`.
-
- Returns:
- A torch Tensor of "raw" predictions in shape ``(Batch, out_channels, dim_0[, dim_1, ..., dim_N])``.
-
- """
- x = inputs
- enc_out = self.encoder(x)
- decoder_out = self.decoder(enc_out)
- dist = self.dist_head(decoder_out)
- prob = self.prob_head(decoder_out)
- return dist,prob
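-
-
-# A minimal instantiation sketch (not part of the original file), assuming MONAI
-# and its EfficientNet backbone are available; pretrained=False avoids downloads.
-if __name__ == "__main__":
-    net = FlexibleUNet(in_channels=3, out_channels=3, backbone="efficientnet-b0")
-    x = torch.randn(1, 3, 256, 256)  # spatial size must be a multiple of 32
-    dist, prob = net(x)
-    print(dist.shape, prob.shape)    # (1, 32, 256, 256) and (1, 1, 256, 256)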
diff --git a/spaces/Liu-LAB/GPT-academic/tests/test_llms.py b/spaces/Liu-LAB/GPT-academic/tests/test_llms.py
deleted file mode 100644
index 75e230327eec6d1e8869dccd85a576b94fb51f26..0000000000000000000000000000000000000000
--- a/spaces/Liu-LAB/GPT-academic/tests/test_llms.py
+++ /dev/null
@@ -1,32 +0,0 @@
-# """
-# Unit tests for the individual LLM bridge modules
-# """
-def validate_path():
- import os, sys
- dir_name = os.path.dirname(__file__)
- root_dir_assume = os.path.abspath(os.path.dirname(__file__) + '/..')
- os.chdir(root_dir_assume)
- sys.path.append(root_dir_assume)
-
-validate_path() # validate path so you can run from base directory
-if __name__ == "__main__":
- # from request_llm.bridge_newbingfree import predict_no_ui_long_connection
- # from request_llm.bridge_moss import predict_no_ui_long_connection
- # from request_llm.bridge_jittorllms_pangualpha import predict_no_ui_long_connection
- # from request_llm.bridge_jittorllms_llama import predict_no_ui_long_connection
- # from request_llm.bridge_claude import predict_no_ui_long_connection
- # from request_llm.bridge_internlm import predict_no_ui_long_connection
- # from request_llm.bridge_qwen import predict_no_ui_long_connection
- from request_llm.bridge_spark import predict_no_ui_long_connection
-
- llm_kwargs = {
- 'max_length': 4096,
- 'top_p': 1,
- 'temperature': 1,
- }
-
-    result = predict_no_ui_long_connection(inputs="请问什么是质子?",  # "What is a proton?"
-                                           llm_kwargs=llm_kwargs,
-                                           history=["你好", "我好!"],  # ["Hello", "I'm fine!"]
-                                           sys_prompt="")
- print('final result:', result)
diff --git a/spaces/Loke-60000/mio-amadeus/app.py b/spaces/Loke-60000/mio-amadeus/app.py
deleted file mode 100644
index 3283c4c6077d757f281d3959460909cec5203230..0000000000000000000000000000000000000000
--- a/spaces/Loke-60000/mio-amadeus/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/mio/amadeus").launch()
\ No newline at end of file
diff --git a/spaces/LuxOAI/ChatGpt-Web/app/layout.tsx b/spaces/LuxOAI/ChatGpt-Web/app/layout.tsx
deleted file mode 100644
index b0ce1a1b0e363bfe55aea8d85191e2d451380572..0000000000000000000000000000000000000000
--- a/spaces/LuxOAI/ChatGpt-Web/app/layout.tsx
+++ /dev/null
@@ -1,48 +0,0 @@
-/* eslint-disable @next/next/no-page-custom-font */
-import "./styles/globals.scss";
-import "./styles/markdown.scss";
-import "./styles/highlight.scss";
-import { getBuildConfig } from "./config/build";
-
-const buildConfig = getBuildConfig();
-
-export const metadata = {
- title: "ChatGPT Next Web",
- description: "Your personal ChatGPT Chat Bot.",
- appleWebApp: {
- title: "ChatGPT Next Web",
- statusBarStyle: "default",
- },
- themeColor: "#fafafa",
-};
-
-export default function RootLayout({
- children,
-}: {
- children: React.ReactNode;
-}) {
-  return (
-    <html lang="en">
-      <head>
-        <meta
-          name="viewport"
-          content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no"
-        />
-        <meta name="theme-color" content="#fafafa" />
-        <meta name="config" content={JSON.stringify(buildConfig)} />
-        <link rel="manifest" href="/site.webmanifest"></link>
-        <script src="/serviceWorkerRegister.js" defer></script>
-      </head>
-      <body>{children}</body>
-    </html>
-  );
-}
diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/inference_core.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/inference_core.py
deleted file mode 100644
index b696cbf8884ac79e992d9e7c1da0be7fb5f3c74b..0000000000000000000000000000000000000000
--- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/inference_core.py
+++ /dev/null
@@ -1,107 +0,0 @@
-from XMem.inference.memory_manager import MemoryManager
-from XMem.model.network import XMem
-from XMem.model.aggregate import aggregate
-
-from XMem.util.tensor_util import pad_divide_by, unpad
-
-
-class InferenceCore:
- def __init__(self, network:XMem, config):
- self.config = config
- self.network = network
- self.mem_every = config['mem_every']
- self.deep_update_every = config['deep_update_every']
- self.enable_long_term = config['enable_long_term']
-
- # if deep_update_every < 0, synchronize deep update with memory frame
- self.deep_update_sync = (self.deep_update_every < 0)
-
- self.clear_memory()
- self.all_labels = None
-
- def clear_memory(self):
- self.curr_ti = -1
- self.last_mem_ti = 0
- if not self.deep_update_sync:
- self.last_deep_update_ti = -self.deep_update_every
- self.memory = MemoryManager(config=self.config)
-
- def update_config(self, config):
- self.mem_every = config['mem_every']
- self.deep_update_every = config['deep_update_every']
- self.enable_long_term = config['enable_long_term']
-
- # if deep_update_every < 0, synchronize deep update with memory frame
- self.deep_update_sync = (self.deep_update_every < 0)
- self.memory.update_config(config)
-
- def set_all_labels(self, all_labels):
- # self.all_labels = [l.item() for l in all_labels]
- self.all_labels = all_labels
-
- def step(self, image, mask=None, valid_labels=None, end=False):
- # image: 3*H*W
- # mask: num_objects*H*W or None
- self.curr_ti += 1
- image, self.pad = pad_divide_by(image, 16)
- image = image.unsqueeze(0) # add the batch dimension
-
- is_mem_frame = ((self.curr_ti-self.last_mem_ti >= self.mem_every) or (mask is not None)) and (not end)
- need_segment = (self.curr_ti > 0) and ((valid_labels is None) or (len(self.all_labels) != len(valid_labels)))
- is_deep_update = (
- (self.deep_update_sync and is_mem_frame) or # synchronized
- (not self.deep_update_sync and self.curr_ti-self.last_deep_update_ti >= self.deep_update_every) # no-sync
- ) and (not end)
- is_normal_update = (not self.deep_update_sync or not is_deep_update) and (not end)
-
- key, shrinkage, selection, f16, f8, f4 = self.network.encode_key(image,
- need_ek=(self.enable_long_term or need_segment),
- need_sk=is_mem_frame)
- multi_scale_features = (f16, f8, f4)
-
-        # segment the current frame if needed
- if need_segment:
- memory_readout = self.memory.match_memory(key, selection).unsqueeze(0)
- hidden, _, pred_prob_with_bg = self.network.segment(multi_scale_features, memory_readout,
- self.memory.get_hidden(), h_out=is_normal_update, strip_bg=False)
- # remove batch dim
- pred_prob_with_bg = pred_prob_with_bg[0]
- pred_prob_no_bg = pred_prob_with_bg[1:]
- if is_normal_update:
- self.memory.set_hidden(hidden)
- else:
- pred_prob_no_bg = pred_prob_with_bg = None
-
- # use the input mask if any
- if mask is not None:
- mask, _ = pad_divide_by(mask, 16)
-
- if pred_prob_no_bg is not None:
- # if we have a predicted mask, we work on it
- # make pred_prob_no_bg consistent with the input mask
- mask_regions = (mask.sum(0) > 0.5)
- pred_prob_no_bg[:, mask_regions] = 0
- # shift by 1 because mask/pred_prob_no_bg do not contain background
- mask = mask.type_as(pred_prob_no_bg)
- if valid_labels is not None:
- shift_by_one_non_labels = [i for i in range(pred_prob_no_bg.shape[0]) if (i+1) not in valid_labels]
- # non-labelled objects are copied from the predicted mask
- mask[shift_by_one_non_labels] = pred_prob_no_bg[shift_by_one_non_labels]
- pred_prob_with_bg = aggregate(mask, dim=0)
-
- # also create new hidden states
- self.memory.create_hidden_state(len(self.all_labels), key)
-
- # save as memory if needed
- if is_mem_frame:
- value, hidden = self.network.encode_value(image, f16, self.memory.get_hidden(),
- pred_prob_with_bg[1:].unsqueeze(0), is_deep_update=is_deep_update)
- self.memory.add_memory(key, shrinkage, value, self.all_labels,
- selection=selection if self.enable_long_term else None)
- self.last_mem_ti = self.curr_ti
-
- if is_deep_update:
- self.memory.set_hidden(hidden)
- self.last_deep_update_ti = self.curr_ti
-
- return unpad(pred_prob_with_bg, self.pad)
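-
-
-# A minimal driving-loop sketch (not part of the original file), assuming an
-# initialized XMem `network`, a `config` dict with the keys read above, and
-# CHW float frames already on the right device:
-#
-#     processor = InferenceCore(network, config)
-#     processor.set_all_labels(list(range(1, num_objects + 1)))
-#     for ti, frame in enumerate(frames):          # frame: 3*H*W in [0, 1]
-#         mask = first_mask if ti == 0 else None   # num_objects*H*W on frame 0
-#         prob = processor.step(frame, mask)       # (num_objects+1)*H*W probs
-#         pred = prob.argmax(dim=0)                # 0 is background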
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/decode_heads/cc_head.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/decode_heads/cc_head.py
deleted file mode 100644
index 5b9abb4e747f92657f4220b29788539340986c00..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/decode_heads/cc_head.py
+++ /dev/null
@@ -1,42 +0,0 @@
-import torch
-
-from ..builder import HEADS
-from .fcn_head import FCNHead
-
-try:
- from annotator.uniformer.mmcv.ops import CrissCrossAttention
-except ModuleNotFoundError:
- CrissCrossAttention = None
-
-
-@HEADS.register_module()
-class CCHead(FCNHead):
- """CCNet: Criss-Cross Attention for Semantic Segmentation.
-
- This head is the implementation of `CCNet
- `_.
-
- Args:
- recurrence (int): Number of recurrence of Criss Cross Attention
- module. Default: 2.
- """
-
- def __init__(self, recurrence=2, **kwargs):
- if CrissCrossAttention is None:
- raise RuntimeError('Please install mmcv-full for '
- 'CrissCrossAttention ops')
- super(CCHead, self).__init__(num_convs=2, **kwargs)
- self.recurrence = recurrence
- self.cca = CrissCrossAttention(self.channels)
-
- def forward(self, inputs):
- """Forward function."""
- x = self._transform_inputs(inputs)
- output = self.convs[0](x)
- for _ in range(self.recurrence):
- output = self.cca(output)
- output = self.convs[1](output)
- if self.concat_input:
- output = self.conv_cat(torch.cat([x, output], dim=1))
- output = self.cls_seg(output)
- return output
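-
-
-# A hedged config sketch (not part of the original file): registry-registered
-# heads such as CCHead are normally selected from an mmseg model config, e.g.
-#
-#     model = dict(
-#         decode_head=dict(
-#             type='CCHead',
-#             in_channels=2048,
-#             in_index=3,
-#             channels=512,
-#             recurrence=2,
-#             num_classes=19))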
diff --git a/spaces/MirageML/depth2img/README.md b/spaces/MirageML/depth2img/README.md
deleted file mode 100644
index b2c0a0f85a621ceaa4158c66e5fa542eb5693d84..0000000000000000000000000000000000000000
--- a/spaces/MirageML/depth2img/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Mirage Depth2Img
-emoji: 🔥🖼
-colorFrom: blue
-colorTo: green
-sdk: gradio
-sdk_version: 3.11.0
-app_file: app.py
-pinned: false
-duplicated_from: radames/stable-diffusion-depth2img
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/MirageML/sjc/guided_diffusion/README.md b/spaces/MirageML/sjc/guided_diffusion/README.md
deleted file mode 100644
index 4afc26c63af01a48a86f76a0b08f1c26161747c7..0000000000000000000000000000000000000000
--- a/spaces/MirageML/sjc/guided_diffusion/README.md
+++ /dev/null
@@ -1,5 +0,0 @@
-Selected modules from OpenAI's [guided diffusion](https://github.com/openai/guided-diffusion), retrieved at commit `22e0df8183507e13a7813f8d38d51b072ca1e67c`.
-
-It's a bare-minimum set of files needed to run their pretrained models. You can download the model checkpoints by following the instructions in their repository README.
-
-Some modifications were made to remove the distributed-processing utilities in order to reduce code complexity.
diff --git a/spaces/MrSinan/Reconstruction/app.py b/spaces/MrSinan/Reconstruction/app.py
deleted file mode 100644
index 4b05006fb241eef0e5db0e50d3e28127555b51ab..0000000000000000000000000000000000000000
--- a/spaces/MrSinan/Reconstruction/app.py
+++ /dev/null
@@ -1,781 +0,0 @@
-
-
-# example of face detection with mtcnn
-from __future__ import print_function, division
-
-import copy
-import glob
-import pathlib
-import random
-import shutil
-
-import cv2
-import gradio as gr
-import matplotlib.pyplot as plt
-import numpy as np
-from matplotlib import pyplot
-from mtcnn.mtcnn import MTCNN
-from numpy import asarray
-from PIL import Image
-
-from mask_the_face import *
-
-import tensorflow
-import tensorflow as tf
-import tensorflow.keras
-import tensorflow.keras.backend as K
-import tensorflow.keras.utils as utils
-from tensorflow import keras
-from tensorflow.keras import Model, layers, losses
-from tensorflow.keras.applications import EfficientNetB0, InceptionResNetV2, VGG16
-from tensorflow.keras.applications.inception_resnet_v2 import preprocess_input
-from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
-from tensorflow.keras.layers import (Activation, BatchNormalization, Conv2D,
-                                     Dense, Dropout, Embedding, Flatten,
-                                     GaussianNoise, GlobalAveragePooling2D,
-                                     Input, LeakyReLU, MaxPooling2D, Reshape,
-                                     UpSampling2D, ZeroPadding2D, multiply)
-from tensorflow.keras.models import Sequential
-from tensorflow.keras.optimizers import Adagrad, Adam, RMSprop, SGD
-from tensorflow.keras.optimizers import Adam as adam
-from tensorflow.keras.preprocessing.image import ImageDataGenerator
-from tensorflow.keras.regularizers import l2
-from tensorflow.keras.utils import plot_model, to_categorical
-# import keras_tuner as kt
-
-def ssim_l1_loss(gt, y_pred, max_val=2.0, l1_weight=1.0):
- """
- Computes SSIM loss with L1 normalization
- @param gt: Ground truth image
- @param y_pred: Predicted image
-    @param max_val: Dynamic range of the input images (2.0 for images scaled to [-1, 1])
- @param l1_weight: Weight of L1 normalization
- @return: SSIM L1 loss
- """
- ssim_loss = 1 - tf.reduce_mean(tf.image.ssim(gt, y_pred, max_val=max_val))
- l1 = tf.keras.metrics.mean_absolute_error(gt, y_pred)
- return ssim_loss + tf.cast(l1 * l1_weight, tf.float32)
-
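-# Quick sanity check (sketch), assuming inputs scaled to [-1, 1] to match
-# max_val=2.0: on identical images both the SSIM and L1 terms vanish.
-# _img = tf.zeros((1, 64, 64, 3))
-# print(float(tf.reduce_mean(ssim_l1_loss(_img, _img))))  # 0.0
-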
-class GAN():
- def __init__(self,Xpointers,Ypointers,valX,valY,BigBatchSize,BinaryEnabled=False,BigBatchEnable=False,loading=True,printModel=False):
- self.Xpoint= Xpointers
- self.Ypoint= Ypointers
- self.X=''
- self.Y=''
- self.Binary=''
- self.DataSize=BigBatchSize
- self.genEnable=BigBatchEnable
- self.loading=loading
- self.PrintOut=printModel
- if self.loading:
- self.valX=self.get_all_images(valX)
- self.valY=self.get_all_images(valY)
- self.BestValLoss=1000
- self.BinaryEnabled=BinaryEnabled
- if self.loading:
- if self.BinaryEnabled:
- self.Binary=self.GetBinary(self.valY,self.valX)
- self.ChangeToGreen('val')
- optimizer = Adam(0.0010,)
-
- # # Build and compile the discriminator
-
- self.discriminator_glo = self.build_discriminator()
- self.discriminator_glo.compile(loss='binary_crossentropy',
- optimizer=optimizer,
- metrics=['accuracy'])
- self.discriminator_loc = self.build_local_discriminator()
- self.discriminator_loc.compile(loss='binary_crossentropy',
- optimizer=optimizer,
- metrics=['accuracy'])
-
-
- self.generator,self.predictor = self.build_generator()
-
-
-
- GenOut = self.generator.output
-
-
-
- valid = self.discriminator_glo(GenOut[0])
- self.discriminator_glo.trainable = False
-
- valid2 = self.discriminator_loc(GenOut)
- self.discriminator_loc.trainable = False
-
- self.combined = Model(self.generator.input , [self.generator.output[0], valid,valid2])
- self.combined.compile(loss=[ssim_l1_loss, 'binary_crossentropy','binary_crossentropy'],
- loss_weights=[0.35, 0.50,1],
- optimizer=optimizer)
- if self.PrintOut:
- self.generator.summary()
- self.discriminator_loc.summary()
- self.discriminator_glo.summary()
- self.combined.summary()
-
- if self.loading:
- self.getBigBatch()
-
- def GetBinary(self,Org,Masked):
- allBinary=[]
- for i,x in enumerate(Masked):
-
- diff = cv2.absdiff(Org[i], Masked[i])
- gray=cv2.cvtColor(diff,cv2.COLOR_BGR2GRAY)
- _, diff2 = cv2.threshold(gray, 9, 255, cv2.THRESH_BINARY)
- img_median = cv2.medianBlur(diff2, 3)
- img_median = img_median/255
- allBinary.append(img_median)
- return np.array(allBinary)
-
-
-    def get_all_images(self, classes):
-        allImages = []
-        for sample in classes:
-            org_img = cv2.imread(sample)
-            org_img = cv2.resize(org_img, (256, 256))
-            org_img = cv2.cvtColor(org_img, cv2.COLOR_BGR2RGB)
-            # scaling to [-1, 1] is left to the callers
-            allImages.append(org_img)
-        return np.array(allImages)
-
- def ChangeToGreen(self,data='train'):
- if data=='train':
- for i,x in enumerate(self.X):
- self.X[i][self.Binary[i]!=0]=(1,255,1)
- else:
- for i,x in enumerate(self.valX):
- self.valX[i][self.Binary[i]!=0]=(1,255,1)
-
- def getBigBatch(self):
- del self.X
- del self.Y
- del self.Binary
- if self.genEnable:
- idx = np.random.randint(0, self.Xpoint.shape[0], self.DataSize)
- currentX=self.Xpoint[idx]
- currentY=self.Ypoint[idx]
- self.X=self.get_all_images(currentX)
- self.Y=self.get_all_images(currentY)
- else:
- self.X=self.get_all_images(self.Xpoint)
- self.Y=self.get_all_images(self.Ypoint)
- if self.BinaryEnabled:
-
- self.Binary=self.GetBinary(self.Y,self.X)
- self.ChangeToGreen('train')
- self.Binary=self.Binary.reshape(self.Binary.shape[0],256,256,1)
-
-
-
- def downsample(self,filters, size, apply_batchnorm=True):
-
-
- result = tf.keras.Sequential()
- result.add(
- tf.keras.layers.Conv2D(filters, size, strides=2, padding='same',))
- result.add(tf.keras.layers.ReLU())
- result.add(
- tf.keras.layers.Conv2D(filters, size, padding='same',))
- result.add(tf.keras.layers.ReLU())
-
- if apply_batchnorm:
- result.add(tf.keras.layers.BatchNormalization())
-
- return result
-
-
-
-
- def upsample(self,filters, size, apply_dropout=False):
-
-
- result = tf.keras.Sequential()
- result.add(
- tf.keras.layers.Conv2DTranspose(filters, size, strides=2,
- padding='same'))
- result.add(tf.keras.layers.ReLU())
- result.add(
- tf.keras.layers.Conv2DTranspose(filters, size,
- padding='same'))
-
-
-
-
- result.add(tf.keras.layers.ReLU())
- result.add(tf.keras.layers.BatchNormalization())
- if apply_dropout:
- result.add(tf.keras.layers.Dropout(0.2))
- return result
-
-
-
- def build_generator(self):
- inputs = tf.keras.layers.Input(shape=[256, 256, 3])
- binary= tf.keras.layers.Input(shape=[256, 256, 1])
- down_stack = [
- self.downsample(128, 3, apply_batchnorm=False), # (batch_size, 128, 128, 64)
- self.downsample(256, 3), # (batch_size, 64, 64, 128)
- self.downsample(256, 3), # (batch_size, 64, 64, 128)
- self.downsample(256, 3), # (batch_size, 64, 64, 128)
- self.downsample(256, 3), # (batch_size, 32, 32, 256)
- self.downsample(512, 3), # (batch_size, 32, 32, 256)
- self.downsample(512, 3), # (batch_size, 8, 8, 512)
- ]
-
- up_stack = [
- self.upsample(512, 3, apply_dropout=True), # (batch_size, 8, 8, 1024)
- self.upsample(512, 3), # (batch_size, 64, 64, 256)
- self.upsample(256, 3,apply_dropout=True), # (batch_size, 64, 64, 256)
- self.upsample(256, 3), # (batch_size, 64, 64, 256)
- self.upsample(256, 3,), # (batch_size, 64, 64, 256)
- self.upsample(256, 3), # (batch_size, 64, 64, 256)
- self.upsample(128, 3,), # (batch_size, 128, 128, 128)
- ]
- down_stack2 = [
- self.downsample(128, 5, apply_batchnorm=False), # (batch_size, 128, 128, 64)
- self.downsample(128, 5), # (batch_size, 64, 64, 128)
- self.downsample(256, 5), # (batch_size, 32, 32, 256)
- self.downsample(256, 5), # (batch_size, 32, 32, 256)
- self.downsample(256, 5), # (batch_size, 32, 32, 256)
- self.downsample(512, 5), # (batch_size, 8, 8, 512)
- ]
-
-
- up_stack2 = [
- self.upsample(512, 5, apply_dropout=True), # (batch_size, 8, 8, 1024)
- self.upsample(256, 5), # (batch_size, 64, 64, 256)
- self.upsample(256, 5,apply_dropout=True), # (batch_size, 64, 64, 256)
- self.upsample(256, 5), # (batch_size, 64, 64, 256)
- self.upsample(128, 5,), # (batch_size, 64, 64, 256)
- self.upsample(128, 5), # (batch_size, 128, 128, 128)
- ]
-
-
- initializer = tf.random_normal_initializer(0., 0.02)
- last = tf.keras.layers.Conv2DTranspose(3, 3,
- strides=2,
- padding='same',
- name='GenOut',
- activation='tanh') # (batch_size, 256, 256, 3)
- last2 = tf.keras.layers.Conv2DTranspose(3, 3,
- strides=2,
- padding='same',
- name='GenOut2',
- activation='tanh') # (batch_size, 256, 256, 3)
-
- x = inputs
-
- # Downsampling through the model
- skips = []
- for down in down_stack:
- x = down(x)
- skips.append(x)
-
- skips = reversed(skips[:-1])
-
- # Upsampling and establishing the skip connections
- for up, skip in zip(up_stack, skips):
- x = up(x)
- x = tf.keras.layers.Concatenate()([x, skip])
-
- x = last(x)
-
-
- y = inputs
-
- # Downsampling through the model
- skips = []
- for down in down_stack2:
- y = down(y)
- skips.append(y)
-
- skips = reversed(skips[:-1])
-
- # Upsampling and establishing the skip connections
- for up, skip in zip(up_stack2, skips):
- y= up(y)
- y = tf.keras.layers.Concatenate()([y, skip])
-
- y = last2(y)
-
- z= tf.keras.layers.Average()([x,y])
- model1=tf.keras.Model(inputs=[inputs,binary], outputs=[z,binary])
- model2=tf.keras.Model(inputs=inputs, outputs=z)
- return model1,model2
-
-
-
-
-
- def build_discriminator(self):
- inputs = Input(shape=[256, 256, 3])
-
- facenetmodel = Flatten()
- # facenetmodel.load_weights('/content/drive/MyDrive/facenet_keras_weights.h5')
- # for layer in facenetmodel.layers[:-50]:
- # layer.trainable = False
-
- # Augment data.
- augmented = keras.Sequential([layers.Resizing(160, 160),],name="data_augmentation",)(inputs)
- # This is 'bootstrapping' a new top_model onto the pretrained layers.
- top_model = facenetmodel(augmented)
- top_model = Dropout(0.5)(top_model)
- top_model = BatchNormalization()(top_model)
- # top_model = Flatten(name="flatten")(top_model)
-
- output_layer = Dense(1, activation='sigmoid')(top_model)
-
-
-
-
-
- return Model(inputs=inputs, outputs=output_layer,name='Discriminator')
-
- def build_local_discriminator(self):
- img = Input(shape=[256, 256, 3])
- binary = Input(shape=[256, 256, 1])
- bitAND=tf.keras.layers.Lambda(lambda x: tf.math.multiply(x[0], x[1]))([img,binary])
- facenetmodel = Flatten()
- # facenetmodel.load_weights('/content/drive/MyDrive/facenet_keras_weights.h5')
- # for layer in facenetmodel.layers[:-50]:
- # layer.trainable = False
-
- # Augment data.
- augmented = keras.Sequential([layers.Resizing(160, 160),],name="data_augmentation",)(bitAND)
- # This is 'bootstrapping' a new top_model onto the pretrained layers.
- top_model = facenetmodel(augmented)
- top_model = Dropout(0.5)(top_model)
- top_model = BatchNormalization()(top_model)
- # top_model = Flatten(name="flatten")(top_model)
-
- output_layer = Dense(1, activation='sigmoid')(top_model)
-
-
-
-
-
- return Model(inputs=[img,binary], outputs=output_layer,name='Discriminator_local')
-
-
-
-
-
-
-
- def train(self,epochs,batch_size,imagesSavePath,modelPath, sample_interval=50,BigBatchInterval=1000,modelInterval=50):
-
-
-
- xVal=self.valX/127.5 - 1
- yVal=self.valY/127.5 - 1
- # Adversarial ground truths
-        valid = np.ones((batch_size, 1))
-        fake = np.zeros((batch_size, 1))
- for epoch in range(epochs):
-
-
-
- # Select a random batch of images
- idx = np.random.randint(0, self.X.shape[0], batch_size)
-
- masked_imgs = self.X[idx]
- org_imgs= self.Y[idx]
-
- masked_imgs = masked_imgs /127.5 - 1.
- org_imgs= org_imgs /127.5 - 1.
- if self.BinaryEnabled:
- binary= self.Binary[idx]
- org_local= tf.math.multiply(org_imgs, binary)
-
- gen_missing = self.generator.predict([masked_imgs,binary])
-
- # Train the discriminator
- d_loss_real_glo = self.discriminator_glo.train_on_batch(org_imgs, valid)
- d_loss_fake_glo = self.discriminator_glo.train_on_batch(gen_missing[0], fake)
- d_loss_glo = 0.5 * np.add(d_loss_real_glo, d_loss_fake_glo)
-
- d_loss_real_loc = self.discriminator_loc.train_on_batch([org_imgs,binary], valid)
- d_loss_fake_loc = self.discriminator_loc.train_on_batch(gen_missing, fake)
- d_loss_loc = 0.5 * np.add(d_loss_real_loc, d_loss_fake_loc)
-
-
- # ---------------------
- # Train Generator
- # ---------------------
- # self.combined.layers[-1].trainable = False
- g_loss = self.combined.train_on_batch([masked_imgs,binary], [org_imgs, valid,valid])
-
- validx = np.random.randint(0, 500, 3)
- val_pred = self.predictor.predict(xVal[validx])
- val_loss=ssim_l1_loss(yVal[validx].astype('float32'),val_pred)
- val_loss=np.average(val_loss)
- # Plot the progress
-            print("%d [G loss: %f, ssim_l1: %f] [val_loss: %f]" % (epoch, g_loss[0], g_loss[1], val_loss))
-
-
-
-
-
-
- if epoch!=0:
-
- if epoch % 100 == 0:
-
- self.combined.save_weights('/content/drive/MyDrive/combinedModel_loc12.h5')
-            # If at save interval => save generator weights
-            # if epoch != 0:
-            #     if epoch % modelInterval == 0:
-            #         if val_loss < self.BestValLoss:
-            #             self.BestValLoss = val_loss
-            #             self.predictor.save_weights(modelPath)
-            # If at save interval => save generated image samples
- if epoch % sample_interval == 0:
- idx = np.random.randint(0, self.X.shape[0], 6)
- val_idx = np.random.randint(0, 499, 2)
- val_reals= self.valY[val_idx]
- val_imgs = self.valX[val_idx]
- reals= self.Y[idx]
- imgs = self.X[idx]
-
- self.sample_images(epoch, imgs,reals,imagesSavePath,val_reals,val_imgs)
-
-
- #Big Batch Gen
- if self.genEnable:
- if epoch!=0:
- if epoch % BigBatchInterval == 0:
- self.getBigBatch()
-
- def sample_images(self, epoch, imgs,reals,savepath,val_reals,val_imgs):
- r, c = 3, 8
-
- imgs=imgs/127.5 -1.
- val_imgs=val_imgs/127.5 -1.
- gen_missing = self.predictor.predict(imgs)
- val_missing = self.predictor.predict(val_imgs)
- imgs = 0.5 * imgs + 0.5
- val_imgs = 0.5 * val_imgs + 0.5
- # reals= 0.5* reals +0.5
- gen_missing=0.5*gen_missing+0.5
- val_missing=0.5*val_missing+0.5
-
- imgs=np.concatenate((imgs,val_imgs), axis=0)
- gen_missing=np.concatenate((gen_missing,val_missing), axis=0)
- reals=np.concatenate((reals,val_reals), axis=0)
- fig, axs = plt.subplots(r, c,figsize=(50,50))
- for i in range(c):
- axs[0,i].imshow(imgs[i, :,:])
- axs[0,i].axis('off')
- axs[1,i].imshow(reals[i, :,:])
- axs[1,i].axis('off')
-
- axs[2,i].imshow(gen_missing[i, :,:])
- axs[2,i].axis('off')
- fig.savefig(savepath+"%d.png" % epoch)
- plt.close()
-
-GAN_Model = GAN(Xpointers=None,Ypointers=None,valX=None,valY=None,
- BigBatchSize=50,BigBatchEnable=True,BinaryEnabled=True,loading=False)
-
-GAN_Model.predictor.load_weights('DemoPredictor2.h5')
-
-def extract_face(photo, required_size=(256, 256),incr=110):
- # load image from file
- pixels = photo
- print(pixels.shape)
- maxH=(pixels.shape[0])
- maxW=(pixels.shape[1])
-    if pixels.shape[-1] != 3:
- image = Image.fromarray(pixels)
- return image
-
- # create the detector, using default weights
- detector = MTCNN()
- # detect faces in the image
- results = detector.detect_faces(pixels)
- if not results:
- image = Image.fromarray(pixels)
- image = image.resize(required_size)
- return image
- # extract the bounding box from the first face
- x1, y1, width, height = results[0]['box']
- x2, y2 = x1 + width, y1 + height
- if y1-incr<=0:
- y1=0
- else :
- y1=y1-incr
- if x1-incr<=0:
- x1=0
- else :
- x1=x1-incr
-
- if y2+incr>=maxH:
- y2=maxH
- else :
- y2=y2+incr
- if x2+incr>=maxW:
- x2=maxW
- else :
- x2=x2+incr
- # extract the face
- face = pixels[y1:int(y2), int(x1):int(x2)]
- # resize pixels to the model size
- image = Image.fromarray(face)
- image = image.resize(required_size)
-
- return image
-
-def GetBinary_test(Org, Masked):
-    # single-image variant: one binary mask of where Masked differs from Org
-    diff = cv2.absdiff(Org, Masked)
-    gray = cv2.cvtColor(diff, cv2.COLOR_RGB2GRAY)
-    _, diff2 = cv2.threshold(gray, 9, 255, cv2.THRESH_BINARY)
-    img_median = cv2.medianBlur(diff2, 3)
-    img_median = img_median / 255
-    return np.array([img_median])
-def ChangeToGreen_test(X,Binary):
- X[Binary[0]!=0]=(1,255,1)
-
-def predictImage_masked(GANmodel,groundTruth,masked):
- TestX=masked.copy()
- Testy=groundTruth.copy()
-
- Binary=GetBinary_test(Testy,TestX)
- ChangeToGreen_test(TestX,Binary)
-
- imgs=TestX/127.5 -1.
- Testy=Testy/255
-
- gen_missing = GANmodel.predictor.predict(imgs[None,...])
-
- gen_missing=0.5*gen_missing+0.5
- psnr2 = tf.image.psnr(Testy.astype('float32'),gen_missing, max_val=1.0)
- ssim=tf.image.ssim(Testy.astype('float32'), gen_missing, max_val=1)
- Mssim=np.average(ssim)
- Mpsnr=np.average(psnr2)
- I = gen_missing*255 # or any coefficient
- I = I.astype(np.uint8)
- I = cv2.normalize(I, None, 0, 255, cv2.NORM_MINMAX, cv2.CV_8U)
- return (I,Mpsnr,Mssim)
-
-def grid_display(list_of_images, list_of_titles=[], no_of_columns=2, figsize=(10,10)):
-
- fig = plt.figure(figsize=figsize)
- column = 0
- for i in range(len(list_of_images)):
- column += 1
- # check for end of column and create a new figure
- if column == no_of_columns+1:
- fig = plt.figure(figsize=figsize)
- column = 1
- fig.add_subplot(1, no_of_columns, column)
- plt.imshow(list_of_images[i])
- plt.axis('off')
- if len(list_of_titles) >= len(list_of_images):
- plt.title(list_of_titles[i])
-
-# paths = r"C:\Users\MrSin\Downloads\images\*.jpg"
-# import glob
-
-# for filepath in glob.iglob(paths):
-# print(filepath)
-# org_img = cv2.imread(filepath)
-
-def ExecutePipline(img):
-    im = Image.fromarray(img.astype('uint8'), 'RGB')
-    org_img = np.array(im)
-    errorPNG = cv2.imread('error.jpg')
-    errorPNG = errorPNG[..., ::-1]
-    # retry the face crop with progressively larger margins until masking succeeds
-    masked1 = []
-    cropped = None
-    for incr in (150, 165, 180, 200, 500):
-        img2 = extract_face(org_img, incr=incr)
-        cropped = np.array(img2)
-        open_cv_image = cropped[:, :, ::-1].copy()
-        masked1 = maskThisImages(open_cv_image)
-        if len(masked1) != 0:
-            break
-    if len(masked1) == 0:
-        return np.zeros((256, 256, 3)), errorPNG, np.zeros((256, 256, 3))
-    masked2 = cv2.cvtColor(masked1, cv2.COLOR_BGR2RGB)
-    results, psnr, ssim = predictImage_masked(GAN_Model, cropped, masked2)
-    return cropped, masked2, results[0]
-
-
-# paths = r"C:\Users\MrSin\Downloads\images\*.jpg"
-# import glob
-
-# for filepath in glob.iglob(paths):
-# print(filepath)
-# org_img = cv2.imread(filepath)
-# org_img=cv2.cvtColor(org_img,cv2.COLOR_BGR2RGB)
-
-# img=extract_face(org_img)
-
-# cropped = np.array(img)
-# #output 1^
-# open_cv_image = cropped[:, :, ::-1].copy()
-# masked=maskThisImages(open_cv_image)
-# cv2.imwrite('mytestmasked.jpg',masked)
-# masked=cv2.cvtColor(masked,cv2.COLOR_BGR2RGB)
-# #output 2^
-# print(masked.shape)
-# results,psnr,ssim=predictImage_masked_model2(GAN_Model,cropped,masked)
-# displayResult = np.array(results[0])
-# #output 2 results[0]^
-# titles = ["groundtruth",
-# "Masked",
-# "Generated", ]
-# images = [cropped,masked,results[0]]
-# grid_display(images, titles, 3, (15,15))
-
-# titles = ["groundtruth",
-# "Masked",
-# "Generated", ]
-# org_img = cv2.imread('mytestmasked.jpg')
-# org_img=cv2.cvtColor(org_img,cv2.COLOR_BGR2RGB)
-# results=predictImageOnly(GAN_Model,org_img)
-# images = [cropped,masked,results[0]]
-# grid_display(images, titles, 3, (15,15))
-
-
-
-# imagein = gr.Image()
-# maskedOut = gr.Image(type='numpy',label='Masked (Model-input)')
-# crop = gr.Image(type='numpy',label='cropped')
-# genOut= gr.Image(type='numpy',label='Unmasked Output')
-
-# gr.Interface(
-# ExecutePipline,
-# inputs=imagein,
-# outputs=[crop,maskedOut,genOut],
-# title="Face Un-Masking",
-# description="Compare 2 state-of-the-art machine learning models",).launch(share=True)
-
-
-
-with gr.Blocks() as demo:
-    gr.HTML(
-        """
-        <div style="text-align: center;">
-            <h1>Face Un-Masking</h1>
-            <p>
-            An AI model that generates the area under masks! Simply upload your
-            face image without a mask, then click Submit; the model will apply a
-            digital mask and send it to the Double Context GAN to predict the
-            area under the mask.
-            </p>
-        </div>
-        """
-    )
- with gr.Row():
- with gr.Column():
- imagein = gr.Image(label='Input',interactive=True)
-
- with gr.Column():
- gr.Examples(['40868.jpg','08227.jpg','59028.jpg','31735.jpg','49936.jpg','21565.jpg'],inputs=imagein)
- with gr.Row():
-
- image_button = gr.Button("Submit")
-
-
-
-
- with gr.Row():
- with gr.Column():
- crop = gr.Image(type='numpy',label='Groundtruth(cropped)',)
- with gr.Column():
- maskedOut = gr.Image(type='numpy',label='Masked (Model-input)')
- with gr.Column():
- genOut= gr.Image(type='numpy',label='Unmasked Output')
-
-    gr.Markdown("Made with 🖤 by Mohammed: Me.MohammedAlsinan@gmail.com & Aseel: A9eel.7neef@gmail.com")
- image_button.click(fn=ExecutePipline,inputs=imagein,outputs=[crop,maskedOut,genOut])
-demo.launch()
\ No newline at end of file
diff --git a/spaces/MuGeminorum/insecta/khandy/boxes/boxes_transform_flip.py b/spaces/MuGeminorum/insecta/khandy/boxes/boxes_transform_flip.py
deleted file mode 100644
index 9532fbb3d5c61d52770c24a9960b739e6b95f32d..0000000000000000000000000000000000000000
--- a/spaces/MuGeminorum/insecta/khandy/boxes/boxes_transform_flip.py
+++ /dev/null
@@ -1,135 +0,0 @@
-import numpy as np
-from .boxes_utils import assert_and_normalize_shape
-
-
-def flip_boxes(boxes, x_center=0, y_center=0, direction='h'):
- """
- Args:
- boxes: (N, 4+K)
- x_center: array-like whose shape is (), (1,), (N,), (1, 1) or (N, 1)
- y_center: array-like whose shape is (), (1,), (N,), (1, 1) or (N, 1)
- direction: str
- """
- assert direction in ['x', 'h', 'horizontal',
- 'y', 'v', 'vertical',
- 'o', 'b', 'both']
- boxes = np.asarray(boxes, np.float32)
- ret_boxes = boxes.copy()
-
- x_center = np.asarray(x_center, np.float32)
- y_center = np.asarray(y_center, np.float32)
- x_center = assert_and_normalize_shape(x_center, boxes.shape[0])
- y_center = assert_and_normalize_shape(y_center, boxes.shape[0])
-
- if direction in ['o', 'b', 'both', 'x', 'h', 'horizontal']:
- ret_boxes[:, 0] = 2 * x_center - boxes[:, 2]
- ret_boxes[:, 2] = 2 * x_center - boxes[:, 0]
- if direction in ['o', 'b', 'both', 'y', 'v', 'vertical']:
- ret_boxes[:, 1] = 2 * y_center - boxes[:, 3]
- ret_boxes[:, 3] = 2 * y_center - boxes[:, 1]
- return ret_boxes
-
-
-def fliplr_boxes(boxes, x_center=0, y_center=0):
- """
- Args:
- boxes: (N, 4+K)
- x_center: array-like whose shape is (), (1,), (N,), (1, 1) or (N, 1)
- y_center: array-like whose shape is (), (1,), (N,), (1, 1) or (N, 1)
- """
- boxes = np.asarray(boxes, np.float32)
- ret_boxes = boxes.copy()
-
- x_center = np.asarray(x_center, np.float32)
- y_center = np.asarray(y_center, np.float32)
- x_center = assert_and_normalize_shape(x_center, boxes.shape[0])
- y_center = assert_and_normalize_shape(y_center, boxes.shape[0])
-
- ret_boxes[:, 0] = 2 * x_center - boxes[:, 2]
- ret_boxes[:, 2] = 2 * x_center - boxes[:, 0]
- return ret_boxes
-
-
-def flipud_boxes(boxes, x_center=0, y_center=0):
- """
- Args:
- boxes: (N, 4+K)
- x_center: array-like whose shape is (), (1,), (N,), (1, 1) or (N, 1)
- y_center: array-like whose shape is (), (1,), (N,), (1, 1) or (N, 1)
- """
- boxes = np.asarray(boxes, np.float32)
- ret_boxes = boxes.copy()
-
- x_center = np.asarray(x_center, np.float32)
- y_center = np.asarray(y_center, np.float32)
- x_center = assert_and_normalize_shape(x_center, boxes.shape[0])
- y_center = assert_and_normalize_shape(y_center, boxes.shape[0])
-
- ret_boxes[:, 1] = 2 * y_center - boxes[:, 3]
- ret_boxes[:, 3] = 2 * y_center - boxes[:, 1]
- return ret_boxes
-
-
-def transpose_boxes(boxes, x_center=0, y_center=0):
- """
- Args:
- boxes: (N, 4+K)
- x_center: array-like whose shape is (), (1,), (N,), (1, 1) or (N, 1)
- y_center: array-like whose shape is (), (1,), (N,), (1, 1) or (N, 1)
- """
- boxes = np.asarray(boxes, np.float32)
- ret_boxes = boxes.copy()
-
- x_center = np.asarray(x_center, np.float32)
- y_center = np.asarray(y_center, np.float32)
- x_center = assert_and_normalize_shape(x_center, boxes.shape[0])
- y_center = assert_and_normalize_shape(y_center, boxes.shape[0])
-
- shift = x_center - y_center
- ret_boxes[:, 0] = boxes[:, 1] + shift
- ret_boxes[:, 1] = boxes[:, 0] - shift
- ret_boxes[:, 2] = boxes[:, 3] + shift
- ret_boxes[:, 3] = boxes[:, 2] - shift
- return ret_boxes
-
-
-def flip_boxes_in_image(boxes, image_width, image_height, direction='h'):
- """
- Args:
- boxes: (N, 4+K)
-        image_width: int
-        image_height: int
- direction: str
-
- References:
- `core.bbox.bbox_flip` in mmdetection
- `datasets.pipelines.RandomFlip.bbox_flip` in mmdetection
- """
- x_center = (image_width - 1) * 0.5
- y_center = (image_height - 1) * 0.5
- ret_boxes = flip_boxes(boxes, x_center, y_center, direction=direction)
- return ret_boxes
-
-
-def rot90_boxes_in_image(boxes, image_width, image_height, n=1):
- """Rotate boxes counter-clockwise by 90 degrees.
-
- References:
- np.rot90
- cv2.rotate
- tf.image.rot90
- """
- n = n % 4
- if n == 0:
- ret_boxes = boxes.copy()
- elif n == 1:
- ret_boxes = transpose_boxes(boxes)
- ret_boxes = flip_boxes_in_image(ret_boxes, image_width, image_height, 'v')
- elif n == 2:
- ret_boxes = flip_boxes_in_image(boxes, image_width, image_height, 'o')
- else:
- ret_boxes = transpose_boxes(boxes)
-        ret_boxes = flip_boxes_in_image(ret_boxes, image_width, image_height, 'h')
- return ret_boxes
-
-
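-# A minimal numeric check (not part of the original file), assuming pixel-coord
-# [x1, y1, x2, y2] boxes: in a 100-px-wide image x maps to 99 - x, so the box
-# (10, 20, 30, 60) flips horizontally to (69, 20, 89, 60).
-if __name__ == "__main__":
-    boxes = np.array([[10, 20, 30, 60]], dtype=np.float32)
-    print(flip_boxes_in_image(boxes, 100, 100, direction='h'))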
\ No newline at end of file
diff --git a/spaces/NAACL2022/CLIP-Caption-Reward/clip/clip.py b/spaces/NAACL2022/CLIP-Caption-Reward/clip/clip.py
deleted file mode 100644
index 76f241b053e3a6da06b1165e73e0d54c5b5356b2..0000000000000000000000000000000000000000
--- a/spaces/NAACL2022/CLIP-Caption-Reward/clip/clip.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import hashlib
-import os
-import urllib
-import warnings
-from typing import Union, List
-
-import torch
-from PIL import Image
-from torchvision.transforms import Compose, Resize, CenterCrop, ToTensor, Normalize
-from tqdm import tqdm
-
-from .model import build_model
-from .simple_tokenizer import SimpleTokenizer as _Tokenizer
-
-__all__ = ["available_models", "load", "tokenize"]
-_tokenizer = _Tokenizer()
-
-_MODELS = {
- "RN50": "https://openaipublic.azureedge.net/clip/models/afeb0e10f9e5a86da6080e35cf09123aca3b358a0c3e3b6c78a7b63bc04b6762/RN50.pt",
- "RN101": "https://openaipublic.azureedge.net/clip/models/8fa8567bab74a42d41c5915025a8e4538c3bdbe8804a470a72f30b0d94fab599/RN101.pt",
- "RN50x4": "https://openaipublic.azureedge.net/clip/models/7e526bd135e493cef0776de27d5f42653e6b4c8bf9e0f653bb11773263205fdd/RN50x4.pt",
- "ViT-B/32": "https://openaipublic.azureedge.net/clip/models/40d365715913c9da98579312b702a82c18be219cc2a73407c4526f58eba950af/ViT-B-32.pt",
-}
-
-
-def _download(url: str, root: str = os.path.expanduser("~/.cache/clip")):
- os.makedirs(root, exist_ok=True)
- filename = os.path.basename(url)
-
- expected_sha256 = url.split("/")[-2]
- download_target = os.path.join(root, filename)
-
- if os.path.exists(download_target) and not os.path.isfile(download_target):
- raise RuntimeError(f"{download_target} exists and is not a regular file")
-
- if os.path.isfile(download_target):
- if hashlib.sha256(open(download_target, "rb").read()).hexdigest() == expected_sha256:
- return download_target
- else:
- warnings.warn(f"{download_target} exists, but the SHA256 checksum does not match; re-downloading the file")
-
- with urllib.request.urlopen(url) as source, open(download_target, "wb") as output:
- with tqdm(total=int(source.info().get("Content-Length")), ncols=80, unit='iB', unit_scale=True) as loop:
- while True:
- buffer = source.read(8192)
- if not buffer:
- break
-
- output.write(buffer)
- loop.update(len(buffer))
-
- if hashlib.sha256(open(download_target, "rb").read()).hexdigest() != expected_sha256:
-        raise RuntimeError("Model has been downloaded but the SHA256 checksum does not match")
-
- return download_target
-
-
-def _transform(n_px):
- return Compose([
- Resize(n_px, interpolation=Image.BICUBIC),
- CenterCrop(n_px),
- lambda image: image.convert("RGB"),
- ToTensor(),
- Normalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711)),
- ])
-
-
-def available_models() -> List[str]:
- """Returns the names of available CLIP models"""
- return list(_MODELS.keys())
-
-
-def load(name: str, device: Union[str, torch.device] = "cuda" if torch.cuda.is_available() else "cpu", jit=True):
- """Load a CLIP model
-
- Parameters
- ----------
- name : str
- A model name listed by `clip.available_models()`, or the path to a model checkpoint containing the state_dict
-
- device : Union[str, torch.device]
- The device to put the loaded model
-
- jit : bool
- Whether to load the optimized JIT model (default) or more hackable non-JIT model.
-
- Returns
- -------
- model : torch.nn.Module
- The CLIP model
-
- preprocess : Callable[[PIL.Image], torch.Tensor]
- A torchvision transform that converts a PIL image into a tensor that the returned model can take as its input
- """
- if name in _MODELS:
- model_path = _download(_MODELS[name])
- elif os.path.isfile(name):
- model_path = name
- else:
- raise RuntimeError(f"Model {name} not found; available models = {available_models()}")
-
- try:
- # loading JIT archive
- model = torch.jit.load(model_path, map_location=device if jit else "cpu").eval()
- state_dict = None
- except RuntimeError:
- # loading saved state dict
- if jit:
- warnings.warn(f"File {model_path} is not a JIT archive. Loading as a state dict instead")
- jit = False
- state_dict = torch.load(model_path, map_location="cpu")
-
- if not jit:
- model = build_model(state_dict or model.state_dict()).to(device)
- if str(device) == "cpu":
- model.float()
- return model, _transform(model.visual.input_resolution)
-
- # patch the device names
- device_holder = torch.jit.trace(lambda: torch.ones([]).to(torch.device(device)), example_inputs=[])
- device_node = [n for n in device_holder.graph.findAllNodes("prim::Constant") if "Device" in repr(n)][-1]
-
- def patch_device(module):
- graphs = [module.graph] if hasattr(module, "graph") else []
- if hasattr(module, "forward1"):
- graphs.append(module.forward1.graph)
-
- for graph in graphs:
- for node in graph.findAllNodes("prim::Constant"):
- if "value" in node.attributeNames() and str(node["value"]).startswith("cuda"):
- node.copyAttributes(device_node)
-
- model.apply(patch_device)
- patch_device(model.encode_image)
- patch_device(model.encode_text)
-
- # patch dtype to float32 on CPU
- if str(device) == "cpu":
- float_holder = torch.jit.trace(lambda: torch.ones([]).float(), example_inputs=[])
- float_input = list(float_holder.graph.findNode("aten::to").inputs())[1]
- float_node = float_input.node()
-
- def patch_float(module):
- graphs = [module.graph] if hasattr(module, "graph") else []
- if hasattr(module, "forward1"):
- graphs.append(module.forward1.graph)
-
- for graph in graphs:
- for node in graph.findAllNodes("aten::to"):
- inputs = list(node.inputs())
- for i in [1, 2]: # dtype can be the second or third argument to aten::to()
- if inputs[i].node()["value"] == 5:
- inputs[i].node().copyAttributes(float_node)
-
- model.apply(patch_float)
- patch_float(model.encode_image)
- patch_float(model.encode_text)
-
- model.float()
-
- return model, _transform(model.input_resolution.item())
-
-
-def tokenize(texts: Union[str, List[str]], context_length: int = 77) -> torch.LongTensor:
- """
- Returns the tokenized representation of given input string(s)
-
- Parameters
- ----------
- texts : Union[str, List[str]]
- An input string or a list of input strings to tokenize
-
- context_length : int
- The context length to use; all CLIP models use 77 as the context length
-
- Returns
- -------
- A two-dimensional tensor containing the resulting tokens, shape = [number of input strings, context_length]
- """
- if isinstance(texts, str):
- texts = [texts]
-
- sot_token = _tokenizer.encoder["<|startoftext|>"]
- eot_token = _tokenizer.encoder["<|endoftext|>"]
- all_tokens = [[sot_token] + _tokenizer.encode(text) + [eot_token] for text in texts]
- result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
-
- for i, tokens in enumerate(all_tokens):
- if len(tokens) > context_length:
- raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
- result[i, :len(tokens)] = torch.tensor(tokens)
-
- return result
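For reference, the module deleted above exposes the standard three-call workflow. A minimal usage sketch, assuming torch, torchvision, and Pillow are installed, the package is importable as `clip`, and the ViT-B/32 weights can be downloaded on first use:

```python
import torch
from PIL import Image
import clip  # the package containing the clip.py removed above

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device, jit=False)

image = preprocess(Image.new("RGB", (224, 224))).unsqueeze(0).to(device)  # dummy image
text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)  # (1, 512) for ViT-B/32
    text_features = model.encode_text(text)     # (3, 512)
print((image_features @ text_features.t()).shape)  # torch.Size([1, 3])
```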
diff --git a/spaces/NCTCMumbai/NCTC/models/research/autoencoder/autoencoder_models/VariationalAutoencoder.py b/spaces/NCTCMumbai/NCTC/models/research/autoencoder/autoencoder_models/VariationalAutoencoder.py
deleted file mode 100644
index 3c2556ab89c2d32be0af5e61099aa12f91c1f176..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/research/autoencoder/autoencoder_models/VariationalAutoencoder.py
+++ /dev/null
@@ -1,70 +0,0 @@
-import tensorflow as tf
-
-class VariationalAutoencoder(object):
-
- def __init__(self, n_input, n_hidden, optimizer = tf.train.AdamOptimizer()):
- self.n_input = n_input
- self.n_hidden = n_hidden
-
- network_weights = self._initialize_weights()
- self.weights = network_weights
-
- # model
- self.x = tf.placeholder(tf.float32, [None, self.n_input])
- self.z_mean = tf.add(tf.matmul(self.x, self.weights['w1']), self.weights['b1'])
- self.z_log_sigma_sq = tf.add(tf.matmul(self.x, self.weights['log_sigma_w1']), self.weights['log_sigma_b1'])
-
- # sample from gaussian distribution
- eps = tf.random_normal(tf.stack([tf.shape(self.x)[0], self.n_hidden]), 0, 1, dtype = tf.float32)
- self.z = tf.add(self.z_mean, tf.multiply(tf.sqrt(tf.exp(self.z_log_sigma_sq)), eps))
-
- self.reconstruction = tf.add(tf.matmul(self.z, self.weights['w2']), self.weights['b2'])
-
- # cost
- reconstr_loss = 0.5 * tf.reduce_sum(tf.pow(tf.subtract(self.reconstruction, self.x), 2.0), 1)
- latent_loss = -0.5 * tf.reduce_sum(1 + self.z_log_sigma_sq
- - tf.square(self.z_mean)
- - tf.exp(self.z_log_sigma_sq), 1)
- self.cost = tf.reduce_mean(reconstr_loss + latent_loss)
- self.optimizer = optimizer.minimize(self.cost)
-
- init = tf.global_variables_initializer()
- self.sess = tf.Session()
- self.sess.run(init)
-
- def _initialize_weights(self):
- all_weights = dict()
- all_weights['w1'] = tf.get_variable("w1", shape=[self.n_input, self.n_hidden],
- initializer=tf.contrib.layers.xavier_initializer())
- all_weights['log_sigma_w1'] = tf.get_variable("log_sigma_w1", shape=[self.n_input, self.n_hidden],
- initializer=tf.contrib.layers.xavier_initializer())
- all_weights['b1'] = tf.Variable(tf.zeros([self.n_hidden], dtype=tf.float32))
- all_weights['log_sigma_b1'] = tf.Variable(tf.zeros([self.n_hidden], dtype=tf.float32))
- all_weights['w2'] = tf.Variable(tf.zeros([self.n_hidden, self.n_input], dtype=tf.float32))
- all_weights['b2'] = tf.Variable(tf.zeros([self.n_input], dtype=tf.float32))
- return all_weights
-
- def partial_fit(self, X):
- cost, opt = self.sess.run((self.cost, self.optimizer), feed_dict={self.x: X})
- return cost
-
- def calc_total_cost(self, X):
- return self.sess.run(self.cost, feed_dict = {self.x: X})
-
- def transform(self, X):
- return self.sess.run(self.z_mean, feed_dict={self.x: X})
-
- def generate(self, hidden = None):
- if hidden is None:
- hidden = self.sess.run(tf.random_normal([1, self.n_hidden]))
- return self.sess.run(self.reconstruction, feed_dict={self.z: hidden})
-
- def reconstruct(self, X):
- return self.sess.run(self.reconstruction, feed_dict={self.x: X})
-
- def getWeights(self):
- return self.sess.run(self.weights['w1'])
-
- def getBiases(self):
- return self.sess.run(self.weights['b1'])
-
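A smoke-test sketch for the deleted class above. It is TF1-only code (it relies on `tf.placeholder` and `tf.contrib`), so this assumes TensorFlow 1.x; the sizes and data are arbitrary:

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x required by the class above

vae = VariationalAutoencoder(n_input=784, n_hidden=32)
X = np.random.rand(8, 784).astype(np.float32)

for _ in range(5):
    cost = vae.partial_fit(X)    # one optimizer step, returns the batch cost
print(cost)
print(vae.transform(X).shape)    # (8, 32): latent means z_mean
print(vae.reconstruct(X).shape)  # (8, 784)
```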
diff --git a/spaces/NHNDQ/KoTAN/README.md b/spaces/NHNDQ/KoTAN/README.md
deleted file mode 100644
index 11d0af55cf3dc7cabcb21187d57e094c1cebd106..0000000000000000000000000000000000000000
--- a/spaces/NHNDQ/KoTAN/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: KoTAN
-emoji: 📊
-colorFrom: yellow
-colorTo: pink
-sdk: gradio
-sdk_version: 3.34.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Ntabukiraniro/Recipe/modules/transformer_decoder.py b/spaces/Ntabukiraniro/Recipe/modules/transformer_decoder.py
deleted file mode 100644
index 1cdff5bc72c00f3f7af00fcf6b9c4851c21874ac..0000000000000000000000000000000000000000
--- a/spaces/Ntabukiraniro/Recipe/modules/transformer_decoder.py
+++ /dev/null
@@ -1,492 +0,0 @@
-import math
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.nn.modules.utils import _single
-import modules.utils as utils
-from modules.multihead_attention import MultiheadAttention
-import numpy as np
-device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
-import copy
-
-
-def make_positions(tensor, padding_idx, left_pad):
- """Replace non-padding symbols with their position numbers.
- Position numbers begin at padding_idx+1.
- Padding symbols are ignored, but it is necessary to specify whether padding
- is added on the left side (left_pad=True) or right side (left_pad=False).
- """
-
- # creates tensor from scratch - to avoid multigpu issues
- max_pos = padding_idx + 1 + tensor.size(1)
- #if not hasattr(make_positions, 'range_buf'):
- range_buf = tensor.new()
- #make_positions.range_buf = make_positions.range_buf.type_as(tensor)
- if range_buf.numel() < max_pos:
- torch.arange(padding_idx + 1, max_pos, out=range_buf)
- mask = tensor.ne(padding_idx)
- positions = range_buf[:tensor.size(1)].expand_as(tensor)
- if left_pad:
- positions = positions - mask.size(1) + mask.long().sum(dim=1).unsqueeze(1)
-
- out = tensor.clone()
- out = out.masked_scatter_(mask,positions[mask])
- return out
-
-
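A worked example of `make_positions` (torch only): numbering starts at `padding_idx + 1`, and padding slots keep `padding_idx`:

```python
import torch

pad = 1
batch = torch.tensor([[5, 7, 9, pad],
                      [4, pad, pad, pad]])
print(make_positions(batch, padding_idx=pad, left_pad=False))
# tensor([[2, 3, 4, 1],
#         [2, 1, 1, 1]])
```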
-class LearnedPositionalEmbedding(nn.Embedding):
- """This module learns positional embeddings up to a fixed maximum size.
- Padding symbols are ignored, but it is necessary to specify whether padding
- is added on the left side (left_pad=True) or right side (left_pad=False).
- """
-
- def __init__(self, num_embeddings, embedding_dim, padding_idx, left_pad):
- super().__init__(num_embeddings, embedding_dim, padding_idx)
- self.left_pad = left_pad
- nn.init.normal_(self.weight, mean=0, std=embedding_dim ** -0.5)
-
- def forward(self, input, incremental_state=None):
- """Input is expected to be of size [bsz x seqlen]."""
- if incremental_state is not None:
- # positions is the same for every token when decoding a single step
-
- positions = input.data.new(1, 1).fill_(self.padding_idx + input.size(1))
- else:
-
- positions = make_positions(input.data, self.padding_idx, self.left_pad)
- return super().forward(positions)
-
- def max_positions(self):
- """Maximum number of supported positions."""
- return self.num_embeddings - self.padding_idx - 1
-
-class SinusoidalPositionalEmbedding(nn.Module):
- """This module produces sinusoidal positional embeddings of any length.
- Padding symbols are ignored, but it is necessary to specify whether padding
- is added on the left side (left_pad=True) or right side (left_pad=False).
- """
-
- def __init__(self, embedding_dim, padding_idx, left_pad, init_size=1024):
- super().__init__()
- self.embedding_dim = embedding_dim
- self.padding_idx = padding_idx
- self.left_pad = left_pad
- self.weights = SinusoidalPositionalEmbedding.get_embedding(
- init_size,
- embedding_dim,
- padding_idx,
- )
- self.register_buffer('_float_tensor', torch.FloatTensor())
-
- @staticmethod
- def get_embedding(num_embeddings, embedding_dim, padding_idx=None):
- """Build sinusoidal embeddings.
- This matches the implementation in tensor2tensor, but differs slightly
- from the description in Section 3.5 of "Attention Is All You Need".
- """
- half_dim = embedding_dim // 2
- emb = math.log(10000) / (half_dim - 1)
- emb = torch.exp(torch.arange(half_dim, dtype=torch.float) * -emb)
- emb = torch.arange(num_embeddings, dtype=torch.float).unsqueeze(1) * emb.unsqueeze(0)
- emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1).view(num_embeddings, -1)
- if embedding_dim % 2 == 1:
- # zero pad
- emb = torch.cat([emb, torch.zeros(num_embeddings, 1)], dim=1)
- if padding_idx is not None:
- emb[padding_idx, :] = 0
- return emb
-
- def forward(self, input, incremental_state=None):
- """Input is expected to be of size [bsz x seqlen]."""
- # recompute/expand embeddings if needed
- bsz, seq_len = input.size()
- max_pos = self.padding_idx + 1 + seq_len
- if self.weights is None or max_pos > self.weights.size(0):
- self.weights = SinusoidalPositionalEmbedding.get_embedding(
- max_pos,
- self.embedding_dim,
- self.padding_idx,
- )
- self.weights = self.weights.type_as(self._float_tensor)
-
- if incremental_state is not None:
- # positions is the same for every token when decoding a single step
- return self.weights[self.padding_idx + seq_len, :].expand(bsz, 1, -1)
-
- positions = make_positions(input.data, self.padding_idx, self.left_pad)
- return self.weights.index_select(0, positions.view(-1)).view(bsz, seq_len, -1).detach()
-
- def max_positions(self):
- """Maximum number of supported positions."""
- return int(1e5) # an arbitrary large number
-
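`get_embedding` is a pure function of its arguments, so it can be sanity-checked in isolation (torch only). Odd embedding dims get one zero-padded column, and the `padding_idx` row is zeroed:

```python
import torch

emb = SinusoidalPositionalEmbedding.get_embedding(
    num_embeddings=6, embedding_dim=7, padding_idx=1)
print(emb.shape)           # torch.Size([6, 7])
print(emb[1].abs().sum())  # tensor(0.): the padding row
```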
-class TransformerDecoderLayer(nn.Module):
- """Decoder layer block."""
-
- def __init__(self, embed_dim, n_att, dropout=0.5, normalize_before=True, last_ln=False):
- super().__init__()
-
- self.embed_dim = embed_dim
- self.dropout = dropout
- self.relu_dropout = dropout
- self.normalize_before = normalize_before
- num_layer_norm = 3
-
- # self-attention on generated recipe
- self.self_attn = MultiheadAttention(
- self.embed_dim, n_att,
- dropout=dropout,
- )
-
- self.cond_att = MultiheadAttention(
- self.embed_dim, n_att,
- dropout=dropout,
- )
-
- self.fc1 = Linear(self.embed_dim, self.embed_dim)
- self.fc2 = Linear(self.embed_dim, self.embed_dim)
- self.layer_norms = nn.ModuleList([LayerNorm(self.embed_dim) for i in range(num_layer_norm)])
- self.use_last_ln = last_ln
- if self.use_last_ln:
- self.last_ln = LayerNorm(self.embed_dim)
-
- def forward(self, x, ingr_features, ingr_mask, incremental_state, img_features):
-
- # self attention
- residual = x
- x = self.maybe_layer_norm(0, x, before=True)
- x, _ = self.self_attn(
- query=x,
- key=x,
- value=x,
- mask_future_timesteps=True,
- incremental_state=incremental_state,
- need_weights=False,
- )
- x = F.dropout(x, p=self.dropout, training=self.training)
- x = residual + x
- x = self.maybe_layer_norm(0, x, after=True)
-
- residual = x
- x = self.maybe_layer_norm(1, x, before=True)
-
- # attention
- if ingr_features is None:
-
- x, _ = self.cond_att(query=x,
- key=img_features,
- value=img_features,
- key_padding_mask=None,
- incremental_state=incremental_state,
- static_kv=True,
- )
- elif img_features is None:
- x, _ = self.cond_att(query=x,
- key=ingr_features,
- value=ingr_features,
- key_padding_mask=ingr_mask,
- incremental_state=incremental_state,
- static_kv=True,
- )
-
-
- else:
- # attention on concatenation of encoder_out and encoder_aux, query self attn (x)
- kv = torch.cat((img_features, ingr_features), 0)
- mask = torch.cat((torch.zeros(img_features.shape[1], img_features.shape[0], dtype=torch.uint8).to(device),
- ingr_mask), 1)
- x, _ = self.cond_att(query=x,
- key=kv,
- value=kv,
- key_padding_mask=mask,
- incremental_state=incremental_state,
- static_kv=True,
- )
- x = F.dropout(x, p=self.dropout, training=self.training)
- x = residual + x
- x = self.maybe_layer_norm(1, x, after=True)
-
- residual = x
- x = self.maybe_layer_norm(-1, x, before=True)
- x = F.relu(self.fc1(x))
- x = F.dropout(x, p=self.relu_dropout, training=self.training)
- x = self.fc2(x)
- x = F.dropout(x, p=self.dropout, training=self.training)
- x = residual + x
- x = self.maybe_layer_norm(-1, x, after=True)
-
- if self.use_last_ln:
- x = self.last_ln(x)
-
- return x
-
- def maybe_layer_norm(self, i, x, before=False, after=False):
- assert before ^ after
- if after ^ self.normalize_before:
- return self.layer_norms[i](x)
- else:
- return x
-
-class DecoderTransformer(nn.Module):
- """Transformer decoder."""
-
- def __init__(self, embed_size, vocab_size, dropout=0.5, seq_length=20, num_instrs=15,
- attention_nheads=16, pos_embeddings=True, num_layers=8, learned=True, normalize_before=True,
- normalize_inputs=False, last_ln=False, scale_embed_grad=False):
- super(DecoderTransformer, self).__init__()
- self.dropout = dropout
- self.seq_length = seq_length * num_instrs
- self.embed_tokens = nn.Embedding(vocab_size, embed_size, padding_idx=vocab_size-1,
- scale_grad_by_freq=scale_embed_grad)
- nn.init.normal_(self.embed_tokens.weight, mean=0, std=embed_size ** -0.5)
- if pos_embeddings:
- self.embed_positions = PositionalEmbedding(1024, embed_size, 0, left_pad=False, learned=learned)
- else:
- self.embed_positions = None
- self.normalize_inputs = normalize_inputs
- if self.normalize_inputs:
- self.layer_norms_in = nn.ModuleList([LayerNorm(embed_size) for i in range(3)])
-
- self.embed_scale = math.sqrt(embed_size)
- self.layers = nn.ModuleList([])
- self.layers.extend([
- TransformerDecoderLayer(embed_size, attention_nheads, dropout=dropout, normalize_before=normalize_before,
- last_ln=last_ln)
- for i in range(num_layers)
- ])
-
- self.linear = Linear(embed_size, vocab_size-1)
-
- def forward(self, ingr_features, ingr_mask, captions, img_features, incremental_state=None):
-
- if ingr_features is not None:
- ingr_features = ingr_features.permute(0, 2, 1)
- ingr_features = ingr_features.transpose(0, 1)
- if self.normalize_inputs:
- ingr_features = self.layer_norms_in[0](ingr_features) # assign the result; LayerNorm is not in-place
-
- if img_features is not None:
- img_features = img_features.permute(0, 2, 1)
- img_features = img_features.transpose(0, 1)
- if self.normalize_inputs:
- img_features = self.layer_norms_in[1](img_features) # assign the result; LayerNorm is not in-place
-
- if ingr_mask is not None:
- ingr_mask = (1-ingr_mask.squeeze(1)).byte()
-
- # embed positions
- if self.embed_positions is not None:
- positions = self.embed_positions(captions, incremental_state=incremental_state)
- if incremental_state is not None:
- if self.embed_positions is not None:
- positions = positions[:, -1:]
- captions = captions[:, -1:]
-
- # embed tokens and positions
- x = self.embed_scale * self.embed_tokens(captions)
-
- if self.embed_positions is not None:
- x += positions
-
- if self.normalize_inputs:
- x = self.layer_norms_in[2](x)
-
- x = F.dropout(x, p=self.dropout, training=self.training)
-
- # B x T x C -> T x B x C
- x = x.transpose(0, 1)
-
- for p, layer in enumerate(self.layers):
- x = layer(
- x,
- ingr_features,
- ingr_mask,
- incremental_state,
- img_features
- )
-
- # T x B x C -> B x T x C
- x = x.transpose(0, 1)
-
- x = self.linear(x)
- _, predicted = x.max(dim=-1)
-
- return x, predicted
-
- def sample(self, ingr_features, ingr_mask, greedy=True, temperature=1.0, beam=-1,
- img_features=None, first_token_value=0,
- replacement=True, last_token_value=0):
-
- incremental_state = {}
-
- # create dummy previous word
- if ingr_features is not None:
- fs = ingr_features.size(0)
- else:
- fs = img_features.size(0)
-
- if beam != -1:
- if fs == 1:
- return self.sample_beam(ingr_features, ingr_mask, beam, img_features, first_token_value,
- replacement, last_token_value)
- else:
- print ("Beam Search can only be used with batch size of 1. Running greedy or temperature sampling...")
-
- first_word = torch.ones(fs)*first_token_value
-
- first_word = first_word.to(device).long()
- sampled_ids = [first_word]
- logits = []
-
- for i in range(self.seq_length):
- # forward
- outputs, _ = self.forward(ingr_features, ingr_mask, torch.stack(sampled_ids, 1),
- img_features, incremental_state)
- outputs = outputs.squeeze(1)
- if not replacement:
- # predicted mask
- if i == 0:
- predicted_mask = torch.zeros(outputs.shape).float().to(device)
- else:
- # ensure no repetitions in sampling if replacement==False
- batch_ind = [j for j in range(fs) if sampled_ids[i][j] != 0]
- sampled_ids_new = sampled_ids[i][batch_ind]
- predicted_mask[batch_ind, sampled_ids_new] = float('-inf')
-
- # mask previously selected ids
- outputs += predicted_mask
-
- logits.append(outputs)
- if greedy:
- outputs_prob = torch.nn.functional.softmax(outputs, dim=-1)
- _, predicted = outputs_prob.max(1)
- predicted = predicted.detach()
- else:
- k = 10
- outputs_prob = torch.div(outputs.squeeze(1), temperature)
- outputs_prob = torch.nn.functional.softmax(outputs_prob, dim=-1).data
-
- # top k random sampling
- prob_prev_topk, indices = torch.topk(outputs_prob, k=k, dim=1)
- predicted = torch.multinomial(prob_prev_topk, 1).view(-1)
- predicted = torch.index_select(indices, dim=1, index=predicted)[:, 0].detach()
-
- sampled_ids.append(predicted)
-
- sampled_ids = torch.stack(sampled_ids[1:], 1)
- logits = torch.stack(logits, 1)
-
- return sampled_ids, logits
-
- def sample_beam(self, ingr_features, ingr_mask, beam=3, img_features=None, first_token_value=0,
- replacement=True, last_token_value=0):
- k = beam
- alpha = 0.0
- # create dummy previous word
- if ingr_features is not None:
- fs = ingr_features.size(0)
- else:
- fs = img_features.size(0)
- first_word = torch.ones(fs)*first_token_value
-
- first_word = first_word.to(device).long()
-
- sequences = [[[first_word], 0, {}, False, 1]]
- finished = []
-
- for i in range(self.seq_length):
- # forward
- all_candidates = []
- for rem in range(len(sequences)):
- incremental = sequences[rem][2]
- outputs, _ = self.forward(ingr_features, ingr_mask, torch.stack(sequences[rem][0], 1),
- img_features, incremental)
- outputs = outputs.squeeze(1)
- if not replacement:
- # predicted mask
- if i == 0:
- predicted_mask = torch.zeros(outputs.shape).float().to(device)
- else:
- # ensure no repetitions in sampling if replacement==False
- batch_ind = [j for j in range(fs) if sequences[rem][0][i][j] != 0]
- sampled_ids_new = sequences[rem][0][i][batch_ind]
- predicted_mask[batch_ind, sampled_ids_new] = float('-inf')
-
- # mask previously selected ids
- outputs += predicted_mask
-
- outputs_prob = torch.nn.functional.log_softmax(outputs, dim=-1)
- probs, indices = torch.topk(outputs_prob, beam)
- # tokens is [batch x beam ] and every element is a list
- # score is [ batch x beam ] and every element is a scalar
- # incremental is [batch x beam ] and every element is a dict
-
-
- for bid in range(beam):
- tokens = sequences[rem][0] + [indices[:, bid]]
- score = sequences[rem][1] + probs[:, bid].squeeze().item()
- if indices[:,bid].item() == last_token_value:
- finished.append([tokens, score, None, True, sequences[rem][-1] + 1])
- else:
- all_candidates.append([tokens, score, incremental, False, sequences[rem][-1] + 1])
-
- # if all the top-k scoring beams have finished, we can return them
- ordered_all = sorted(all_candidates + finished, key=lambda tup: tup[1]/(np.power(tup[-1],alpha)),
- reverse=True)[:k]
- if all(el[3] for el in ordered_all): # el[3] is the finished flag (el[-1] is the length counter)
- all_candidates = []
-
- # order all candidates by score
- ordered = sorted(all_candidates, key=lambda tup: tup[1]/(np.power(tup[-1],alpha)), reverse=True)
- # select k best
- sequences = ordered[:k]
- finished = sorted(finished, key=lambda tup: tup[1]/(np.power(tup[-1],alpha)), reverse=True)[:k]
-
- if len(finished) != 0:
- sampled_ids = torch.stack(finished[0][0][1:], 1)
- logits = finished[0][1]
- else:
- sampled_ids = torch.stack(sequences[0][0][1:], 1)
- logits = sequences[0][1]
- return sampled_ids, logits
-
- def max_positions(self):
- """Maximum output length supported by the decoder."""
- return self.embed_positions.max_positions()
-
- def upgrade_state_dict(self, state_dict):
- if isinstance(self.embed_positions, SinusoidalPositionalEmbedding):
- if 'decoder.embed_positions.weights' in state_dict:
- del state_dict['decoder.embed_positions.weights']
- if 'decoder.embed_positions._float_tensor' not in state_dict:
- state_dict['decoder.embed_positions._float_tensor'] = torch.FloatTensor()
- return state_dict
-
-
-
-def Embedding(num_embeddings, embedding_dim, padding_idx):
- m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx)
- nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5)
- return m
-
-
-def LayerNorm(embedding_dim):
- m = nn.LayerNorm(embedding_dim)
- return m
-
-
-def Linear(in_features, out_features, bias=True):
- m = nn.Linear(in_features, out_features, bias)
- nn.init.xavier_uniform_(m.weight)
- nn.init.constant_(m.bias, 0.)
- return m
-
-
-def PositionalEmbedding(num_embeddings, embedding_dim, padding_idx, left_pad, learned=False):
- if learned:
- m = LearnedPositionalEmbedding(num_embeddings, embedding_dim, padding_idx, left_pad)
- nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5)
- nn.init.constant_(m.weight[padding_idx], 0)
- else:
- m = SinusoidalPositionalEmbedding(embedding_dim, padding_idx, left_pad, num_embeddings)
- return m
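A quick shape check of the factory helpers above (torch only); token id 0 serves as padding here:

```python
import torch

lin = Linear(8, 4)  # xavier-initialized weight, zero bias
pos = PositionalEmbedding(16, 8, padding_idx=0, left_pad=False, learned=False)

tokens = torch.tensor([[0, 3, 5, 0]])  # 0 = padding
print(lin(torch.rand(2, 8)).shape)     # torch.Size([2, 4])
print(pos(tokens).shape)               # torch.Size([1, 4, 8])
```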
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_recognition/data/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_recognition/data/__init__.py
deleted file mode 100644
index 47bb6e24ddf25aa4fd5bf0fe9672f89099efb9b4..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_recognition/data/__init__.py
+++ /dev/null
@@ -1,11 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from .asr_dataset import AsrDataset
-
-
-__all__ = [
- "AsrDataset",
-]
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_recognition/data/replabels.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_recognition/data/replabels.py
deleted file mode 100644
index 441f1bd432b95865fc981c6c695cee299b07ed62..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_recognition/data/replabels.py
+++ /dev/null
@@ -1,70 +0,0 @@
-#!/usr/bin/env python3
-
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Replabel transforms for use with flashlight's ASG criterion.
-"""
-
-
-def replabel_symbol(i):
- """
- Replabel symbols used in flashlight, currently just "1", "2", ...
- This prevents training with numeral tokens, so this might change in the future
- """
- return str(i)
-
-
-def pack_replabels(tokens, dictionary, max_reps):
- """
- Pack a token sequence so that repeated symbols are replaced by replabels
- """
- if len(tokens) == 0 or max_reps <= 0:
- return tokens
-
- replabel_value_to_idx = [0] * (max_reps + 1)
- for i in range(1, max_reps + 1):
- replabel_value_to_idx[i] = dictionary.index(replabel_symbol(i))
-
- result = []
- prev_token = -1
- num_reps = 0
- for token in tokens:
- if token == prev_token and num_reps < max_reps:
- num_reps += 1
- else:
- if num_reps > 0:
- result.append(replabel_value_to_idx[num_reps])
- num_reps = 0
- result.append(token)
- prev_token = token
- if num_reps > 0:
- result.append(replabel_value_to_idx[num_reps])
- return result
-
-
-def unpack_replabels(tokens, dictionary, max_reps):
- """
- Unpack a token sequence so that replabels are replaced by repeated symbols
- """
- if len(tokens) == 0 or max_reps <= 0:
- return tokens
-
- replabel_idx_to_value = {}
- for i in range(1, max_reps + 1):
- replabel_idx_to_value[dictionary.index(replabel_symbol(i))] = i
-
- result = []
- prev_token = -1
- for token in tokens:
- try:
- for _ in range(replabel_idx_to_value[token]):
- result.append(prev_token)
- prev_token = -1
- except KeyError:
- result.append(token)
- prev_token = token
- return result
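A round-trip sketch of the two functions above. `ToyDict` is a stand-in for fairseq's `Dictionary` (an assumption, exposing only the `index()` method used here):

```python
class ToyDict:
    """Stand-in for fairseq's Dictionary; only index() is needed."""
    def __init__(self, symbols):
        self.symbols = symbols

    def index(self, sym):
        return self.symbols.index(sym)

d = ToyDict(["a", "b", "1", "2"])  # "1" and "2" are the replabel symbols
tokens = [0, 0, 0, 1]              # "a a a b"
packed = pack_replabels(tokens, d, max_reps=2)
print(packed)                                   # [0, 3, 1]: "a <2 reps> b"
print(unpack_replabels(packed, d, max_reps=2))  # [0, 0, 0, 1]
```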
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/utils/cider/pyciderevalcap/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/utils/cider/pyciderevalcap/__init__.py
deleted file mode 100644
index 3f7d85bba884ea8f83fc6ab2a1e6ade80d98d4d9..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/utils/cider/pyciderevalcap/__init__.py
+++ /dev/null
@@ -1 +0,0 @@
-__author__ = 'tylin'
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/transformer/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/transformer/__init__.py
deleted file mode 100644
index 681fca3d4553f6832a65f61fc186793bc4ee0679..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/transformer/__init__.py
+++ /dev/null
@@ -1,50 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-"""isort:skip_file"""
-
-from .transformer_config import (
- TransformerConfig,
- DEFAULT_MAX_SOURCE_POSITIONS,
- DEFAULT_MAX_TARGET_POSITIONS,
- DEFAULT_MIN_PARAMS_TO_WRAP,
-)
-from .transformer_decoder import TransformerDecoder, TransformerDecoderBase, Linear
-from .transformer_encoder import TransformerEncoder, TransformerEncoderBase
-from .transformer_legacy import (
- TransformerModel,
- base_architecture,
- tiny_architecture,
- transformer_iwslt_de_en,
- transformer_wmt_en_de,
- transformer_vaswani_wmt_en_de_big,
- transformer_vaswani_wmt_en_fr_big,
- transformer_wmt_en_de_big,
- transformer_wmt_en_de_big_t2t,
-)
-from .transformer_base import TransformerModelBase, Embedding
-
-
-__all__ = [
- "TransformerModelBase",
- "TransformerConfig",
- "TransformerDecoder",
- "TransformerDecoderBase",
- "TransformerEncoder",
- "TransformerEncoderBase",
- "TransformerModel",
- "Embedding",
- "Linear",
- "base_architecture",
- "tiny_architecture",
- "transformer_iwslt_de_en",
- "transformer_wmt_en_de",
- "transformer_vaswani_wmt_en_de_big",
- "transformer_vaswani_wmt_en_fr_big",
- "transformer_wmt_en_de_big",
- "transformer_wmt_en_de_big_t2t",
- "DEFAULT_MAX_SOURCE_POSITIONS",
- "DEFAULT_MAX_TARGET_POSITIONS",
- "DEFAULT_MIN_PARAMS_TO_WRAP",
-]
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/tasks/denoising.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/tasks/denoising.py
deleted file mode 100644
index d1dff26c36d51e394e1c955c6683fa4a20c52395..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/tasks/denoising.py
+++ /dev/null
@@ -1,277 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import logging
-import os
-
-from fairseq import utils
-from fairseq.data import (
- AppendTokenDataset,
- DenoisingDataset,
- Dictionary,
- IdDataset,
- NestedDictionaryDataset,
- NumelDataset,
- PadDataset,
- PrependTokenDataset,
- StripTokenDataset,
- TokenBlockDataset,
- data_utils,
-)
-from fairseq.data.encoders.utils import get_whole_word_mask
-from fairseq.data.shorten_dataset import maybe_shorten_dataset
-from fairseq.tasks import LegacyFairseqTask, register_task
-import numpy as np
-
-
-logger = logging.getLogger(__name__)
-
-
-@register_task("denoising")
-class DenoisingTask(LegacyFairseqTask):
- """
- Denoising task for applying sequence-to-sequence denoising (i.e., BART).
- """
-
- @staticmethod
- def add_args(parser):
- """Add task-specific arguments to the parser."""
- parser.add_argument("data", help="path to data directory")
- parser.add_argument(
- "--tokens-per-sample",
- default=512,
- type=int,
- help="max number of total tokens over all segments"
- " per sample for dataset",
- )
- parser.add_argument(
- "--sample-break-mode",
- default="complete_doc",
- type=str,
- help="mode for breaking sentence",
- )
- parser.add_argument(
- "--mask",
- default=0.0,
- type=float,
- help="fraction of words/subwords that will be masked",
- )
- parser.add_argument(
- "--mask-random",
- default=0.0,
- type=float,
- help="instead of using [MASK], use random token this often",
- )
- parser.add_argument(
- "--insert",
- default=0.0,
- type=float,
- help="insert this percentage of additional random tokens",
- )
- parser.add_argument(
- "--permute",
- default=0.0,
- type=float,
- help="take this proportion of subwords and permute them",
- )
- parser.add_argument(
- "--rotate",
- default=0.5,
- type=float,
- help="rotate this proportion of inputs",
- )
- parser.add_argument(
- "--poisson-lambda",
- default=3.0,
- type=float,
- help="randomly shuffle sentences for this proportion of inputs",
- )
- parser.add_argument(
- "--permute-sentences",
- default=0.0,
- type=float,
- help="shuffle this proportion of sentences in all inputs",
- )
- parser.add_argument(
- "--mask-length",
- default="subword",
- type=str,
- choices=["subword", "word", "span-poisson"],
- help="mask length to choose",
- )
- parser.add_argument(
- "--replace-length",
- default=-1,
- type=int,
- help="when masking N tokens, replace with 0, 1, or N tokens (use -1 for N)",
- )
- parser.add_argument(
- "--max-source-positions",
- default=1024,
- type=int,
- metavar="N",
- help="max number of tokens in the source sequence",
- )
- parser.add_argument(
- "--max-target-positions",
- default=1024,
- type=int,
- metavar="N",
- help="max number of tokens in the target sequence",
- )
-
- parser.add_argument(
- "--shorten-method",
- default="none",
- choices=["none", "truncate", "random_crop"],
- help="if not none, shorten sequences that exceed --tokens-per-sample",
- )
- parser.add_argument(
- "--shorten-data-split-list",
- default="",
- help="comma-separated list of dataset splits to apply shortening to, "
- 'e.g., "train,valid" (default: all dataset splits)',
- )
-
-
- def __init__(self, args, dictionary):
- super().__init__(args)
- self.dictionary = dictionary
- self.seed = args.seed
-
- # add mask token
- self.mask_idx = self.dictionary.add_symbol("<mask>")
-
- @classmethod
- def setup_task(cls, args, **kwargs):
- """Setup the task."""
- paths = utils.split_paths(args.data)
- assert len(paths) > 0
- dictionary = Dictionary.load(os.path.join(paths[0], "dict.txt"))
- logger.info("dictionary: {} types".format(len(dictionary)))
- if not hasattr(args, "shuffle_instance"):
- args.shuffle_instance = False
- return cls(args, dictionary)
-
- def load_dataset(self, split, epoch=1, combine=False, **kwargs):
- """Load a given dataset split.
-
- Args:
- split (str): name of the split (e.g., train, valid, test)
- """
- paths = utils.split_paths(self.args.data)
- assert len(paths) > 0
- data_path = paths[(epoch - 1) % len(paths)]
- split_path = os.path.join(data_path, split)
-
- dataset = data_utils.load_indexed_dataset(
- split_path,
- self.dictionary,
- self.args.dataset_impl,
- combine=combine,
- )
- if dataset is None:
- raise FileNotFoundError(
- "Dataset not found: {} ({})".format(split, split_path)
- )
-
- dataset = StripTokenDataset(dataset, self.dictionary.eos())
-
- dataset = maybe_shorten_dataset(
- dataset,
- split,
- self.args.shorten_data_split_list,
- self.args.shorten_method,
- self.args.tokens_per_sample,
- self.args.seed,
- )
-
- # create continuous blocks of tokens
- dataset = TokenBlockDataset(
- dataset,
- dataset.sizes,
- self.args.tokens_per_sample - 2, # one less for <s> and one for </s>
- pad=self.dictionary.pad(),
- eos=self.dictionary.eos(),
- break_mode=self.args.sample_break_mode,
- document_sep_len=0,
- )
- logger.info("loaded {} blocks from: {}".format(len(dataset), split_path))
-
- # prepend beginning-of-sentence token (<s>, equiv. to [CLS] in BERT)
- dataset = PrependTokenDataset(dataset, self.source_dictionary.bos())
- dataset = AppendTokenDataset(dataset, self.source_dictionary.eos())
-
- mask_whole_words = (
- get_whole_word_mask(self.args, self.source_dictionary)
- if self.args.mask_length != "subword"
- else None
- )
-
- self.datasets[split] = DenoisingDataset(
- dataset,
- dataset.sizes,
- self.dictionary,
- self.mask_idx,
- mask_whole_words,
- shuffle=self.args.shuffle_instance,
- seed=self.seed,
- args=self.args,
- )
- logger.info(
- "Split: {0}, Loaded {1} samples of denoising_dataset".format(
- split,
- len(self.datasets[split]),
- )
- )
-
- def build_dataset_for_inference(self, src_tokens, src_lengths, **kwargs):
- """
- Generate batches for inference. We assume that the input begins with a
- bos symbol (<s>) and ends with an eos symbol (</s>).
- """
- pad = self.source_dictionary.pad()
- eos = self.source_dictionary.eos()
- src_dataset = TokenBlockDataset(
- src_tokens,
- src_lengths,
- block_size=self.args.tokens_per_sample - 2, # for <s> and </s>
- pad=pad,
- eos=eos,
- break_mode=self.args.sample_break_mode,
- document_sep_len=0,
- )
- prev_output_tokens = PrependTokenDataset(
- StripTokenDataset(src_dataset, eos), eos
- )
- src_dataset = PadDataset(src_dataset, pad_idx=pad, left_pad=False)
- return NestedDictionaryDataset(
- {
- "id": IdDataset(),
- "net_input": {
- "src_tokens": src_dataset,
- "src_lengths": NumelDataset(src_dataset, reduce=False),
- "prev_output_tokens": PadDataset(
- prev_output_tokens, pad_idx=pad, left_pad=False
- ),
- },
- "target": src_dataset,
- },
- sizes=[np.array(src_lengths)],
- )
-
- def max_positions(self):
- """Return the max sentence length allowed by the task."""
- return (self.args.max_source_positions, self.args.max_target_positions)
-
- @property
- def source_dictionary(self):
- """Return the source :class:`~fairseq.data.Dictionary`."""
- return self.dictionary
-
- @property
- def target_dictionary(self):
- """Return the target :class:`~fairseq.data.Dictionary`."""
- return self.dictionary
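Because every knob of the task lives in `add_args`, the quickest way to inspect it is to parse defaults against a fresh parser. A sketch assuming the stock fairseq package is installed (the data directory is a placeholder):

```python
import argparse
from fairseq.tasks.denoising import DenoisingTask  # stock fairseq path (assumed)

parser = argparse.ArgumentParser()
DenoisingTask.add_args(parser)
args = parser.parse_args([
    "/path/to/data",  # placeholder positional data directory
    "--mask", "0.3",
    "--mask-length", "span-poisson",
    "--poisson-lambda", "3.5",
])
print(args.tokens_per_sample, args.sample_break_mode)  # 512 complete_doc
```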
diff --git a/spaces/OIUGLK/bingo/src/components/button-scroll-to-bottom.tsx b/spaces/OIUGLK/bingo/src/components/button-scroll-to-bottom.tsx
deleted file mode 100644
index b68ab9c0e48320c356e51a52d11b9ca63909e6c5..0000000000000000000000000000000000000000
--- a/spaces/OIUGLK/bingo/src/components/button-scroll-to-bottom.tsx
+++ /dev/null
@@ -1,34 +0,0 @@
-'use client'
-
-import * as React from 'react'
-
-import { cn } from '@/lib/utils'
-import { useAtBottom } from '@/lib/hooks/use-at-bottom'
-import { Button, type ButtonProps } from '@/components/ui/button'
-import { IconArrowDown } from '@/components/ui/icons'
-
-export function ButtonScrollToBottom({ className, ...props }: ButtonProps) {
- const isAtBottom = useAtBottom()
-
- return (
- <Button
- variant="outline"
- size="icon"
- className={cn('fixed right-4 bottom-28 z-10 transition-opacity duration-300', isAtBottom ? 'opacity-0' : 'opacity-100', className)}
- onClick={() => window.scrollTo({ top: document.body.offsetHeight, behavior: 'smooth' })}
- {...props}
- >
- <IconArrowDown />
- <span className="sr-only">Scroll to bottom</span>
- </Button>
- )
-}
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/export/torchscript_patch.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/export/torchscript_patch.py
deleted file mode 100644
index da9b324f1582e31d1a16d2fe462ac2989bea56ea..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/export/torchscript_patch.py
+++ /dev/null
@@ -1,406 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import os
-import sys
-import tempfile
-from contextlib import ExitStack, contextmanager
-from copy import deepcopy
-from unittest import mock
-import torch
-from torch import nn
-
-# need some explicit imports due to https://github.com/pytorch/pytorch/issues/38964
-import detectron2 # noqa F401
-from detectron2.structures import Boxes, Instances
-from detectron2.utils.env import _import_file
-
-_counter = 0
-
-
-def _clear_jit_cache():
- from torch.jit._recursive import concrete_type_store
- from torch.jit._state import _jit_caching_layer
-
- concrete_type_store.type_store.clear() # for modules
- _jit_caching_layer.clear() # for free functions
-
-
-def _add_instances_conversion_methods(newInstances):
- """
- Add from_instances methods to the scripted Instances class.
- """
- cls_name = newInstances.__name__
-
- @torch.jit.unused
- def from_instances(instances: Instances):
- """
- Create scripted Instances from original Instances
- """
- fields = instances.get_fields()
- image_size = instances.image_size
- ret = newInstances(image_size)
- for name, val in fields.items():
- assert hasattr(ret, f"_{name}"), f"No attribute named {name} in {cls_name}"
- setattr(ret, name, deepcopy(val))
- return ret
-
- newInstances.from_instances = from_instances
-
-
-@contextmanager
-def patch_instances(fields):
- """
- A contextmanager, under which the Instances class in detectron2 is replaced
- by a statically-typed scriptable class, defined by `fields`.
- See more in `scripting_with_instances`.
- """
-
- with tempfile.TemporaryDirectory(prefix="detectron2") as dir, tempfile.NamedTemporaryFile(
- mode="w", encoding="utf-8", suffix=".py", dir=dir, delete=False
- ) as f:
- try:
- # Objects that use Instances should not reuse previously-compiled
- # results in cache, because `Instances` could be a new class each time.
- _clear_jit_cache()
-
- cls_name, s = _gen_instance_module(fields)
- f.write(s)
- f.flush()
- f.close()
-
- module = _import(f.name)
- new_instances = getattr(module, cls_name)
- _ = torch.jit.script(new_instances)
- # let torchscript think Instances was scripted already
- Instances.__torch_script_class__ = True
- # let torchscript find new_instances when looking for the jit type of Instances
- Instances._jit_override_qualname = torch._jit_internal._qualified_name(new_instances)
-
- _add_instances_conversion_methods(new_instances)
- yield new_instances
- finally:
- try:
- del Instances.__torch_script_class__
- del Instances._jit_override_qualname
- except AttributeError:
- pass
- sys.modules.pop(module.__name__)
-
-
-def _gen_instance_class(fields):
- """
- Args:
- fields (dict[name: type])
- """
-
- class _FieldType:
- def __init__(self, name, type_):
- assert isinstance(name, str), f"Field name must be str, got {name}"
- self.name = name
- self.type_ = type_
- self.annotation = f"{type_.__module__}.{type_.__name__}"
-
- fields = [_FieldType(k, v) for k, v in fields.items()]
-
- def indent(level, s):
- return " " * 4 * level + s
-
- lines = []
-
- global _counter
- _counter += 1
-
- cls_name = "ScriptedInstances{}".format(_counter)
-
- field_names = tuple(x.name for x in fields)
- extra_args = ", ".join([f"{f.name}: Optional[{f.annotation}] = None" for f in fields])
- lines.append(
- f"""
-class {cls_name}:
- def __init__(self, image_size: Tuple[int, int], {extra_args}):
- self.image_size = image_size
- self._field_names = {field_names}
-"""
- )
-
- for f in fields:
- lines.append(
- indent(2, f"self._{f.name} = torch.jit.annotate(Optional[{f.annotation}], {f.name})")
- )
-
- for f in fields:
- lines.append(
- f"""
- @property
- def {f.name}(self) -> {f.annotation}:
- # has to use a local for type refinement
- # https://pytorch.org/docs/stable/jit_language_reference.html#optional-type-refinement
- t = self._{f.name}
- assert t is not None, "{f.name} is None and cannot be accessed!"
- return t
-
- @{f.name}.setter
- def {f.name}(self, value: {f.annotation}) -> None:
- self._{f.name} = value
-"""
- )
-
- # support method `__len__`
- lines.append(
- """
- def __len__(self) -> int:
-"""
- )
- for f in fields:
- lines.append(
- f"""
- t = self._{f.name}
- if t is not None:
- return len(t)
-"""
- )
- lines.append(
- """
- raise NotImplementedError("Empty Instances does not support __len__!")
-"""
- )
-
- # support method `has`
- lines.append(
- """
- def has(self, name: str) -> bool:
-"""
- )
- for f in fields:
- lines.append(
- f"""
- if name == "{f.name}":
- return self._{f.name} is not None
-"""
- )
- lines.append(
- """
- return False
-"""
- )
-
- # support method `to`
- none_args = ", None" * len(fields)
- lines.append(
- f"""
- def to(self, device: torch.device) -> "{cls_name}":
- ret = {cls_name}(self.image_size{none_args})
-"""
- )
- for f in fields:
- if hasattr(f.type_, "to"):
- lines.append(
- f"""
- t = self._{f.name}
- if t is not None:
- ret._{f.name} = t.to(device)
-"""
- )
- else:
- # For now, ignore fields that cannot be moved to devices.
- # Maybe can support other tensor-like classes (e.g. __torch_function__)
- pass
- lines.append(
- """
- return ret
-"""
- )
-
- # support method `getitem`
- none_args = ", None" * len(fields)
- lines.append(
- f"""
- def __getitem__(self, item) -> "{cls_name}":
- ret = {cls_name}(self.image_size{none_args})
-"""
- )
- for f in fields:
- lines.append(
- f"""
- t = self._{f.name}
- if t is not None:
- ret._{f.name} = t[item]
-"""
- )
- lines.append(
- """
- return ret
-"""
- )
-
- # support method `cat`
- # this version does not contain checks that all instances have same size and fields
- none_args = ", None" * len(fields)
- lines.append(
- f"""
- def cat(self, instances: List["{cls_name}"]) -> "{cls_name}":
- ret = {cls_name}(self.image_size{none_args})
-"""
- )
- for f in fields:
- lines.append(
- f"""
- t = self._{f.name}
- if t is not None:
- values: List[{f.annotation}] = [x.{f.name} for x in instances]
- if torch.jit.isinstance(t, torch.Tensor):
- ret._{f.name} = torch.cat(values, dim=0)
- else:
- ret._{f.name} = t.cat(values)
-"""
- )
- lines.append(
- """
- return ret"""
- )
-
- # support method `get_fields()`
- lines.append(
- """
- def get_fields(self) -> Dict[str, Tensor]:
- ret = {}
- """
- )
- for f in fields:
- if f.type_ == Boxes:
- stmt = "t.tensor"
- elif f.type_ == torch.Tensor:
- stmt = "t"
- else:
- stmt = f'assert False, "unsupported type {str(f.type_)}"'
- lines.append(
- f"""
- t = self._{f.name}
- if t is not None:
- ret["{f.name}"] = {stmt}
- """
- )
- lines.append(
- """
- return ret"""
- )
- return cls_name, os.linesep.join(lines)
-
-
-def _gen_instance_module(fields):
- # TODO: find a more automatic way to enable import of other classes
- s = """
-from copy import deepcopy
-import torch
-from torch import Tensor
-import typing
-from typing import *
-
-import detectron2
-from detectron2.structures import Boxes, Instances
-
-"""
-
- cls_name, cls_def = _gen_instance_class(fields)
- s += cls_def
- return cls_name, s
-
-
-def _import(path):
- return _import_file(
- "{}{}".format(sys.modules[__name__].__name__, _counter), path, make_importable=True
- )
-
-
-@contextmanager
-def patch_builtin_len(modules=()):
- """
- Patch the builtin len() function of a few detectron2 modules
- to use __len__ instead, because __len__ does not convert values to
- integers and therefore is friendly to tracing.
-
- Args:
- modules (list[str]): names of extra modules to patch len(), in
- addition to those in detectron2.
- """
-
- def _new_len(obj):
- return obj.__len__()
-
- with ExitStack() as stack:
- MODULES = [
- "detectron2.modeling.roi_heads.fast_rcnn",
- "detectron2.modeling.roi_heads.mask_head",
- "detectron2.modeling.roi_heads.keypoint_head",
- ] + list(modules)
- ctxs = [stack.enter_context(mock.patch(mod + ".len")) for mod in MODULES]
- for m in ctxs:
- m.side_effect = _new_len
- yield
-
-
-def patch_nonscriptable_classes():
- """
- Apply patches on a few nonscriptable detectron2 classes.
- Should not have side-effects on eager usage.
- """
- # __prepare_scriptable__ can also be added to models for easier maintenance.
- # But it complicates the clean model code.
-
- from detectron2.modeling.backbone import ResNet, FPN
-
- # Due to https://github.com/pytorch/pytorch/issues/36061,
- # we change backbone to use ModuleList for scripting.
- # (note: this changes param names in state_dict)
-
- def prepare_resnet(self):
- ret = deepcopy(self)
- ret.stages = nn.ModuleList(ret.stages)
- for k in self.stage_names:
- delattr(ret, k)
- return ret
-
- ResNet.__prepare_scriptable__ = prepare_resnet
-
- def prepare_fpn(self):
- ret = deepcopy(self)
- ret.lateral_convs = nn.ModuleList(ret.lateral_convs)
- ret.output_convs = nn.ModuleList(ret.output_convs)
- for name, _ in self.named_children():
- if name.startswith("fpn_"):
- delattr(ret, name)
- return ret
-
- FPN.__prepare_scriptable__ = prepare_fpn
-
- # Annotate some attributes to be constants for the purpose of scripting,
- # even though they are not constants in eager mode.
- from detectron2.modeling.roi_heads import StandardROIHeads
-
- if hasattr(StandardROIHeads, "__annotations__"):
- # copy first to avoid editing annotations of base class
- StandardROIHeads.__annotations__ = deepcopy(StandardROIHeads.__annotations__)
- StandardROIHeads.__annotations__["mask_on"] = torch.jit.Final[bool]
- StandardROIHeads.__annotations__["keypoint_on"] = torch.jit.Final[bool]
-
-
-# These patches are not supposed to have side-effects.
-patch_nonscriptable_classes()
-
-
-@contextmanager
-def freeze_training_mode(model):
- """
- A context manager that annotates the "training" attribute of every submodule
- to constant, so that the training codepath in these modules can be
- meta-compiled away. Upon exiting, the annotations are reverted.
- """
- classes = {type(x) for x in model.modules()}
- # __constants__ is the old way to annotate constants and not compatible
- # with __annotations__ .
- classes = {x for x in classes if not hasattr(x, "__constants__")}
- for cls in classes:
- cls.__annotations__["training"] = torch.jit.Final[bool]
- yield
- for cls in classes:
- cls.__annotations__["training"] = bool
diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/models/ade20k/segm_lib/nn/modules/tests/test_numeric_batchnorm.py b/spaces/OpenGVLab/InternGPT/third-party/lama/models/ade20k/segm_lib/nn/modules/tests/test_numeric_batchnorm.py
deleted file mode 100644
index 8bd45a930d3dc84912e58659ee575be08e9038f0..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/third-party/lama/models/ade20k/segm_lib/nn/modules/tests/test_numeric_batchnorm.py
+++ /dev/null
@@ -1,56 +0,0 @@
-# -*- coding: utf-8 -*-
-# File : test_numeric_batchnorm.py
-# Author : Jiayuan Mao
-# Email : maojiayuan@gmail.com
-# Date : 27/01/2018
-#
-# This file is part of Synchronized-BatchNorm-PyTorch.
-
-import unittest
-
-import torch
-import torch.nn as nn
-from torch.autograd import Variable
-
-from sync_batchnorm.unittest import TorchTestCase
-
-
-def handy_var(a, unbias=True):
- n = a.size(0)
- asum = a.sum(dim=0)
- as_sum = (a ** 2).sum(dim=0) # a square sum
- sumvar = as_sum - asum * asum / n
- if unbias:
- return sumvar / (n - 1)
- else:
- return sumvar / n
-
-
-class NumericTestCase(TorchTestCase):
- def testNumericBatchNorm(self):
- a = torch.rand(16, 10)
- bn = nn.BatchNorm1d(10, momentum=1, eps=1e-5, affine=False) # 1d: the input below is (N, C), which BatchNorm2d rejects
- bn.train()
-
- a_var1 = Variable(a, requires_grad=True)
- b_var1 = bn(a_var1)
- loss1 = b_var1.sum()
- loss1.backward()
-
- a_var2 = Variable(a, requires_grad=True)
- a_mean2 = a_var2.mean(dim=0, keepdim=True)
- a_std2 = torch.sqrt(handy_var(a_var2, unbias=False).clamp(min=1e-5))
- # a_std2 = torch.sqrt(a_var2.var(dim=0, keepdim=True, unbiased=False) + 1e-5)
- b_var2 = (a_var2 - a_mean2) / a_std2
- loss2 = b_var2.sum()
- loss2.backward()
-
- self.assertTensorClose(bn.running_mean, a.mean(dim=0))
- self.assertTensorClose(bn.running_var, handy_var(a))
- self.assertTensorClose(a_var1.data, a_var2.data)
- self.assertTensorClose(b_var1.data, b_var2.data)
- self.assertTensorClose(a_var1.grad, a_var2.grad)
-
-
-if __name__ == '__main__':
- unittest.main()
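`handy_var` is just the textbook sum-of-squares variance, so it can be checked against `torch.var` directly; the helper is redefined below so the snippet runs on its own:

```python
import torch

def handy_var(a, unbias=True):
    n = a.size(0)
    asum = a.sum(dim=0)
    as_sum = (a ** 2).sum(dim=0)  # sum of squares
    sumvar = as_sum - asum * asum / n
    return sumvar / (n - 1) if unbias else sumvar / n

a = torch.rand(16, 10)
print(torch.allclose(handy_var(a), a.var(dim=0, unbiased=True), atol=1e-6))  # True
```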
diff --git a/spaces/PKUWilliamYang/StyleGANEX/models/mtcnn/mtcnn_pytorch/src/matlab_cp2tform.py b/spaces/PKUWilliamYang/StyleGANEX/models/mtcnn/mtcnn_pytorch/src/matlab_cp2tform.py
deleted file mode 100644
index 025b18ec2e64472bd4c0c636f9ae061526bdc8cd..0000000000000000000000000000000000000000
--- a/spaces/PKUWilliamYang/StyleGANEX/models/mtcnn/mtcnn_pytorch/src/matlab_cp2tform.py
+++ /dev/null
@@ -1,350 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-Created on Tue Jul 11 06:54:28 2017
-
-@author: zhaoyafei
-"""
-
-import numpy as np
-from numpy.linalg import inv, norm, lstsq
-from numpy.linalg import matrix_rank as rank
-
-
-class MatlabCp2tormException(Exception):
- def __str__(self):
- return 'In File {}:{}'.format(
- __file__, super().__str__())
-
-
-def tformfwd(trans, uv):
- """
- Function:
- ----------
- apply affine transform 'trans' to uv
-
- Parameters:
- ----------
- @trans: 3x3 np.array
- transform matrix
- @uv: Kx2 np.array
- each row is a pair of coordinates (x, y)
-
- Returns:
- ----------
- @xy: Kx2 np.array
- each row is a pair of transformed coordinates (x, y)
- """
- uv = np.hstack((
- uv, np.ones((uv.shape[0], 1))
- ))
- xy = np.dot(uv, trans)
- xy = xy[:, 0:-1]
- return xy
-
-
-def tforminv(trans, uv):
- """
- Function:
- ----------
- apply the inverse of affine transform 'trans' to uv
-
- Parameters:
- ----------
- @trans: 3x3 np.array
- transform matrix
- @uv: Kx2 np.array
- each row is a pair of coordinates (x, y)
-
- Returns:
- ----------
- @xy: Kx2 np.array
- each row is a pair of inverse-transformed coordinates (x, y)
- """
- Tinv = inv(trans)
- xy = tformfwd(Tinv, uv)
- return xy
-
-
-def findNonreflectiveSimilarity(uv, xy, options=None):
- options = {'K': 2}
-
- K = options['K']
- M = xy.shape[0]
- x = xy[:, 0].reshape((-1, 1)) # use reshape to keep a column vector
- y = xy[:, 1].reshape((-1, 1)) # use reshape to keep a column vector
- # print('--->x, y:\n', x, y
-
- tmp1 = np.hstack((x, y, np.ones((M, 1)), np.zeros((M, 1))))
- tmp2 = np.hstack((y, -x, np.zeros((M, 1)), np.ones((M, 1))))
- X = np.vstack((tmp1, tmp2))
- # print('--->X.shape: ', X.shape
- # print('X:\n', X
-
- u = uv[:, 0].reshape((-1, 1)) # use reshape to keep a column vector
- v = uv[:, 1].reshape((-1, 1)) # use reshape to keep a column vector
- U = np.vstack((u, v))
- # print('--->U.shape: ', U.shape
- # print('U:\n', U
-
- # We know that X * r = U
- if rank(X) >= 2 * K:
- r, _, _, _ = lstsq(X, U, rcond=None) # Make sure this is what I want
- r = np.squeeze(r)
- else:
- raise Exception('cp2tform:twoUniquePointsReq')
-
- # print('--->r:\n', r
-
- sc = r[0]
- ss = r[1]
- tx = r[2]
- ty = r[3]
-
- Tinv = np.array([
- [sc, -ss, 0],
- [ss, sc, 0],
- [tx, ty, 1]
- ])
-
- # print('--->Tinv:\n', Tinv
-
- T = inv(Tinv)
- # print('--->T:\n', T
-
- T[:, 2] = np.array([0, 0, 1])
-
- return T, Tinv
-
-
-def findSimilarity(uv, xy, options=None):
- options = {'K': 2}
-
- # uv = np.array(uv)
- # xy = np.array(xy)
-
- # Solve for trans1
- trans1, trans1_inv = findNonreflectiveSimilarity(uv, xy, options)
-
- # Solve for trans2
-
- # manually reflect the xy data across the Y-axis
- xyR = xy.copy() # copy, so the caller's xy is not reflected in place before the norm comparisons below
- xyR[:, 0] = -1 * xyR[:, 0]
-
- trans2r, trans2r_inv = findNonreflectiveSimilarity(uv, xyR, options)
-
- # manually reflect the tform to undo the reflection done on xyR
- TreflectY = np.array([
- [-1, 0, 0],
- [0, 1, 0],
- [0, 0, 1]
- ])
-
- trans2 = np.dot(trans2r, TreflectY)
-
- # Figure out if trans1 or trans2 is better
- xy1 = tformfwd(trans1, uv)
- norm1 = norm(xy1 - xy)
-
- xy2 = tformfwd(trans2, uv)
- norm2 = norm(xy2 - xy)
-
- if norm1 <= norm2:
- return trans1, trans1_inv
- else:
- trans2_inv = inv(trans2)
- return trans2, trans2_inv
-
-
-def get_similarity_transform(src_pts, dst_pts, reflective=True):
- """
- Function:
- ----------
- Find Similarity Transform Matrix 'trans':
- u = src_pts[:, 0]
- v = src_pts[:, 1]
- x = dst_pts[:, 0]
- y = dst_pts[:, 1]
- [x, y, 1] = [u, v, 1] * trans
-
- Parameters:
- ----------
- @src_pts: Kx2 np.array
- source points, each row is a pair of coordinates (x, y)
- @dst_pts: Kx2 np.array
- destination points, each row is a pair of transformed
- coordinates (x, y)
- @reflective: True or False
- if True:
- use reflective similarity transform
- else:
- use non-reflective similarity transform
-
- Returns:
- ----------
- @trans: 3x3 np.array
- transform matrix from uv to xy
- trans_inv: 3x3 np.array
- inverse of trans, transform matrix from xy to uv
- """
-
- if reflective:
- trans, trans_inv = findSimilarity(src_pts, dst_pts)
- else:
- trans, trans_inv = findNonreflectiveSimilarity(src_pts, dst_pts)
-
- return trans, trans_inv
-
-
-def cvt_tform_mat_for_cv2(trans):
- """
- Function:
- ----------
- Convert Transform Matrix 'trans' into 'cv2_trans' which could be
- directly used by cv2.warpAffine():
- u = src_pts[:, 0]
- v = src_pts[:, 1]
- x = dst_pts[:, 0]
- y = dst_pts[:, 1]
- [x, y].T = cv_trans * [u, v, 1].T
-
- Parameters:
- ----------
- @trans: 3x3 np.array
- transform matrix from uv to xy
-
- Returns:
- ----------
- @cv2_trans: 2x3 np.array
- transform matrix from src_pts to dst_pts, could be directly used
- for cv2.warpAffine()
- """
- cv2_trans = trans[:, 0:2].T
-
- return cv2_trans
-
-
-def get_similarity_transform_for_cv2(src_pts, dst_pts, reflective=True):
- """
- Function:
- ----------
- Find Similarity Transform Matrix 'cv2_trans' which could be
- directly used by cv2.warpAffine():
- u = src_pts[:, 0]
- v = src_pts[:, 1]
- x = dst_pts[:, 0]
- y = dst_pts[:, 1]
- [x, y].T = cv_trans * [u, v, 1].T
-
- Parameters:
- ----------
- @src_pts: Kx2 np.array
- source points, each row is a pair of coordinates (x, y)
- @dst_pts: Kx2 np.array
- destination points, each row is a pair of transformed
- coordinates (x, y)
- @reflective: True or False
- if True:
- use reflective similarity transform
- else:
- use non-reflective similarity transform
-
- Returns:
- ----------
- @cv2_trans: 2x3 np.array
- transform matrix from src_pts to dst_pts, could be directly used
- for cv2.warpAffine()
- """
- trans, trans_inv = get_similarity_transform(src_pts, dst_pts, reflective)
- cv2_trans = cvt_tform_mat_for_cv2(trans)
-
- return cv2_trans
-
-
-if __name__ == '__main__':
- """
- u = [0, 6, -2]
- v = [0, 3, 5]
- x = [-1, 0, 4]
- y = [-1, -10, 4]
-
- # In Matlab, run:
- #
- # uv = [u'; v'];
- # xy = [x'; y'];
- # tform_sim=cp2tform(uv,xy,'similarity');
- #
- # trans = tform_sim.tdata.T
- # ans =
- # -0.0764 -1.6190 0
- # 1.6190 -0.0764 0
- # -3.2156 0.0290 1.0000
- # trans_inv = tform_sim.tdata.Tinv
- # ans =
- #
- # -0.0291 0.6163 0
- # -0.6163 -0.0291 0
- # -0.0756 1.9826 1.0000
- # xy_m=tformfwd(tform_sim, u,v)
- #
- # xy_m =
- #
- # -3.2156 0.0290
- # 1.1833 -9.9143
- # 5.0323 2.8853
- # uv_m=tforminv(tform_sim, x,y)
- #
- # uv_m =
- #
- # 0.5698 1.3953
- # 6.0872 2.2733
- # -2.6570 4.3314
- """
- u = [0, 6, -2]
- v = [0, 3, 5]
- x = [-1, 0, 4]
- y = [-1, -10, 4]
-
- uv = np.array((u, v)).T
- xy = np.array((x, y)).T
-
- print('\n--->uv:')
- print(uv)
- print('\n--->xy:')
- print(xy)
-
- trans, trans_inv = get_similarity_transform(uv, xy)
-
- print('\n--->trans matrix:')
- print(trans)
-
- print('\n--->trans_inv matrix:')
- print(trans_inv)
-
- print('\n---> apply transform to uv')
- print('\nxy_m = uv_augmented * trans')
- uv_aug = np.hstack((
- uv, np.ones((uv.shape[0], 1))
- ))
- xy_m = np.dot(uv_aug, trans)
- print(xy_m)
-
- print('\nxy_m = tformfwd(trans, uv)')
- xy_m = tformfwd(trans, uv)
- print(xy_m)
-
- print('\n---> apply inverse transform to xy')
- print('\nuv_m = xy_augmented * trans_inv')
- xy_aug = np.hstack((
- xy, np.ones((xy.shape[0], 1))
- ))
- uv_m = np.dot(xy_aug, trans_inv)
- print(uv_m)
-
- print('\nuv_m = tformfwd(trans_inv, xy)')
- uv_m = tformfwd(trans_inv, xy)
- print(uv_m)
-
- uv_m = tforminv(trans, xy)
- print('\nuv_m = tforminv(trans, xy)')
- print(uv_m)
diff --git a/spaces/PSLD/PSLD/stable-diffusion/ldm/data/imagenet.py b/spaces/PSLD/PSLD/stable-diffusion/ldm/data/imagenet.py
deleted file mode 100644
index 1c473f9c6965b22315dbb289eff8247c71bdc790..0000000000000000000000000000000000000000
--- a/spaces/PSLD/PSLD/stable-diffusion/ldm/data/imagenet.py
+++ /dev/null
@@ -1,394 +0,0 @@
-import os, yaml, pickle, shutil, tarfile, glob
-import cv2
-import albumentations
-import PIL
-import numpy as np
-import torchvision.transforms.functional as TF
-from omegaconf import OmegaConf
-from functools import partial
-from PIL import Image
-from tqdm import tqdm
-from torch.utils.data import Dataset, Subset
-
-import taming.data.utils as tdu
-from taming.data.imagenet import str_to_indices, give_synsets_from_indices, download, retrieve
-from taming.data.imagenet import ImagePaths
-
-from ldm.modules.image_degradation import degradation_fn_bsr, degradation_fn_bsr_light
-
-
-def synset2idx(path_to_yaml="data/index_synset.yaml"):
- with open(path_to_yaml) as f:
- di2s = yaml.safe_load(f)  # yaml.load without an explicit Loader is unsupported on modern PyYAML
- return dict((v,k) for k,v in di2s.items())
-
-
-class ImageNetBase(Dataset):
- def __init__(self, config=None):
- self.config = config or OmegaConf.create()
- if not isinstance(self.config, dict):
- self.config = OmegaConf.to_container(self.config)
- self.keep_orig_class_label = self.config.get("keep_orig_class_label", False)
- self.process_images = True # if False we skip loading & processing images and self.data contains filepaths
- self._prepare()
- self._prepare_synset_to_human()
- self._prepare_idx_to_synset()
- self._prepare_human_to_integer_label()
- self._load()
-
- def __len__(self):
- return len(self.data)
-
- def __getitem__(self, i):
- return self.data[i]
-
- def _prepare(self):
- raise NotImplementedError()
-
- def _filter_relpaths(self, relpaths):
- ignore = set([
- "n06596364_9591.JPEG",
- ])
- relpaths = [rpath for rpath in relpaths if rpath.split("/")[-1] not in ignore]
- if "sub_indices" in self.config:
- indices = str_to_indices(self.config["sub_indices"])
- synsets = give_synsets_from_indices(indices, path_to_yaml=self.idx2syn) # returns a list of strings
- self.synset2idx = synset2idx(path_to_yaml=self.idx2syn)
- files = []
- for rpath in relpaths:
- syn = rpath.split("/")[0]
- if syn in synsets:
- files.append(rpath)
- return files
- else:
- return relpaths
-
- def _prepare_synset_to_human(self):
- SIZE = 2655750
- URL = "https://heibox.uni-heidelberg.de/f/9f28e956cd304264bb82/?dl=1"
- self.human_dict = os.path.join(self.root, "synset_human.txt")
- if (not os.path.exists(self.human_dict) or
- not os.path.getsize(self.human_dict)==SIZE):
- download(URL, self.human_dict)
-
- def _prepare_idx_to_synset(self):
- URL = "https://heibox.uni-heidelberg.de/f/d835d5b6ceda4d3aa910/?dl=1"
- self.idx2syn = os.path.join(self.root, "index_synset.yaml")
- if (not os.path.exists(self.idx2syn)):
- download(URL, self.idx2syn)
-
- def _prepare_human_to_integer_label(self):
- URL = "https://heibox.uni-heidelberg.de/f/2362b797d5be43b883f6/?dl=1"
- self.human2integer = os.path.join(self.root, "imagenet1000_clsidx_to_labels.txt")
- if (not os.path.exists(self.human2integer)):
- download(URL, self.human2integer)
- with open(self.human2integer, "r") as f:
- lines = f.read().splitlines()
- assert len(lines) == 1000
- self.human2integer_dict = dict()
- for line in lines:
- value, key = line.split(":")
- self.human2integer_dict[key] = int(value)
-
- def _load(self):
- with open(self.txt_filelist, "r") as f:
- self.relpaths = f.read().splitlines()
- l1 = len(self.relpaths)
- self.relpaths = self._filter_relpaths(self.relpaths)
- print("Removed {} files from filelist during filtering.".format(l1 - len(self.relpaths)))
-
- self.synsets = [p.split("/")[0] for p in self.relpaths]
- self.abspaths = [os.path.join(self.datadir, p) for p in self.relpaths]
-
- unique_synsets = np.unique(self.synsets)
- class_dict = dict((synset, i) for i, synset in enumerate(unique_synsets))
- if not self.keep_orig_class_label:
- self.class_labels = [class_dict[s] for s in self.synsets]
- else:
- self.class_labels = [self.synset2idx[s] for s in self.synsets]
-
- with open(self.human_dict, "r") as f:
- human_dict = f.read().splitlines()
- human_dict = dict(line.split(maxsplit=1) for line in human_dict)
-
- self.human_labels = [human_dict[s] for s in self.synsets]
-
- labels = {
- "relpath": np.array(self.relpaths),
- "synsets": np.array(self.synsets),
- "class_label": np.array(self.class_labels),
- "human_label": np.array(self.human_labels),
- }
-
- if self.process_images:
- self.size = retrieve(self.config, "size", default=256)
- self.data = ImagePaths(self.abspaths,
- labels=labels,
- size=self.size,
- random_crop=self.random_crop,
- )
- else:
- self.data = self.abspaths
-
-
-class ImageNetTrain(ImageNetBase):
- NAME = "ILSVRC2012_train"
- URL = "http://www.image-net.org/challenges/LSVRC/2012/"
- AT_HASH = "a306397ccf9c2ead27155983c254227c0fd938e2"
- FILES = [
- "ILSVRC2012_img_train.tar",
- ]
- SIZES = [
- 147897477120,
- ]
-
- def __init__(self, process_images=True, data_root=None, **kwargs):
- self.process_images = process_images
- self.data_root = data_root
- super().__init__(**kwargs)
-
- def _prepare(self):
- if self.data_root:
- self.root = os.path.join(self.data_root, self.NAME)
- else:
- cachedir = os.environ.get("XDG_CACHE_HOME", os.path.expanduser("~/.cache"))
- self.root = os.path.join(cachedir, "autoencoders/data", self.NAME)
-
- self.datadir = os.path.join(self.root, "data")
- self.txt_filelist = os.path.join(self.root, "filelist.txt")
- self.expected_length = 1281167
- self.random_crop = retrieve(self.config, "ImageNetTrain/random_crop",
- default=True)
- if not tdu.is_prepared(self.root):
- # prep
- print("Preparing dataset {} in {}".format(self.NAME, self.root))
-
- datadir = self.datadir
- if not os.path.exists(datadir):
- path = os.path.join(self.root, self.FILES[0])
- if not os.path.exists(path) or not os.path.getsize(path)==self.SIZES[0]:
- import academictorrents as at
- atpath = at.get(self.AT_HASH, datastore=self.root)
- assert atpath == path
-
- print("Extracting {} to {}".format(path, datadir))
- os.makedirs(datadir, exist_ok=True)
- with tarfile.open(path, "r:") as tar:
- tar.extractall(path=datadir)
-
- print("Extracting sub-tars.")
- subpaths = sorted(glob.glob(os.path.join(datadir, "*.tar")))
- for subpath in tqdm(subpaths):
- subdir = subpath[:-len(".tar")]
- os.makedirs(subdir, exist_ok=True)
- with tarfile.open(subpath, "r:") as tar:
- tar.extractall(path=subdir)
-
- filelist = glob.glob(os.path.join(datadir, "**", "*.JPEG"))
- filelist = [os.path.relpath(p, start=datadir) for p in filelist]
- filelist = sorted(filelist)
- filelist = "\n".join(filelist)+"\n"
- with open(self.txt_filelist, "w") as f:
- f.write(filelist)
-
- tdu.mark_prepared(self.root)
-
-
-class ImageNetValidation(ImageNetBase):
- NAME = "ILSVRC2012_validation"
- URL = "http://www.image-net.org/challenges/LSVRC/2012/"
- AT_HASH = "5d6d0df7ed81efd49ca99ea4737e0ae5e3a5f2e5"
- VS_URL = "https://heibox.uni-heidelberg.de/f/3e0f6e9c624e45f2bd73/?dl=1"
- FILES = [
- "ILSVRC2012_img_val.tar",
- "validation_synset.txt",
- ]
- SIZES = [
- 6744924160,
- 1950000,
- ]
-
- def __init__(self, process_images=True, data_root=None, **kwargs):
- self.data_root = data_root
- self.process_images = process_images
- super().__init__(**kwargs)
-
- def _prepare(self):
- if self.data_root:
- self.root = os.path.join(self.data_root, self.NAME)
- else:
- cachedir = os.environ.get("XDG_CACHE_HOME", os.path.expanduser("~/.cache"))
- self.root = os.path.join(cachedir, "autoencoders/data", self.NAME)
- self.datadir = os.path.join(self.root, "data")
- self.txt_filelist = os.path.join(self.root, "filelist.txt")
- self.expected_length = 50000
- self.random_crop = retrieve(self.config, "ImageNetValidation/random_crop",
- default=False)
- if not tdu.is_prepared(self.root):
- # prep
- print("Preparing dataset {} in {}".format(self.NAME, self.root))
-
- datadir = self.datadir
- if not os.path.exists(datadir):
- path = os.path.join(self.root, self.FILES[0])
- if not os.path.exists(path) or not os.path.getsize(path)==self.SIZES[0]:
- import academictorrents as at
- atpath = at.get(self.AT_HASH, datastore=self.root)
- assert atpath == path
-
- print("Extracting {} to {}".format(path, datadir))
- os.makedirs(datadir, exist_ok=True)
- with tarfile.open(path, "r:") as tar:
- tar.extractall(path=datadir)
-
- vspath = os.path.join(self.root, self.FILES[1])
- if not os.path.exists(vspath) or not os.path.getsize(vspath)==self.SIZES[1]:
- download(self.VS_URL, vspath)
-
- with open(vspath, "r") as f:
- synset_dict = f.read().splitlines()
- synset_dict = dict(line.split() for line in synset_dict)
-
- print("Reorganizing into synset folders")
- synsets = np.unique(list(synset_dict.values()))
- for s in synsets:
- os.makedirs(os.path.join(datadir, s), exist_ok=True)
- for k, v in synset_dict.items():
- src = os.path.join(datadir, k)
- dst = os.path.join(datadir, v)
- shutil.move(src, dst)
-
- filelist = glob.glob(os.path.join(datadir, "**", "*.JPEG"))
- filelist = [os.path.relpath(p, start=datadir) for p in filelist]
- filelist = sorted(filelist)
- filelist = "\n".join(filelist)+"\n"
- with open(self.txt_filelist, "w") as f:
- f.write(filelist)
-
- tdu.mark_prepared(self.root)
-
-
-
-class ImageNetSR(Dataset):
- def __init__(self, size=None,
- degradation=None, downscale_f=4, min_crop_f=0.5, max_crop_f=1.,
- random_crop=True):
- """
- Imagenet Superresolution Dataloader
- Performs following ops in order:
- 1. crops a crop of size s from image either as random or center crop
- 2. resizes crop to size with cv2.area_interpolation
- 3. degrades resized crop with degradation_fn
-
- :param size: resizing to size after cropping
- :param degradation: degradation_fn, e.g. cv_bicubic or bsrgan_light
- :param downscale_f: Low Resolution Downsample factor
- :param min_crop_f: determines crop size s,
- where s = c * min_img_side_len with c sampled from interval (min_crop_f, max_crop_f)
- :param max_crop_f: ""
- :param data_root:
- :param random_crop:
- """
- self.base = self.get_base()
- assert size
- assert (size / downscale_f).is_integer()
- self.size = size
- self.LR_size = int(size / downscale_f)
- self.min_crop_f = min_crop_f
- self.max_crop_f = max_crop_f
- assert(max_crop_f <= 1.)
- self.center_crop = not random_crop
-
- self.image_rescaler = albumentations.SmallestMaxSize(max_size=size, interpolation=cv2.INTER_AREA)
-
- self.pil_interpolation = False # gets reset later in case interp_op is from pillow
-
- if degradation == "bsrgan":
- self.degradation_process = partial(degradation_fn_bsr, sf=downscale_f)
-
- elif degradation == "bsrgan_light":
- self.degradation_process = partial(degradation_fn_bsr_light, sf=downscale_f)
-
- else:
- interpolation_fn = {
- "cv_nearest": cv2.INTER_NEAREST,
- "cv_bilinear": cv2.INTER_LINEAR,
- "cv_bicubic": cv2.INTER_CUBIC,
- "cv_area": cv2.INTER_AREA,
- "cv_lanczos": cv2.INTER_LANCZOS4,
- "pil_nearest": PIL.Image.NEAREST,
- "pil_bilinear": PIL.Image.BILINEAR,
- "pil_bicubic": PIL.Image.BICUBIC,
- "pil_box": PIL.Image.BOX,
- "pil_hamming": PIL.Image.HAMMING,
- "pil_lanczos": PIL.Image.LANCZOS,
- }[degradation]
-
- self.pil_interpolation = degradation.startswith("pil_")
-
- if self.pil_interpolation:
- self.degradation_process = partial(TF.resize, size=self.LR_size, interpolation=interpolation_fn)
-
- else:
- self.degradation_process = albumentations.SmallestMaxSize(max_size=self.LR_size,
- interpolation=interpolation_fn)
-
- def __len__(self):
- return len(self.base)
-
- def __getitem__(self, i):
- example = self.base[i]
- image = Image.open(example["file_path_"])
-
- if image.mode != "RGB":
- image = image.convert("RGB")
-
- image = np.array(image).astype(np.uint8)
-
- min_side_len = min(image.shape[:2])
- crop_side_len = min_side_len * np.random.uniform(self.min_crop_f, self.max_crop_f, size=None)
- crop_side_len = int(crop_side_len)
-
- if self.center_crop:
- self.cropper = albumentations.CenterCrop(height=crop_side_len, width=crop_side_len)
-
- else:
- self.cropper = albumentations.RandomCrop(height=crop_side_len, width=crop_side_len)
-
- image = self.cropper(image=image)["image"]
- image = self.image_rescaler(image=image)["image"]
-
- if self.pil_interpolation:
- image_pil = PIL.Image.fromarray(image)
- LR_image = self.degradation_process(image_pil)
- LR_image = np.array(LR_image).astype(np.uint8)
-
- else:
- LR_image = self.degradation_process(image=image)["image"]
-
- example["image"] = (image/127.5 - 1.0).astype(np.float32)
- example["LR_image"] = (LR_image/127.5 - 1.0).astype(np.float32)
-
- return example
-
-
-class ImageNetSRTrain(ImageNetSR):
- def __init__(self, **kwargs):
- super().__init__(**kwargs)
-
- def get_base(self):
- with open("data/imagenet_train_hr_indices.p", "rb") as f:
- indices = pickle.load(f)
- dset = ImageNetTrain(process_images=False,)
- return Subset(dset, indices)
-
-
-class ImageNetSRValidation(ImageNetSR):
- def __init__(self, **kwargs):
- super().__init__(**kwargs)
-
- def get_base(self):
- with open("data/imagenet_val_hr_indices.p", "rb") as f:
- indices = pickle.load(f)
- dset = ImageNetValidation(process_images=False,)
- return Subset(dset, indices)
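-# Illustrative usage (a sketch; assumes the ImageNet archives and the
-# data/imagenet_*_hr_indices.p pickles referenced above are available):
-#
-# ds = ImageNetSRTrain(size=256, degradation='bsrgan_light', downscale_f=4)
-# ex = ds[0]
-# ex['image'].shape # (256, 256, 3); ex['LR_image'].shape # (64, 64, 3)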
diff --git a/spaces/PSLD/PSLD/stable-diffusion/ldm/modules/distributions/distributions.py b/spaces/PSLD/PSLD/stable-diffusion/ldm/modules/distributions/distributions.py
deleted file mode 100644
index f2b8ef901130efc171aa69742ca0244d94d3f2e9..0000000000000000000000000000000000000000
--- a/spaces/PSLD/PSLD/stable-diffusion/ldm/modules/distributions/distributions.py
+++ /dev/null
@@ -1,92 +0,0 @@
-import torch
-import numpy as np
-
-
-class AbstractDistribution:
- def sample(self):
- raise NotImplementedError()
-
- def mode(self):
- raise NotImplementedError()
-
-
-class DiracDistribution(AbstractDistribution):
- def __init__(self, value):
- self.value = value
-
- def sample(self):
- return self.value
-
- def mode(self):
- return self.value
-
-
-class DiagonalGaussianDistribution(object):
- def __init__(self, parameters, deterministic=False):
- self.parameters = parameters
- self.mean, self.logvar = torch.chunk(parameters, 2, dim=1)
- self.logvar = torch.clamp(self.logvar, -30.0, 20.0)
- self.deterministic = deterministic
- self.std = torch.exp(0.5 * self.logvar)
- self.var = torch.exp(self.logvar)
- if self.deterministic:
- self.var = self.std = torch.zeros_like(self.mean).to(device=self.parameters.device)
-
- def sample(self):
- x = self.mean + self.std * torch.randn(self.mean.shape).to(device=self.parameters.device)
- return x
-
- def kl(self, other=None):
- if self.deterministic:
- return torch.Tensor([0.])
- else:
- if other is None:
- return 0.5 * torch.sum(torch.pow(self.mean, 2)
- + self.var - 1.0 - self.logvar,
- dim=[1, 2, 3])
- else:
- return 0.5 * torch.sum(
- torch.pow(self.mean - other.mean, 2) / other.var
- + self.var / other.var - 1.0 - self.logvar + other.logvar,
- dim=[1, 2, 3])
-
- def nll(self, sample, dims=[1,2,3]):
- if self.deterministic:
- return torch.Tensor([0.])
- logtwopi = np.log(2.0 * np.pi)
- return 0.5 * torch.sum(
- logtwopi + self.logvar + torch.pow(sample - self.mean, 2) / self.var,
- dim=dims)
-
- def mode(self):
- return self.mean
-
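-# Illustrative usage (a sketch with hypothetical shapes): `parameters` is the
-# channel-wise concatenation of mean and logvar, so dim=1 must be even.
-#
-# params = torch.randn(4, 8, 32, 32) # mean (4 ch) + logvar (4 ch)
-# posterior = DiagonalGaussianDistribution(params)
-# z = posterior.sample() # (4, 4, 32, 32)
-# kl = posterior.kl() # KL against N(0, I), one value per batch element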
-
-def normal_kl(mean1, logvar1, mean2, logvar2):
- """
- source: https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/losses.py#L12
- Compute the KL divergence between two gaussians.
- Shapes are automatically broadcasted, so batches can be compared to
- scalars, among other use cases.
- """
- tensor = None
- for obj in (mean1, logvar1, mean2, logvar2):
- if isinstance(obj, torch.Tensor):
- tensor = obj
- break
- assert tensor is not None, "at least one argument must be a Tensor"
-
- # Force variances to be Tensors. Broadcasting helps convert scalars to
- # Tensors, but it does not work for torch.exp().
- logvar1, logvar2 = [
- x if isinstance(x, torch.Tensor) else torch.tensor(x).to(tensor)
- for x in (logvar1, logvar2)
- ]
-
- return 0.5 * (
- -1.0
- + logvar2
- - logvar1
- + torch.exp(logvar1 - logvar2)
- + ((mean1 - mean2) ** 2) * torch.exp(-logvar2)
- )
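-# Quick sanity check (hedged): for N(0, 1) against N(1, 1) the closed form is
-# 0.5 * (1 - 0)**2 = 0.5, and
-# normal_kl(torch.tensor(0.), 0., 1., 0.) # -> tensor(0.5000)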
diff --git a/spaces/ParisNeo/Blip_QA/vit.py b/spaces/ParisNeo/Blip_QA/vit.py
deleted file mode 100644
index cec3d8e08ed4451d65392feb2e9f4848d1ef3899..0000000000000000000000000000000000000000
--- a/spaces/ParisNeo/Blip_QA/vit.py
+++ /dev/null
@@ -1,305 +0,0 @@
-'''
- * Copyright (c) 2022, salesforce.com, inc.
- * All rights reserved.
- * SPDX-License-Identifier: BSD-3-Clause
- * For full license text, see LICENSE.txt file in the repo root or https://opensource.org/licenses/BSD-3-Clause
- * By Junnan Li
- * Based on timm code base
- * https://github.com/rwightman/pytorch-image-models/tree/master/timm
-'''
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from functools import partial
-
-from timm.models.vision_transformer import _cfg, PatchEmbed, resize_pos_embed  # resize_pos_embed is referenced in _load_weights
-from timm.models.registry import register_model
-from timm.models.layers import trunc_normal_, DropPath
-from timm.models.helpers import named_apply, adapt_input_conv
-
-from fairscale.nn.checkpoint.checkpoint_activations import checkpoint_wrapper
-
-class Mlp(nn.Module):
- """ MLP as used in Vision Transformer, MLP-Mixer and related networks
- """
- def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
- super().__init__()
- out_features = out_features or in_features
- hidden_features = hidden_features or in_features
- self.fc1 = nn.Linear(in_features, hidden_features)
- self.act = act_layer()
- self.fc2 = nn.Linear(hidden_features, out_features)
- self.drop = nn.Dropout(drop)
-
- def forward(self, x):
- x = self.fc1(x)
- x = self.act(x)
- x = self.drop(x)
- x = self.fc2(x)
- x = self.drop(x)
- return x
-
-
-class Attention(nn.Module):
- def __init__(self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0.):
- super().__init__()
- self.num_heads = num_heads
- head_dim = dim // num_heads
- # NOTE scale factor was wrong in my original version, can set manually to be compat with prev weights
- self.scale = qk_scale or head_dim ** -0.5
- self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
- self.attn_drop = nn.Dropout(attn_drop)
- self.proj = nn.Linear(dim, dim)
- self.proj_drop = nn.Dropout(proj_drop)
- self.attn_gradients = None
- self.attention_map = None
-
- def save_attn_gradients(self, attn_gradients):
- self.attn_gradients = attn_gradients
-
- def get_attn_gradients(self):
- return self.attn_gradients
-
- def save_attention_map(self, attention_map):
- self.attention_map = attention_map
-
- def get_attention_map(self):
- return self.attention_map
-
- def forward(self, x, register_hook=False):
- B, N, C = x.shape
- qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
- q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
-
- attn = (q @ k.transpose(-2, -1)) * self.scale
- attn = attn.softmax(dim=-1)
- attn = self.attn_drop(attn)
-
- if register_hook:
- self.save_attention_map(attn)
- attn.register_hook(self.save_attn_gradients)
-
- x = (attn @ v).transpose(1, 2).reshape(B, N, C)
- x = self.proj(x)
- x = self.proj_drop(x)
- return x
-
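-# Illustrative shapes (a sketch, hypothetical values): with ViT-B/16 tokens,
-# attn = Attention(dim=768, num_heads=12)
-# y = attn(torch.randn(2, 197, 768)) # y: (2, 197, 768)
-# With register_hook=True the (2, 12, 197, 197) attention map is cached and
-# can be read back via get_attention_map().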
-
-class Block(nn.Module):
-
- def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0.,
- drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm, use_grad_checkpointing=False):
- super().__init__()
- self.norm1 = norm_layer(dim)
- self.attn = Attention(
- dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
- # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
- self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
- self.norm2 = norm_layer(dim)
- mlp_hidden_dim = int(dim * mlp_ratio)
- self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
-
- if use_grad_checkpointing:
- self.attn = checkpoint_wrapper(self.attn)
- self.mlp = checkpoint_wrapper(self.mlp)
-
- def forward(self, x, register_hook=False):
- x = x + self.drop_path(self.attn(self.norm1(x), register_hook=register_hook))
- x = x + self.drop_path(self.mlp(self.norm2(x)))
- return x
-
-
-class VisionTransformer(nn.Module):
- """ Vision Transformer
- A PyTorch impl of : `An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale` -
- https://arxiv.org/abs/2010.11929
- """
- def __init__(self, img_size=224, patch_size=16, in_chans=3, num_classes=1000, embed_dim=768, depth=12,
- num_heads=12, mlp_ratio=4., qkv_bias=True, qk_scale=None, representation_size=None,
- drop_rate=0., attn_drop_rate=0., drop_path_rate=0., norm_layer=None,
- use_grad_checkpointing=False, ckpt_layer=0):
- """
- Args:
- img_size (int, tuple): input image size
- patch_size (int, tuple): patch size
- in_chans (int): number of input channels
- num_classes (int): number of classes for classification head
- embed_dim (int): embedding dimension
- depth (int): depth of transformer
- num_heads (int): number of attention heads
- mlp_ratio (int): ratio of mlp hidden dim to embedding dim
- qkv_bias (bool): enable bias for qkv if True
- qk_scale (float): override default qk scale of head_dim ** -0.5 if set
- representation_size (Optional[int]): enable and set representation layer (pre-logits) to this value if set
- drop_rate (float): dropout rate
- attn_drop_rate (float): attention dropout rate
- drop_path_rate (float): stochastic depth rate
- norm_layer: (nn.Module): normalization layer
- """
- super().__init__()
- self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models
- norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6)
-
- self.patch_embed = PatchEmbed(
- img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim)
-
- num_patches = self.patch_embed.num_patches
-
- self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
- self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))
- self.pos_drop = nn.Dropout(p=drop_rate)
-
- dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth)] # stochastic depth decay rule
- self.blocks = nn.ModuleList([
- Block(
- dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
- drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer,
- use_grad_checkpointing=(use_grad_checkpointing and i>=depth-ckpt_layer)
- )
- for i in range(depth)])
- self.norm = norm_layer(embed_dim)
-
- trunc_normal_(self.pos_embed, std=.02)
- trunc_normal_(self.cls_token, std=.02)
- self.apply(self._init_weights)
-
- def _init_weights(self, m):
- if isinstance(m, nn.Linear):
- trunc_normal_(m.weight, std=.02)
- if isinstance(m, nn.Linear) and m.bias is not None:
- nn.init.constant_(m.bias, 0)
- elif isinstance(m, nn.LayerNorm):
- nn.init.constant_(m.bias, 0)
- nn.init.constant_(m.weight, 1.0)
-
- @torch.jit.ignore
- def no_weight_decay(self):
- return {'pos_embed', 'cls_token'}
-
- def forward(self, x, register_blk=-1):
- B = x.shape[0]
- x = self.patch_embed(x)
-
- cls_tokens = self.cls_token.expand(B, -1, -1) # stole cls_tokens impl from Phil Wang, thanks
- x = torch.cat((cls_tokens, x), dim=1)
-
- x = x + self.pos_embed[:,:x.size(1),:]
- x = self.pos_drop(x)
-
- for i,blk in enumerate(self.blocks):
- x = blk(x, register_blk==i)
- x = self.norm(x)
-
- return x
-
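- # Illustrative (hedged): with img_size=224 and patch_size=16, forward
- # returns (B, 197, 768): 196 patch tokens plus the class token.
-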
- @torch.jit.ignore()
- def load_pretrained(self, checkpoint_path, prefix=''):
- _load_weights(self, checkpoint_path, prefix)
-
-
-@torch.no_grad()
-def _load_weights(model: VisionTransformer, checkpoint_path: str, prefix: str = ''):
- """ Load weights from .npz checkpoints for official Google Brain Flax implementation
- """
- import numpy as np
-
- def _n2p(w, t=True):
- if w.ndim == 4 and w.shape[0] == w.shape[1] == w.shape[2] == 1:
- w = w.flatten()
- if t:
- if w.ndim == 4:
- w = w.transpose([3, 2, 0, 1])
- elif w.ndim == 3:
- w = w.transpose([2, 0, 1])
- elif w.ndim == 2:
- w = w.transpose([1, 0])
- return torch.from_numpy(w)
-
- w = np.load(checkpoint_path)
- if not prefix and 'opt/target/embedding/kernel' in w:
- prefix = 'opt/target/'
-
- if hasattr(model.patch_embed, 'backbone'):
- # hybrid
- backbone = model.patch_embed.backbone
- stem_only = not hasattr(backbone, 'stem')
- stem = backbone if stem_only else backbone.stem
- stem.conv.weight.copy_(adapt_input_conv(stem.conv.weight.shape[1], _n2p(w[f'{prefix}conv_root/kernel'])))
- stem.norm.weight.copy_(_n2p(w[f'{prefix}gn_root/scale']))
- stem.norm.bias.copy_(_n2p(w[f'{prefix}gn_root/bias']))
- if not stem_only:
- for i, stage in enumerate(backbone.stages):
- for j, block in enumerate(stage.blocks):
- bp = f'{prefix}block{i + 1}/unit{j + 1}/'
- for r in range(3):
- getattr(block, f'conv{r + 1}').weight.copy_(_n2p(w[f'{bp}conv{r + 1}/kernel']))
- getattr(block, f'norm{r + 1}').weight.copy_(_n2p(w[f'{bp}gn{r + 1}/scale']))
- getattr(block, f'norm{r + 1}').bias.copy_(_n2p(w[f'{bp}gn{r + 1}/bias']))
- if block.downsample is not None:
- block.downsample.conv.weight.copy_(_n2p(w[f'{bp}conv_proj/kernel']))
- block.downsample.norm.weight.copy_(_n2p(w[f'{bp}gn_proj/scale']))
- block.downsample.norm.bias.copy_(_n2p(w[f'{bp}gn_proj/bias']))
- embed_conv_w = _n2p(w[f'{prefix}embedding/kernel'])
- else:
- embed_conv_w = adapt_input_conv(
- model.patch_embed.proj.weight.shape[1], _n2p(w[f'{prefix}embedding/kernel']))
- model.patch_embed.proj.weight.copy_(embed_conv_w)
- model.patch_embed.proj.bias.copy_(_n2p(w[f'{prefix}embedding/bias']))
- model.cls_token.copy_(_n2p(w[f'{prefix}cls'], t=False))
- pos_embed_w = _n2p(w[f'{prefix}Transformer/posembed_input/pos_embedding'], t=False)
- if pos_embed_w.shape != model.pos_embed.shape:
- pos_embed_w = resize_pos_embed( # resize pos embedding when different size from pretrained weights
- pos_embed_w, model.pos_embed, getattr(model, 'num_tokens', 1), model.patch_embed.grid_size)
- model.pos_embed.copy_(pos_embed_w)
- model.norm.weight.copy_(_n2p(w[f'{prefix}Transformer/encoder_norm/scale']))
- model.norm.bias.copy_(_n2p(w[f'{prefix}Transformer/encoder_norm/bias']))
-# if isinstance(model.head, nn.Linear) and model.head.bias.shape[0] == w[f'{prefix}head/bias'].shape[-1]:
-# model.head.weight.copy_(_n2p(w[f'{prefix}head/kernel']))
-# model.head.bias.copy_(_n2p(w[f'{prefix}head/bias']))
-# if isinstance(getattr(model.pre_logits, 'fc', None), nn.Linear) and f'{prefix}pre_logits/bias' in w:
-# model.pre_logits.fc.weight.copy_(_n2p(w[f'{prefix}pre_logits/kernel']))
-# model.pre_logits.fc.bias.copy_(_n2p(w[f'{prefix}pre_logits/bias']))
- for i, block in enumerate(model.blocks.children()):
- block_prefix = f'{prefix}Transformer/encoderblock_{i}/'
- mha_prefix = block_prefix + 'MultiHeadDotProductAttention_1/'
- block.norm1.weight.copy_(_n2p(w[f'{block_prefix}LayerNorm_0/scale']))
- block.norm1.bias.copy_(_n2p(w[f'{block_prefix}LayerNorm_0/bias']))
- block.attn.qkv.weight.copy_(torch.cat([
- _n2p(w[f'{mha_prefix}{n}/kernel'], t=False).flatten(1).T for n in ('query', 'key', 'value')]))
- block.attn.qkv.bias.copy_(torch.cat([
- _n2p(w[f'{mha_prefix}{n}/bias'], t=False).reshape(-1) for n in ('query', 'key', 'value')]))
- block.attn.proj.weight.copy_(_n2p(w[f'{mha_prefix}out/kernel']).flatten(1))
- block.attn.proj.bias.copy_(_n2p(w[f'{mha_prefix}out/bias']))
- for r in range(2):
- getattr(block.mlp, f'fc{r + 1}').weight.copy_(_n2p(w[f'{block_prefix}MlpBlock_3/Dense_{r}/kernel']))
- getattr(block.mlp, f'fc{r + 1}').bias.copy_(_n2p(w[f'{block_prefix}MlpBlock_3/Dense_{r}/bias']))
- block.norm2.weight.copy_(_n2p(w[f'{block_prefix}LayerNorm_2/scale']))
- block.norm2.bias.copy_(_n2p(w[f'{block_prefix}LayerNorm_2/bias']))
-
-
-def interpolate_pos_embed(pos_embed_checkpoint, visual_encoder):
- # interpolate position embedding
- embedding_size = pos_embed_checkpoint.shape[-1]
- num_patches = visual_encoder.patch_embed.num_patches
- num_extra_tokens = visual_encoder.pos_embed.shape[-2] - num_patches
- # height (== width) for the checkpoint position embedding
- orig_size = int((pos_embed_checkpoint.shape[-2] - num_extra_tokens) ** 0.5)
- # height (== width) for the new position embedding
- new_size = int(num_patches ** 0.5)
-
- if orig_size!=new_size:
- # class_token and dist_token are kept unchanged
- extra_tokens = pos_embed_checkpoint[:, :num_extra_tokens]
- # only the position tokens are interpolated
- pos_tokens = pos_embed_checkpoint[:, num_extra_tokens:]
- pos_tokens = pos_tokens.reshape(-1, orig_size, orig_size, embedding_size).permute(0, 3, 1, 2)
- pos_tokens = torch.nn.functional.interpolate(
- pos_tokens, size=(new_size, new_size), mode='bicubic', align_corners=False)
- pos_tokens = pos_tokens.permute(0, 2, 3, 1).flatten(1, 2)
- new_pos_embed = torch.cat((extra_tokens, pos_tokens), dim=1)
- print('reshape position embedding from %d to %d'%(orig_size ** 2,new_size ** 2))
-
- return new_pos_embed
- else:
- return pos_embed_checkpoint
\ No newline at end of file
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/futures.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/futures.go
deleted file mode 100644
index 76328efec61b95d2a55b30930182aa9f3ad5a337..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/ice-9/futures.go and /dev/null differ
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/system/repl/debug.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/system/repl/debug.go
deleted file mode 100644
index c5b5c03f80901abda12dd20cbe2addbc4457c568..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/system/repl/debug.go and /dev/null differ
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/system/repl/repl.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/system/repl/repl.go
deleted file mode 100644
index 57ae509a0c979c0eb67658b53975fa77dd17fdba..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/system/repl/repl.go and /dev/null differ
diff --git a/spaces/Paulraj916/paulraj916/scrapCss.py b/spaces/Paulraj916/paulraj916/scrapCss.py
deleted file mode 100644
index 3bb9677403b2ad72d7cc41b4b7a714c303fc7016..0000000000000000000000000000000000000000
--- a/spaces/Paulraj916/paulraj916/scrapCss.py
+++ /dev/null
@@ -1,53 +0,0 @@
-import os
-import requests
-from bs4 import BeautifulSoup, Tag
-from urllib.parse import urljoin
-import cssbeautifier
-
-class ScrapCss:
- def __init__(self, link):
- self.link = link
-
- def scrap_css(self):
- try:
- # Send an HTTP GET request to the webpage
- response = requests.get(self.link)
- response.raise_for_status()
-
- # Get the HTML content of the page
- html_content = response.text
-
- # Extract CSS file URLs from the webpage and convert to absolute URLs
- base_url = response.url
- soup = BeautifulSoup(html_content, 'html.parser')
- css_urls = [urljoin(base_url, link['href']) for link in soup.find_all('link', rel='stylesheet')]
-
- # Create an "output" folder if it doesn't exist
- output_path = "output"
- if not os.path.exists(output_path):
- os.makedirs(output_path)
-
- # Download and store CSS files in the "output" folder
- for css_url in css_urls:
- folder_name = os.path.dirname(css_url.replace(base_url, "").replace("http://", "").replace("https://", ""))
- if folder_name.startswith("/"):
- folder_name = folder_name[1:]
- folder_path = os.path.join(output_path, folder_name)
- try:
- os.makedirs(folder_path, exist_ok=True)
- filename = os.path.basename(css_url)
- try:
- css_content = requests.get(css_url).text
- # Beautify CSS content
- css_content = cssbeautifier.beautify(css_content)
- with open(os.path.join(folder_path, filename), 'w', encoding='utf-8') as file:
- file.write(css_content)
- print("Downloaded and beautified:", css_url)
- except Exception as e:
- print(f"Failed to download {css_url}: {e}")
- except Exception as e:
- print(f"Failed to download {css_url}: {e}")
-
- print("CSS files downloaded and saved successfully.")
- except requests.exceptions.RequestException as e:
- print(f"Failed to fetch content from {self.link}: {e}")
diff --git a/spaces/PeepDaSlan9/togethercomputer-RedPajama-INCITE-Chat-3B-v1/app.py b/spaces/PeepDaSlan9/togethercomputer-RedPajama-INCITE-Chat-3B-v1/app.py
deleted file mode 100644
index 91667f3eaa06e8062337a945e15d009e26950326..0000000000000000000000000000000000000000
--- a/spaces/PeepDaSlan9/togethercomputer-RedPajama-INCITE-Chat-3B-v1/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/togethercomputer/RedPajama-INCITE-Chat-3B-v1").launch()
\ No newline at end of file
diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/engine/alter_trainer.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/engine/alter_trainer.py
deleted file mode 100644
index 06d3dc953028991b93407626b215c7d39dd03f2e..0000000000000000000000000000000000000000
--- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/engine/alter_trainer.py
+++ /dev/null
@@ -1,127 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-import datetime
-import logging
-import time
-
-import torch
-import torch.distributed as dist
-
-from maskrcnn_benchmark.utils.comm import get_world_size
-from maskrcnn_benchmark.utils.metric_logger import MetricLogger
-
-
-def reduce_loss_dict(all_loss_dict):
- """
- Reduce the loss dictionary from all processes so that process with rank
- 0 has the averaged results. Returns a dict with the same fields as
- loss_dict, after reduction.
- """
- world_size = get_world_size()
- with torch.no_grad():
- loss_names = []
- all_losses = []
- for loss_dict in all_loss_dict:
- for k in sorted(loss_dict.keys()):
- loss_names.append(k)
- all_losses.append(loss_dict[k])
- all_losses = torch.stack(all_losses, dim=0)
- if world_size > 1:
- dist.reduce(all_losses, dst=0)
- if dist.get_rank() == 0:
- # only main process gets accumulated, so only divide by
- # world_size in this case
- all_losses /= world_size
-
- reduced_losses = {}
- for k, v in zip(loss_names, all_losses):
- # average each loss over the task dicts it appears in
- if k not in reduced_losses:
- reduced_losses[k] = v / len(all_loss_dict)
- else:
- reduced_losses[k] += v / len(all_loss_dict)
-
- return reduced_losses
-
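-# Illustrative (a sketch; with a single process no dist reduction happens):
-# two task dicts sharing a key are averaged,
-# d1 = {"loss_cls": torch.tensor(1.0)}
-# d2 = {"loss_cls": torch.tensor(3.0)}
-# reduce_loss_dict([d1, d2]) # -> {"loss_cls": tensor(2.)}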
-
-def do_train(
- model,
- data_loader,
- optimizer,
- scheduler,
- checkpointer,
- device,
- checkpoint_period,
- arguments,
-):
- logger = logging.getLogger("maskrcnn_benchmark.trainer")
- logger.info("Start training")
- meters = MetricLogger(delimiter=" ")
- max_iter = min(len(task_loader) for task_loader in data_loader)
- start_iter = arguments["iteration"]
- model.train()
- start_training_time = time.time()
- end = time.time()
- for iteration, task_loader in enumerate(zip(*data_loader), start_iter):
- data_time = time.time() - end
- iteration = iteration + 1
- arguments["iteration"] = iteration
-
- all_task_loss_dict = []
- for task, (images, targets, _) in enumerate(task_loader, 1):
- if all(len(target) < 1 for target in targets):
- logger.warning('Sampled all negative batches, skip')
- continue
-
- images = images.to(device)
- targets = [target.to(device) for target in targets]
-
- loss_dict = model(images, targets, task)
- all_task_loss_dict.append(loss_dict)
-
- losses = sum(loss for loss_dict in all_task_loss_dict for loss in loss_dict.values())
-
- # reduce losses over all GPUs for logging purposes
- loss_dict_reduced = reduce_loss_dict(all_task_loss_dict)
- losses_reduced = sum(loss for loss in loss_dict_reduced.values())
- meters.update(loss=losses_reduced, **loss_dict_reduced)
-
- optimizer.zero_grad()
- losses.backward()
- optimizer.step()
- scheduler.step()
-
- batch_time = time.time() - end
- end = time.time()
- meters.update(time=batch_time, data=data_time)
-
- eta_seconds = meters.time.global_avg * (max_iter - iteration)
- eta_string = str(datetime.timedelta(seconds=int(eta_seconds)))
-
- if iteration % 20 == 0 or iteration == max_iter:
- logger.info(
- meters.delimiter.join(
- [
- "eta: {eta}",
- "iter: {iter}",
- "{meters}",
- "lr: {lr:.6f}",
- "max mem: {memory:.0f}",
- ]
- ).format(
- eta=eta_string,
- iter=iteration,
- meters=str(meters),
- lr=optimizer.param_groups[0]["lr"],
- memory=torch.cuda.max_memory_allocated() / 1024.0 / 1024.0,
- )
- )
- if iteration % checkpoint_period == 0:
- checkpointer.save("model_{:07d}".format(iteration), **arguments)
- if iteration == max_iter:
- checkpointer.save("model_final", **arguments)
-
- total_training_time = time.time() - start_training_time
- total_time_str = str(datetime.timedelta(seconds=total_training_time))
- logger.info(
- "Total training time: {} ({:.4f} s / it)".format(
- total_time_str, total_training_time / (max_iter)
- )
- )
diff --git a/spaces/Proxy1/Turbo/greeting.md b/spaces/Proxy1/Turbo/greeting.md
deleted file mode 100644
index 84988f66b9ae45bd107066d31c5fa48a60ec56ea..0000000000000000000000000000000000000000
--- a/spaces/Proxy1/Turbo/greeting.md
+++ /dev/null
@@ -1 +0,0 @@
-Hint: Everyone was having a good time and then this faggot showed up | burniemail01@proton.me
\ No newline at end of file
diff --git a/spaces/RMeli/gnina-torch/html/pl.html b/spaces/RMeli/gnina-torch/html/pl.html
deleted file mode 100644
index f9afbe37bb916e7de9c043a456549d732c9bed5a..0000000000000000000000000000000000000000
--- a/spaces/RMeli/gnina-torch/html/pl.html
+++ /dev/null
@@ -1,39 +0,0 @@
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/models/target_python.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/models/target_python.py
deleted file mode 100644
index 744bd7ef58b4870406fcef8cb3b3667548a0ccea..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/models/target_python.py
+++ /dev/null
@@ -1,110 +0,0 @@
-import sys
-from typing import List, Optional, Tuple
-
-from pip._vendor.packaging.tags import Tag
-
-from pip._internal.utils.compatibility_tags import get_supported, version_info_to_nodot
-from pip._internal.utils.misc import normalize_version_info
-
-
-class TargetPython:
-
- """
- Encapsulates the properties of a Python interpreter one is targeting
- for a package install, download, etc.
- """
-
- __slots__ = [
- "_given_py_version_info",
- "abis",
- "implementation",
- "platforms",
- "py_version",
- "py_version_info",
- "_valid_tags",
- ]
-
- def __init__(
- self,
- platforms: Optional[List[str]] = None,
- py_version_info: Optional[Tuple[int, ...]] = None,
- abis: Optional[List[str]] = None,
- implementation: Optional[str] = None,
- ) -> None:
- """
- :param platforms: A list of strings or None. If None, searches for
- packages that are supported by the current system. Otherwise, will
- find packages that can be built on the platforms passed in. These
- packages will only be downloaded for distribution: they will
- not be built locally.
- :param py_version_info: An optional tuple of ints representing the
- Python version information to use (e.g. `sys.version_info[:3]`).
- This can have length 1, 2, or 3 when provided.
- :param abis: A list of strings or None. This is passed to
- compatibility_tags.py's get_supported() function as is.
- :param implementation: A string or None. This is passed to
- compatibility_tags.py's get_supported() function as is.
- """
- # Store the given py_version_info for when we call get_supported().
- self._given_py_version_info = py_version_info
-
- if py_version_info is None:
- py_version_info = sys.version_info[:3]
- else:
- py_version_info = normalize_version_info(py_version_info)
-
- py_version = ".".join(map(str, py_version_info[:2]))
-
- self.abis = abis
- self.implementation = implementation
- self.platforms = platforms
- self.py_version = py_version
- self.py_version_info = py_version_info
-
- # This is used to cache the return value of get_tags().
- self._valid_tags: Optional[List[Tag]] = None
-
- def format_given(self) -> str:
- """
- Format the given, non-None attributes for display.
- """
- display_version = None
- if self._given_py_version_info is not None:
- display_version = ".".join(
- str(part) for part in self._given_py_version_info
- )
-
- key_values = [
- ("platforms", self.platforms),
- ("version_info", display_version),
- ("abis", self.abis),
- ("implementation", self.implementation),
- ]
- return " ".join(
- f"{key}={value!r}" for key, value in key_values if value is not None
- )
-
- def get_tags(self) -> List[Tag]:
- """
- Return the supported PEP 425 tags to check wheel candidates against.
-
- The tags are returned in order of preference (most preferred first).
- """
- if self._valid_tags is None:
- # Pass versions=None if no py_version_info was given since
- # versions=None uses special default logic.
- py_version_info = self._given_py_version_info
- if py_version_info is None:
- version = None
- else:
- version = version_info_to_nodot(py_version_info)
-
- tags = get_supported(
- version=version,
- platforms=self.platforms,
- abis=self.abis,
- impl=self.implementation,
- )
- self._valid_tags = tags
-
- return self._valid_tags
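-# Illustrative usage (a sketch; values are hypothetical):
-#
-# tp = TargetPython(platforms=["manylinux2014_x86_64"], py_version_info=(3, 9))
-# tp.format_given() # "platforms=['manylinux2014_x86_64'] version_info='3.9'"
-# tp.get_tags()[0] # most-preferred Tag for that target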
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/utils/entrypoints.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/utils/entrypoints.py
deleted file mode 100644
index 150136938548af6aa5ae1f716b330d0eb2d3e013..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/utils/entrypoints.py
+++ /dev/null
@@ -1,84 +0,0 @@
-import itertools
-import os
-import shutil
-import sys
-from typing import List, Optional
-
-from pip._internal.cli.main import main
-from pip._internal.utils.compat import WINDOWS
-
-_EXECUTABLE_NAMES = [
- "pip",
- f"pip{sys.version_info.major}",
- f"pip{sys.version_info.major}.{sys.version_info.minor}",
-]
-if WINDOWS:
- _allowed_extensions = {"", ".exe"}
- _EXECUTABLE_NAMES = [
- "".join(parts)
- for parts in itertools.product(_EXECUTABLE_NAMES, _allowed_extensions)
- ]
-
-
-def _wrapper(args: Optional[List[str]] = None) -> int:
- """Central wrapper for all old entrypoints.
-
- Historically pip has had several entrypoints defined. Because of issues
- arising from PATH, sys.path, multiple Pythons, their interactions, and most
- of them having a pip installed, users suffer every time an entrypoint gets
- moved.
-
- To alleviate this pain, and provide a mechanism for warning users and
- directing them to an appropriate place for help, we now define all of
- our old entrypoints as wrappers for the current one.
- """
- sys.stderr.write(
- "WARNING: pip is being invoked by an old script wrapper. This will "
- "fail in a future version of pip.\n"
- "Please see https://github.com/pypa/pip/issues/5599 for advice on "
- "fixing the underlying issue.\n"
- "To avoid this problem you can invoke Python with '-m pip' instead of "
- "running pip directly.\n"
- )
- return main(args)
-
-
-def get_best_invocation_for_this_pip() -> str:
- """Try to figure out the best way to invoke pip in the current environment."""
- binary_directory = "Scripts" if WINDOWS else "bin"
- binary_prefix = os.path.join(sys.prefix, binary_directory)
-
- # Try to use pip[X[.Y]] names, if those executables for this environment are
- # the first on PATH with that name.
- path_parts = os.path.normcase(os.environ.get("PATH", "")).split(os.pathsep)
- exe_are_in_PATH = os.path.normcase(binary_prefix) in path_parts
- if exe_are_in_PATH:
- for exe_name in _EXECUTABLE_NAMES:
- found_executable = shutil.which(exe_name)
- binary_executable = os.path.join(binary_prefix, exe_name)
- if (
- found_executable
- and os.path.exists(binary_executable)
- and os.path.samefile(
- found_executable,
- binary_executable,
- )
- ):
- return exe_name
-
- # Use the `-m` invocation, if there's no "nice" invocation.
- return f"{get_best_invocation_for_this_python()} -m pip"
-
-
-def get_best_invocation_for_this_python() -> str:
- """Try to figure out the best way to invoke the current Python."""
- exe = sys.executable
- exe_name = os.path.basename(exe)
-
- # Try to use the basename, if it's the first executable.
- found_executable = shutil.which(exe_name)
- if found_executable and os.path.samefile(found_executable, exe):
- return exe_name
-
- # Use the full executable name, because we couldn't find something simpler.
- return exe
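-# Illustrative (hedged): inside an activated virtualenv whose scripts dir
-# resolves first on PATH this typically returns "pip"; otherwise it falls
-# back to something like "/usr/bin/python3 -m pip".
-#
-# print(get_best_invocation_for_this_pip())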
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/distlib/wheel.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/distlib/wheel.py
deleted file mode 100644
index 028c2d99b57782ed3bb268ce522ede37c1704d98..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/distlib/wheel.py
+++ /dev/null
@@ -1,1082 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# Copyright (C) 2013-2020 Vinay Sajip.
-# Licensed to the Python Software Foundation under a contributor agreement.
-# See LICENSE.txt and CONTRIBUTORS.txt.
-#
-from __future__ import unicode_literals
-
-import base64
-import codecs
-import datetime
-from email import message_from_file
-import hashlib
-import json
-import logging
-import os
-import posixpath
-import re
-import shutil
-import sys
-import tempfile
-import zipfile
-
-from . import __version__, DistlibException
-from .compat import sysconfig, ZipFile, fsdecode, text_type, filter
-from .database import InstalledDistribution
-from .metadata import (Metadata, METADATA_FILENAME, WHEEL_METADATA_FILENAME,
- LEGACY_METADATA_FILENAME)
-from .util import (FileOperator, convert_path, CSVReader, CSVWriter, Cache,
- cached_property, get_cache_base, read_exports, tempdir,
- get_platform)
-from .version import NormalizedVersion, UnsupportedVersionError
-
-logger = logging.getLogger(__name__)
-
-cache = None # created when needed
-
-if hasattr(sys, 'pypy_version_info'): # pragma: no cover
- IMP_PREFIX = 'pp'
-elif sys.platform.startswith('java'): # pragma: no cover
- IMP_PREFIX = 'jy'
-elif sys.platform == 'cli': # pragma: no cover
- IMP_PREFIX = 'ip'
-else:
- IMP_PREFIX = 'cp'
-
-VER_SUFFIX = sysconfig.get_config_var('py_version_nodot')
-if not VER_SUFFIX: # pragma: no cover
- VER_SUFFIX = '%s%s' % sys.version_info[:2]
-PYVER = 'py' + VER_SUFFIX
-IMPVER = IMP_PREFIX + VER_SUFFIX
-
-ARCH = get_platform().replace('-', '_').replace('.', '_')
-
-ABI = sysconfig.get_config_var('SOABI')
-if ABI and ABI.startswith('cpython-'):
- ABI = ABI.replace('cpython-', 'cp').split('-')[0]
-else:
- def _derive_abi():
- parts = ['cp', VER_SUFFIX]
- if sysconfig.get_config_var('Py_DEBUG'):
- parts.append('d')
- if IMP_PREFIX == 'cp':
- vi = sys.version_info[:2]
- if vi < (3, 8):
- wpm = sysconfig.get_config_var('WITH_PYMALLOC')
- if wpm is None:
- wpm = True
- if wpm:
- parts.append('m')
- if vi < (3, 3):
- us = sysconfig.get_config_var('Py_UNICODE_SIZE')
- if us == 4 or (us is None and sys.maxunicode == 0x10FFFF):
- parts.append('u')
- return ''.join(parts)
- ABI = _derive_abi()
- del _derive_abi
-
-FILENAME_RE = re.compile(r'''
-(?P<nm>[^-]+)
--(?P<vn>\d+[^-]*)
-(-(?P<bn>\d+[^-]*))?
--(?P<py>\w+\d+(\.\w+\d+)*)
--(?P<bi>\w+)
--(?P<ar>\w+(\.\w+)*)
-\.whl$
-''', re.IGNORECASE | re.VERBOSE)
-
-NAME_VERSION_RE = re.compile(r'''
-(?P<nm>[^-]+)
--(?P<vn>\d+[^-]*)
-(-(?P<bn>\d+[^-]*))?$
-''', re.IGNORECASE | re.VERBOSE)
-
-SHEBANG_RE = re.compile(br'\s*#![^\r\n]*')
-SHEBANG_DETAIL_RE = re.compile(br'^(\s*#!("[^"]+"|\S+))\s+(.*)$')
-SHEBANG_PYTHON = b'#!python'
-SHEBANG_PYTHONW = b'#!pythonw'
-
-if os.sep == '/':
- to_posix = lambda o: o
-else:
- to_posix = lambda o: o.replace(os.sep, '/')
-
-if sys.version_info[0] < 3:
- import imp
-else:
- imp = None
- import importlib.machinery
- import importlib.util
-
-def _get_suffixes():
- if imp:
- return [s[0] for s in imp.get_suffixes()]
- else:
- return importlib.machinery.EXTENSION_SUFFIXES
-
-def _load_dynamic(name, path):
- # https://docs.python.org/3/library/importlib.html#importing-a-source-file-directly
- if imp:
- return imp.load_dynamic(name, path)
- else:
- spec = importlib.util.spec_from_file_location(name, path)
- module = importlib.util.module_from_spec(spec)
- sys.modules[name] = module
- spec.loader.exec_module(module)
- return module
-
-class Mounter(object):
- def __init__(self):
- self.impure_wheels = {}
- self.libs = {}
-
- def add(self, pathname, extensions):
- self.impure_wheels[pathname] = extensions
- self.libs.update(extensions)
-
- def remove(self, pathname):
- extensions = self.impure_wheels.pop(pathname)
- for k, v in extensions:
- if k in self.libs:
- del self.libs[k]
-
- def find_module(self, fullname, path=None):
- if fullname in self.libs:
- result = self
- else:
- result = None
- return result
-
- def load_module(self, fullname):
- if fullname in sys.modules:
- result = sys.modules[fullname]
- else:
- if fullname not in self.libs:
- raise ImportError('unable to find extension for %s' % fullname)
- result = _load_dynamic(fullname, self.libs[fullname])
- result.__loader__ = self
- parts = fullname.rsplit('.', 1)
- if len(parts) > 1:
- result.__package__ = parts[0]
- return result
-
-_hook = Mounter()
-
-
-class Wheel(object):
- """
- Class to build and install from Wheel files (PEP 427).
- """
-
- wheel_version = (1, 1)
- hash_kind = 'sha256'
-
- def __init__(self, filename=None, sign=False, verify=False):
- """
- Initialise an instance using a (valid) filename.
- """
- self.sign = sign
- self.should_verify = verify
- self.buildver = ''
- self.pyver = [PYVER]
- self.abi = ['none']
- self.arch = ['any']
- self.dirname = os.getcwd()
- if filename is None:
- self.name = 'dummy'
- self.version = '0.1'
- self._filename = self.filename
- else:
- m = NAME_VERSION_RE.match(filename)
- if m:
- info = m.groupdict('')
- self.name = info['nm']
- # Reinstate the local version separator
- self.version = info['vn'].replace('_', '-')
- self.buildver = info['bn']
- self._filename = self.filename
- else:
- dirname, filename = os.path.split(filename)
- m = FILENAME_RE.match(filename)
- if not m:
- raise DistlibException('Invalid name or '
- 'filename: %r' % filename)
- if dirname:
- self.dirname = os.path.abspath(dirname)
- self._filename = filename
- info = m.groupdict('')
- self.name = info['nm']
- self.version = info['vn']
- self.buildver = info['bn']
- self.pyver = info['py'].split('.')
- self.abi = info['bi'].split('.')
- self.arch = info['ar'].split('.')
-
- @property
- def filename(self):
- """
- Build and return a filename from the various components.
- """
- if self.buildver:
- buildver = '-' + self.buildver
- else:
- buildver = ''
- pyver = '.'.join(self.pyver)
- abi = '.'.join(self.abi)
- arch = '.'.join(self.arch)
- # replace - with _ as a local version separator
- version = self.version.replace('-', '_')
- return '%s-%s%s-%s-%s-%s.whl' % (self.name, version, buildver,
- pyver, abi, arch)
-
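- # Illustrative (hedged): parsing and formatting round-trip, e.g.
- # Wheel('demo-0.1-py3-none-any.whl').filename == 'demo-0.1-py3-none-any.whl'
-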
- @property
- def exists(self):
- path = os.path.join(self.dirname, self.filename)
- return os.path.isfile(path)
-
- @property
- def tags(self):
- for pyver in self.pyver:
- for abi in self.abi:
- for arch in self.arch:
- yield pyver, abi, arch
-
- @cached_property
- def metadata(self):
- pathname = os.path.join(self.dirname, self.filename)
- name_ver = '%s-%s' % (self.name, self.version)
- info_dir = '%s.dist-info' % name_ver
- wrapper = codecs.getreader('utf-8')
- with ZipFile(pathname, 'r') as zf:
- wheel_metadata = self.get_wheel_metadata(zf)
- wv = wheel_metadata['Wheel-Version'].split('.', 1)
- file_version = tuple([int(i) for i in wv])
- # if file_version < (1, 1):
- # fns = [WHEEL_METADATA_FILENAME, METADATA_FILENAME,
- # LEGACY_METADATA_FILENAME]
- # else:
- # fns = [WHEEL_METADATA_FILENAME, METADATA_FILENAME]
- fns = [WHEEL_METADATA_FILENAME, LEGACY_METADATA_FILENAME]
- result = None
- for fn in fns:
- try:
- metadata_filename = posixpath.join(info_dir, fn)
- with zf.open(metadata_filename) as bf:
- wf = wrapper(bf)
- result = Metadata(fileobj=wf)
- if result:
- break
- except KeyError:
- pass
- if not result:
- raise ValueError('Invalid wheel, because metadata is '
- 'missing: looked in %s' % ', '.join(fns))
- return result
-
- def get_wheel_metadata(self, zf):
- name_ver = '%s-%s' % (self.name, self.version)
- info_dir = '%s.dist-info' % name_ver
- metadata_filename = posixpath.join(info_dir, 'WHEEL')
- with zf.open(metadata_filename) as bf:
- wf = codecs.getreader('utf-8')(bf)
- message = message_from_file(wf)
- return dict(message)
-
- @cached_property
- def info(self):
- pathname = os.path.join(self.dirname, self.filename)
- with ZipFile(pathname, 'r') as zf:
- result = self.get_wheel_metadata(zf)
- return result
-
- def process_shebang(self, data):
- m = SHEBANG_RE.match(data)
- if m:
- end = m.end()
- shebang, data_after_shebang = data[:end], data[end:]
- # Preserve any arguments after the interpreter
- if b'pythonw' in shebang.lower():
- shebang_python = SHEBANG_PYTHONW
- else:
- shebang_python = SHEBANG_PYTHON
- m = SHEBANG_DETAIL_RE.match(shebang)
- if m:
- args = b' ' + m.groups()[-1]
- else:
- args = b''
- shebang = shebang_python + args
- data = shebang + data_after_shebang
- else:
- cr = data.find(b'\r')
- lf = data.find(b'\n')
- if cr < 0 or cr > lf:
- term = b'\n'
- else:
- if data[cr:cr + 2] == b'\r\n':
- term = b'\r\n'
- else:
- term = b'\r'
- data = SHEBANG_PYTHON + term + data
- return data
-
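- # Illustrative (a sketch): the interpreter is normalized while arguments
- # and line endings are preserved, e.g.
- # process_shebang(b'#!/usr/bin/python\nprint(1)\n')
- # # -> b'#!python\nprint(1)\n'
-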
- def get_hash(self, data, hash_kind=None):
- if hash_kind is None:
- hash_kind = self.hash_kind
- try:
- hasher = getattr(hashlib, hash_kind)
- except AttributeError:
- raise DistlibException('Unsupported hash algorithm: %r' % hash_kind)
- result = hasher(data).digest()
- result = base64.urlsafe_b64encode(result).rstrip(b'=').decode('ascii')
- return hash_kind, result
-
- def write_record(self, records, record_path, archive_record_path):
- records = list(records) # make a copy, as mutated
- records.append((archive_record_path, '', ''))
- with CSVWriter(record_path) as writer:
- for row in records:
- writer.writerow(row)
-
- def write_records(self, info, libdir, archive_paths):
- records = []
- distinfo, info_dir = info
- hasher = getattr(hashlib, self.hash_kind)
- for ap, p in archive_paths:
- with open(p, 'rb') as f:
- data = f.read()
- digest = '%s=%s' % self.get_hash(data)
- size = os.path.getsize(p)
- records.append((ap, digest, size))
-
- p = os.path.join(distinfo, 'RECORD')
- ap = to_posix(os.path.join(info_dir, 'RECORD'))
- self.write_record(records, p, ap)
- archive_paths.append((ap, p))
-
- def build_zip(self, pathname, archive_paths):
- with ZipFile(pathname, 'w', zipfile.ZIP_DEFLATED) as zf:
- for ap, p in archive_paths:
- logger.debug('Wrote %s to %s in wheel', p, ap)
- zf.write(p, ap)
-
- def build(self, paths, tags=None, wheel_version=None):
- """
- Build a wheel from files in specified paths, and use any specified tags
- when determining the name of the wheel.
- """
- if tags is None:
- tags = {}
-
- libkey = list(filter(lambda o: o in paths, ('purelib', 'platlib')))[0]
- if libkey == 'platlib':
- is_pure = 'false'
- default_pyver = [IMPVER]
- default_abi = [ABI]
- default_arch = [ARCH]
- else:
- is_pure = 'true'
- default_pyver = [PYVER]
- default_abi = ['none']
- default_arch = ['any']
-
- self.pyver = tags.get('pyver', default_pyver)
- self.abi = tags.get('abi', default_abi)
- self.arch = tags.get('arch', default_arch)
-
- libdir = paths[libkey]
-
- name_ver = '%s-%s' % (self.name, self.version)
- data_dir = '%s.data' % name_ver
- info_dir = '%s.dist-info' % name_ver
-
- archive_paths = []
-
- # First, stuff which is not in site-packages
- for key in ('data', 'headers', 'scripts'):
- if key not in paths:
- continue
- path = paths[key]
- if os.path.isdir(path):
- for root, dirs, files in os.walk(path):
- for fn in files:
- p = fsdecode(os.path.join(root, fn))
- rp = os.path.relpath(p, path)
- ap = to_posix(os.path.join(data_dir, key, rp))
- archive_paths.append((ap, p))
- if key == 'scripts' and not p.endswith('.exe'):
- with open(p, 'rb') as f:
- data = f.read()
- data = self.process_shebang(data)
- with open(p, 'wb') as f:
- f.write(data)
-
- # Now, stuff which is in site-packages, other than the
- # distinfo stuff.
- path = libdir
- distinfo = None
- for root, dirs, files in os.walk(path):
- if root == path:
- # At the top level only, save distinfo for later
- # and skip it for now
- for i, dn in enumerate(dirs):
- dn = fsdecode(dn)
- if dn.endswith('.dist-info'):
- distinfo = os.path.join(root, dn)
- del dirs[i]
- break
- assert distinfo, '.dist-info directory expected, not found'
-
- for fn in files:
- # comment out next suite to leave .pyc files in
- if fsdecode(fn).endswith(('.pyc', '.pyo')):
- continue
- p = os.path.join(root, fn)
- rp = to_posix(os.path.relpath(p, path))
- archive_paths.append((rp, p))
-
- # Now distinfo. Assumed to be flat, i.e. os.listdir is enough.
- files = os.listdir(distinfo)
- for fn in files:
- if fn not in ('RECORD', 'INSTALLER', 'SHARED', 'WHEEL'):
- p = fsdecode(os.path.join(distinfo, fn))
- ap = to_posix(os.path.join(info_dir, fn))
- archive_paths.append((ap, p))
-
- wheel_metadata = [
- 'Wheel-Version: %d.%d' % (wheel_version or self.wheel_version),
- 'Generator: distlib %s' % __version__,
- 'Root-Is-Purelib: %s' % is_pure,
- ]
- for pyver, abi, arch in self.tags:
- wheel_metadata.append('Tag: %s-%s-%s' % (pyver, abi, arch))
- p = os.path.join(distinfo, 'WHEEL')
- with open(p, 'w') as f:
- f.write('\n'.join(wheel_metadata))
- ap = to_posix(os.path.join(info_dir, 'WHEEL'))
- archive_paths.append((ap, p))
-
- # sort the entries by archive path. Not needed by any spec, but it
- # keeps the archive listing and RECORD tidier than they would otherwise
- # be. Use the number of path segments to keep directory entries together,
- # and keep the dist-info stuff at the end.
- def sorter(t):
- ap = t[0]
- n = ap.count('/')
- if '.dist-info' in ap:
- n += 10000
- return (n, ap)
- archive_paths = sorted(archive_paths, key=sorter)
-
- # Now, at last, RECORD.
- # Paths in here are archive paths - nothing else makes sense.
- self.write_records((distinfo, info_dir), libdir, archive_paths)
- # Now, ready to build the zip file
- pathname = os.path.join(self.dirname, self.filename)
- self.build_zip(pathname, archive_paths)
- return pathname
-
- def skip_entry(self, arcname):
- """
- Determine whether an archive entry should be skipped when verifying
- or installing.
- """
- # The signature file won't be in RECORD, and we don't
- # currently do anything with it. We also skip directories,
- # as they won't be in RECORD either. See:
- #
- # https://github.com/pypa/wheel/issues/294
- # https://github.com/pypa/wheel/issues/287
- # https://github.com/pypa/wheel/pull/289
- #
- return arcname.endswith(('/', '/RECORD.jws'))
-
- def install(self, paths, maker, **kwargs):
- """
- Install a wheel to the specified paths. If kwarg ``warner`` is
- specified, it should be a callable, which will be called with two
- tuples indicating the wheel version of this software and the wheel
- version in the file, if there is a discrepancy in the versions.
- This can be used to issue any warnings or raise any exceptions.
- If kwarg ``lib_only`` is True, only the purelib/platlib files are
- installed, and the headers, scripts, data and dist-info metadata are
- not written. If kwarg ``bytecode_hashed_invalidation`` is True, written
- bytecode will try to use file-hash based invalidation (PEP-552) on
- supported interpreter versions (CPython 3.7+).
-
- The return value is a :class:`InstalledDistribution` instance unless
- ``lib_only`` is True, in which case the return value is ``None``.
- """
-
- dry_run = maker.dry_run
- warner = kwargs.get('warner')
- lib_only = kwargs.get('lib_only', False)
- bc_hashed_invalidation = kwargs.get('bytecode_hashed_invalidation', False)
-
- pathname = os.path.join(self.dirname, self.filename)
- name_ver = '%s-%s' % (self.name, self.version)
- data_dir = '%s.data' % name_ver
- info_dir = '%s.dist-info' % name_ver
-
- metadata_name = posixpath.join(info_dir, LEGACY_METADATA_FILENAME)
- wheel_metadata_name = posixpath.join(info_dir, 'WHEEL')
- record_name = posixpath.join(info_dir, 'RECORD')
-
- wrapper = codecs.getreader('utf-8')
-
- with ZipFile(pathname, 'r') as zf:
- with zf.open(wheel_metadata_name) as bwf:
- wf = wrapper(bwf)
- message = message_from_file(wf)
- wv = message['Wheel-Version'].split('.', 1)
- file_version = tuple([int(i) for i in wv])
- if (file_version != self.wheel_version) and warner:
- warner(self.wheel_version, file_version)
-
- if message['Root-Is-Purelib'] == 'true':
- libdir = paths['purelib']
- else:
- libdir = paths['platlib']
-
- records = {}
- with zf.open(record_name) as bf:
- with CSVReader(stream=bf) as reader:
- for row in reader:
- p = row[0]
- records[p] = row
-
- data_pfx = posixpath.join(data_dir, '')
- info_pfx = posixpath.join(info_dir, '')
- script_pfx = posixpath.join(data_dir, 'scripts', '')
-
- # make a new instance rather than a copy of maker's,
- # as we mutate it
- fileop = FileOperator(dry_run=dry_run)
- fileop.record = True # so we can rollback if needed
-
- bc = not sys.dont_write_bytecode # Double negatives. Lovely!
-
- outfiles = [] # for RECORD writing
-
- # for script copying/shebang processing
- workdir = tempfile.mkdtemp()
- # set target dir later
- # we default add_launchers to False, as the
- # Python Launcher should be used instead
- maker.source_dir = workdir
- maker.target_dir = None
- try:
- for zinfo in zf.infolist():
- arcname = zinfo.filename
- if isinstance(arcname, text_type):
- u_arcname = arcname
- else:
- u_arcname = arcname.decode('utf-8')
- if self.skip_entry(u_arcname):
- continue
- row = records[u_arcname]
- if row[2] and str(zinfo.file_size) != row[2]:
- raise DistlibException('size mismatch for '
- '%s' % u_arcname)
- if row[1]:
- kind, value = row[1].split('=', 1)
- with zf.open(arcname) as bf:
- data = bf.read()
- _, digest = self.get_hash(data, kind)
- if digest != value:
- raise DistlibException('digest mismatch for '
- '%s' % arcname)
-
- if lib_only and u_arcname.startswith((info_pfx, data_pfx)):
- logger.debug('lib_only: skipping %s', u_arcname)
- continue
- is_script = (u_arcname.startswith(script_pfx)
- and not u_arcname.endswith('.exe'))
-
- if u_arcname.startswith(data_pfx):
- _, where, rp = u_arcname.split('/', 2)
- outfile = os.path.join(paths[where], convert_path(rp))
- else:
- # meant for site-packages.
- if u_arcname in (wheel_metadata_name, record_name):
- continue
- outfile = os.path.join(libdir, convert_path(u_arcname))
- if not is_script:
- with zf.open(arcname) as bf:
- fileop.copy_stream(bf, outfile)
- # Issue #147: permission bits aren't preserved. Using
- # zf.extract(zinfo, libdir) should have worked, but didn't,
- # see https://www.thetopsites.net/article/53834422.shtml
- # So ... manually preserve permission bits as given in zinfo
- if os.name == 'posix':
- # just set the normal permission bits
- os.chmod(outfile, (zinfo.external_attr >> 16) & 0x1FF)
- outfiles.append(outfile)
- # Double check the digest of the written file
- if not dry_run and row[1]:
- with open(outfile, 'rb') as bf:
- data = bf.read()
- _, newdigest = self.get_hash(data, kind)
- if newdigest != digest:
- raise DistlibException('digest mismatch '
- 'on write for '
- '%s' % outfile)
- if bc and outfile.endswith('.py'):
- try:
- pyc = fileop.byte_compile(outfile,
- hashed_invalidation=bc_hashed_invalidation)
- outfiles.append(pyc)
- except Exception:
- # Don't give up if byte-compilation fails,
- # but log it and perhaps warn the user
- logger.warning('Byte-compilation failed',
- exc_info=True)
- else:
- fn = os.path.basename(convert_path(arcname))
- workname = os.path.join(workdir, fn)
- with zf.open(arcname) as bf:
- fileop.copy_stream(bf, workname)
-
- dn, fn = os.path.split(outfile)
- maker.target_dir = dn
- filenames = maker.make(fn)
- fileop.set_executable_mode(filenames)
- outfiles.extend(filenames)
-
- if lib_only:
- logger.debug('lib_only: returning None')
- dist = None
- else:
- # Generate scripts
-
- # Try to get pydist.json so we can see if there are
- # any commands to generate. If this fails (e.g. because
- # of a legacy wheel), log a warning but don't give up.
- commands = None
- file_version = self.info['Wheel-Version']
- if file_version == '1.0':
- # Use legacy info
- ep = posixpath.join(info_dir, 'entry_points.txt')
- try:
- with zf.open(ep) as bwf:
- epdata = read_exports(bwf)
- commands = {}
- for key in ('console', 'gui'):
- k = '%s_scripts' % key
- if k in epdata:
- commands['wrap_%s' % key] = d = {}
- for v in epdata[k].values():
- s = '%s:%s' % (v.prefix, v.suffix)
- if v.flags:
- s += ' [%s]' % ','.join(v.flags)
- d[v.name] = s
- except Exception:
- logger.warning('Unable to read legacy script '
- 'metadata, so cannot generate '
- 'scripts')
- else:
- try:
- with zf.open(metadata_name) as bwf:
- wf = wrapper(bwf)
- commands = json.load(wf).get('extensions')
- if commands:
- commands = commands.get('python.commands')
- except Exception:
- logger.warning('Unable to read JSON metadata, so '
- 'cannot generate scripts')
- if commands:
- console_scripts = commands.get('wrap_console', {})
- gui_scripts = commands.get('wrap_gui', {})
- if console_scripts or gui_scripts:
- script_dir = paths.get('scripts', '')
- if not os.path.isdir(script_dir):
- raise ValueError('Valid script path not '
- 'specified')
- maker.target_dir = script_dir
- for k, v in console_scripts.items():
- script = '%s = %s' % (k, v)
- filenames = maker.make(script)
- fileop.set_executable_mode(filenames)
-
- if gui_scripts:
- options = {'gui': True }
- for k, v in gui_scripts.items():
- script = '%s = %s' % (k, v)
- filenames = maker.make(script, options)
- fileop.set_executable_mode(filenames)
-
- p = os.path.join(libdir, info_dir)
- dist = InstalledDistribution(p)
-
- # Write SHARED
- paths = dict(paths) # don't change passed in dict
- del paths['purelib']
- del paths['platlib']
- paths['lib'] = libdir
- p = dist.write_shared_locations(paths, dry_run)
- if p:
- outfiles.append(p)
-
- # Write RECORD
- dist.write_installed_files(outfiles, paths['prefix'],
- dry_run)
- return dist
- except Exception: # pragma: no cover
- logger.exception('installation failed.')
- fileop.rollback()
- raise
- finally:
- shutil.rmtree(workdir)
-
- def _get_dylib_cache(self):
- global cache
- if cache is None:
- # Use native string to avoid issues on 2.x: see Python #20140.
- base = os.path.join(get_cache_base(), str('dylib-cache'),
- '%s.%s' % sys.version_info[:2])
- cache = Cache(base)
- return cache
-
- def _get_extensions(self):
- pathname = os.path.join(self.dirname, self.filename)
- name_ver = '%s-%s' % (self.name, self.version)
- info_dir = '%s.dist-info' % name_ver
- arcname = posixpath.join(info_dir, 'EXTENSIONS')
- wrapper = codecs.getreader('utf-8')
- result = []
- with ZipFile(pathname, 'r') as zf:
- try:
- with zf.open(arcname) as bf:
- wf = wrapper(bf)
- extensions = json.load(wf)
- cache = self._get_dylib_cache()
- prefix = cache.prefix_to_dir(pathname)
- cache_base = os.path.join(cache.base, prefix)
- if not os.path.isdir(cache_base):
- os.makedirs(cache_base)
- for name, relpath in extensions.items():
- dest = os.path.join(cache_base, convert_path(relpath))
- if not os.path.exists(dest):
- extract = True
- else:
- file_time = os.stat(dest).st_mtime
- file_time = datetime.datetime.fromtimestamp(file_time)
- info = zf.getinfo(relpath)
- wheel_time = datetime.datetime(*info.date_time)
- extract = wheel_time > file_time
- if extract:
- zf.extract(relpath, cache_base)
- result.append((name, dest))
- except KeyError:
- pass
- return result
-
- def is_compatible(self):
- """
- Determine if a wheel is compatible with the running system.
- """
- return is_compatible(self)
-
- def is_mountable(self):
- """
- Determine if a wheel is asserted as mountable by its metadata.
- """
- return True # for now - metadata details TBD
-
- def mount(self, append=False):
- pathname = os.path.abspath(os.path.join(self.dirname, self.filename))
- if not self.is_compatible():
- msg = 'Wheel %s not compatible with this Python.' % pathname
- raise DistlibException(msg)
- if not self.is_mountable():
- msg = 'Wheel %s is marked as not mountable.' % pathname
- raise DistlibException(msg)
- if pathname in sys.path:
- logger.debug('%s already in path', pathname)
- else:
- if append:
- sys.path.append(pathname)
- else:
- sys.path.insert(0, pathname)
- extensions = self._get_extensions()
- if extensions:
- if _hook not in sys.meta_path:
- sys.meta_path.append(_hook)
- _hook.add(pathname, extensions)
-
- def unmount(self):
- pathname = os.path.abspath(os.path.join(self.dirname, self.filename))
- if pathname not in sys.path:
- logger.debug('%s not in path', pathname)
- else:
- sys.path.remove(pathname)
- if pathname in _hook.impure_wheels:
- _hook.remove(pathname)
- if not _hook.impure_wheels:
- if _hook in sys.meta_path:
- sys.meta_path.remove(_hook)
-
- def verify(self):
- pathname = os.path.join(self.dirname, self.filename)
- name_ver = '%s-%s' % (self.name, self.version)
- data_dir = '%s.data' % name_ver
- info_dir = '%s.dist-info' % name_ver
-
- metadata_name = posixpath.join(info_dir, LEGACY_METADATA_FILENAME)
- wheel_metadata_name = posixpath.join(info_dir, 'WHEEL')
- record_name = posixpath.join(info_dir, 'RECORD')
-
- wrapper = codecs.getreader('utf-8')
-
- with ZipFile(pathname, 'r') as zf:
- with zf.open(wheel_metadata_name) as bwf:
- wf = wrapper(bwf)
- message = message_from_file(wf)
- wv = message['Wheel-Version'].split('.', 1)
- file_version = tuple([int(i) for i in wv])
- # TODO version verification
-
- records = {}
- with zf.open(record_name) as bf:
- with CSVReader(stream=bf) as reader:
- for row in reader:
- p = row[0]
- records[p] = row
-
- for zinfo in zf.infolist():
- arcname = zinfo.filename
- if isinstance(arcname, text_type):
- u_arcname = arcname
- else:
- u_arcname = arcname.decode('utf-8')
- # See issue #115: some wheels have .. in their entries, but
- # in the filename ... e.g. __main__..py ! So the check is
- # updated to look for .. in the directory portions
- p = u_arcname.split('/')
- if '..' in p:
- raise DistlibException('invalid entry in '
- 'wheel: %r' % u_arcname)
-
- if self.skip_entry(u_arcname):
- continue
- row = records[u_arcname]
- if row[2] and str(zinfo.file_size) != row[2]:
- raise DistlibException('size mismatch for '
- '%s' % u_arcname)
- if row[1]:
- kind, value = row[1].split('=', 1)
- with zf.open(arcname) as bf:
- data = bf.read()
- _, digest = self.get_hash(data, kind)
- if digest != value:
- raise DistlibException('digest mismatch for '
- '%s' % arcname)
-
- def update(self, modifier, dest_dir=None, **kwargs):
- """
- Update the contents of a wheel in a generic way. The modifier should
- be a callable which expects a dictionary argument: its keys are
- archive-entry paths, and its values are absolute filesystem paths
- where the contents the corresponding archive entries can be found. The
- modifier is free to change the contents of the files pointed to, add
- new entries and remove entries, before returning. This method will
- extract the entire contents of the wheel to a temporary location, call
- the modifier, and then use the passed (and possibly updated)
- dictionary to write a new wheel. If ``dest_dir`` is specified, the new
- wheel is written there -- otherwise, the original wheel is overwritten.
-
- The modifier should return True if it updated the wheel, else False.
- This method returns the same value the modifier returns.
- """
-
- def get_version(path_map, info_dir):
- version = path = None
- key = '%s/%s' % (info_dir, LEGACY_METADATA_FILENAME)
- if key not in path_map:
- key = '%s/PKG-INFO' % info_dir
- if key in path_map:
- path = path_map[key]
- version = Metadata(path=path).version
- return version, path
-
- def update_version(version, path):
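- # Mark the wheel as modified by adding a '+1' local version
- # segment, or by incrementing the trailing numeric part of an
- # existing suffix; non-PEP 440 versions are left unchanged.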
- updated = None
- try:
- v = NormalizedVersion(version)
- i = version.find('-')
- if i < 0:
- updated = '%s+1' % version
- else:
- parts = [int(s) for s in version[i + 1:].split('.')]
- parts[-1] += 1
- updated = '%s+%s' % (version[:i],
- '.'.join(str(i) for i in parts))
- except UnsupportedVersionError:
- logger.debug('Cannot update non-compliant (PEP-440) '
- 'version %r', version)
- if updated:
- md = Metadata(path=path)
- md.version = updated
- legacy = path.endswith(LEGACY_METADATA_FILENAME)
- md.write(path=path, legacy=legacy)
- logger.debug('Version updated from %r to %r', version,
- updated)
-
- pathname = os.path.join(self.dirname, self.filename)
- name_ver = '%s-%s' % (self.name, self.version)
- info_dir = '%s.dist-info' % name_ver
- record_name = posixpath.join(info_dir, 'RECORD')
- with tempdir() as workdir:
- with ZipFile(pathname, 'r') as zf:
- path_map = {}
- for zinfo in zf.infolist():
- arcname = zinfo.filename
- if isinstance(arcname, text_type):
- u_arcname = arcname
- else:
- u_arcname = arcname.decode('utf-8')
- if u_arcname == record_name:
- continue
- if '..' in u_arcname:
- raise DistlibException('invalid entry in '
- 'wheel: %r' % u_arcname)
- zf.extract(zinfo, workdir)
- path = os.path.join(workdir, convert_path(u_arcname))
- path_map[u_arcname] = path
-
- # Remember the version.
- original_version, _ = get_version(path_map, info_dir)
- # Files extracted. Call the modifier.
- modified = modifier(path_map, **kwargs)
- if modified:
- # Something changed - need to build a new wheel.
- current_version, path = get_version(path_map, info_dir)
- if current_version and (current_version == original_version):
- # Add or update local version to signify changes.
- update_version(current_version, path)
- # Decide where the new wheel goes.
- if dest_dir is None:
- fd, newpath = tempfile.mkstemp(suffix='.whl',
- prefix='wheel-update-',
- dir=workdir)
- os.close(fd)
- else:
- if not os.path.isdir(dest_dir):
- raise DistlibException('Not a directory: %r' % dest_dir)
- newpath = os.path.join(dest_dir, self.filename)
- archive_paths = list(path_map.items())
- distinfo = os.path.join(workdir, info_dir)
- info = distinfo, info_dir
- self.write_records(info, workdir, archive_paths)
- self.build_zip(newpath, archive_paths)
- if dest_dir is None:
- shutil.copyfile(newpath, pathname)
- return modified
-
-def _get_glibc_version():
- import platform
- ver = platform.libc_ver()
- result = []
- if ver[0] == 'glibc':
- for s in ver[1].split('.'):
- result.append(int(s) if s.isdigit() else 0)
- result = tuple(result)
- return result
-
-def compatible_tags():
- """
- Return (pyver, abi, arch) tuples compatible with this Python.
- """
- versions = [VER_SUFFIX]
- major = VER_SUFFIX[0]
- for minor in range(sys.version_info[1] - 1, -1, -1):
- versions.append(''.join([major, str(minor)]))
-
- abis = []
- for suffix in _get_suffixes():
- if suffix.startswith('.abi'):
- abis.append(suffix.split('.', 2)[1])
- abis.sort()
- if ABI != 'none':
- abis.insert(0, ABI)
- abis.append('none')
- result = []
-
- arches = [ARCH]
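- # On macOS, also accept wheels built for earlier deployment
- # targets and for fat/universal binaries that include the
- # current architecture.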
- if sys.platform == 'darwin':
- m = re.match(r'(\w+)_(\d+)_(\d+)_(\w+)$', ARCH)
- if m:
- name, major, minor, arch = m.groups()
- minor = int(minor)
- matches = [arch]
- if arch in ('i386', 'ppc'):
- matches.append('fat')
- if arch in ('i386', 'ppc', 'x86_64'):
- matches.append('fat3')
- if arch in ('ppc64', 'x86_64'):
- matches.append('fat64')
- if arch in ('i386', 'x86_64'):
- matches.append('intel')
- if arch in ('i386', 'x86_64', 'intel', 'ppc', 'ppc64'):
- matches.append('universal')
- while minor >= 0:
- for match in matches:
- s = '%s_%s_%s_%s' % (name, major, minor, match)
- if s != ARCH: # already there
- arches.append(s)
- minor -= 1
-
- # Most specific - our Python version, ABI and arch
- for abi in abis:
- for arch in arches:
- result.append((''.join((IMP_PREFIX, versions[0])), abi, arch))
- # manylinux
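- # (glibc >= 2.5, 2.12 and 2.17 map to manylinux1,
- # manylinux2010 and manylinux2014; PEP 600 adds the exact
- # manylinux_X_Y tag for the detected glibc version)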
- if abi != 'none' and sys.platform.startswith('linux'):
- arch = arch.replace('linux_', '')
- parts = _get_glibc_version()
- if len(parts) == 2:
- if parts >= (2, 5):
- result.append((''.join((IMP_PREFIX, versions[0])), abi,
- 'manylinux1_%s' % arch))
- if parts >= (2, 12):
- result.append((''.join((IMP_PREFIX, versions[0])), abi,
- 'manylinux2010_%s' % arch))
- if parts >= (2, 17):
- result.append((''.join((IMP_PREFIX, versions[0])), abi,
- 'manylinux2014_%s' % arch))
- result.append((''.join((IMP_PREFIX, versions[0])), abi,
- 'manylinux_%s_%s_%s' % (parts[0], parts[1],
- arch)))
-
- # where no ABI / arch dependency, but IMP_PREFIX dependency
- for i, version in enumerate(versions):
- result.append((''.join((IMP_PREFIX, version)), 'none', 'any'))
- if i == 0:
- result.append((''.join((IMP_PREFIX, version[0])), 'none', 'any'))
-
- # no IMP_PREFIX, ABI or arch dependency
- for i, version in enumerate(versions):
- result.append((''.join(('py', version)), 'none', 'any'))
- if i == 0:
- result.append((''.join(('py', version[0])), 'none', 'any'))
-
- return set(result)
-
-
-COMPATIBLE_TAGS = compatible_tags()
-
-del compatible_tags
-
-
-def is_compatible(wheel, tags=None):
- if not isinstance(wheel, Wheel):
- wheel = Wheel(wheel) # assume it's a filename
- result = False
- if tags is None:
- tags = COMPATIBLE_TAGS
- for ver, abi, arch in tags:
- if ver in wheel.pyver and abi in wheel.abi and arch in wheel.arch:
- result = True
- break
- return result
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/sandbox.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/sandbox.py
deleted file mode 100644
index 034fc80d20ea4a59d77af6f808dbcfc3b87612c3..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/sandbox.py
+++ /dev/null
@@ -1,530 +0,0 @@
-import os
-import sys
-import tempfile
-import operator
-import functools
-import itertools
-import re
-import contextlib
-import pickle
-import textwrap
-import builtins
-
-import pkg_resources
-from distutils.errors import DistutilsError
-from pkg_resources import working_set
-
-if sys.platform.startswith('java'):
- import org.python.modules.posix.PosixModule as _os
-else:
- _os = sys.modules[os.name]
-try:
- _file = file
-except NameError:
- _file = None
-_open = open
-
-
-__all__ = [
- "AbstractSandbox",
- "DirectorySandbox",
- "SandboxViolation",
- "run_setup",
-]
-
-
-def _execfile(filename, globals, locals=None):
- """
- Python 3 implementation of execfile.
- """
- mode = 'rb'
- with open(filename, mode) as stream:
- script = stream.read()
- if locals is None:
- locals = globals
- code = compile(script, filename, 'exec')
- exec(code, globals, locals)
-
-
-@contextlib.contextmanager
-def save_argv(repl=None):
- saved = sys.argv[:]
- if repl is not None:
- sys.argv[:] = repl
- try:
- yield saved
- finally:
- sys.argv[:] = saved
-
-
-@contextlib.contextmanager
-def save_path():
- saved = sys.path[:]
- try:
- yield saved
- finally:
- sys.path[:] = saved
-
-
-@contextlib.contextmanager
-def override_temp(replacement):
- """
- Monkey-patch tempfile.tempdir with replacement, ensuring it exists
- """
- os.makedirs(replacement, exist_ok=True)
-
- saved = tempfile.tempdir
-
- tempfile.tempdir = replacement
-
- try:
- yield
- finally:
- tempfile.tempdir = saved
-
-
-@contextlib.contextmanager
-def pushd(target):
- saved = os.getcwd()
- os.chdir(target)
- try:
- yield saved
- finally:
- os.chdir(saved)
-
-
-class UnpickleableException(Exception):
- """
- An exception representing another Exception that could not be pickled.
- """
-
- @staticmethod
- def dump(type, exc):
- """
- Always return a dumped (pickled) type and exc. If exc can't be pickled,
- wrap it in UnpickleableException first.
- """
- try:
- return pickle.dumps(type), pickle.dumps(exc)
- except Exception:
- # get UnpickleableException inside the sandbox
- from setuptools.sandbox import UnpickleableException as cls
-
- return cls.dump(cls, cls(repr(exc)))
-
-
-class ExceptionSaver:
- """
- A Context Manager that will save an exception, serialized, and restore it
- later.
- """
-
- def __enter__(self):
- return self
-
- def __exit__(self, type, exc, tb):
- if not exc:
- return
-
- # dump the exception
- self._saved = UnpickleableException.dump(type, exc)
- self._tb = tb
-
- # suppress the exception
- return True
-
- def resume(self):
- "restore and re-raise any exception"
-
- if '_saved' not in vars(self):
- return
-
- type, exc = map(pickle.loads, self._saved)
- raise exc.with_traceback(self._tb)
-
-
-@contextlib.contextmanager
-def save_modules():
- """
- Context in which imported modules are saved.
-
- Translates exceptions internal to the context into the equivalent exception
- outside the context.
- """
- saved = sys.modules.copy()
- with ExceptionSaver() as saved_exc:
- yield saved
-
- sys.modules.update(saved)
- # remove any modules imported since
- del_modules = (
- mod_name
- for mod_name in sys.modules
- if mod_name not in saved
- # exclude any encodings modules. See #285
- and not mod_name.startswith('encodings.')
- )
- _clear_modules(del_modules)
-
- saved_exc.resume()
-
-
-def _clear_modules(module_names):
- for mod_name in list(module_names):
- del sys.modules[mod_name]
-
-
-@contextlib.contextmanager
-def save_pkg_resources_state():
- saved = pkg_resources.__getstate__()
- try:
- yield saved
- finally:
- pkg_resources.__setstate__(saved)
-
-
-@contextlib.contextmanager
-def setup_context(setup_dir):
- temp_dir = os.path.join(setup_dir, 'temp')
- with save_pkg_resources_state():
- with save_modules():
- with save_path():
- hide_setuptools()
- with save_argv():
- with override_temp(temp_dir):
- with pushd(setup_dir):
- # ensure setuptools commands are available
- __import__('setuptools')
- yield
-
-
-_MODULES_TO_HIDE = {
- 'setuptools',
- 'distutils',
- 'pkg_resources',
- 'Cython',
- '_distutils_hack',
-}
-
-
-def _needs_hiding(mod_name):
- """
- >>> _needs_hiding('setuptools')
- True
- >>> _needs_hiding('pkg_resources')
- True
- >>> _needs_hiding('setuptools_plugin')
- False
- >>> _needs_hiding('setuptools.__init__')
- True
- >>> _needs_hiding('distutils')
- True
- >>> _needs_hiding('os')
- False
- >>> _needs_hiding('Cython')
- True
- """
- base_module = mod_name.split('.', 1)[0]
- return base_module in _MODULES_TO_HIDE
-
-
-def hide_setuptools():
- """
- Remove references to setuptools' modules from sys.modules to allow the
- invocation to import the most appropriate setuptools. This technique is
- necessary to avoid issues such as #315 where setuptools upgrading itself
- would fail to find a function declared in the metadata.
- """
- _distutils_hack = sys.modules.get('_distutils_hack', None)
- if _distutils_hack is not None:
- _distutils_hack.remove_shim()
-
- modules = filter(_needs_hiding, sys.modules)
- _clear_modules(modules)
-
-
-def run_setup(setup_script, args):
- """Run a distutils setup script, sandboxed in its directory"""
- setup_dir = os.path.abspath(os.path.dirname(setup_script))
- with setup_context(setup_dir):
- try:
- sys.argv[:] = [setup_script] + list(args)
- sys.path.insert(0, setup_dir)
- # reset to include setup dir, w/clean callback list
- working_set.__init__()
- working_set.callbacks.append(lambda dist: dist.activate())
-
- with DirectorySandbox(setup_dir):
- ns = dict(__file__=setup_script, __name__='__main__')
- _execfile(setup_script, ns)
- except SystemExit as v:
- if v.args and v.args[0]:
- raise
- # Normal exit, just return
-
-
-class AbstractSandbox:
- """Wrap 'os' module and 'open()' builtin for virtualizing setup scripts"""
-
- _active = False
-
- def __init__(self):
- self._attrs = [
- name
- for name in dir(_os)
- if not name.startswith('_') and hasattr(self, name)
- ]
-
- def _copy(self, source):
- for name in self._attrs:
- setattr(os, name, getattr(source, name))
-
- def __enter__(self):
- self._copy(self)
- if _file:
- builtins.file = self._file
- builtins.open = self._open
- self._active = True
-
- def __exit__(self, exc_type, exc_value, traceback):
- self._active = False
- if _file:
- builtins.file = _file
- builtins.open = _open
- self._copy(_os)
-
- def run(self, func):
- """Run 'func' under os sandboxing"""
- with self:
- return func()
-
- def _mk_dual_path_wrapper(name):
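- # Factory helpers like this one build wrappers around the raw
- # os functions and inject them into the class body via
- # locals(); each wrapper remaps its path arguments while the
- # sandbox is active.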
- original = getattr(_os, name)
-
- def wrap(self, src, dst, *args, **kw):
- if self._active:
- src, dst = self._remap_pair(name, src, dst, *args, **kw)
- return original(src, dst, *args, **kw)
-
- return wrap
-
- for name in ["rename", "link", "symlink"]:
- if hasattr(_os, name):
- locals()[name] = _mk_dual_path_wrapper(name)
-
- def _mk_single_path_wrapper(name, original=None):
- original = original or getattr(_os, name)
-
- def wrap(self, path, *args, **kw):
- if self._active:
- path = self._remap_input(name, path, *args, **kw)
- return original(path, *args, **kw)
-
- return wrap
-
- if _file:
- _file = _mk_single_path_wrapper('file', _file)
- _open = _mk_single_path_wrapper('open', _open)
- for name in [
- "stat",
- "listdir",
- "chdir",
- "open",
- "chmod",
- "chown",
- "mkdir",
- "remove",
- "unlink",
- "rmdir",
- "utime",
- "lchown",
- "chroot",
- "lstat",
- "startfile",
- "mkfifo",
- "mknod",
- "pathconf",
- "access",
- ]:
- if hasattr(_os, name):
- locals()[name] = _mk_single_path_wrapper(name)
-
- def _mk_single_with_return(name):
- original = getattr(_os, name)
-
- def wrap(self, path, *args, **kw):
- if self._active:
- path = self._remap_input(name, path, *args, **kw)
- return self._remap_output(name, original(path, *args, **kw))
- return original(path, *args, **kw)
-
- return wrap
-
- for name in ['readlink', 'tempnam']:
- if hasattr(_os, name):
- locals()[name] = _mk_single_with_return(name)
-
- def _mk_query(name):
- original = getattr(_os, name)
-
- def wrap(self, *args, **kw):
- retval = original(*args, **kw)
- if self._active:
- return self._remap_output(name, retval)
- return retval
-
- return wrap
-
- for name in ['getcwd', 'tmpnam']:
- if hasattr(_os, name):
- locals()[name] = _mk_query(name)
-
- def _validate_path(self, path):
- """Called to remap or validate any path, whether input or output"""
- return path
-
- def _remap_input(self, operation, path, *args, **kw):
- """Called for path inputs"""
- return self._validate_path(path)
-
- def _remap_output(self, operation, path):
- """Called for path outputs"""
- return self._validate_path(path)
-
- def _remap_pair(self, operation, src, dst, *args, **kw):
- """Called for path pairs like rename, link, and symlink operations"""
- return (
- self._remap_input(operation + '-from', src, *args, **kw),
- self._remap_input(operation + '-to', dst, *args, **kw),
- )
-
-
-if hasattr(os, 'devnull'):
- _EXCEPTIONS = [os.devnull]
-else:
- _EXCEPTIONS = []
-
-
-class DirectorySandbox(AbstractSandbox):
- """Restrict operations to a single subdirectory - pseudo-chroot"""
-
- write_ops = dict.fromkeys(
- [
- "open",
- "chmod",
- "chown",
- "mkdir",
- "remove",
- "unlink",
- "rmdir",
- "utime",
- "lchown",
- "chroot",
- "mkfifo",
- "mknod",
- "tempnam",
- ]
- )
-
- _exception_patterns = []
- "exempt writing to paths that match the pattern"
-
- def __init__(self, sandbox, exceptions=_EXCEPTIONS):
- self._sandbox = os.path.normcase(os.path.realpath(sandbox))
- self._prefix = os.path.join(self._sandbox, '')
- self._exceptions = [
- os.path.normcase(os.path.realpath(path)) for path in exceptions
- ]
- AbstractSandbox.__init__(self)
-
- def _violation(self, operation, *args, **kw):
- from setuptools.sandbox import SandboxViolation
-
- raise SandboxViolation(operation, args, kw)
-
- if _file:
-
- def _file(self, path, mode='r', *args, **kw):
- if mode not in ('r', 'rt', 'rb', 'rU', 'U') and not self._ok(path):
- self._violation("file", path, mode, *args, **kw)
- return _file(path, mode, *args, **kw)
-
- def _open(self, path, mode='r', *args, **kw):
- if mode not in ('r', 'rt', 'rb', 'rU', 'U') and not self._ok(path):
- self._violation("open", path, mode, *args, **kw)
- return _open(path, mode, *args, **kw)
-
- def tmpnam(self):
- self._violation("tmpnam")
-
- def _ok(self, path):
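- # Temporarily deactivate the sandbox so os.path.realpath can
- # call the wrapped os functions without triggering recursive
- # remapping or violation checks.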
- active = self._active
- try:
- self._active = False
- realpath = os.path.normcase(os.path.realpath(path))
- return (
- self._exempted(realpath)
- or realpath == self._sandbox
- or realpath.startswith(self._prefix)
- )
- finally:
- self._active = active
-
- def _exempted(self, filepath):
- start_matches = (
- filepath.startswith(exception) for exception in self._exceptions
- )
- pattern_matches = (
- re.match(pattern, filepath) for pattern in self._exception_patterns
- )
- candidates = itertools.chain(start_matches, pattern_matches)
- return any(candidates)
-
- def _remap_input(self, operation, path, *args, **kw):
- """Called for path inputs"""
- if operation in self.write_ops and not self._ok(path):
- self._violation(operation, os.path.realpath(path), *args, **kw)
- return path
-
- def _remap_pair(self, operation, src, dst, *args, **kw):
- """Called for path pairs like rename, link, and symlink operations"""
- if not self._ok(src) or not self._ok(dst):
- self._violation(operation, src, dst, *args, **kw)
- return (src, dst)
-
- def open(self, file, flags, mode=0o777, *args, **kw):
- """Called for low-level os.open()"""
- if flags & WRITE_FLAGS and not self._ok(file):
- self._violation("os.open", file, flags, mode, *args, **kw)
- return _os.open(file, flags, mode, *args, **kw)
-
-
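-# Bitmask of os.open() flags that imply write access; flags not
-# present on this platform (e.g. O_TEMPORARY outside Windows)
-# default to 0.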
-WRITE_FLAGS = functools.reduce(
- operator.or_,
- [
- getattr(_os, a, 0)
- for a in "O_WRONLY O_RDWR O_APPEND O_CREAT O_TRUNC O_TEMPORARY".split()
- ],
-)
-
-
-class SandboxViolation(DistutilsError):
- """A setup script attempted to modify the filesystem outside the sandbox"""
-
- tmpl = textwrap.dedent(
- """
- SandboxViolation: {cmd}{args!r} {kwargs}
-
- The package setup script has attempted to modify files on your system
- that are not within the EasyInstall build area, and has been aborted.
-
- This package cannot be safely installed by EasyInstall, and may not
- support alternate installation locations even if you run its setup
- script by hand. Please inform the package's author and the EasyInstall
- maintainers to find out if a fix or workaround is available.
- """
- ).lstrip()
-
- def __str__(self):
- cmd, args, kwargs = self.args
- return self.tmpl.format(**locals())
diff --git a/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/dataset_tool.py b/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/dataset_tool.py
deleted file mode 100644
index c59e6292891c3896722965020af7c60056729f2d..0000000000000000000000000000000000000000
--- a/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/dataset_tool.py
+++ /dev/null
@@ -1,444 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-import functools
-import io
-import json
-import os
-import pickle
-import sys
-import tarfile
-import gzip
-import zipfile
-from pathlib import Path
-from typing import Callable, Optional, Tuple, Union
-
-import click
-import numpy as np
-import PIL.Image
-from tqdm import tqdm
-
-#----------------------------------------------------------------------------
-
-def error(msg):
- print('Error: ' + msg)
- sys.exit(1)
-
-#----------------------------------------------------------------------------
-
-def maybe_min(a: int, b: Optional[int]) -> int:
- if b is not None:
- return min(a, b)
- return a
-
-#----------------------------------------------------------------------------
-
-def file_ext(name: Union[str, Path]) -> str:
- return str(name).split('.')[-1]
-
-#----------------------------------------------------------------------------
-
-def is_image_ext(fname: Union[str, Path]) -> bool:
- ext = file_ext(fname).lower()
- return f'.{ext}' in PIL.Image.EXTENSION # type: ignore
-
-#----------------------------------------------------------------------------
-
-def open_image_folder(source_dir, *, max_images: Optional[int]):
- input_images = [str(f) for f in sorted(Path(source_dir).rglob('*')) if is_image_ext(f) and os.path.isfile(f)]
-
- # Load labels.
- labels = {}
- meta_fname = os.path.join(source_dir, 'dataset.json')
- if os.path.isfile(meta_fname):
- with open(meta_fname, 'r') as file:
- labels = json.load(file)['labels']
- if labels is not None:
- labels = { x[0]: x[1] for x in labels }
- else:
- labels = {}
-
- max_idx = maybe_min(len(input_images), max_images)
-
- def iterate_images():
- for idx, fname in enumerate(input_images):
- arch_fname = os.path.relpath(fname, source_dir)
- arch_fname = arch_fname.replace('\\', '/')
- img = np.array(PIL.Image.open(fname))
- yield dict(img=img, label=labels.get(arch_fname))
- if idx >= max_idx-1:
- break
- return max_idx, iterate_images()
-
-#----------------------------------------------------------------------------
-
-def open_image_zip(source, *, max_images: Optional[int]):
- with zipfile.ZipFile(source, mode='r') as z:
- input_images = [str(f) for f in sorted(z.namelist()) if is_image_ext(f)]
-
- # Load labels.
- labels = {}
- if 'dataset.json' in z.namelist():
- with z.open('dataset.json', 'r') as file:
- labels = json.load(file)['labels']
- if labels is not None:
- labels = { x[0]: x[1] for x in labels }
- else:
- labels = {}
-
- max_idx = maybe_min(len(input_images), max_images)
-
- def iterate_images():
- with zipfile.ZipFile(source, mode='r') as z:
- for idx, fname in enumerate(input_images):
- with z.open(fname, 'r') as file:
- img = PIL.Image.open(file) # type: ignore
- img = np.array(img)
- yield dict(img=img, label=labels.get(fname))
- if idx >= max_idx-1:
- break
- return max_idx, iterate_images()
-
-#----------------------------------------------------------------------------
-
-def open_lmdb(lmdb_dir: str, *, max_images: Optional[int]):
- import cv2 # pip install opencv-python
- import lmdb # pip install lmdb # pylint: disable=import-error
-
- with lmdb.open(lmdb_dir, readonly=True, lock=False).begin(write=False) as txn:
- max_idx = maybe_min(txn.stat()['entries'], max_images)
-
- def iterate_images():
- with lmdb.open(lmdb_dir, readonly=True, lock=False).begin(write=False) as txn:
- for idx, (_key, value) in enumerate(txn.cursor()):
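- # Try OpenCV first for speed; fall back to PIL for images
- # that cv2.imdecode cannot handle.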
- try:
- try:
- img = cv2.imdecode(np.frombuffer(value, dtype=np.uint8), 1)
- if img is None:
- raise IOError('cv2.imdecode failed')
- img = img[:, :, ::-1] # BGR => RGB
- except IOError:
- img = np.array(PIL.Image.open(io.BytesIO(value)))
- yield dict(img=img, label=None)
- if idx >= max_idx-1:
- break
- except:
- print(sys.exc_info()[1])
-
- return max_idx, iterate_images()
-
-#----------------------------------------------------------------------------
-
-def open_cifar10(tarball: str, *, max_images: Optional[int]):
- images = []
- labels = []
-
- with tarfile.open(tarball, 'r:gz') as tar:
- for batch in range(1, 6):
- member = tar.getmember(f'cifar-10-batches-py/data_batch_{batch}')
- with tar.extractfile(member) as file:
- data = pickle.load(file, encoding='latin1')
- images.append(data['data'].reshape(-1, 3, 32, 32))
- labels.append(data['labels'])
-
- images = np.concatenate(images)
- labels = np.concatenate(labels)
- images = images.transpose([0, 2, 3, 1]) # NCHW -> NHWC
- assert images.shape == (50000, 32, 32, 3) and images.dtype == np.uint8
- assert labels.shape == (50000,) and labels.dtype in [np.int32, np.int64]
- assert np.min(images) == 0 and np.max(images) == 255
- assert np.min(labels) == 0 and np.max(labels) == 9
-
- max_idx = maybe_min(len(images), max_images)
-
- def iterate_images():
- for idx, img in enumerate(images):
- yield dict(img=img, label=int(labels[idx]))
- if idx >= max_idx-1:
- break
-
- return max_idx, iterate_images()
-
-#----------------------------------------------------------------------------
-
-def open_mnist(images_gz: str, *, max_images: Optional[int]):
- labels_gz = images_gz.replace('-images-idx3-ubyte.gz', '-labels-idx1-ubyte.gz')
- assert labels_gz != images_gz
- images = []
- labels = []
-
- with gzip.open(images_gz, 'rb') as f:
- images = np.frombuffer(f.read(), np.uint8, offset=16)
- with gzip.open(labels_gz, 'rb') as f:
- labels = np.frombuffer(f.read(), np.uint8, offset=8)
-
- images = images.reshape(-1, 28, 28)
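- # Pad the 28x28 MNIST digits to 32x32 so the output meets
- # the tool's power-of-two size requirement.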
- images = np.pad(images, [(0,0), (2,2), (2,2)], 'constant', constant_values=0)
- assert images.shape == (60000, 32, 32) and images.dtype == np.uint8
- assert labels.shape == (60000,) and labels.dtype == np.uint8
- assert np.min(images) == 0 and np.max(images) == 255
- assert np.min(labels) == 0 and np.max(labels) == 9
-
- max_idx = maybe_min(len(images), max_images)
-
- def iterate_images():
- for idx, img in enumerate(images):
- yield dict(img=img, label=int(labels[idx]))
- if idx >= max_idx-1:
- break
-
- return max_idx, iterate_images()
-
-#----------------------------------------------------------------------------
-
-def make_transform(
- transform: Optional[str],
- output_width: Optional[int],
- output_height: Optional[int],
- resize_filter: str
-) -> Callable[[np.ndarray], Optional[np.ndarray]]:
- resample = { 'box': PIL.Image.BOX, 'lanczos': PIL.Image.LANCZOS }[resize_filter]
- def scale(width, height, img):
- w = img.shape[1]
- h = img.shape[0]
- if width == w and height == h:
- return img
- img = PIL.Image.fromarray(img)
- ww = width if width is not None else w
- hh = height if height is not None else h
- img = img.resize((ww, hh), resample)
- return np.array(img)
-
- def center_crop(width, height, img):
- crop = np.min(img.shape[:2])
- img = img[(img.shape[0] - crop) // 2 : (img.shape[0] + crop) // 2, (img.shape[1] - crop) // 2 : (img.shape[1] + crop) // 2]
- img = PIL.Image.fromarray(img, 'RGB')
- img = img.resize((width, height), resample)
- return np.array(img)
-
- def center_crop_wide(width, height, img):
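- # Crop a full-width strip at the target aspect ratio, resize
- # it, then letterbox it vertically onto a square black canvas.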
- ch = int(np.round(width * img.shape[0] / img.shape[1]))
- if img.shape[1] < width or ch < height:
- return None
-
- img = img[(img.shape[0] - ch) // 2 : (img.shape[0] + ch) // 2]
- img = PIL.Image.fromarray(img, 'RGB')
- img = img.resize((width, height), resample)
- img = np.array(img)
-
- canvas = np.zeros([width, width, 3], dtype=np.uint8)
- canvas[(width - height) // 2 : (width + height) // 2, :] = img
- return canvas
-
- if transform is None:
- return functools.partial(scale, output_width, output_height)
- if transform == 'center-crop':
- if (output_width is None) or (output_height is None):
- error ('must specify --width and --height when using ' + transform + ' transform')
- return functools.partial(center_crop, output_width, output_height)
- if transform == 'center-crop-wide':
- if (output_width is None) or (output_height is None):
- error ('must specify --width and --height when using ' + transform + ' transform')
- return functools.partial(center_crop_wide, output_width, output_height)
- assert False, 'unknown transform'
-
-#----------------------------------------------------------------------------
-
-def open_dataset(source, *, max_images: Optional[int]):
- if os.path.isdir(source):
- if source.rstrip('/').endswith('_lmdb'):
- return open_lmdb(source, max_images=max_images)
- else:
- return open_image_folder(source, max_images=max_images)
- elif os.path.isfile(source):
- if os.path.basename(source) == 'cifar-10-python.tar.gz':
- return open_cifar10(source, max_images=max_images)
- elif os.path.basename(source) == 'train-images-idx3-ubyte.gz':
- return open_mnist(source, max_images=max_images)
- elif file_ext(source) == 'zip':
- return open_image_zip(source, max_images=max_images)
- else:
- assert False, 'unknown archive type'
- else:
- error(f'Missing input file or directory: {source}')
-
-#----------------------------------------------------------------------------
-
-def open_dest(dest: str) -> Tuple[str, Callable[[str, Union[bytes, str]], None], Callable[[], None]]:
- dest_ext = file_ext(dest)
-
- if dest_ext == 'zip':
- if os.path.dirname(dest) != '':
- os.makedirs(os.path.dirname(dest), exist_ok=True)
- zf = zipfile.ZipFile(file=dest, mode='w', compression=zipfile.ZIP_STORED)
- def zip_write_bytes(fname: str, data: Union[bytes, str]):
- zf.writestr(fname, data)
- return '', zip_write_bytes, zf.close
- else:
- # If the output folder already exists, check that it is
- # empty.
- #
- # Note: creating the output directory is not strictly
- # necessary as folder_write_bytes() also mkdirs, but it's better
- # to give an error message earlier in case the dest folder
- # somehow cannot be created.
- if os.path.isdir(dest) and len(os.listdir(dest)) != 0:
- error('--dest folder must be empty')
- os.makedirs(dest, exist_ok=True)
-
- def folder_write_bytes(fname: str, data: Union[bytes, str]):
- os.makedirs(os.path.dirname(fname), exist_ok=True)
- with open(fname, 'wb') as fout:
- if isinstance(data, str):
- data = data.encode('utf8')
- fout.write(data)
- return dest, folder_write_bytes, lambda: None
-
-#----------------------------------------------------------------------------
-
-@click.command()
-@click.pass_context
-@click.option('--source', help='Directory or archive name for input dataset', required=True, metavar='PATH')
-@click.option('--dest', help='Output directory or archive name for output dataset', required=True, metavar='PATH')
-@click.option('--max-images', help='Output only up to `max-images` images', type=int, default=None)
-@click.option('--resize-filter', help='Filter to use when resizing images for output resolution', type=click.Choice(['box', 'lanczos']), default='lanczos', show_default=True)
-@click.option('--transform', help='Input crop/resize mode', type=click.Choice(['center-crop', 'center-crop-wide']))
-@click.option('--width', help='Output width', type=int)
-@click.option('--height', help='Output height', type=int)
-def convert_dataset(
- ctx: click.Context,
- source: str,
- dest: str,
- max_images: Optional[int],
- transform: Optional[str],
- resize_filter: str,
- width: Optional[int],
- height: Optional[int]
-):
- """Convert an image dataset into a dataset archive usable with StyleGAN2 ADA PyTorch.
-
- The input dataset format is guessed from the --source argument:
-
- \b
- --source *_lmdb/ Load LSUN dataset
- --source cifar-10-python.tar.gz Load CIFAR-10 dataset
- --source train-images-idx3-ubyte.gz Load MNIST dataset
- --source path/ Recursively load all images from path/
- --source dataset.zip Recursively load all images from dataset.zip
-
- Specifying the output format and path:
-
- \b
- --dest /path/to/dir Save output files under /path/to/dir
- --dest /path/to/dataset.zip Save output files into /path/to/dataset.zip
-
- The output dataset format can be either an image folder or an uncompressed zip archive.
- Zip archives make it easier to move datasets around file servers and clusters, and may
- offer better training performance on network file systems.
-
- Images within the dataset archive will be stored as uncompressed PNG.
- Uncompressed PNGs can be efficiently decoded in the training loop.
-
- Class labels are stored in a file called 'dataset.json' that is stored at the
- dataset root folder. This file has the following structure:
-
- \b
- {
- "labels": [
- ["00000/img00000000.png",6],
- ["00000/img00000001.png",9],
- ... repeated for every image in the dataset
- ["00049/img00049999.png",1]
- ]
- }
-
- If the 'dataset.json' file cannot be found, the dataset is interpreted as
- not containing class labels.
-
- Image scale/crop and resolution requirements:
-
- Output images must be square-shaped and they must all have the same power-of-two
- dimensions.
-
- To scale arbitrary input image size to a specific width and height, use the
- --width and --height options. Output resolution will be either the original
- input resolution (if --width/--height was not specified) or the one specified with
- --width/--height.
-
- Use the --transform=center-crop or --transform=center-crop-wide options to apply a
- center crop transform on the input image. These options should be used with the
- --width and --height options. For example:
-
- \b
- python dataset_tool.py --source LSUN/raw/cat_lmdb --dest /tmp/lsun_cat \\
- --transform=center-crop-wide --width 512 --height=384
- """
-
- PIL.Image.init() # type: ignore
-
- if dest == '':
- ctx.fail('--dest output filename or directory must not be an empty string')
-
- num_files, input_iter = open_dataset(source, max_images=max_images)
- archive_root_dir, save_bytes, close_dest = open_dest(dest)
-
- transform_image = make_transform(transform, width, height, resize_filter)
-
- dataset_attrs = None
-
- labels = []
- for idx, image in tqdm(enumerate(input_iter), total=num_files):
- idx_str = f'{idx:08d}'
- archive_fname = f'{idx_str[:5]}/img{idx_str}.png'
-
- # Apply crop and resize.
- img = transform_image(image['img'])
-
- # Transform may drop images.
- if img is None:
- continue
-
- # Error check to require uniform image attributes across
- # the whole dataset.
- channels = img.shape[2] if img.ndim == 3 else 1
- cur_image_attrs = {
- 'width': img.shape[1],
- 'height': img.shape[0],
- 'channels': channels
- }
- if dataset_attrs is None:
- dataset_attrs = cur_image_attrs
- width = dataset_attrs['width']
- height = dataset_attrs['height']
- if width != height:
- error(f'Image dimensions after scale and crop are required to be square. Got {width}x{height}')
- if dataset_attrs['channels'] not in [1, 3]:
- error('Input images must be stored as RGB or grayscale')
- if width != 2 ** int(np.floor(np.log2(width))):
- error('Image width/height after scale and crop are required to be power-of-two')
- elif dataset_attrs != cur_image_attrs:
- err = [f' dataset {k}/cur image {k}: {dataset_attrs[k]}/{cur_image_attrs[k]}' for k in dataset_attrs.keys()]
- error(f'Image {archive_fname} attributes must be equal across all images of the dataset. Got:\n' + '\n'.join(err))
-
- # Save the image as an uncompressed PNG.
- img = PIL.Image.fromarray(img, { 1: 'L', 3: 'RGB' }[channels])
- image_bits = io.BytesIO()
- img.save(image_bits, format='png', compress_level=0, optimize=False)
- save_bytes(os.path.join(archive_root_dir, archive_fname), image_bits.getbuffer())
- labels.append([archive_fname, image['label']] if image['label'] is not None else None)
-
- metadata = {
- 'labels': labels if all(x is not None for x in labels) else None
- }
- save_bytes(os.path.join(archive_root_dir, 'dataset.json'), json.dumps(metadata))
- close_dest()
-
-#----------------------------------------------------------------------------
-
-if __name__ == "__main__":
- convert_dataset() # pylint: disable=no-value-for-parameter
diff --git a/spaces/Shreeraj/Metal_Defects_Classification_Application/app.py b/spaces/Shreeraj/Metal_Defects_Classification_Application/app.py
deleted file mode 100644
index f31a35c91eeef0b4255ad169b455f8709c48850f..0000000000000000000000000000000000000000
--- a/spaces/Shreeraj/Metal_Defects_Classification_Application/app.py
+++ /dev/null
@@ -1,76 +0,0 @@
-import gradio as gr
-import torch
-from PIL import Image
-from torchvision.transforms import ToTensor
-import torchvision.transforms as transforms
-import torch.nn.functional as F
-import numpy as np
-from pytorch_grad_cam import GradCAM
-from pytorch_grad_cam.utils.image import show_cam_on_image
-import matplotlib.pyplot as plt
-
-
-# Load the pre-trained model
-model = torch.load('model.pth', map_location=torch.device('cpu'))
-model.eval()
-
-# Define the target layer to pull activations from for Grad-CAM
-target_layers = [model.layer4[-1]]
-
-# Define the class labels
-class_labels = ['Crazing', 'Inclusion', 'Patches', 'Pitted', 'Rolled', 'Scratches']
-
-# Transformations for input images
-preprocess = transforms.Compose([
- transforms.Resize((224, 224)),
- transforms.ToTensor(),
- transforms.Normalize(mean=[0.4562, 0.4562, 0.4562], std=[0.2502, 0.2502, 0.2502]),
-])
-
-inv_normalize = transforms.Normalize(
- mean=[0.4562, 0.4562, 0.4562],
- std=[0.2502, 0.2502, 0.2502]
-)
-
-# Gradio app interface
-def classify_image(inp, transparency=0.8):
- model.to("cpu")
- input_tensor = preprocess(inp)
- input_batch = input_tensor.unsqueeze(0).to('cpu') # Create a batch
-
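- # Compute a Grad-CAM heatmap from the configured target layer
- # and overlay it on the de-normalized input image.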
- cam = GradCAM(model=model, use_cuda=False, target_layers=target_layers)
-
- grayscale_cam = cam(input_tensor=input_batch, targets=None)
- grayscale_cam = grayscale_cam[0, :]
- img = input_tensor.squeeze(0)
- img = inv_normalize(img)
- rgb_img = np.transpose(img, (1, 2, 0))
- rgb_img = rgb_img.numpy()
- rgb_img = (rgb_img - rgb_img.min()) / (rgb_img.max() - rgb_img.min())
- visualization = show_cam_on_image(rgb_img, grayscale_cam, use_rgb=True, image_weight=transparency)
-
- with torch.no_grad():
- output = model(input_batch)
-
- probabilities = F.softmax(output[0], dim=0)
- pred_class_idx = torch.argmax(probabilities).item()
-
- class_probabilities = {class_labels[i]: float(probabilities[i]) for i in range(len(class_labels))}
- #prob_string = "\n".join([f"{label}: {prob:.2f}" for label, prob in class_probabilities.items()])
-
- return inp, class_probabilities, visualization
-
-iface = gr.Interface(
- fn=classify_image,
- inputs=[gr.Image(shape=(200, 200), type="pil", label="Input Image"),
- gr.Slider(0, 1, value=0.8, label="Opacity of GradCAM")],
-
- outputs=[
- gr.Image(shape=(200,200),type="numpy", label="Input Image").style(width=300, height=300),
- gr.Label(label="Probability of Defect", num_top_classes=3),
- gr.Image(shape=(200,200), type="numpy", label="GradCam").style(width=300, height=300)
- ],
- title="Metal Defects Image Classification",
- description="The classification depends on the microscopic scale of the image being uploaded :)"
-)
-iface.launch()
diff --git a/spaces/Sijuade/Stable-Diffusion/README.md b/spaces/Sijuade/Stable-Diffusion/README.md
deleted file mode 100644
index 1efcffea8c6f3e22929de342c8a27f6fabea0466..0000000000000000000000000000000000000000
--- a/spaces/Sijuade/Stable-Diffusion/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Stable Diffusion
-emoji: 😻
-colorFrom: yellow
-colorTo: red
-sdk: gradio
-sdk_version: 3.49.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Skyler123/TangGPT/modules/overwrites.py b/spaces/Skyler123/TangGPT/modules/overwrites.py
deleted file mode 100644
index bfcd4d01b7d7bec1184a8d09113933bca860530b..0000000000000000000000000000000000000000
--- a/spaces/Skyler123/TangGPT/modules/overwrites.py
+++ /dev/null
@@ -1,56 +0,0 @@
-from __future__ import annotations
-import logging
-
-from llama_index import Prompt
-from typing import List, Tuple
-import mdtex2html
-
-from modules.presets import *
-from modules.llama_func import *
-
-
-def compact_text_chunks(self, prompt: Prompt, text_chunks: List[str]) -> List[str]:
- logging.debug("Compacting text chunks...🚀🚀🚀")
- combined_str = [c.strip() for c in text_chunks if c.strip()]
- combined_str = [f"[{index+1}] {c}" for index, c in enumerate(combined_str)]
- combined_str = "\n\n".join(combined_str)
- # resplit based on self.max_chunk_overlap
- text_splitter = self.get_text_splitter_given_prompt(prompt, 1, padding=1)
- return text_splitter.split_text(combined_str)
-
-
-def postprocess(
- self, y: List[Tuple[str | None, str | None]]
-) -> List[Tuple[str | None, str | None]]:
- """
- Parameters:
- y: List of tuples representing the message and response pairs. Each message and response should be a string, which may be in Markdown format.
- Returns:
- List of tuples representing the message and response. Each message and response will be a string of HTML.
- """
- if y is None or y == []:
- return []
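- # Only the newest (user, bot) pair needs conversion; earlier
- # entries were converted on previous calls.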
- user, bot = y[-1]
- if not detect_converted_mark(user):
- user = convert_asis(user)
- if not detect_converted_mark(bot):
- bot = convert_mdtext(bot)
- y[-1] = (user, bot)
- return y
-
-with open("./assets/custom.js", "r", encoding="utf-8") as f, open("./assets/Kelpy-Codos.js", "r", encoding="utf-8") as f2:
- customJS = f.read()
- kelpyCodos = f2.read()
-
-def reload_javascript():
- print("Reloading javascript...")
- js = f'<script>{customJS}</script><script>{kelpyCodos}</script>'
- def template_response(*args, **kwargs):
- res = GradioTemplateResponseOriginal(*args, **kwargs)
- res.body = res.body.replace(b'</html>', f'{js}</html>'.encode("utf8"))