
D_AU - Dark Planet Wordstorm Project - Random Prune / Form.
Models built with the original Dark Planet 8B formula, using random pruning / density settings to create new Dark Planet versions with new abilities and generation styles.
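The "formula" here is a mergekit merge recipe, and "density" is the knob that drives DARE-style random pruning of each component model. Below is a minimal sketch of what such a config and merge run can look like; the component model names, weights, and densities are placeholders, not DavidAU's actual recipe (the real mergekit file ships with DavidAU/L3-Dark-Planet-8B):

```python
# Sketch of a DARE-TIES merge with per-model "density" (random pruning).
# Component names, weights, and densities are PLACEHOLDERS, not the
# actual Dark Planet recipe. Requires: pip install mergekit
import pathlib
import subprocess

CONFIG = """\
merge_method: dare_ties
base_model: meta-llama/Meta-Llama-3-8B-Instruct
models:
  - model: example-org/creative-writer-8b   # placeholder component
    parameters:
      weight: 0.5
      density: 0.95   # randomly keep ~95% of this model's delta weights
  - model: example-org/dark-fiction-8b      # placeholder component
    parameters:
      weight: 0.5
      density: 0.80   # heavier random pruning -> larger behavior shift
dtype: bfloat16
"""

pathlib.Path("dark-planet-variant.yml").write_text(CONFIG)
# mergekit's standard CLI entry point for YAML merge configs
subprocess.run(["mergekit-yaml", "dark-planet-variant.yml", "./merged-model"], check=True)
```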
DavidAU/L3-Dark-Planet-8B-GGUF
Text Generation • Updated • 1.14k • 38 • Note: The original Dark Planet in GGUF, with links to the new 128k / 1-million-token context versions, as well as expanded Dark Planet versions such as DARKEST PLANET 16.5B, among others.
DavidAU/L3-Dark-Planet-8B
Text Generation • Updated • 55 • 6 • Note: Original Dark Planet in full precision / source. Mergekit file included.
DavidAU/L3-MOE-8X8B-Dark-Planet-8D-Mirrored-Chaos-47B-GGUF
Text Generation • Updated • 1.01k • 15 • Note: This model shows the use of EIGHT of the "Dark Planet Wordstorm" models ("cr2", "cr1", "r7", "r6", "b3", "b4", "r1" and "b6") in a MOE (Mixture of Experts) configuration, letting you draw on the power of up to 8 of these models at the same time. A sample config sketch follows the MOE entries below.
DavidAU/L3.1-MOE-8X8B-Dark-Planet-8D-Mirrored-Chaos-Uncensored-47B-GGUF
Text Generation • Updated • 399 • 2 • Note: This model shows the use of EIGHT of the "Dark Planet Wordstorm" models ("cr2", "cr1", "r7", "r6", "b3", "b4", "r1" and "b6") in a MOE (Mixture of Experts) configuration, letting you draw on the power of up to 8 of these models at the same time. This version uses a Llama 3.1 model as the "base" in the MOE to extend context to 128k.
DavidAU/L3-MOE-4x8B-Dark-Planet-Rising-25B-GGUF
Text Generation • Updated • 574 • 4 • Note: This model shows the use of FOUR of the "Dark Planet Wordstorm" models ("cr2", "cr1", "r7" and "r6") in a MOE (Mixture of Experts) configuration, letting you draw on the power of up to 4 of these models at the same time.
DavidAU/L3-MOE-4x8B-Dark-Planet-Rebel-FURY-25B-GGUF
Text Generation • Updated • 530 • 3 • Note: This model shows the use of FOUR of the "Dark Planet Wordstorm" models ("b3", "b4", "r1" and "b6") in a MOE (Mixture of Experts) configuration, letting you draw on the power of up to 4 of these models at the same time.
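As promised above, here is a sketch of how a 4x8B MOE like Dark-Planet-Rising could be assembled with mergekit-moe. The expert repos are Wordstorm variants listed further down in this collection; the base model, gate_mode, and positive_prompts are assumptions, not DavidAU's actual config:

```python
# Sketch of a mergekit-moe config combining four Wordstorm variants.
# base_model, gate_mode, and positive_prompts are ASSUMPTIONS.
# Requires: pip install mergekit
import pathlib
import subprocess

MOE_CONFIG = """\
base_model: DavidAU/L3-Dark-Planet-8B   # assumed base; the real config may differ
gate_mode: hidden     # route tokens via hidden-state prompt vectors
dtype: bfloat16
experts:
  - source_model: DavidAU/L3-Dark-Planet-8B-wordstorm-cr2
    positive_prompts: ["vivid scene description"]
  - source_model: DavidAU/L3-Dark-Planet-8B-wordstorm-cr1
    positive_prompts: ["dialogue and character voice"]
  - source_model: DavidAU/L3-Dark-Planet-8B-wordstorm-r7
    positive_prompts: ["dark atmosphere and tension"]
  - source_model: DavidAU/L3-Dark-Planet-8B-wordstorm-r6
    positive_prompts: ["plot twists and pacing"]
"""

pathlib.Path("dark-planet-moe.yml").write_text(MOE_CONFIG)
subprocess.run(["mergekit-moe", "dark-planet-moe.yml", "./moe-model"], check=True)
```

With gate_mode: hidden, the router is seeded from hidden-state vectors of the positive_prompts, so each token is steered toward the experts it most resembles; this is what lets several of these models contribute to the same generation.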
DavidAU/Llama-3.1-Dark-Planet-8B-SuperNova
Text Generation • Updated • 98 • Note: Uses the exact original Dark Planet 8B formula, except the "base" model (Meta Llama 3 Instruct) is replaced with Llama 3.1 SuperNova 8B. This alters generation output and raises the maximum context from 8k to 128k, as SuperNova is a Llama 3.1 model. The mergekit file is listed in the "Files" section.
DavidAU/Llama-3.1-Dark-Planet-SuperNova-8B-GGUF
Text Generation • Updated • 2.29k • Note: GGUF quants of Llama-3.1-Dark-Planet-8B-SuperNova.
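One way to try these quants locally is llama-cpp-python via the Hugging Face hub. A minimal sketch follows; the quant filename below is an assumed example, so check the repo's "Files" tab for the exact names:

```python
# Minimal sketch: download one GGUF quant and generate with llama-cpp-python.
# The quant filename is an ASSUMED example; see the repo's Files tab.
# Requires: pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="DavidAU/Llama-3.1-Dark-Planet-SuperNova-8B-GGUF",
    filename="Llama-3.1-Dark-Planet-SuperNova-8B-Q4_K_M.gguf",  # assumed name
)

llm = Llama(model_path=model_path, n_ctx=8192)  # Llama 3.1 base supports up to 128k
out = llm("Write the opening line of a dark sci-fi story:", max_tokens=64)
print(out["choices"][0]["text"])
```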
DavidAU/L3-Dark-Planet-8B-wordstorm1
Text Generation • Updated • 107 • 1 • Note: Base changes and "density" settings are used to move the model away from the original Dark Planet. Wordstorm 1 and 2 are TWO versions showing the "pruning" applied via "density"; because the pruning is random, Wordstorm 1 and 2 produce very different output.
DavidAU/L3-Dark-Planet-8B-wordstorm2
Text Generation • Updated • 90 • 1 • Note: Base changes and "density" settings are used to move the model away from the original Dark Planet. Wordstorm 1 and 2 are TWO versions showing the "pruning" applied via "density"; because the pruning is random, Wordstorm 1 and 2 produce very different output (see the sketch below).
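A toy illustration of why density-driven pruning makes each draw a different model: DARE-style pruning keeps each delta weight with probability equal to the density, zeros the rest, and rescales survivors so the expected contribution is unchanged. Different random draws keep different weights. This is a conceptual sketch, not mergekit's exact implementation:

```python
# Toy DARE-style random pruning: keep each delta with probability `density`,
# rescale survivors by 1/density. Different seeds -> different surviving
# weights -> noticeably different merged models (wordstorm1 vs. wordstorm2).
import numpy as np

def dare_prune(delta: np.ndarray, density: float, seed: int) -> np.ndarray:
    rng = np.random.default_rng(seed)
    mask = rng.random(delta.shape) < density   # random keep-mask, ~density kept
    return np.where(mask, delta / density, 0.0)

delta = np.random.default_rng(0).normal(size=(4, 4))  # fake task-vector deltas
print(dare_prune(delta, density=0.95, seed=1))  # one random draw
print(dare_prune(delta, density=0.95, seed=2))  # another draw, different model
```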
DavidAU/L3-Dark-Planet-8B-wordstorm-b3
Text Generation • Updated • 225 • 1 • Note: B3, B4 and B6 use the same formula as the original Dark Planet 8B, but apply random pruning to each model in the merge (excluding the "base"). These three "B series" models are the "winners" (best of) for this type of pruning / model merge.
DavidAU/L3-Dark-Planet-8B-wordstorm-b4
Text Generation • Updated • 185 • 1 • Note: B3, B4 and B6 use the same formula as the original Dark Planet 8B, but apply random pruning to each model in the merge (excluding the "base"). These three "B series" models are the "winners" (best of) for this type of pruning / model merge.
DavidAU/L3-Dark-Planet-8B-wordstorm-b6
Text Generation • Updated • 109 • 1 • Note: B3, B4 and B6 use the same formula as the original Dark Planet 8B, but apply random pruning to each model in the merge (excluding the "base"). These three "B series" models are the "winners" (best of) for this type of pruning / model merge.
DavidAU/L3-Dark-Planet-8B-wordstorm-cr1
Text Generation • Updated • 54 • 1 • Note: These merges (CR1/CR2) use the original Dark Planet formula, but set density above 1 for two models (effectively no pruning for those two) and apply "random pruning" to only ONE model, at a density of 0.95.
DavidAU/L3-Dark-Planet-8B-wordstorm-cr2
Text Generation • Updated • 108 • 1 • Note: These merges (CR1/CR2) use the original Dark Planet formula, but set density above 1 for two models (effectively no pruning for those two) and apply "random pruning" to only ONE model, at a density of 0.95.
DavidAU/L3-Dark-Planet-8B-wordstorm-r1
Text Generation • Updated • 120 • 1 • Note: R1, R4, R6 and R7 use the same formula as the original Dark Planet 8B, but apply random pruning to each model in the merge (excluding the "base"). These four "R series" models are the "winners" (best of) for this type of pruning / model merge.
DavidAU/L3-Dark-Planet-8B-wordstorm-r4
Text Generation • Updated • 152 • 1 • Note: R1, R4, R6 and R7 use the same formula as the original Dark Planet 8B, but apply random pruning to each model in the merge (excluding the "base"). These four "R series" models are the "winners" (best of) for this type of pruning / model merge.
DavidAU/L3-Dark-Planet-8B-wordstorm-r7
Text Generation • Updated • 198 • 1 • Note: R1, R4, R6 and R7 use the same formula as the original Dark Planet 8B, but apply random pruning to each model in the merge (excluding the "base"). These four "R series" models are the "winners" (best of) for this type of pruning / model merge.
DavidAU/L3-Dark-Planet-8B-wordstorm-r6
Text Generation • Updated • 114 • 1 • Note: R1, R4, R6 and R7 use the same formula as the original Dark Planet 8B, but apply random pruning to each model in the merge (excluding the "base"). These four "R series" models are the "winners" (best of) for this type of pruning / model merge.