Creation Process: SFT
SFT on approximately 13 million tokens of SFW/NSFW RP, stories, and creative instruct & chat data. Some of the SFW datasets are public and can be found in the model's datasets list.
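For reference, a single line of the training JSONL might look like the following. This is only a sketch assuming the standard messages format MS-Swift accepts; the content shown is invented, not an actual dataset row:

{"messages": [{"role": "system", "content": "You are a creative writing partner."}, {"role": "user", "content": "Write the opening scene of a slow-burn mystery."}, {"role": "assistant", "content": "Snow had been falling on Maple Hollow since dawn..."}]}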
I've switched from Axolotl to MS-Swift with Megatron for training MoE models. It's given a roughly 5-10x speedup, thanks to escaping the naive MoE implementation in TRL; this run trained in only 40 minutes, excluding environment setup.
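If you're reproducing this, the HF checkpoint needs to be converted to Megatron (mcore) format first to produce the --load path used below. A minimal sketch assuming MS-Swift's swift export conversion path; the repo id and output dir are illustrative:

CUDA_VISIBLE_DEVICES=0 \
swift export \
    --model zai-org/GLM-4.5-Air \
    --to_mcore true \
    --torch_dtype bfloat16 \
    --output_dir /workspace/glm-4.5-air-mcore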
A low LR appears to be king for GLM Air; going any higher, I've found it extremely easy to start overcooking the model. The run below peaks at 6e-6, warming up over the first 5% of steps and decaying to a 6e-7 floor.
MS-Swift config
Not optimized for cost / performance efficiency, YMMV.

SFT (8*H200):
            PYTORCH_CUDA_ALLOC_CONF='expandable_segments:True' \
NPROC_PER_NODE=8 \
WANDB_API_KEY=wandb_key \
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
megatron sft \
    --load '/workspace/glm-4.5-air-mcore' \
    --dataset '/workspace/joined_dataset_cleaned_modified.jsonl' \
    --load_from_cache_file true \
    --train_type lora \
    --lora_rank 256 \
    --lora_alpha 16 \
    --use_rslora true \
    --target_modules all-linear \
    --split_dataset_ratio 0.01 \
    --moe_permute_fusion true \
    --tensor_model_parallel_size 8 \
    --expert_tensor_parallel_size 1 \
    --expert_model_parallel_size 8 \
    --moe_grouped_gemm true \
    --moe_shared_expert_overlap true \
    --moe_aux_loss_coeff 6e-5 \
    --micro_batch_size 4 \
    --global_batch_size 32 \
    --recompute_granularity full \
    --recompute_method uniform \
    --recompute_num_layers 1 \
    --max_epochs 2 \
    --cross_entropy_loss_fusion true \
    --lr 6e-6 \
    --lr_warmup_fraction 0.05 \
    --min_lr 6e-7 \
    --save megatron_output/Iceblink-v3-SFT-3 \
    --eval_interval 20 \
    --save_interval 25 \
    --finetune true \
    --packing true \
    --max_length 10280 \
    --num_workers 8 \
    --dataset_num_proc 8 \
    --no_save_optim true \
    --no_save_rng true \
    --sequence_parallel true \
    --wandb_project Megatron-Air-SFT \
    --wandb_exp_name Iceblink-v3-SFT-3 \
    --attention_backend flash
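After training, the Megatron LoRA checkpoint has to be converted back into a merged Hugging Face checkpoint before it can be served. A hedged sketch of that step, again via swift export; I'm assuming the --mcore_adapters / --to_hf flags from the MS-Swift Megatron docs, and the vx-xxx checkpoint subfolder is a placeholder, so double-check against your install:

CUDA_VISIBLE_DEVICES=0 \
swift export \
    --mcore_adapters megatron_output/Iceblink-v3-SFT-3/vx-xxx \
    --to_hf true \
    --torch_dtype bfloat16 \
    --output_dir /workspace/Iceblink-v3-SFT-3-hf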