[model] support GLM-5 #626

Open

yueming-yuan wants to merge 22 commits into main from glm5

Conversation

@yueming-yuan (Collaborator) commented on Feb 21, 2026

Acknowledgement

Many thanks to the GLM-5 team, @zhuzilin @lilei199908, for their great work and open-source efforts.
THUDM/slime#1599

Supported/verified features:

  • parallelism: TP, SP, PP, and EP work as expected on the Megatron side
  • rollout: supports slime-equivalent settings on the SGLang side (i.e., with PD disaggregation), as well as small-scale settings without PD disaggregation
  • bug fixes: fixed multiple bugs to support GLM-5 training, including patches to the transformers package, checkpoint handling, etc.; everything runs from a single script

Known issues:

  • When CP is enabled, training may hang in the train_actor stage.
  • FP8 rollout may fail after the weight update of the first iteration.

Usage

  1. Pull the radixark/miles:glm5 Docker image.
  2. See miles/scripts/run_glm5_744b_a40b.py for script usage.

@gemini-code-assist
Contributor

Summary of Changes

Hello @yueming-yuan, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands the model support by integrating the GLM-5 architecture, which features a novel Multi-Latent Attention mechanism. It introduces the necessary infrastructure, including a custom Docker environment, specific patches for external libraries, and new weight conversion logic to correctly handle GLM-5's unique components. Furthermore, it enhances distributed training capabilities with advanced context parallelism strategies and provides comprehensive scripts to facilitate training and evaluation of GLM-5 models across different scales and configurations.

Highlights

  • GLM-5 Model Support: Introduced comprehensive support for the GLM-5 model, including its unique Multi-Latent Attention (MLA) mechanism and associated components like the indexer.
  • Distributed Training Enhancements: Implemented new context parallelism (CP) strategies, specifically an 'allgather-cp' mode, to optimize data handling and log probability redistribution across distributed training setups.
  • Custom Docker Environment and Patches: Added a dedicated Dockerfile for GLM-5, pre-installing necessary dependencies and applying specific patches to the transformers library and Megatron-LM to ensure compatibility and functionality.
  • Weight Conversion and Quantization: Extended weight conversion utilities to handle GLM-5's specific indexer weights, including rope reordering, and integrated these into FP8 quantization processes.
  • Training Scripts and Configuration: Provided new training scripts and configuration files for GLM-5, supporting various model sizes (full, 4-layer, 20-layer pruned) and enabling features like FP8 rollout and multi-node training.
Changelog
  • docker/glm5/Dockerfile_glm5
    • Added a new Dockerfile for GLM-5, installing various Python dependencies like flash-attn, mbridge, tilelang, transformer_engine, and apex.
    • Cloned and checked out specific commits for Megatron-LM and Miles, and applied custom patches for Megatron-LM and transformers.
  • docker/glm5/transformers.patch
    • Added a patch to src/transformers/models/auto/tokenization_auto.py to recognize 'TokenizersBackend' alongside 'PreTrainedTokenizerFast'.
    • Modified src/transformers/tokenization_utils_base.py to handle extra_special_tokens as lists or tuples, appending them to additional_special_tokens.
  • miles/backends/megatron_utils/megatron_to_hf/deepseekv3.py
    • Added conversion logic for DeepseekV3 indexer weights (wq_b.weight, wk.weight, weights_proj.weight, k_norm.weight, k_norm.bias), including rope reordering (swapping halves).
  • miles/backends/megatron_utils/megatron_to_hf/processors/quantizer_fp8.py
    • Included DeepseekV3 indexer weights (wq_b.weight, wk.weight) in the list of parameters to be quantized for FP8.
  • miles/backends/megatron_utils/megatron_to_hf/processors/quantizer_mxfp8.py
    • Wrapped the import of mxfp8_group_quantize in a try-except block to gracefully handle potential import errors.
  • miles/backends/megatron_utils/model.py
    • Passed the allgather_cp argument to get_batch calls within the forward_step function.
  • miles/backends/training_utils/data.py
    • Added allgather_cp as a parameter to the get_batch function.
    • Implemented new logic for qkv_format == "thd" when allgather_cp is enabled, concatenating and chunking tokens and loss masks globally for context parallelism (a minimal sketch of this pattern follows the changelog).
  • miles/backends/training_utils/loss.py
    • Imported torch.distributed and torch.nn.functional.
    • Imported slice_log_prob_with_cp from cp_utils.
    • Modified get_responses to correctly slice logits and tokens based on args.allgather_cp for distributed processing.
    • Introduced _allgather_cp_redistribute to convert response tensors from allgather-CP layout to zigzag ring-attn layout.
    • Integrated _allgather_cp_redistribute into get_log_probs_and_entropy and get_values for allgather_cp mode.
  • miles/utils/arguments.py
    • Added a new command-line argument --allgather-cp to control context parallelism behavior.
    • Included logic to extract rope_theta from hf_config.rope_parameters if present.
  • miles/utils/typer_utils.py
    • Enhanced the dataclass_cli function to display arguments in a formatted table for better readability.
  • miles_plugins/mbridge/__init__.py
    • Imported DeepseekV32Bridge and added it to the module's __all__ export list.
  • miles_plugins/mbridge/deepseek_v32.py
    • Added DeepseekV32Bridge and GlmMoeDsaBridge classes, extending DeepseekV3Bridge.
    • Defined _DSA_ATTENTION_MAPPING for specific indexer weights in Deepseek Scalable Attention.
    • Implemented _weight_to_hf_format and _weight_to_mcore_format methods to handle rope reordering for DSA attention weights during model conversion.
  • miles_plugins/models/glm5/glm5.py
    • Added DSASelfAttentionSubmodules dataclass to define submodules for MLA.
    • Implemented DSAMultiLatentAttention and DSAMLASelfAttention classes, providing the core MLA logic, including QKV projections, rotary embeddings, and the new indexer components.
    • Defined get_glm5_spec to configure the transformer layer specification for GLM-5, incorporating the new MLA attention module.
  • miles_plugins/models/glm5/ops/indexer.py
    • Added pytorch_extract_topk_scores for extracting scores based on top-k indices.
    • Implemented IndexerFunction as a custom autograd function for the indexer's forward and backward passes.
    • Provided lighting_indexer as a wrapper for IndexerFunction.
    • Added generate_varlen_mask_params to create variable-length mask parameters from cumulative sequence lengths.
  • miles_plugins/models/glm5/ops/sparse_mla.py
    • Implemented SparseMLA as a custom autograd function for the forward and backward passes of Sparse Multi-Latent Attention.
  • miles_plugins/models/glm5/ops/tilelang_indexer_bwd.py
    • Added TileLang kernel (tl_indexer_bwd_impl) for the backward pass of the indexer, including gradient computations for query, weights, and key.
  • miles_plugins/models/glm5/ops/tilelang_indexer_fwd.py
    • Added TileLang kernel (tl_indexer_fwd_impl) for the forward pass of the indexer, computing logits and applying cleaning based on sequence lengths.
  • miles_plugins/models/glm5/ops/tilelang_sparse_mla_bwd.py
    • Added TileLang kernels (preprocess, postprocess, bwd) for the backward pass of Sparse Multi-Latent Attention, handling gradient accumulation and atomic updates.
  • miles_plugins/models/glm5/ops/tilelang_sparse_mla_fwd.py
    • Added TileLang kernel (sparse_mla_fwd) for the forward pass of Sparse Multi-Latent Attention, computing attention scores and outputs.
  • scripts/models/glm5-744B-A40B.sh
    • Added a new shell script defining model arguments for the GLM-5 744B-A40B model, including configurations for MoE, attention heads, hidden sizes, and various parallelism settings.
  • scripts/models/glm5-744B-A40B_20layer.sh
    • Added a script to override the base GLM-5 configuration for a 20-layer pruned model, adjusting num-layers and moe-layer-freq.
  • scripts/models/glm5-744B-A40B_4layer.sh
    • Added a script to override the base GLM-5 configuration for a 4-layer pruned model, adjusting num-layers and moe-layer-freq.
  • scripts/run_glm5_744b_a40b.py
    • Added a new Python script for training GLM-5 models with various arguments for model name, node count, FP8 rollout, evaluation, and optimizer offload.
    • Included functions for downloading models/data, patching GLM-5 checkpoints, converting to FP8, preparing Megatron checkpoints, and copying models to local storage.
    • Implemented _execute_train to construct and run the training command with detailed performance, rollout, optimizer, and SGLang arguments tailored for GLM-5.
  • tmp
    • Added a temporary file, likely for logging or testing purposes, containing a single line of training metrics.
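
As referenced in the data.py entry above, the allgather-CP handling can be pictured as a global concatenate-then-chunk step. The sketch below is illustrative only: the helper name allgather_cp_chunk and the plain torch.chunk splitting are assumptions, not the actual implementation in miles/backends/training_utils/data.py, which also deals with cu_seqlens and packed THD sequences.

import torch

def allgather_cp_chunk(tokens: torch.Tensor, loss_mask: torch.Tensor,
                       cp_rank: int, cp_size: int):
    # Illustrative: with allgather-CP, every rank sees the full (globally
    # concatenated) token stream, then keeps only its contiguous chunk.
    token_chunks = torch.chunk(tokens, cp_size, dim=0)
    mask_chunks = torch.chunk(loss_mask, cp_size, dim=0)
    return token_chunks[cp_rank], mask_chunks[cp_rank]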
Activity
  • The pull request introduces support for GLM-5, including its specific parallelism configurations (TP, SP, PP, EP) which are verified to work as expected in Megatron.
  • Rollout mechanisms are supported with slime-equivalent settings in SGLang, utilizing PD disaggregation, and also small-scale settings without PD disaggregation.
  • Multiple bugs were fixed to enable GLM-5 training, including patches to the transformers package and checkpoint handling, allowing for a streamlined one-script run.
  • Known issues include potential hangs in the train_actor stage when Context Parallelism (CP) is enabled, and possible failures in FP8 rollout after the first iteration's weight update.

@gemini-code-assist bot left a comment


Code Review

The pull request introduces support for the GLM-5 model, including a new model implementation, custom CUDA kernels via TileLang, and integration with the training pipeline. However, a security audit identified medium-severity vulnerabilities: command injection in the training script scripts/run_glm5_744b_a40b.py due to unsanitized CLI arguments, and potential sensitive-information leakage in miles/utils/typer_utils.py, which prints all CLI arguments and risks exposing secrets such as W&B API keys. In addition, critical issues were found with hardcoded dimensions in the CUDA kernels and weight-conversion scripts that will cause failures when running with the GLM-5 configuration, and a bug in the indexer's autograd function will lead to runtime errors during the backward pass.

def backward(ctx, grad_scores, grad_indices):
    index_q, index_k, weights, cu_seqlen_ks, cu_seqlen_ke, topk_indices = ctx.saved_tensors
    grad_q, grad_w, grad_k = indexer_bwd_interface(index_q, weights, index_k, topk_indices, grad_scores)
    return grad_q, grad_k, grad_w, None, None, None, None, None, None, None

critical

The backward method returns 10 values, but the forward method only receives 7 arguments (excluding ctx): index_q, index_k, weights, cu_seqlen_ks, cu_seqlen_ke, topk, and topk_indices. In PyTorch, the backward method must return exactly one gradient for each input to forward. This mismatch will cause a RuntimeError during training.

Suggested change:
-    return grad_q, grad_k, grad_w, None, None, None, None, None, None, None
+    return grad_q, grad_k, grad_w, None, None, None, None
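
For reference, PyTorch's autograd contract requires backward to return exactly one value (a gradient tensor or None) per positional argument of forward. The toy function below is unrelated to the indexer and only illustrates that contract.

import torch

class ScaledSum(torch.autograd.Function):
    # Toy example: forward takes 3 inputs (x, scale, flag), so backward
    # must return exactly 3 values, using None for non-differentiable ones.
    @staticmethod
    def forward(ctx, x, scale, flag):
        ctx.save_for_backward(x)
        ctx.scale = scale
        return (x * scale).sum()

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * ctx.scale * torch.ones_like(x), None, None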

assert kv.shape[-1] == dim_plus_tail_dim
assert kv.shape[0] == B
# dim should be assigned
D = 512

critical

The head dimension D is hardcoded to 512. For GLM-5, where the query/key head dimension is 192 (or 256 including RoPE), this hardcoded value will lead to incorrect slicing and potential runtime errors (e.g., negative D_tail). This should be derived from the input tensor shape or configuration.

batch, seq_len, heads, dim_plus_tail_dim = q.shape
_, seq_len_kv, kv_group, _ = kv.shape

assert dim_plus_tail_dim == 576, "you should assign dim otherwise"

critical

The dimension dim_plus_tail_dim is hardcoded to 576 in this assertion. However, the GLM-5 configuration in scripts/models/glm5-744B-A40B.sh uses --qk-head-dim 192 and --qk-pos-emb-head-dim 64, which totals 256. This kernel will fail for GLM-5 due to this dimension mismatch.
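
A hedged sketch of the kind of fix both findings above point at: derive the head dimensions from the model config (or tensor shapes) rather than asserting fixed literals. The attribute names qk_head_dim and qk_pos_emb_head_dim mirror the flags in scripts/models/glm5-744B-A40B.sh; the helper itself is illustrative, not the kernel's actual code.

def split_head_dims(q, config):
    # Illustrative: recover the main and RoPE-tail head dims from the config
    # instead of hardcoding 512 / 576.
    batch, seq_len, heads, dim_plus_tail_dim = q.shape
    d_main = config.qk_head_dim            # e.g. 192 for GLM-5
    d_tail = config.qk_pos_emb_head_dim    # e.g. 64 for GLM-5
    assert dim_plus_tail_dim == d_main + d_tail, (
        f"head dim mismatch: {dim_plus_tail_dim} != {d_main} + {d_tail}"
    )
    return d_main, d_tail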

Comment on lines +166 to +175
U.convert_checkpoint(
    model_name=args.model_name,
    megatron_model_type=args.megatron_model_type,
    num_gpus_per_node=num_gpus_per_node,
    multinode=multinode,
    num_nodes=num_nodes,
    extra_args=extra_args,
    dir_dst=args.model_dir,
    megatron_path=args.megatron_path,
)

security (medium)

The U.convert_checkpoint function is called with several unsanitized CLI arguments, which likely leads to command injection in the underlying implementation.

Comment on lines +375 to +386
U.execute_train(
    train_args=train_args,
    config=args,
    num_gpus_per_node=args.num_gpus_per_node,
    megatron_model_type=args.megatron_model_type,
    extra_env_vars={
        **sglang_extra_env_vars,
        "INDEXER_ROPE_NEOX_STYLE": "0",
        "NVSHMEM_DISABLE_NCCL": "1",
    },
    megatron_path=args.megatron_path,
)

security (medium)

The U.execute_train function is called with a large command string constructed from multiple unsanitized CLI arguments. This is a major command injection vector.



def _prepare_download(args: ScriptArgs):
    U.exec_command(f"mkdir -p {args.model_dir} {args.data_dir}")

security (medium)

Unsanitized input from args.model_dir and args.data_dir is used to construct a shell command. This is vulnerable to command injection.
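
A minimal sketch of the mitigation these three findings suggest: pass an argument list to subprocess without a shell, or quote every interpolated value with shlex.quote when a shell string is unavoidable. This is generic Python, not the existing U.exec_command implementation.

import shlex
import subprocess

def make_dirs_safely(model_dir: str, data_dir: str) -> None:
    # Preferred: argument list, no shell involved, so no injection surface.
    subprocess.run(["mkdir", "-p", model_dir, data_dir], check=True)

    # If a shell string is unavoidable, quote each interpolated value.
    cmd = f"mkdir -p {shlex.quote(model_dir)} {shlex.quote(data_dir)}"
    subprocess.run(cmd, shell=True, check=True)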

def fuse_rope(q, cu_seqlens, gathered=False):
    # worse precision than apex.
    # from megatron.core.extensions.transformer_engine import fused_apply_rotary_pos_emb_thd
    from apex.transformer.functional import fused_apply_rotary_pos_emb_thd

medium

Importing apex inside the fuse_rope function, which is called during every forward pass, introduces unnecessary overhead. Although Python caches imports, the lookup still happens. It is better to move this import to the top of the file.
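
The shape of that change, assuming the apex import path shown in the snippet: resolve the import once at module scope, with an optional guard if apex may be missing in some environments.

# At module top, instead of inside fuse_rope():
try:
    from apex.transformer.functional import fused_apply_rotary_pos_emb_thd
except ImportError:  # apex is optional in some environments
    fused_apply_rotary_pos_emb_thd = None

def fuse_rope(q, cu_seqlens, gathered=False):
    if fused_apply_rotary_pos_emb_thd is None:
        raise RuntimeError("apex is required for fused RoPE in this code path")
    ...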

tp_group=self.pg_collection.tp,
)

self.index_topk = 2048

medium

The index_topk value is hardcoded to 2048. This parameter should ideally be retrieved from the model configuration to ensure consistency with the model's architecture definition.

hidden_size=self.config.index_head_dim,
config=self.config,
# The layernorm eps is hardcoded at the moment
eps=1e-6,

medium

The epsilon for k_norm is hardcoded to 1e-6. It is safer to use the layernorm_epsilon from the configuration to avoid discrepancies between training and inference or across different model variants.
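
A hedged sketch covering both this and the previous point: read index_topk and the k_norm epsilon from the config, keeping the current literals only as fallbacks. Whether the config actually exposes an index_topk attribute is an assumption here.

class DSAMLASelfAttentionSketch:
    # Illustrative fragment only; mirrors the snippets quoted above.
    def __init__(self, config):
        self.config = config
        # Prefer config-driven values over hardcoded literals, with the
        # current constants kept as fallbacks.
        self.index_topk = getattr(config, "index_topk", 2048)
        self.k_norm_eps = getattr(config, "layernorm_epsilon", 1e-6)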

if "self_attention.wq_b.weight" in mcore_weights_name:
hf_names = self._weight_name_mapping_mcore_to_hf(mcore_weights_name)
wq_b = mcore_weights
wq_b = wq_b.view(-1, 128, wq_b.shape[-1]) # hard code 128

medium

The index_head_dim (128) is hardcoded here and in the inverse operation. This should be dynamic based on the model configuration to support variants with different indexing dimensions.
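
The "rope reordering (swapping halves)" mentioned in the changelog can be sketched as below, with the head dimension passed in rather than hardcoded to 128. The function name and exact reshape are illustrative assumptions, not the converter's actual code.

import torch

def swap_rope_halves(w: torch.Tensor, head_dim: int) -> torch.Tensor:
    # Illustrative: view the weight as (heads, head_dim, in_features) and
    # swap the two halves of each head's rope dimensions.
    out_features = w.shape[-1]
    w = w.view(-1, head_dim, out_features)
    first, second = w.chunk(2, dim=1)
    return torch.cat([second, first], dim=1).reshape(-1, out_features)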
