metaseq · Merge request !206

Using reshard_megatron_parts instead of glue_megatron_parts after changes in #169

Merged Administrator requested to merge punitkoura/fix-convert-to-singleton into main Jul 06, 2022

Created by: punitkoura

Patch Description

Bug fix for convert_to_singleton.py: use reshard_megatron_parts instead of glue_megatron_parts after the changes in #169.
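Conceptually, the old path glued the Megatron model-parallel shards back together, while reshard_megatron_parts reshards them to an arbitrary part count; resharding to a single part yields the consolidated model. A minimal stdlib-only sketch of the one-part case (the function name, the partition_dim map, and the use of nested lists in place of tensors are illustrative stand-ins, not metaseq's actual API):

```python
def reshard_to_one_part(model_parts, partition_dim):
    """Combine model-parallel weight shards into a single state dict.

    model_parts: list of dicts mapping parameter name -> 2-D list
    (a stand-in for a tensor). partition_dim says, per parameter,
    which dimension was split across model-parallel ranks (0 = rows,
    1 = columns); parameters absent from the map are replicated,
    so the copy from part 0 is kept.
    """
    merged = {}
    for name in model_parts[0]:
        shards = [part[name] for part in model_parts]
        dim = partition_dim.get(name)
        if dim is None:
            # Replicated parameter: every rank holds the same values.
            merged[name] = shards[0]
        elif dim == 0:
            # Row-parallel: stack the row blocks from each shard.
            merged[name] = [row for shard in shards for row in shard]
        else:
            # Column-parallel: concatenate each row across shards.
            merged[name] = [
                sum((shard[i] for shard in shards), [])
                for i in range(len(shards[0]))
            ]
    return merged


# Two model-parallel ranks, "w" split along rows, "b" replicated.
parts = [
    {"w": [[1, 2]], "b": [[9]]},
    {"w": [[3, 4]], "b": [[9]]},
]
single = reshard_to_one_part(parts, {"w": 0})
```

The log lines about "max discrepancy" below fit this picture: replicated parameters can drift slightly between ranks during mixed-precision training, and the stitching code reports how far apart the copies are before picking one.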

Testing steps

ls -a 125m/
.  ..  dict.txt  gpt2-merges.txt  gpt2-vocab.json  reshard-model_part-0.pt  reshard-model_part-1.pt
python -m metaseq.scripts.convert_to_singleton 125m

['--model-parallel-size', '2', '--distributed-world-size', '2', '--task', 'language_modeling', '--bpe-merges', '125m/gpt2-merges.txt', '--merges-filename', '125m/gpt2-merges.txt', '--bpe-vocab', '125m/gpt2-vocab.json', '--vocab-filename', '125m/gpt2-vocab.json', '--bpe', 'hf_byte_bpe', '--path', '125m/reshard.pt', '--checkpoint-shard-count', '1', '--use-sharded-state', '125m']
2022-07-06 20:24:19 | INFO | metaseq.distributed.utils | distributed init (rank 0): tcp://localhost:17628
2022-07-06 20:24:23 | INFO | metaseq.distributed.utils | distributed init (rank 1): tcp://localhost:17628
2022-07-06 20:24:23 | INFO | torch.distributed.distributed_c10d | Added key: store_based_barrier_key:1 to store for rank: 1
2022-07-06 20:24:23 | INFO | torch.distributed.distributed_c10d | Added key: store_based_barrier_key:1 to store for rank: 0
2022-07-06 20:24:23 | INFO | torch.distributed.distributed_c10d | Rank 0: Completed store-based barrier for key:store_based_barrier_key:1 with 2 nodes.
2022-07-06 20:24:23 | INFO | metaseq.distributed.utils | initialized host fairwus3-1-htc-81 as rank 0
2022-07-06 20:24:23 | INFO | torch.distributed.distributed_c10d | Rank 1: Completed store-based barrier for key:store_based_barrier_key:1 with 2 nodes.
2022-07-06 20:24:23 | INFO | metaseq.distributed.utils | initialized host fairwus3-1-htc-81 as rank 1
> initializing tensor model parallel with size 2
> initializing pipeline model parallel with size 1
2022-07-06 20:24:38 | INFO | torch.distributed.distributed_c10d | Added key: store_based_barrier_key:2 to store for rank: 0
2022-07-06 20:24:38 | INFO | torch.distributed.distributed_c10d | Rank 0: Completed store-based barrier for key:store_based_barrier_key:2 with 2 nodes.
2022-07-06 20:24:38 | INFO | torch.distributed.distributed_c10d | Added key: store_based_barrier_key:3 to store for rank: 0
2022-07-06 20:24:38 | INFO | torch.distributed.distributed_c10d | Rank 0: Completed store-based barrier for key:store_based_barrier_key:3 with 2 nodes.
2022-07-06 20:24:38 | INFO | torch.distributed.distributed_c10d | Added key: store_based_barrier_key:4 to store for rank: 0
2022-07-06 20:24:38 | INFO | torch.distributed.distributed_c10d | Rank 0: Completed store-based barrier for key:store_based_barrier_key:4 with 2 nodes.
2022-07-06 20:24:38 | INFO | torch.distributed.distributed_c10d | Added key: store_based_barrier_key:5 to store for rank: 0
2022-07-06 20:24:38 | INFO | torch.distributed.distributed_c10d | Rank 0: Completed store-based barrier for key:store_based_barrier_key:5 with 2 nodes.
> initializing model parallel cuda seeds on global rank 0, model parallel rank 0, and data parallel rank 0 with model parallel seed: 2719 and data parallel seed: 1
2022-07-06 20:24:44 | INFO | metaseq.checkpoint_utils | Done reading from disk
2022-07-06 20:24:45 | INFO | metaseq.modules.fused_bias_gelu | Compiling and loading fused kernels

NOTE: If this hangs here, your megatron fused kernels may be corrupted. This can happen if a previous job is interrupted during a build. In that case, delete the megatron build directory and relaunch training. The megatron build directory is located at: /shared/home/punitkoura/src/Megatron-LM/megatron/fused_kernels/build
Detected CUDA files, patching ldflags
Emitting ninja build file /shared/home/punitkoura/src/Megatron-LM/megatron/fused_kernels/build/build.ninja...
Building extension module scaled_upper_triang_masked_softmax_cuda...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module scaled_upper_triang_masked_softmax_cuda...
Detected CUDA files, patching ldflags
Emitting ninja build file /shared/home/punitkoura/src/Megatron-LM/megatron/fused_kernels/build/build.ninja...
Building extension module scaled_masked_softmax_cuda...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module scaled_masked_softmax_cuda...
Detected CUDA files, patching ldflags
Emitting ninja build file /shared/home/punitkoura/src/Megatron-LM/megatron/fused_kernels/build/build.ninja...
Building extension module fused_mix_prec_layer_norm_cuda...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module fused_mix_prec_layer_norm_cuda...
2022-07-06 20:24:50 | INFO | metaseq.modules.fused_bias_gelu | Done with compiling and loading fused kernels.
2022-07-06 20:24:54 | INFO | metaseq.checkpoint_utils | Done loading state dict
2022-07-06 20:24:54 | INFO | torch.distributed.distributed_c10d | Added key: store_based_barrier_key:6 to store for rank: 0
2022-07-06 20:24:54 | INFO | torch.distributed.distributed_c10d | Rank 0: Completed store-based barrier for key:store_based_barrier_key:6 with 2 nodes.
2022-07-06 20:25:02 | INFO | metaseq.distributed.stitch_fsdp_ckpt | max discrepancy decoder.layers.7.fc2.bias: 0.0009765625
2022-07-06 20:25:02 | INFO | metaseq.distributed.stitch_fsdp_ckpt | max discrepancy decoder.layers.7.final_layer_norm.weight: 0.000457763671875
2022-07-06 20:25:02 | INFO | metaseq.distributed.stitch_fsdp_ckpt | max discrepancy decoder.layers.7.final_layer_norm.bias: 0.00128173828125
2022-07-06 20:25:02 | INFO | metaseq.distributed.stitch_fsdp_ckpt | max discrepancy decoder.layers.8.self_attn.out_proj.bias: 0.00079345703125
2022-07-06 20:25:02 | INFO | metaseq.distributed.stitch_fsdp_ckpt | max discrepancy decoder.layers.8.self_attn_layer_norm.bias: 0.000946044921875
2022-07-06 20:25:02 | INFO | metaseq.distributed.stitch_fsdp_ckpt | max discrepancy decoder.layers.9.fc2.bias: 0.000762939453125
2022-07-06 20:25:02 | INFO | metaseq.distributed.stitch_fsdp_ckpt | max discrepancy decoder.layers.9.final_layer_norm.weight: 0.001373291015625
2022-07-06 20:25:02 | INFO | metaseq.distributed.stitch_fsdp_ckpt | max discrepancy decoder.layers.9.final_layer_norm.bias: 0.001678466796875
2022-07-06 20:25:02 | INFO | metaseq.distributed.stitch_fsdp_ckpt | max discrepancy decoder.layers.10.self_attn.out_proj.bias: 0.0011138916015625
2022-07-06 20:25:02 | INFO | metaseq.distributed.stitch_fsdp_ckpt | max discrepancy decoder.layers.10.self_attn_layer_norm.bias: 0.00146484375
2022-07-06 20:25:02 | INFO | metaseq.checkpoint_utils | Done reading from disk
(fairseq-20220503) punitkoura@fairwus3-1-htc-81:~/checkpoints$ ls -a 125m/
.  ..  dict.txt  gpt2-merges.txt  gpt2-vocab.json  reshard-model_part-0.pt  reshard-model_part-1.pt  restored.pt
Source branch: punitkoura/fix-convert-to-singleton