diff --git a/docs/contributor/github-workflow.md b/docs/contributor/github-workflow.md
index 8582a392..a362a3b5 100644
--- a/docs/contributor/github-workflow.md
+++ b/docs/contributor/github-workflow.md
@@ -110,7 +110,7 @@ in a few cycles.
## Push
-When ready to review (or just to establish an offsite backup of your work),
+When ready to review (or to establish an offsite backup of your work),
push your branch to your fork on `github.com`:
```sh
diff --git a/docs/contributor/governance.md b/docs/contributor/governance.md
index 5cbbfb25..ca077537 100644
--- a/docs/contributor/governance.md
+++ b/docs/contributor/governance.md
@@ -19,7 +19,7 @@ The HAMi and its leadership embrace the following values:
priority over shipping code or sponsors' organizational goals. Each
contributor participates in the project as an individual.
-* Inclusivity: We innovate through different perspectives and skill sets, which
+* Inclusivity: Innovation comes from different perspectives and skill sets, and this
can only be accomplished in a welcoming and respectful environment.
* Participation: Responsibilities within the project are earned through
diff --git a/versioned_docs/version-v1.3.0/contributor/contributing.md b/versioned_docs/version-v1.3.0/contributor/contributing.md
index fab6d31c..72f9676d 100644
--- a/versioned_docs/version-v1.3.0/contributor/contributing.md
+++ b/versioned_docs/version-v1.3.0/contributor/contributing.md
@@ -20,7 +20,7 @@ HAMi is a community project driven by its community which strives to promote a h
## Your First Contribution
-We will help you to contribute in different areas like filing issues, developing features, fixing critical bugs and
+Help is available for contributing in areas like filing issues, developing features, fixing critical bugs and
getting your work reviewed and merged.
If you have questions about the development process,
@@ -28,7 +28,7 @@ feel free to [file an issue](https://github.com/Project-HAMi/HAMi/issues/new/cho
## Find something to work on
-We are always in need of help, be it fixing documentation, reporting bugs or writing some code.
+Help is always welcome: fixing documentation, reporting bugs, or writing code.
Look at places where you feel best coding practices aren't followed, code refactoring is needed or tests are missing.
Here is how you get started.
@@ -40,18 +40,18 @@ For example, [Project-HAMi/HAMi](https://github.com/Project-HAMi/HAMi) has
[help wanted](https://github.com/Project-HAMi/HAMi/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22) and
[good first issue](https://github.com/Project-HAMi/HAMi/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
labels for issues that should not need deep knowledge of the system.
-We can help new contributors who wish to work on such issues.
+Maintainers can help new contributors who wish to work on such issues.
Another good way to contribute is to find a documentation improvement, such as a missing/broken link.
Please see [Contributor Workflow](#contributor-workflow) below for the workflow.
#### Work on an issue
-When you are willing to take on an issue, just reply on the issue. The maintainer will assign it to you.
+When you are willing to take on an issue, reply on the issue. The maintainer will assign it to you.
### File an Issue
-While we encourage everyone to contribute code, it is also appreciated when someone reports an issue.
+Code contributions are welcome, and bug reports are equally appreciated.
Issues should be filed under the appropriate HAMi sub-repository.
*Example:* a HAMi issue should be opened to [Project-HAMi/HAMi](https://github.com/Project-HAMi/HAMi/issues).
diff --git a/versioned_docs/version-v1.3.0/contributor/governance.md b/versioned_docs/version-v1.3.0/contributor/governance.md
index f49b23b7..aaf1e568 100644
--- a/versioned_docs/version-v1.3.0/contributor/governance.md
+++ b/versioned_docs/version-v1.3.0/contributor/governance.md
@@ -19,7 +19,7 @@ The HAMi and its leadership embrace the following values:
priority over shipping code or sponsors' organizational goals. Each
contributor participates in the project as an individual.
-* Inclusivity: We innovate through different perspectives and skill sets, which
+* Inclusivity: Innovation comes from different perspectives and skill sets, and this
can only be accomplished in a welcoming and respectful environment.
* Participation: Responsibilities within the project are earned through
diff --git a/versioned_docs/version-v1.3.0/contributor/ladder.md b/versioned_docs/version-v1.3.0/contributor/ladder.md
index 68ee4d27..b5db0b30 100644
--- a/versioned_docs/version-v1.3.0/contributor/ladder.md
+++ b/versioned_docs/version-v1.3.0/contributor/ladder.md
@@ -4,7 +4,7 @@ title: Contributor Ladder
This docs different ways to get involved and level up within the project. You can see different roles within the project in the contributor roles.
-Hello! We are excited that you want to learn more about our project contributor ladder! This contributor ladder outlines the different contributor roles within the project, along with the responsibilities and privileges that come with them. Community members generally start at the first levels of the "ladder" and advance up it as their involvement in the project grows. Our project members are happy to help you advance along the contributor ladder.
+This contributor ladder outlines the different contributor roles within the project, along with the responsibilities and privileges that come with them.
Each of the contributor roles below is organized into lists of three types of things. "Responsibilities" are things that a contributor is expected to do. "Requirements" are qualifications a person needs to meet to be in that role, and "Privileges" are things contributors on that level are entitled to.
@@ -45,7 +45,7 @@ Description: A Contributor contributes directly to the project and adds value to
* Invitations to contributor events
* Eligible to become an Organization Member
-A very special thanks to the [long list of people](https://github.com/Project-HAMi/HAMi/blob/master/AUTHORS.md) who have contributed to and helped maintain the project. We wouldn't be where we are today without your contributions. Thank you! 💖
+A very special thanks to the [long list of people](https://github.com/Project-HAMi/HAMi/blob/master/AUTHORS.md) who have contributed to and helped maintain the project.
As long as you contribute to HAMi, your name will be added [here](https://github.com/Project-HAMi/HAMi/blob/master/AUTHORS.md). If you don't find your name, please contact us to add it.
@@ -140,7 +140,7 @@ The current list of maintainers can be found in the [MAINTAINERS](https://github
New maintainers are added by consensus among the current group of maintainers. This can be done via a private discussion via Slack or email. A majority of maintainers should support the addition of the new person, and no single maintainer should object to adding the new maintainer.
-When adding a new maintainer, we should file a PR to [HAMi](https://github.com/Project-HAMi/HAMi) and update [MAINTAINERS](https://github.com/Project-HAMi/HAMi/blob/master/MAINTAINERS.md). Once this PR is merged, you will become a maintainer of HAMi.
+When adding a new maintainer, file a PR to [HAMi](https://github.com/Project-HAMi/HAMi) and update [MAINTAINERS](https://github.com/Project-HAMi/HAMi/blob/master/MAINTAINERS.md). Once this PR is merged, you will become a maintainer of HAMi.
## Removing Maintainers
diff --git a/versioned_docs/version-v1.3.0/developers/bash-auto-completion-on-linux.md b/versioned_docs/version-v1.3.0/developers/bash-auto-completion-on-linux.md
index 489f46c4..6b22baf8 100644
--- a/versioned_docs/version-v1.3.0/developers/bash-auto-completion-on-linux.md
+++ b/versioned_docs/version-v1.3.0/developers/bash-auto-completion-on-linux.md
@@ -47,4 +47,4 @@ Both approaches are equivalent. After reloading your shell, karmadactl autocompl
## Enable kubectl-karmada autocompletion
Currently, kubectl plugins do not support autocomplete, but it is already planned in [Command line completion for kubectl plugins](https://github.com/kubernetes/kubernetes/issues/74178).
-We will update the documentation as soon as it does.
+Documentation will be updated when support is added.
diff --git a/versioned_docs/version-v1.3.0/developers/dynamic-mig.md b/versioned_docs/version-v1.3.0/developers/dynamic-mig.md
index ccdde8a9..05467a8d 100644
--- a/versioned_docs/version-v1.3.0/developers/dynamic-mig.md
+++ b/versioned_docs/version-v1.3.0/developers/dynamic-mig.md
@@ -10,8 +10,8 @@ This feature will not be implemented without the help of @sailorvii.
## Introduction
-The NVIDIA GPU build-in sharing method includes: time-slice, MPS and MIG. The context switch for time slice sharing would waste some time, so we chose the MPS and MIG. The GPU MIG profile is variable, the user could acquire the MIG device in the profile definition, but current implementation only defines the dedicated profile before the user requirement. That limits the usage of MIG. We want to develop an automatic slice plugin and create the slice when the user require it.
-For the scheduling method, node-level binpack and spread will be supported. Referring to the binpack plugin, we consider the CPU, Mem, GPU memory and other user-defined resource.
+The NVIDIA GPU built-in sharing methods include time-slice, MPS and MIG. The context switch for time-slice sharing wastes some time, so MPS and MIG are preferred. The GPU MIG profile is variable: the user can acquire a MIG device by profile definition, but the current implementation only defines dedicated profiles ahead of user requirements. That limits the usage of MIG. The goal is an automatic slice plugin that creates slices on demand.
+For the scheduling method, node-level binpack and spread will be supported. Referring to the binpack plugin, the scheduler considers CPU, memory, GPU memory, and other user-defined resources.
HAMi is done by using [hami-core](https://github.com/Project-HAMi/HAMi-core), which is a cuda-hacking library. But mig is also widely used across the world. A unified API for dynamic-mig and hami-core is needed.
## Targets
diff --git a/versioned_docs/version-v1.3.0/developers/scheduling.md b/versioned_docs/version-v1.3.0/developers/scheduling.md
index 02270146..7f8ce100 100644
--- a/versioned_docs/version-v1.3.0/developers/scheduling.md
+++ b/versioned_docs/version-v1.3.0/developers/scheduling.md
@@ -8,7 +8,7 @@ Current in a cluster with many GPU nodes, nodes are not `binpack` or `spread` wh
## Proposal
-We add a `node-scheduler-policy` and `gpu-scheduler-policy` to config, then scheduler to use this policy can impl node `binpack` or `spread` or GPU `binpack` or `spread`. and
+A `node-scheduler-policy` and a `gpu-scheduler-policy` are added to the config, so the scheduler can implement node-level `binpack` or `spread` and GPU-level `binpack` or `spread`, and
use can set Pod annotation to change this default policy, use `hami.io/node-scheduler-policy` and `hami.io/gpu-scheduler-policy` to overlay scheduler config.
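For illustration, a minimal Pod sketch using the two annotations named above to override the configured defaults (the container image and resource request are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-task
  annotations:
    hami.io/node-scheduler-policy: "binpack"   # overrides the default node-scheduler-policy for this Pod
    hami.io/gpu-scheduler-policy: "spread"     # overrides the default gpu-scheduler-policy for this Pod
spec:
  containers:
    - name: cuda-container
      image: nvidia/cuda:11.6.2-base-ubuntu20.04
      resources:
        limits:
          nvidia.com/gpu: 1
```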
### User Stories
@@ -104,7 +104,7 @@ Node1 score: ((1+3)/4) * 10= 10
Node2 score: ((1+2)/4) * 10= 7.5
```
-So, in `Binpack` policy we can select `Node1`.
+So, in `Binpack` policy, the selected node is `Node1`.
#### Spread
@@ -124,7 +124,7 @@ Node1 score: ((1+3)/4) * 10= 10
Node2 score: ((1+2)/4) * 10= 7.5
```
-So, in `Spread` policy we can select `Node2`.
+So, in `Spread` policy, the selected node is `Node2`.
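The node scores above follow a single formula; a small Python sketch (identifiers are illustrative, not from the implementation) shows that `Binpack` and `Spread` differ only in whether the highest or the lowest score wins:

```python
def node_score(used: float, requested: float, capacity: float) -> float:
    """Projected utilization after placement, scaled to 0-10."""
    return (used + requested) / capacity * 10

# Values from the example: 4 GPU units per node, request of 1.
scores = {"Node1": node_score(3, 1, 4),   # 10.0
          "Node2": node_score(2, 1, 4)}   # 7.5

binpack_choice = max(scores, key=scores.get)  # Binpack packs onto the busiest node
spread_choice = min(scores, key=scores.get)   # Spread picks the least busy node
```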
### GPU-scheduler-policy
@@ -147,7 +147,7 @@ GPU1 Score: ((20+10)/100 + (1000+2000)/8000)) * 10 = 6.75
GPU2 Score: ((20+70)/100 + (1000+6000)/8000)) * 10 = 17.75
```
-So, in `Binpack` policy we can select `GPU2`.
+So, in `Binpack` policy, the selected GPU is `GPU2`.
#### Spread
@@ -166,4 +166,4 @@ GPU1 Score: ((20+10)/100 + (1000+2000)/8000)) * 10 = 6.75
GPU2 Score: ((20+70)/100 + (1000+6000)/8000)) * 10 = 17.75
```
-So, in `Spread` policy we can select `GPU1`.
+So, in `Spread` policy, the selected GPU is `GPU1`.
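The GPU scores combine core and memory utilization. A Python sketch of the formula used in the two examples above (identifiers are illustrative):

```python
def gpu_score(used_core, req_core, used_mem, req_mem, total_mem):
    """Core utilization is a percentage (capacity 100); memory is in MB."""
    return ((used_core + req_core) / 100 + (used_mem + req_mem) / total_mem) * 10

# Values from the example: each GPU has 8000 MB of memory.
scores = {"GPU1": gpu_score(20, 10, 1000, 2000, 8000),   # 6.75
          "GPU2": gpu_score(20, 70, 1000, 6000, 8000)}   # 17.75

binpack_choice = max(scores, key=scores.get)  # Binpack selects GPU2
spread_choice = min(scores, key=scores.get)   # Spread selects GPU1
```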
diff --git a/versioned_docs/version-v1.3.0/faq/faq.md b/versioned_docs/version-v1.3.0/faq/faq.md
index c52d50a2..c057a888 100644
--- a/versioned_docs/version-v1.3.0/faq/faq.md
+++ b/versioned_docs/version-v1.3.0/faq/faq.md
@@ -25,9 +25,8 @@ It's worth noting that not all controllers are needed by Karmada, for the recomm
## Can I install Karmada in a Kubernetes cluster and reuse the kube-apiserver as Karmada apiserver?
The quick answer is `yes`. In that case, you can save the effort to deploy
-[karmada-apiserver](https://github.com/karmada-io/karmada/blob/master/artifacts/deploy/karmada-apiserver.yaml) and just
-share the APIServer between Kubernetes and Karmada. In addition, the high availability capabilities in the origin clusters
-can be inherited seamlessly. We do have some users using Karmada in this way.
+[karmada-apiserver](https://github.com/karmada-io/karmada/blob/master/artifacts/deploy/karmada-apiserver.yaml) and share the APIServer between Kubernetes and Karmada. In addition, the high availability capabilities in the origin clusters
+can be inherited. Some users run Karmada this way.
There are some things you should consider before doing so:
diff --git a/versioned_docs/version-v1.3.0/roadmap.md b/versioned_docs/version-v1.3.0/roadmap.md
index a50c8db3..8793e529 100644
--- a/versioned_docs/version-v1.3.0/roadmap.md
+++ b/versioned_docs/version-v1.3.0/roadmap.md
@@ -6,7 +6,7 @@ title: Karmada Roadmap
This document defines a high level roadmap for Karmada development and upcoming releases.
Community and contributor involvement is vital for successfully implementing all desired items for each release.
-We hope that the items listed below will inspire further engagement from the community to keep karmada progressing and shipping exciting and valuable features.
+The items below are intended to inspire further community engagement to keep Karmada progressing and shipping exciting and valuable features.
## 2022 H1
diff --git a/versioned_docs/version-v1.3.0/troubleshooting/troubleshooting.md b/versioned_docs/version-v1.3.0/troubleshooting/troubleshooting.md
index f4e9a15a..f99bfb8a 100644
--- a/versioned_docs/version-v1.3.0/troubleshooting/troubleshooting.md
+++ b/versioned_docs/version-v1.3.0/troubleshooting/troubleshooting.md
@@ -6,6 +6,6 @@ title: Troubleshooting
- Currently, A100 MIG can be supported in only "none" and "mixed" modes.
- Tasks with the "nodeName" field cannot be scheduled at the moment; please use "nodeSelector" instead.
- Only computing tasks are currently supported; video codec processing is not supported.
-- We change `device-plugin` env var name from `NodeName` to `NODE_NAME`, if you use the image version `v2.3.9`, you may encounter the situation that `device-plugin` cannot start, there are two ways to fix it:
+- The `device-plugin` env var name changed from `NodeName` to `NODE_NAME`. If you use the image version `v2.3.9`, `device-plugin` may fail to start; there are two ways to fix it:
- Manually execute `kubectl edit daemonset` to modify the `device-plugin` env var from `NodeName` to `NODE_NAME`.
- Upgrade to the latest version using helm, the latest version of `device-plugin` image version is `v2.3.10`, execute `helm upgrade hami hami/hami -n kube-system`, it will be fixed automatically.
diff --git a/versioned_docs/version-v1.3.0/userguide/cambricon-device/enable-cambricon-mlu-sharing.md b/versioned_docs/version-v1.3.0/userguide/cambricon-device/enable-cambricon-mlu-sharing.md
index ae498abe..40cbeda5 100644
--- a/versioned_docs/version-v1.3.0/userguide/cambricon-device/enable-cambricon-mlu-sharing.md
+++ b/versioned_docs/version-v1.3.0/userguide/cambricon-device/enable-cambricon-mlu-sharing.md
@@ -4,7 +4,7 @@ title: Enable cambricon MLU sharing
## Introduction
-**We now support cambricon.com/mlu by implementing most device-sharing features as nvidia-GPU**, including:
+**HAMi now supports cambricon.com/mlu by implementing most of the device-sharing features available for NVIDIA GPUs**, including:
***MLU sharing***: Each task can allocate a portion of MLU instead of a whole MLU card, thus MLU can be shared among multiple tasks.
diff --git a/versioned_docs/version-v1.3.0/userguide/configure.md b/versioned_docs/version-v1.3.0/userguide/configure.md
index 8e3a34f3..536c889e 100644
--- a/versioned_docs/version-v1.3.0/userguide/configure.md
+++ b/versioned_docs/version-v1.3.0/userguide/configure.md
@@ -18,7 +18,7 @@ You can update these configurations using one of the following methods:
2. Modify Helm Chart: Update the corresponding values in the [ConfigMap](https://raw.githubusercontent.com/archlitchi/HAMi/refs/heads/master/charts/hami/templates/scheduler/device-configmap.yaml), then reapply the Helm Chart to regenerate the ConfigMap.
* `nvidia.deviceMemoryScaling:`
- Float type, by default: 1. The ratio for NVIDIA device memory scaling, can be greater than 1 (enable virtual device memory, experimental feature). For NVIDIA GPU with *M* memory, if we set `nvidia.deviceMemoryScaling` argument to *S*, vGPUs split by this GPU will totally get `S * M` memory in Kubernetes with our device plugin.
+ Float type, by default: 1. The ratio for NVIDIA device memory scaling; it can be greater than 1 (enables virtual device memory, an experimental feature). For an NVIDIA GPU with *M* memory, if the `nvidia.deviceMemoryScaling` argument is set to *S*, the vGPUs split from this GPU will get a total of `S * M` memory in Kubernetes with the HAMi device plugin.
* `nvidia.deviceSplitCount:`
Integer type, by default: equals 10. Maximum tasks assigned to a simple GPU device.
* `nvidia.migstrategy:`
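As a sketch, the two options above might appear in the device ConfigMap roughly as follows (the exact nesting should be verified against the chart's `device-configmap.yaml`):

```yaml
nvidia:
  deviceMemoryScaling: 1.5   # a GPU with 16Gi is presented as 1.5 * 16Gi = 24Gi of virtual device memory
  deviceSplitCount: 10       # at most 10 tasks may share a single GPU
```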
diff --git a/versioned_docs/version-v1.3.0/userguide/hygon-device/enable-hygon-dcu-sharing.md b/versioned_docs/version-v1.3.0/userguide/hygon-device/enable-hygon-dcu-sharing.md
index a90f4086..ef143119 100644
--- a/versioned_docs/version-v1.3.0/userguide/hygon-device/enable-hygon-dcu-sharing.md
+++ b/versioned_docs/version-v1.3.0/userguide/hygon-device/enable-hygon-dcu-sharing.md
@@ -4,7 +4,7 @@ title: Enable Hygon DCU sharing
## Introduction
-**We now support hygon.com/dcu by implementing most device-sharing features as nvidia-GPU**, including:
+**HAMi now supports hygon.com/dcu by implementing most of the device-sharing features available for NVIDIA GPUs**, including:
***DCU sharing***: Each task can allocate a portion of DCU instead of a whole DCU card, thus DCU can be shared among multiple tasks.
diff --git a/versioned_docs/version-v1.3.0/userguide/metax-device/enable-metax-gpu-schedule.md b/versioned_docs/version-v1.3.0/userguide/metax-device/enable-metax-gpu-schedule.md
index 164f7403..2b3cf90f 100644
--- a/versioned_docs/version-v1.3.0/userguide/metax-device/enable-metax-gpu-schedule.md
+++ b/versioned_docs/version-v1.3.0/userguide/metax-device/enable-metax-gpu-schedule.md
@@ -2,7 +2,7 @@
title: Enable Metax GPU topology-aware scheduling
---
-**We now support metax.com/gpu by implementing topo-awareness among metax GPUs**:
+**HAMi now supports metax.com/gpu by implementing topo-awareness among metax GPUs**:
When multiple GPUs are configured on a single server, the GPU cards are connected to the same PCIe Switch or MetaXLink depending on whether they are connected
, there is a near-far relationship. This forms a topology among all the cards on the server, as shown in the following figure:
diff --git a/versioned_docs/version-v1.3.0/userguide/mthreads-device/enable-mthreads-gpu-sharing.md b/versioned_docs/version-v1.3.0/userguide/mthreads-device/enable-mthreads-gpu-sharing.md
index 9942c982..5e625e81 100644
--- a/versioned_docs/version-v1.3.0/userguide/mthreads-device/enable-mthreads-gpu-sharing.md
+++ b/versioned_docs/version-v1.3.0/userguide/mthreads-device/enable-mthreads-gpu-sharing.md
@@ -4,7 +4,7 @@ title: Enable Mthreads GPU sharing
## Introduction
-**We now support mthreads.com/vgpu by implementing most device-sharing features as nvidia-GPU**, including:
+**HAMi now supports mthreads.com/vgpu by implementing most of the device-sharing features available for NVIDIA GPUs**, including:
***GPU sharing***: Each task can allocate a portion of GPU instead of a whole GPU card, thus GPU can be shared among multiple tasks.
diff --git a/versioned_docs/version-v1.3.0/userguide/nvidia-device/dynamic-mig-support.md b/versioned_docs/version-v1.3.0/userguide/nvidia-device/dynamic-mig-support.md
index 13dd62d1..df06c2b8 100644
--- a/versioned_docs/version-v1.3.0/userguide/nvidia-device/dynamic-mig-support.md
+++ b/versioned_docs/version-v1.3.0/userguide/nvidia-device/dynamic-mig-support.md
@@ -4,7 +4,7 @@ title: Enable dynamic-mig feature
## Introduction
-**We now support dynamic-mig by using mig-parted to adjust mig-devices dynamically**, including:
+**HAMi now supports dynamic-mig by using mig-parted to adjust mig-devices dynamically**, including:
***Dynamic MIG instance management***: User don't need to operate on GPU node, using 'nvidia-smi -i 0 -mig 1' or other command to manage MIG instance, all will be done by HAMi-device-plugin.
diff --git a/versioned_docs/version-v1.3.0/userguide/nvidia-device/examples/specify-card-type-to-use.md b/versioned_docs/version-v1.3.0/userguide/nvidia-device/examples/specify-card-type-to-use.md
index 397e984f..946f0e50 100644
--- a/versioned_docs/version-v1.3.0/userguide/nvidia-device/examples/specify-card-type-to-use.md
+++ b/versioned_docs/version-v1.3.0/userguide/nvidia-device/examples/specify-card-type-to-use.md
@@ -24,4 +24,4 @@ spec:
nvidia.com/gpu: 2 # requesting 2 vGPUs
```
-> **NOTICE:** * You can assign this task to multiple GPU types, use comma to separate,In this example, we want to run this job on A100 or V100*
+> **NOTICE:** *You can assign this task to multiple GPU types; use commas to separate them. In this example, the job targets A100 or V100.*
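For example, a Pod annotation listing two acceptable card types might look like the sketch below (HAMi's examples use the `nvidia.com/use-gputype` annotation; verify the key against your HAMi version):

```yaml
metadata:
  annotations:
    nvidia.com/use-gputype: "A100,V100"  # comma-separated list of acceptable GPU types
```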
diff --git a/versioned_docs/version-v2.4.1/contributor/contribute-docs.md b/versioned_docs/version-v2.4.1/contributor/contribute-docs.md
index 5fe0302e..7ebbb69c 100644
--- a/versioned_docs/version-v2.4.1/contributor/contribute-docs.md
+++ b/versioned_docs/version-v2.4.1/contributor/contribute-docs.md
@@ -9,12 +9,12 @@ the `Project-HAMi/website` repository.
## Prerequisites
- Docs, like codes, are also categorized and stored by version.
- 1.3 is the first version we have archived.
+ 1.3 is the first archived version.
- Docs need to be translated into multiple languages for readers from different regions.
The community now supports both Chinese and English.
English is the official language of documentation.
-- For our docs we use markdown. If you are unfamiliar with Markdown, please see [https://guides.github.com/features/mastering-markdown/](https://guides.github.com/features/mastering-markdown/) or [https://www.markdownguide.org/](https://www.markdownguide.org/) if you are looking for something more substantial.
-- We get some additions through [Docusaurus 2](https://docusaurus.io/), a model static website generator.
+- The docs use markdown. If you are unfamiliar with Markdown, please see [https://guides.github.com/features/mastering-markdown/](https://guides.github.com/features/mastering-markdown/) or [https://www.markdownguide.org/](https://www.markdownguide.org/) if you are looking for something more substantial.
+- The site uses [Docusaurus 2](https://docusaurus.io/), a modern static website generator.
## Setup
@@ -85,7 +85,7 @@ title: A doc with tags
## secondary title
```
-The top section between two lines of --- is the Front Matter section. Here we define a couple of entries which tell Docusaurus how to handle the article:
+The top section between two lines of --- is the Front Matter section. These entries tell Docusaurus how to handle the article:
- Title is the equivalent of the `<h1>` in an HTML document or `# ` in a Markdown article.
- Each document has a unique ID. By default, a document ID is the name of the document (without the extension) related to the root docs directory.
@@ -101,7 +101,7 @@ You can easily route to other places by adding any of the following links:
You can use relative paths to index the corresponding files.
- Link to pictures or other resources.
If your article contains images, prefer storing them in `/static/img/docs/` and linking
- with absolute paths. We use language-aware folders:
+ with absolute paths. Language-aware folders are used:
- `/static/img/docs/common/` for shared images
- `/static/img/docs/en/` for English-only images
- `/static/img/docs/zh/` for Chinese-only images
@@ -187,5 +187,5 @@ If the previewed page is not what you expected, please check your docs again.
### Versioning
-For the newly supplemented documents of each version, we will synchronize to the latest version on the release date of each version, and the documents of the old version will not be modified.
-For errata found in the documentation, we will fix it with every release.
+Newly supplemented documents are synchronized to the latest version on each release date; documents for older versions are not modified.
+For errata found in the documentation, fixes are applied with every release.
diff --git a/versioned_docs/version-v2.4.1/contributor/contributing.md b/versioned_docs/version-v2.4.1/contributor/contributing.md
index fab6d31c..72f9676d 100644
--- a/versioned_docs/version-v2.4.1/contributor/contributing.md
+++ b/versioned_docs/version-v2.4.1/contributor/contributing.md
@@ -20,7 +20,7 @@ HAMi is a community project driven by its community which strives to promote a h
## Your First Contribution
-We will help you to contribute in different areas like filing issues, developing features, fixing critical bugs and
+Help is available for contributing in areas like filing issues, developing features, fixing critical bugs and
getting your work reviewed and merged.
If you have questions about the development process,
@@ -28,7 +28,7 @@ feel free to [file an issue](https://github.com/Project-HAMi/HAMi/issues/new/cho
## Find something to work on
-We are always in need of help, be it fixing documentation, reporting bugs or writing some code.
+Help is always welcome: fixing documentation, reporting bugs, or writing code.
Look at places where you feel best coding practices aren't followed, code refactoring is needed or tests are missing.
Here is how you get started.
@@ -40,18 +40,18 @@ For example, [Project-HAMi/HAMi](https://github.com/Project-HAMi/HAMi) has
[help wanted](https://github.com/Project-HAMi/HAMi/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22) and
[good first issue](https://github.com/Project-HAMi/HAMi/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
labels for issues that should not need deep knowledge of the system.
-We can help new contributors who wish to work on such issues.
+Maintainers can help new contributors who wish to work on such issues.
Another good way to contribute is to find a documentation improvement, such as a missing/broken link.
Please see [Contributor Workflow](#contributor-workflow) below for the workflow.
#### Work on an issue
-When you are willing to take on an issue, just reply on the issue. The maintainer will assign it to you.
+When you are willing to take on an issue, reply on the issue. The maintainer will assign it to you.
### File an Issue
-While we encourage everyone to contribute code, it is also appreciated when someone reports an issue.
+Code contributions are welcome, and bug reports are equally appreciated.
Issues should be filed under the appropriate HAMi sub-repository.
*Example:* a HAMi issue should be opened to [Project-HAMi/HAMi](https://github.com/Project-HAMi/HAMi/issues).
diff --git a/versioned_docs/version-v2.4.1/contributor/github-workflow.md b/versioned_docs/version-v2.4.1/contributor/github-workflow.md
index 2018d45e..e582f7f6 100644
--- a/versioned_docs/version-v2.4.1/contributor/github-workflow.md
+++ b/versioned_docs/version-v2.4.1/contributor/github-workflow.md
@@ -107,7 +107,7 @@ in a few cycles.
### 6 Push
-When ready to review (or just to establish an offsite backup of your work),
+When ready to review (or to establish an offsite backup of your work),
push your branch to your fork on `github.com`:
```sh
diff --git a/versioned_docs/version-v2.4.1/contributor/governance.md b/versioned_docs/version-v2.4.1/contributor/governance.md
index f49b23b7..aaf1e568 100644
--- a/versioned_docs/version-v2.4.1/contributor/governance.md
+++ b/versioned_docs/version-v2.4.1/contributor/governance.md
@@ -19,7 +19,7 @@ The HAMi and its leadership embrace the following values:
priority over shipping code or sponsors' organizational goals. Each
contributor participates in the project as an individual.
-* Inclusivity: We innovate through different perspectives and skill sets, which
+* Inclusivity: Innovation comes from different perspectives and skill sets, and this
can only be accomplished in a welcoming and respectful environment.
* Participation: Responsibilities within the project are earned through
diff --git a/versioned_docs/version-v2.4.1/contributor/ladder.md b/versioned_docs/version-v2.4.1/contributor/ladder.md
index 26ea756f..6d93ab1c 100644
--- a/versioned_docs/version-v2.4.1/contributor/ladder.md
+++ b/versioned_docs/version-v2.4.1/contributor/ladder.md
@@ -4,7 +4,7 @@ title: Contributor Ladder
This docs different ways to get involved and level up within the project. You can see different roles within the project in the contributor roles.
-Hello! We are excited that you want to learn more about our project contributor ladder! This contributor ladder outlines the different contributor roles within the project, along with the responsibilities and privileges that come with them. Community members generally start at the first levels of the "ladder" and advance up it as their involvement in the project grows. Our project members are happy to help you advance along the contributor ladder.
+This contributor ladder outlines the different contributor roles within the project, along with the responsibilities and privileges that come with them.
Each of the contributor roles below is organized into lists of three types of things. "Responsibilities" are things that a contributor is expected to do. "Requirements" are qualifications a person needs to meet to be in that role, and "Privileges" are things contributors on that level are entitled to.
@@ -45,7 +45,7 @@ Description: A Contributor contributes directly to the project and adds value to
* Invitations to contributor events
* Eligible to become an Organization Member
-A very special thanks to the [long list of people](https://github.com/Project-HAMi/HAMi/blob/master/AUTHORS.md) who have contributed to and helped maintain the project. We wouldn't be where we are today without your contributions. Thank you! 💖
+A very special thanks to the [long list of people](https://github.com/Project-HAMi/HAMi/blob/master/AUTHORS.md) who have contributed to and helped maintain the project.
As long as you contribute to HAMi, your name will be added to the [HAMi AUTHORS list](https://github.com/Project-HAMi/HAMi/blob/master/AUTHORS.md). If you don't find your name, please contact us to add it.
@@ -140,7 +140,7 @@ The current list of maintainers can be found in the [MAINTAINERS](https://github
New maintainers are added by consensus among the current group of maintainers. This can be done via a private discussion via Slack or email. A majority of maintainers should support the addition of the new person, and no single maintainer should object to adding the new maintainer.
-When adding a new maintainer, we should file a PR to [HAMi](https://github.com/Project-HAMi/HAMi) and update [MAINTAINERS](https://github.com/Project-HAMi/HAMi/blob/master/MAINTAINERS.md). Once this PR is merged, you will become a maintainer of HAMi.
+When adding a new maintainer, file a PR to [HAMi](https://github.com/Project-HAMi/HAMi) and update [MAINTAINERS](https://github.com/Project-HAMi/HAMi/blob/master/MAINTAINERS.md). Once this PR is merged, you will become a maintainer of HAMi.
## Removing Maintainers
diff --git a/versioned_docs/version-v2.4.1/developers/bash-auto-completion-on-linux.md b/versioned_docs/version-v2.4.1/developers/bash-auto-completion-on-linux.md
index 489f46c4..6b22baf8 100644
--- a/versioned_docs/version-v2.4.1/developers/bash-auto-completion-on-linux.md
+++ b/versioned_docs/version-v2.4.1/developers/bash-auto-completion-on-linux.md
@@ -47,4 +47,4 @@ Both approaches are equivalent. After reloading your shell, karmadactl autocompl
## Enable kubectl-karmada autocompletion
Currently, kubectl plugins do not support autocomplete, but it is already planned in [Command line completion for kubectl plugins](https://github.com/kubernetes/kubernetes/issues/74178).
-We will update the documentation as soon as it does.
+Documentation will be updated when support is added.
diff --git a/versioned_docs/version-v2.4.1/developers/dynamic-mig.md b/versioned_docs/version-v2.4.1/developers/dynamic-mig.md
index fd22875b..9f71a993 100644
--- a/versioned_docs/version-v2.4.1/developers/dynamic-mig.md
+++ b/versioned_docs/version-v2.4.1/developers/dynamic-mig.md
@@ -10,8 +10,8 @@ This feature will not be implemented without the help of @sailorvii.
## Introduction
-The NVIDIA GPU build-in sharing method includes: time-slice, MPS and MIG. The context switch for time slice sharing would waste some time, so we chose the MPS and MIG. The GPU MIG profile is variable, the user could acquire the MIG device in the profile definition, but current implementation only defines the dedicated profile before the user requirement. That limits the usage of MIG. We want to develop an automatic slice plugin and create the slice when the user require it.
-For the scheduling method, node-level binpack and spread will be supported. Referring to the binpack plugin, we consider the CPU, Mem, GPU memory and other user-defined resource.
+The NVIDIA GPU built-in sharing methods include time-slice, MPS, and MIG. The context switch for time-slice sharing wastes some time, so MPS and MIG are preferred. The GPU MIG profile is variable: the user can acquire a MIG device via the profile definition, but the current implementation only defines a dedicated profile ahead of the user's requirement. That limits the usage of MIG. The goal is an automatic slice plugin that creates slices on demand.
+For the scheduling method, node-level binpack and spread will be supported. Referring to the binpack plugin, the scheduler considers CPU, memory, GPU memory, and other user-defined resources.
HAMi is done by using [hami-core](https://github.com/Project-HAMi/HAMi-core), which is a cuda-hacking library. But mig is also widely used across the world. A unified API for dynamic-mig and hami-core is needed.
## Targets
diff --git a/versioned_docs/version-v2.4.1/developers/scheduling.md b/versioned_docs/version-v2.4.1/developers/scheduling.md
index 02270146..7f8ce100 100644
--- a/versioned_docs/version-v2.4.1/developers/scheduling.md
+++ b/versioned_docs/version-v2.4.1/developers/scheduling.md
@@ -8,7 +8,7 @@ Current in a cluster with many GPU nodes, nodes are not `binpack` or `spread` wh
## Proposal
-We add a `node-scheduler-policy` and `gpu-scheduler-policy` to config, then scheduler to use this policy can impl node `binpack` or `spread` or GPU `binpack` or `spread`. and
+A `node-scheduler-policy` and a `gpu-scheduler-policy` are added to the scheduler config; with these policies the scheduler can implement node-level `binpack` or `spread` and GPU-level `binpack` or `spread`. Additionally,
users can set Pod annotations to change the default policy, using `hami.io/node-scheduler-policy` and `hami.io/gpu-scheduler-policy` to override the scheduler config.
### User Stories
@@ -104,7 +104,7 @@ Node1 score: ((1+3)/4) * 10= 10
Node2 score: ((1+2)/4) * 10= 7.5
```
-So, in `Binpack` policy we can select `Node1`.
+So, in `Binpack` policy, the selected node is `Node1`.
#### Spread
@@ -124,7 +124,7 @@ Node1 score: ((1+3)/4) * 10= 10
Node2 score: ((1+2)/4) * 10= 7.5
```
-So, in `Spread` policy we can select `Node2`.
+So, in `Spread` policy, the selected node is `Node2`.
### GPU-scheduler-policy
@@ -147,7 +147,7 @@ GPU1 Score: ((20+10)/100 + (1000+2000)/8000)) * 10 = 6.75
GPU2 Score: ((20+70)/100 + (1000+6000)/8000)) * 10 = 17.75
```
-So, in `Binpack` policy we can select `GPU2`.
+So, in `Binpack` policy, the selected GPU is `GPU2`.
#### Spread
@@ -166,4 +166,4 @@ GPU1 Score: ((20+10)/100 + (1000+2000)/8000)) * 10 = 6.75
GPU2 Score: ((20+70)/100 + (1000+6000)/8000)) * 10 = 17.75
```
-So, in `Spread` policy we can select `GPU1`.
+So, in `Spread` policy, the selected GPU is `GPU1`.
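The node and GPU scoring walked through in the hunks above can be sketched in a few lines. This is an illustrative reconstruction of the formulas shown in the examples, not HAMi's actual scheduler code; the function and variable names are hypothetical.

```python
# Sketch of the binpack/spread scoring from the examples above.
# Names are illustrative, not HAMi's actual API.

def node_score(used: int, requested: int, total: int) -> float:
    """Score a node by its GPU allocation ratio after placing the pod."""
    return (used + requested) / total * 10

def gpu_score(used_core: int, req_core: int,
              used_mem: int, req_mem: int, total_mem: int) -> float:
    """Score a GPU by combined core and memory utilization after placement."""
    return ((used_core + req_core) / 100 + (used_mem + req_mem) / total_mem) * 10

# Binpack prefers the highest score (pack work together);
# Spread prefers the lowest score (even the load out).
scores = [("Node1", node_score(1, 3, 4)),   # 10.0
          ("Node2", node_score(1, 2, 4))]   # 7.5
binpack_choice = max(scores, key=lambda t: t[1])[0]
spread_choice = min(scores, key=lambda t: t[1])[0]
print(binpack_choice, spread_choice)  # Node1 Node2
```

Running the GPU formula with the numbers from the example hunk reproduces the 6.75 and 17.75 scores shown there.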
diff --git a/versioned_docs/version-v2.4.1/faq/faq.md b/versioned_docs/version-v2.4.1/faq/faq.md
index c52d50a2..c057a888 100644
--- a/versioned_docs/version-v2.4.1/faq/faq.md
+++ b/versioned_docs/version-v2.4.1/faq/faq.md
@@ -25,9 +25,8 @@ It's worth noting that not all controllers are needed by Karmada, for the recomm
## Can I install Karmada in a Kubernetes cluster and reuse the kube-apiserver as Karmada apiserver?
The quick answer is `yes`. In that case, you can save the effort to deploy
-[karmada-apiserver](https://github.com/karmada-io/karmada/blob/master/artifacts/deploy/karmada-apiserver.yaml) and just
-share the APIServer between Kubernetes and Karmada. In addition, the high availability capabilities in the origin clusters
-can be inherited seamlessly. We do have some users using Karmada in this way.
+[karmada-apiserver](https://github.com/karmada-io/karmada/blob/master/artifacts/deploy/karmada-apiserver.yaml) and share the APIServer between Kubernetes and Karmada. In addition, the high availability capabilities in the origin clusters
+can be inherited. Some users run Karmada this way.
There are some things you should consider before doing so:
diff --git a/versioned_docs/version-v2.4.1/roadmap.md b/versioned_docs/version-v2.4.1/roadmap.md
index a50c8db3..8793e529 100644
--- a/versioned_docs/version-v2.4.1/roadmap.md
+++ b/versioned_docs/version-v2.4.1/roadmap.md
@@ -6,7 +6,7 @@ title: Karmada Roadmap
This document defines a high level roadmap for Karmada development and upcoming releases.
Community and contributor involvement is vital for successfully implementing all desired items for each release.
-We hope that the items listed below will inspire further engagement from the community to keep karmada progressing and shipping exciting and valuable features.
+The items below are intended to inspire further community engagement to keep Karmada progressing and shipping exciting and valuable features.
## 2022 H1
diff --git a/versioned_docs/version-v2.4.1/troubleshooting/troubleshooting.md b/versioned_docs/version-v2.4.1/troubleshooting/troubleshooting.md
index f4e9a15a..f99bfb8a 100644
--- a/versioned_docs/version-v2.4.1/troubleshooting/troubleshooting.md
+++ b/versioned_docs/version-v2.4.1/troubleshooting/troubleshooting.md
@@ -6,6 +6,6 @@ title: Troubleshooting
- Currently, A100 MIG can be supported in only "none" and "mixed" modes.
- Tasks with the "nodeName" field cannot be scheduled at the moment; please use "nodeSelector" instead.
- Only computing tasks are currently supported; video codec processing is not supported.
-- We change `device-plugin` env var name from `NodeName` to `NODE_NAME`, if you use the image version `v2.3.9`, you may encounter the situation that `device-plugin` cannot start, there are two ways to fix it:
+- The `device-plugin` env var name changed from `NodeName` to `NODE_NAME`. If you use the image version `v2.3.9`, you may find that `device-plugin` cannot start; there are two ways to fix it:
- Manually execute `kubectl edit daemonset` to modify the `device-plugin` env var from `NodeName` to `NODE_NAME`.
- Upgrade to the latest version using helm, the latest version of `device-plugin` image version is `v2.3.10`, execute `helm upgrade hami hami/hami -n kube-system`, it will be fixed automatically.
diff --git a/versioned_docs/version-v2.4.1/userguide/cambricon-device/enable-cambricon-mlu-sharing.md b/versioned_docs/version-v2.4.1/userguide/cambricon-device/enable-cambricon-mlu-sharing.md
index ae498abe..40cbeda5 100644
--- a/versioned_docs/version-v2.4.1/userguide/cambricon-device/enable-cambricon-mlu-sharing.md
+++ b/versioned_docs/version-v2.4.1/userguide/cambricon-device/enable-cambricon-mlu-sharing.md
@@ -4,7 +4,7 @@ title: Enable cambricon MLU sharing
## Introduction
-**We now support cambricon.com/mlu by implementing most device-sharing features as nvidia-GPU**, including:
+**HAMi now supports cambricon.com/mlu by implementing most of the device-sharing features available for NVIDIA GPUs**, including:
***MLU sharing***: Each task can allocate a portion of MLU instead of a whole MLU card, thus MLU can be shared among multiple tasks.
diff --git a/versioned_docs/version-v2.4.1/userguide/configure.md b/versioned_docs/version-v2.4.1/userguide/configure.md
index 8e3a34f3..536c889e 100644
--- a/versioned_docs/version-v2.4.1/userguide/configure.md
+++ b/versioned_docs/version-v2.4.1/userguide/configure.md
@@ -18,7 +18,7 @@ You can update these configurations using one of the following methods:
2. Modify Helm Chart: Update the corresponding values in the [ConfigMap](https://raw.githubusercontent.com/archlitchi/HAMi/refs/heads/master/charts/hami/templates/scheduler/device-configmap.yaml), then reapply the Helm Chart to regenerate the ConfigMap.
* `nvidia.deviceMemoryScaling:`
- Float type, by default: 1. The ratio for NVIDIA device memory scaling, can be greater than 1 (enable virtual device memory, experimental feature). For NVIDIA GPU with *M* memory, if we set `nvidia.deviceMemoryScaling` argument to *S*, vGPUs split by this GPU will totally get `S * M` memory in Kubernetes with our device plugin.
+ Float type, by default: 1. The ratio for NVIDIA device memory scaling; can be greater than 1 (enables virtual device memory, an experimental feature). For an NVIDIA GPU with *M* memory, if `nvidia.deviceMemoryScaling` is set to *S*, the vGPUs split from this GPU will get a total of `S * M` memory in Kubernetes with the HAMi device plugin.
* `nvidia.deviceSplitCount:`
Integer type, by default: equals 10. Maximum tasks assigned to a simple GPU device.
* `nvidia.migstrategy:`
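The `S * M` relationship described for `nvidia.deviceMemoryScaling` can be checked with a tiny sketch. The numbers below are illustrative examples, not HAMi defaults.

```python
# Illustrative check of nvidia.deviceMemoryScaling: a GPU with M memory
# scaled by factor S exposes S * M total memory to its vGPUs.
# Values are examples only, not HAMi defaults.

def scaled_memory(physical_mib: int, scaling: float) -> int:
    """Total memory (MiB) available to vGPUs split from one physical GPU."""
    return int(physical_mib * scaling)

# A 40 GiB card with deviceMemoryScaling: 1.5 presents 60 GiB of virtual memory.
print(scaled_memory(40960, 1.5))  # 61440
```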
diff --git a/versioned_docs/version-v2.4.1/userguide/hygon-device/enable-hygon-dcu-sharing.md b/versioned_docs/version-v2.4.1/userguide/hygon-device/enable-hygon-dcu-sharing.md
index a90f4086..ef143119 100644
--- a/versioned_docs/version-v2.4.1/userguide/hygon-device/enable-hygon-dcu-sharing.md
+++ b/versioned_docs/version-v2.4.1/userguide/hygon-device/enable-hygon-dcu-sharing.md
@@ -4,7 +4,7 @@ title: Enable Hygon DCU sharing
## Introduction
-**We now support hygon.com/dcu by implementing most device-sharing features as nvidia-GPU**, including:
+**HAMi now supports hygon.com/dcu by implementing most of the device-sharing features available for NVIDIA GPUs**, including:
***DCU sharing***: Each task can allocate a portion of DCU instead of a whole DCU card, thus DCU can be shared among multiple tasks.
diff --git a/versioned_docs/version-v2.4.1/userguide/metax-device/enable-metax-gpu-schedule.md b/versioned_docs/version-v2.4.1/userguide/metax-device/enable-metax-gpu-schedule.md
index 130992f4..5dc21839 100644
--- a/versioned_docs/version-v2.4.1/userguide/metax-device/enable-metax-gpu-schedule.md
+++ b/versioned_docs/version-v2.4.1/userguide/metax-device/enable-metax-gpu-schedule.md
@@ -2,7 +2,7 @@
title: Enable Metax GPU topology-aware scheduling
---
-**We now support metax.com/gpu by implementing topo-awareness among metax GPUs**:
+**HAMi now supports metax.com/gpu by implementing topo-awareness among metax GPUs**:
When multiple GPUs are configured on a single server, the GPU cards are connected to the same PCIe Switch or MetaXLink depending on whether they are connected
, there is a near-far relationship. This forms a topology among all the cards on the server, as shown in the following figure:
diff --git a/versioned_docs/version-v2.4.1/userguide/mthreads-device/enable-mthreads-gpu-sharing.md b/versioned_docs/version-v2.4.1/userguide/mthreads-device/enable-mthreads-gpu-sharing.md
index 9942c982..5e625e81 100644
--- a/versioned_docs/version-v2.4.1/userguide/mthreads-device/enable-mthreads-gpu-sharing.md
+++ b/versioned_docs/version-v2.4.1/userguide/mthreads-device/enable-mthreads-gpu-sharing.md
@@ -4,7 +4,7 @@ title: Enable Mthreads GPU sharing
## Introduction
-**We now support mthreads.com/vgpu by implementing most device-sharing features as nvidia-GPU**, including:
+**HAMi now supports mthreads.com/vgpu by implementing most of the device-sharing features available for NVIDIA GPUs**, including:
***GPU sharing***: Each task can allocate a portion of GPU instead of a whole GPU card, thus GPU can be shared among multiple tasks.
diff --git a/versioned_docs/version-v2.4.1/userguide/nvidia-device/dynamic-mig-support.md b/versioned_docs/version-v2.4.1/userguide/nvidia-device/dynamic-mig-support.md
index 13dd62d1..df06c2b8 100644
--- a/versioned_docs/version-v2.4.1/userguide/nvidia-device/dynamic-mig-support.md
+++ b/versioned_docs/version-v2.4.1/userguide/nvidia-device/dynamic-mig-support.md
@@ -4,7 +4,7 @@ title: Enable dynamic-mig feature
## Introduction
-**We now support dynamic-mig by using mig-parted to adjust mig-devices dynamically**, including:
+**HAMi now supports dynamic-mig by using mig-parted to adjust mig-devices dynamically**, including:
***Dynamic MIG instance management***: User don't need to operate on GPU node, using 'nvidia-smi -i 0 -mig 1' or other command to manage MIG instance, all will be done by HAMi-device-plugin.
diff --git a/versioned_docs/version-v2.4.1/userguide/nvidia-device/examples/specify-card-type-to-use.md b/versioned_docs/version-v2.4.1/userguide/nvidia-device/examples/specify-card-type-to-use.md
index 397e984f..946f0e50 100644
--- a/versioned_docs/version-v2.4.1/userguide/nvidia-device/examples/specify-card-type-to-use.md
+++ b/versioned_docs/version-v2.4.1/userguide/nvidia-device/examples/specify-card-type-to-use.md
@@ -24,4 +24,4 @@ spec:
nvidia.com/gpu: 2 # requesting 2 vGPUs
```
-> **NOTICE:** * You can assign this task to multiple GPU types, use comma to separate,In this example, we want to run this job on A100 or V100*
+> **NOTICE:** *You can assign this task to multiple GPU types; use commas to separate them. In this example, the job targets A100 or V100.*
diff --git a/versioned_docs/version-v2.5.0/contributor/adopters.md b/versioned_docs/version-v2.5.0/contributor/adopters.md
index 8c521c76..0d87e5eb 100644
--- a/versioned_docs/version-v2.5.0/contributor/adopters.md
+++ b/versioned_docs/version-v2.5.0/contributor/adopters.md
@@ -1,12 +1,12 @@
# HAMi Adopters
-So you and your organisation are using HAMi? That's great. We would love to hear from you! 💖
+HAMi is used in production by the organisations listed below.
## Adding yourself
[Here](https://github.com/Project-HAMi/website/blob/master/src/pages/adopters.mdx) lists the organisations who adopted the HAMi project in production.
-You just need to add an entry for your company and upon merging it will automatically be added to our website.
+Add an entry for your company; it will be added to the website once the PR merges.
To add your organisation follow these steps:
@@ -25,4 +25,4 @@ To add your organisation follow these steps:
6. Push the commit with `git push origin main`.
7. Open a Pull Request to [HAMi-io/website](https://github.com/Project-HAMi/website) and a preview build will turn up.
-Thanks a lot for being part of our community - we very much appreciate it!
+Thanks to all adopters for being part of the community!
diff --git a/versioned_docs/version-v2.5.0/contributor/contribute-docs.md b/versioned_docs/version-v2.5.0/contributor/contribute-docs.md
index f095e61a..6b2ba2d5 100644
--- a/versioned_docs/version-v2.5.0/contributor/contribute-docs.md
+++ b/versioned_docs/version-v2.5.0/contributor/contribute-docs.md
@@ -9,14 +9,14 @@ the `Project-HAMi/website` repository.
## Prerequisites
- Docs, like codes, are also categorized and stored by version.
- 1.3 is the first version we have archived.
+ 1.3 is the first archived version.
- Docs need to be translated into multiple languages for readers from different regions.
The community now supports both Chinese and English.
English is the official language of documentation.
-- For our docs we use markdown. If you are unfamiliar with Markdown,
+- The docs use markdown. If you are unfamiliar with Markdown,
please see [https://guides.github.com/features/mastering-markdown/](https://guides.github.com/features/mastering-markdown/) or
[https://www.markdownguide.org/](https://www.markdownguide.org/) if you are looking for something more substantial.
-- We get some additions through [Docusaurus 2](https://docusaurus.io/), a model static website generator.
+- The site uses [Docusaurus 2](https://docusaurus.io/), a modern static website generator.
## Setup
@@ -88,7 +88,7 @@ title: A doc with tags
```
The top section between two lines of --- is the Front Matter section.
-Here we define a couple of entries which tell Docusaurus how to handle the article:
+These entries tell Docusaurus how to handle the article:
- Title is the equivalent of the `` in a HTML document or `# ` in a Markdown article.
- Each document has a unique ID. By default, a document ID is the name of the document
@@ -106,7 +106,7 @@ You can easily route to other places by adding any of the following links:
You can use relative paths to index the corresponding files.
- Link to pictures or other resources.
If your article contains images, prefer storing them in `/static/img/docs/` and linking
- with absolute paths. We use language-aware folders:
+ with absolute paths. Language-aware folders are used:
- `/static/img/docs/common/` for shared images
- `/static/img/docs/en/` for English-only images
- `/static/img/docs/zh/` for Chinese-only images
@@ -202,6 +202,6 @@ If the previewed page is not what you expected, please check your docs again.
### Versioning
-For the newly supplemented documents of each version, we will synchronize to the latest version
+Newly supplemented documents for each version are synchronized to the latest version
on the release date of each version, and the documents of the old version will not be modified.
-For errata found in the documentation, we will fix it with every release.
+For errata found in the documentation, fixes are applied with every release.
diff --git a/versioned_docs/version-v2.5.0/contributor/contributing.md b/versioned_docs/version-v2.5.0/contributor/contributing.md
index fab6d31c..72f9676d 100644
--- a/versioned_docs/version-v2.5.0/contributor/contributing.md
+++ b/versioned_docs/version-v2.5.0/contributor/contributing.md
@@ -20,7 +20,7 @@ HAMi is a community project driven by its community which strives to promote a h
## Your First Contribution
-We will help you to contribute in different areas like filing issues, developing features, fixing critical bugs and
+Help is available for contributing in areas like filing issues, developing features, fixing critical bugs and
getting your work reviewed and merged.
If you have questions about the development process,
@@ -28,7 +28,7 @@ feel free to [file an issue](https://github.com/Project-HAMi/HAMi/issues/new/cho
## Find something to work on
-We are always in need of help, be it fixing documentation, reporting bugs or writing some code.
+Help is always welcome, be it fixing documentation, reporting bugs, or writing code.
Look at places where you feel best coding practices aren't followed, code refactoring is needed or tests are missing.
Here is how you get started.
@@ -40,18 +40,18 @@ For example, [Project-HAMi/HAMi](https://github.com/Project-HAMi/HAMi) has
[help wanted](https://github.com/Project-HAMi/HAMi/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22) and
[good first issue](https://github.com/Project-HAMi/HAMi/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
labels for issues that should not need deep knowledge of the system.
-We can help new contributors who wish to work on such issues.
+Maintainers can help new contributors who wish to work on such issues.
Another good way to contribute is to find a documentation improvement, such as a missing/broken link.
Please see [Contributor Workflow](#contributor-workflow) below for the workflow.
#### Work on an issue
-When you are willing to take on an issue, just reply on the issue. The maintainer will assign it to you.
+When you are willing to take on an issue, reply on the issue. The maintainer will assign it to you.
### File an Issue
-While we encourage everyone to contribute code, it is also appreciated when someone reports an issue.
+Code contributions are welcome, and bug reports are equally appreciated.
Issues should be filed under the appropriate HAMi sub-repository.
*Example:* a HAMi issue should be opened to [Project-HAMi/HAMi](https://github.com/Project-HAMi/HAMi/issues).
diff --git a/versioned_docs/version-v2.5.0/contributor/github-workflow.md b/versioned_docs/version-v2.5.0/contributor/github-workflow.md
index 8582a392..a362a3b5 100644
--- a/versioned_docs/version-v2.5.0/contributor/github-workflow.md
+++ b/versioned_docs/version-v2.5.0/contributor/github-workflow.md
@@ -110,7 +110,7 @@ in a few cycles.
## Push
-When ready to review (or just to establish an offsite backup of your work),
+When ready to review (or to establish an offsite backup of your work),
push your branch to your fork on `github.com`:
```sh
diff --git a/versioned_docs/version-v2.5.0/contributor/governance.md b/versioned_docs/version-v2.5.0/contributor/governance.md
index f49b23b7..aaf1e568 100644
--- a/versioned_docs/version-v2.5.0/contributor/governance.md
+++ b/versioned_docs/version-v2.5.0/contributor/governance.md
@@ -19,7 +19,7 @@ The HAMi and its leadership embrace the following values:
priority over shipping code or sponsors' organizational goals. Each
contributor participates in the project as an individual.
-* Inclusivity: We innovate through different perspectives and skill sets, which
+* Inclusivity: Innovation comes from different perspectives and skill sets, and this
can only be accomplished in a welcoming and respectful environment.
* Participation: Responsibilities within the project are earned through
diff --git a/versioned_docs/version-v2.5.0/contributor/ladder.md b/versioned_docs/version-v2.5.0/contributor/ladder.md
index 26ea756f..6d93ab1c 100644
--- a/versioned_docs/version-v2.5.0/contributor/ladder.md
+++ b/versioned_docs/version-v2.5.0/contributor/ladder.md
@@ -4,7 +4,7 @@ title: Contributor Ladder
This docs different ways to get involved and level up within the project. You can see different roles within the project in the contributor roles.
-Hello! We are excited that you want to learn more about our project contributor ladder! This contributor ladder outlines the different contributor roles within the project, along with the responsibilities and privileges that come with them. Community members generally start at the first levels of the "ladder" and advance up it as their involvement in the project grows. Our project members are happy to help you advance along the contributor ladder.
+This contributor ladder outlines the different contributor roles within the project, along with the responsibilities and privileges that come with them.
Each of the contributor roles below is organized into lists of three types of things. "Responsibilities" are things that a contributor is expected to do. "Requirements" are qualifications a person needs to meet to be in that role, and "Privileges" are things contributors on that level are entitled to.
@@ -45,7 +45,7 @@ Description: A Contributor contributes directly to the project and adds value to
* Invitations to contributor events
* Eligible to become an Organization Member
-A very special thanks to the [long list of people](https://github.com/Project-HAMi/HAMi/blob/master/AUTHORS.md) who have contributed to and helped maintain the project. We wouldn't be where we are today without your contributions. Thank you! 💖
+A very special thanks to the [long list of people](https://github.com/Project-HAMi/HAMi/blob/master/AUTHORS.md) who have contributed to and helped maintain the project.
As long as you contribute to HAMi, your name will be added to the [HAMi AUTHORS list](https://github.com/Project-HAMi/HAMi/blob/master/AUTHORS.md). If you don't find your name, please contact us to add it.
@@ -140,7 +140,7 @@ The current list of maintainers can be found in the [MAINTAINERS](https://github
New maintainers are added by consensus among the current group of maintainers. This can be done via a private discussion via Slack or email. A majority of maintainers should support the addition of the new person, and no single maintainer should object to adding the new maintainer.
-When adding a new maintainer, we should file a PR to [HAMi](https://github.com/Project-HAMi/HAMi) and update [MAINTAINERS](https://github.com/Project-HAMi/HAMi/blob/master/MAINTAINERS.md). Once this PR is merged, you will become a maintainer of HAMi.
+When adding a new maintainer, file a PR to [HAMi](https://github.com/Project-HAMi/HAMi) and update [MAINTAINERS](https://github.com/Project-HAMi/HAMi/blob/master/MAINTAINERS.md). Once this PR is merged, you will become a maintainer of HAMi.
## Removing Maintainers
diff --git a/versioned_docs/version-v2.5.0/developers/bash-auto-completion-on-linux.md b/versioned_docs/version-v2.5.0/developers/bash-auto-completion-on-linux.md
index 489f46c4..6b22baf8 100644
--- a/versioned_docs/version-v2.5.0/developers/bash-auto-completion-on-linux.md
+++ b/versioned_docs/version-v2.5.0/developers/bash-auto-completion-on-linux.md
@@ -47,4 +47,4 @@ Both approaches are equivalent. After reloading your shell, karmadactl autocompl
## Enable kubectl-karmada autocompletion
Currently, kubectl plugins do not support autocomplete, but it is already planned in [Command line completion for kubectl plugins](https://github.com/kubernetes/kubernetes/issues/74178).
-We will update the documentation as soon as it does.
+Documentation will be updated when support is added.
diff --git a/versioned_docs/version-v2.5.0/developers/dynamic-mig.md b/versioned_docs/version-v2.5.0/developers/dynamic-mig.md
index fd22875b..9f71a993 100644
--- a/versioned_docs/version-v2.5.0/developers/dynamic-mig.md
+++ b/versioned_docs/version-v2.5.0/developers/dynamic-mig.md
@@ -10,8 +10,8 @@ This feature will not be implemented without the help of @sailorvii.
## Introduction
-The NVIDIA GPU build-in sharing method includes: time-slice, MPS and MIG. The context switch for time slice sharing would waste some time, so we chose the MPS and MIG. The GPU MIG profile is variable, the user could acquire the MIG device in the profile definition, but current implementation only defines the dedicated profile before the user requirement. That limits the usage of MIG. We want to develop an automatic slice plugin and create the slice when the user require it.
-For the scheduling method, node-level binpack and spread will be supported. Referring to the binpack plugin, we consider the CPU, Mem, GPU memory and other user-defined resource.
+The NVIDIA GPU built-in sharing methods include time-slice, MPS, and MIG. The context switch for time-slice sharing wastes some time, so MPS and MIG are preferred. The GPU MIG profile is variable: the user can acquire a MIG device via the profile definition, but the current implementation only defines a dedicated profile ahead of the user's requirement. That limits the usage of MIG. The goal is an automatic slice plugin that creates slices on demand.
+For the scheduling method, node-level binpack and spread will be supported. Referring to the binpack plugin, the scheduler considers CPU, memory, GPU memory, and other user-defined resources.
HAMi is done by using [hami-core](https://github.com/Project-HAMi/HAMi-core), which is a cuda-hacking library. But mig is also widely used across the world. A unified API for dynamic-mig and hami-core is needed.
## Targets
diff --git a/versioned_docs/version-v2.5.0/developers/scheduling.md b/versioned_docs/version-v2.5.0/developers/scheduling.md
index 02270146..7f8ce100 100644
--- a/versioned_docs/version-v2.5.0/developers/scheduling.md
+++ b/versioned_docs/version-v2.5.0/developers/scheduling.md
@@ -8,7 +8,7 @@ Current in a cluster with many GPU nodes, nodes are not `binpack` or `spread` wh
## Proposal
-We add a `node-scheduler-policy` and `gpu-scheduler-policy` to config, then scheduler to use this policy can impl node `binpack` or `spread` or GPU `binpack` or `spread`. and
+A `node-scheduler-policy` and a `gpu-scheduler-policy` are added to the scheduler config; with these policies the scheduler can implement node-level `binpack` or `spread` and GPU-level `binpack` or `spread`. Additionally,
users can set Pod annotations to change the default policy, using `hami.io/node-scheduler-policy` and `hami.io/gpu-scheduler-policy` to override the scheduler config.
### User Stories
@@ -104,7 +104,7 @@ Node1 score: ((1+3)/4) * 10= 10
Node2 score: ((1+2)/4) * 10= 7.5
```
-So, in `Binpack` policy we can select `Node1`.
+So, in `Binpack` policy, the selected node is `Node1`.
#### Spread
@@ -124,7 +124,7 @@ Node1 score: ((1+3)/4) * 10= 10
Node2 score: ((1+2)/4) * 10= 7.5
```
-So, in `Spread` policy we can select `Node2`.
+So, in `Spread` policy, the selected node is `Node2`.
### GPU-scheduler-policy
@@ -147,7 +147,7 @@ GPU1 Score: ((20+10)/100 + (1000+2000)/8000)) * 10 = 6.75
GPU2 Score: ((20+70)/100 + (1000+6000)/8000)) * 10 = 17.75
```
-So, in `Binpack` policy we can select `GPU2`.
+So, in `Binpack` policy, the selected GPU is `GPU2`.
#### Spread
@@ -166,4 +166,4 @@ GPU1 Score: ((20+10)/100 + (1000+2000)/8000)) * 10 = 6.75
GPU2 Score: ((20+70)/100 + (1000+6000)/8000)) * 10 = 17.75
```
-So, in `Spread` policy we can select `GPU1`.
+So, in `Spread` policy, the selected GPU is `GPU1`.
diff --git a/versioned_docs/version-v2.5.0/faq/faq.md b/versioned_docs/version-v2.5.0/faq/faq.md
index c52d50a2..c057a888 100644
--- a/versioned_docs/version-v2.5.0/faq/faq.md
+++ b/versioned_docs/version-v2.5.0/faq/faq.md
@@ -25,9 +25,8 @@ It's worth noting that not all controllers are needed by Karmada, for the recomm
## Can I install Karmada in a Kubernetes cluster and reuse the kube-apiserver as Karmada apiserver?
The quick answer is `yes`. In that case, you can save the effort to deploy
-[karmada-apiserver](https://github.com/karmada-io/karmada/blob/master/artifacts/deploy/karmada-apiserver.yaml) and just
-share the APIServer between Kubernetes and Karmada. In addition, the high availability capabilities in the origin clusters
-can be inherited seamlessly. We do have some users using Karmada in this way.
+[karmada-apiserver](https://github.com/karmada-io/karmada/blob/master/artifacts/deploy/karmada-apiserver.yaml) and share the APIServer between Kubernetes and Karmada. In addition, the high availability capabilities in the origin clusters
+can be inherited. Some users run Karmada this way.
There are some things you should consider before doing so:
diff --git a/versioned_docs/version-v2.5.0/roadmap.md b/versioned_docs/version-v2.5.0/roadmap.md
index a50c8db3..8793e529 100644
--- a/versioned_docs/version-v2.5.0/roadmap.md
+++ b/versioned_docs/version-v2.5.0/roadmap.md
@@ -6,7 +6,7 @@ title: Karmada Roadmap
This document defines a high level roadmap for Karmada development and upcoming releases.
Community and contributor involvement is vital for successfully implementing all desired items for each release.
-We hope that the items listed below will inspire further engagement from the community to keep karmada progressing and shipping exciting and valuable features.
+The items below are intended to inspire further community engagement to keep Karmada progressing and shipping exciting and valuable features.
## 2022 H1
diff --git a/versioned_docs/version-v2.5.0/troubleshooting/troubleshooting.md b/versioned_docs/version-v2.5.0/troubleshooting/troubleshooting.md
index f4e9a15a..f99bfb8a 100644
--- a/versioned_docs/version-v2.5.0/troubleshooting/troubleshooting.md
+++ b/versioned_docs/version-v2.5.0/troubleshooting/troubleshooting.md
@@ -6,6 +6,6 @@ title: Troubleshooting
- Currently, A100 MIG can be supported in only "none" and "mixed" modes.
- Tasks with the "nodeName" field cannot be scheduled at the moment; please use "nodeSelector" instead.
- Only computing tasks are currently supported; video codec processing is not supported.
-- We change `device-plugin` env var name from `NodeName` to `NODE_NAME`, if you use the image version `v2.3.9`, you may encounter the situation that `device-plugin` cannot start, there are two ways to fix it:
+- The `device-plugin` env var name changed from `NodeName` to `NODE_NAME`; if you use image version `v2.3.9`, the `device-plugin` may fail to start. There are two ways to fix it:
- Manually execute `kubectl edit daemonset` to modify the `device-plugin` env var from `NodeName` to `NODE_NAME`.
- Upgrade to the latest version using helm, the latest version of `device-plugin` image version is `v2.3.10`, execute `helm upgrade hami hami/hami -n kube-system`, it will be fixed automatically.
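The manual fix amounts to renaming the environment variable in the DaemonSet's container spec. As a sketch (the DaemonSet and container names vary by install, so treat them as placeholders), the corrected env entry looks like:

```yaml
# Excerpt of the device-plugin DaemonSet container spec (names are illustrative)
env:
  - name: NODE_NAME            # was NodeName in v2.3.9
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName   # injects the name of the node the pod runs on
```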
diff --git a/versioned_docs/version-v2.5.0/userguide/cambricon-device/enable-cambricon-mlu-sharing.md b/versioned_docs/version-v2.5.0/userguide/cambricon-device/enable-cambricon-mlu-sharing.md
index ae498abe..40cbeda5 100644
--- a/versioned_docs/version-v2.5.0/userguide/cambricon-device/enable-cambricon-mlu-sharing.md
+++ b/versioned_docs/version-v2.5.0/userguide/cambricon-device/enable-cambricon-mlu-sharing.md
@@ -4,7 +4,7 @@ title: Enable cambricon MLU sharing
## Introduction
-**We now support cambricon.com/mlu by implementing most device-sharing features as nvidia-GPU**, including:
+**HAMi now supports cambricon.com/mlu by implementing most device-sharing features as nvidia-GPU**, including:
***MLU sharing***: Each task can allocate a portion of MLU instead of a whole MLU card, thus MLU can be shared among multiple tasks.
diff --git a/versioned_docs/version-v2.5.0/userguide/configure.md b/versioned_docs/version-v2.5.0/userguide/configure.md
index 8e3a34f3..536c889e 100644
--- a/versioned_docs/version-v2.5.0/userguide/configure.md
+++ b/versioned_docs/version-v2.5.0/userguide/configure.md
@@ -18,7 +18,7 @@ You can update these configurations using one of the following methods:
2. Modify Helm Chart: Update the corresponding values in the [ConfigMap](https://raw.githubusercontent.com/archlitchi/HAMi/refs/heads/master/charts/hami/templates/scheduler/device-configmap.yaml), then reapply the Helm Chart to regenerate the ConfigMap.
* `nvidia.deviceMemoryScaling:`
- Float type, by default: 1. The ratio for NVIDIA device memory scaling, can be greater than 1 (enable virtual device memory, experimental feature). For NVIDIA GPU with *M* memory, if we set `nvidia.deviceMemoryScaling` argument to *S*, vGPUs split by this GPU will totally get `S * M` memory in Kubernetes with our device plugin.
+ Float type, by default: 1. The ratio for NVIDIA device memory scaling, can be greater than 1 (enable virtual device memory, experimental feature). For an NVIDIA GPU with *M* memory, if `nvidia.deviceMemoryScaling` is set to *S*, vGPUs split from this GPU will get a total of `S * M` memory in Kubernetes with the HAMi device plugin.
* `nvidia.deviceSplitCount:`
Integer type, by default: equals 10. Maximum tasks assigned to a simple GPU device.
* `nvidia.migstrategy:`
diff --git a/versioned_docs/version-v2.5.0/userguide/enflame-device/enable-enflame-gpu-sharing.md b/versioned_docs/version-v2.5.0/userguide/enflame-device/enable-enflame-gpu-sharing.md
index 4761c98c..3c0efd2a 100644
--- a/versioned_docs/version-v2.5.0/userguide/enflame-device/enable-enflame-gpu-sharing.md
+++ b/versioned_docs/version-v2.5.0/userguide/enflame-device/enable-enflame-gpu-sharing.md
@@ -4,11 +4,11 @@ title: Enable Enflame GCU sharing
## Introduction
-**We now support sharing on enflame.com/gcu(i.e S60) by implementing most device-sharing features as nvidia-GPU**, including:
+**HAMi now supports sharing on enflame.com/gcu(i.e S60) by implementing most device-sharing features as nvidia-GPU**, including:
***GCU sharing***: Each task can allocate a portion of GCU instead of a whole GCU card, thus GCU can be shared among multiple tasks.
-***Device Memory and Core Control***: GCUs can be allocated with certain percentage of device memory and core, we make sure that it does not exceed the boundary.
+***Device Memory and Core Control***: GCUs can be allocated with certain percentage of device memory and core, HAMi ensures it does not exceed the boundary.
***Device UUID Selection***: You can specify which GCU devices to use or exclude using annotations.
@@ -16,7 +16,7 @@ title: Enable Enflame GCU sharing
## Prerequisites
-* Enflame gcushare-device-plugin >= 2.1.6 (please consult your device provider, gcushare has two components: gcushare-scheduler-plugin and gcushare-device-plugin, we only need gcushare-device-plugin here )
+* Enflame gcushare-device-plugin >= 2.1.6 (please consult your device provider, gcushare has two components: gcushare-scheduler-plugin and gcushare-device-plugin, only gcushare-device-plugin is needed here )
* driver version >= 1.2.3.14
* kubernetes >= 1.24
* enflame-container-toolkit >=2.0.50
diff --git a/versioned_docs/version-v2.5.0/userguide/hygon-device/enable-hygon-dcu-sharing.md b/versioned_docs/version-v2.5.0/userguide/hygon-device/enable-hygon-dcu-sharing.md
index a90f4086..ef143119 100644
--- a/versioned_docs/version-v2.5.0/userguide/hygon-device/enable-hygon-dcu-sharing.md
+++ b/versioned_docs/version-v2.5.0/userguide/hygon-device/enable-hygon-dcu-sharing.md
@@ -4,7 +4,7 @@ title: Enable Hygon DCU sharing
## Introduction
-**We now support hygon.com/dcu by implementing most device-sharing features as nvidia-GPU**, including:
+**HAMi now supports hygon.com/dcu by implementing most device-sharing features as nvidia-GPU**, including:
***DCU sharing***: Each task can allocate a portion of DCU instead of a whole DCU card, thus DCU can be shared among multiple tasks.
diff --git a/versioned_docs/version-v2.5.0/userguide/iluvatar-device/enable-iluvatar-gpu-sharing.md b/versioned_docs/version-v2.5.0/userguide/iluvatar-device/enable-iluvatar-gpu-sharing.md
index 615adcce..cb0d0f04 100644
--- a/versioned_docs/version-v2.5.0/userguide/iluvatar-device/enable-iluvatar-gpu-sharing.md
+++ b/versioned_docs/version-v2.5.0/userguide/iluvatar-device/enable-iluvatar-gpu-sharing.md
@@ -4,7 +4,7 @@ title: Enable Iluvatar GCU sharing
## Introduction
-**We now support iluvatar.ai/gpu(i.e MR-V100、BI-V150、BI-V100) by implementing most device-sharing features as nvidia-GPU**, including:
+**HAMi now supports iluvatar.ai/gpu(i.e MR-V100、BI-V150、BI-V100) by implementing most device-sharing features as nvidia-GPU**, including:
***GPU sharing***: Each task can allocate a portion of GPU instead of a whole GPU card, thus GPU can be shared among multiple tasks.
diff --git a/versioned_docs/version-v2.5.0/userguide/metax-device/enable-metax-gpu-schedule.md b/versioned_docs/version-v2.5.0/userguide/metax-device/enable-metax-gpu-schedule.md
index 130992f4..5dc21839 100644
--- a/versioned_docs/version-v2.5.0/userguide/metax-device/enable-metax-gpu-schedule.md
+++ b/versioned_docs/version-v2.5.0/userguide/metax-device/enable-metax-gpu-schedule.md
@@ -2,7 +2,7 @@
title: Enable Metax GPU topology-aware scheduling
---
-**We now support metax.com/gpu by implementing topo-awareness among metax GPUs**:
+**HAMi now supports metax.com/gpu by implementing topo-awareness among metax GPUs**:
When multiple GPUs are configured on a single server, the GPU cards are connected to the same PCIe Switch or MetaXLink depending on whether they are connected
, there is a near-far relationship. This forms a topology among all the cards on the server, as shown in the following figure:
diff --git a/versioned_docs/version-v2.5.0/userguide/metax-device/enable-metax-gpu-sharing.md b/versioned_docs/version-v2.5.0/userguide/metax-device/enable-metax-gpu-sharing.md
index 479c454d..2d059e56 100644
--- a/versioned_docs/version-v2.5.0/userguide/metax-device/enable-metax-gpu-sharing.md
+++ b/versioned_docs/version-v2.5.0/userguide/metax-device/enable-metax-gpu-sharing.md
@@ -4,7 +4,7 @@ title: Enable Metax GPU sharing
## Introduction
-We support metax.com/gpu as follows:
+HAMi supports metax.com/gpu as follows:
- support metax.com/gpu by implementing most device-sharing features as nvidia-GPU
- support metax.com/gpu by implementing topo-awareness among metax GPUs
diff --git a/versioned_docs/version-v2.5.0/userguide/mthreads-device/enable-mthreads-gpu-sharing.md b/versioned_docs/version-v2.5.0/userguide/mthreads-device/enable-mthreads-gpu-sharing.md
index 9942c982..5e625e81 100644
--- a/versioned_docs/version-v2.5.0/userguide/mthreads-device/enable-mthreads-gpu-sharing.md
+++ b/versioned_docs/version-v2.5.0/userguide/mthreads-device/enable-mthreads-gpu-sharing.md
@@ -4,7 +4,7 @@ title: Enable Mthreads GPU sharing
## Introduction
-**We now support mthreads.com/vgpu by implementing most device-sharing features as nvidia-GPU**, including:
+**HAMi now supports mthreads.com/vgpu by implementing most device-sharing features as nvidia-GPU**, including:
***GPU sharing***: Each task can allocate a portion of GPU instead of a whole GPU card, thus GPU can be shared among multiple tasks.
diff --git a/versioned_docs/version-v2.5.0/userguide/nvidia-device/dynamic-mig-support.md b/versioned_docs/version-v2.5.0/userguide/nvidia-device/dynamic-mig-support.md
index 13dd62d1..df06c2b8 100644
--- a/versioned_docs/version-v2.5.0/userguide/nvidia-device/dynamic-mig-support.md
+++ b/versioned_docs/version-v2.5.0/userguide/nvidia-device/dynamic-mig-support.md
@@ -4,7 +4,7 @@ title: Enable dynamic-mig feature
## Introduction
-**We now support dynamic-mig by using mig-parted to adjust mig-devices dynamically**, including:
+**HAMi now supports dynamic-mig by using mig-parted to adjust mig-devices dynamically**, including:
***Dynamic MIG instance management***: User don't need to operate on GPU node, using 'nvidia-smi -i 0 -mig 1' or other command to manage MIG instance, all will be done by HAMi-device-plugin.
diff --git a/versioned_docs/version-v2.5.0/userguide/nvidia-device/examples/specify-card-type-to-use.md b/versioned_docs/version-v2.5.0/userguide/nvidia-device/examples/specify-card-type-to-use.md
index 397e984f..946f0e50 100644
--- a/versioned_docs/version-v2.5.0/userguide/nvidia-device/examples/specify-card-type-to-use.md
+++ b/versioned_docs/version-v2.5.0/userguide/nvidia-device/examples/specify-card-type-to-use.md
@@ -24,4 +24,4 @@ spec:
nvidia.com/gpu: 2 # requesting 2 vGPUs
```
-> **NOTICE:** * You can assign this task to multiple GPU types, use comma to separate,In this example, we want to run this job on A100 or V100*
+> **NOTICE:** *You can assign this task to multiple GPU types; use a comma to separate them. In this example, the job targets A100 or V100.*
diff --git a/versioned_docs/version-v2.5.1/contributor/adopters.md b/versioned_docs/version-v2.5.1/contributor/adopters.md
index 8c521c76..0d87e5eb 100644
--- a/versioned_docs/version-v2.5.1/contributor/adopters.md
+++ b/versioned_docs/version-v2.5.1/contributor/adopters.md
@@ -1,12 +1,12 @@
# HAMi Adopters
-So you and your organisation are using HAMi? That's great. We would love to hear from you! 💖
+HAMi is used in production by the organisations listed below.
## Adding yourself
[Here](https://github.com/Project-HAMi/website/blob/master/src/pages/adopters.mdx) lists the organisations who adopted the HAMi project in production.
-You just need to add an entry for your company and upon merging it will automatically be added to our website.
+Add an entry for your company; it will be added to the website once the PR merges.
To add your organisation follow these steps:
@@ -25,4 +25,4 @@ To add your organisation follow these steps:
6. Push the commit with `git push origin main`.
7. Open a Pull Request to [HAMi-io/website](https://github.com/Project-HAMi/website) and a preview build will turn up.
-Thanks a lot for being part of our community - we very much appreciate it!
+Thanks to all adopters for being part of the community!
diff --git a/versioned_docs/version-v2.5.1/contributor/contribute-docs.md b/versioned_docs/version-v2.5.1/contributor/contribute-docs.md
index f095e61a..6b2ba2d5 100644
--- a/versioned_docs/version-v2.5.1/contributor/contribute-docs.md
+++ b/versioned_docs/version-v2.5.1/contributor/contribute-docs.md
@@ -9,14 +9,14 @@ the `Project-HAMi/website` repository.
## Prerequisites
- Docs, like codes, are also categorized and stored by version.
- 1.3 is the first version we have archived.
+ 1.3 is the first archived version.
- Docs need to be translated into multiple languages for readers from different regions.
The community now supports both Chinese and English.
English is the official language of documentation.
-- For our docs we use markdown. If you are unfamiliar with Markdown,
+- The docs use markdown. If you are unfamiliar with Markdown,
please see [https://guides.github.com/features/mastering-markdown/](https://guides.github.com/features/mastering-markdown/) or
[https://www.markdownguide.org/](https://www.markdownguide.org/) if you are looking for something more substantial.
-- We get some additions through [Docusaurus 2](https://docusaurus.io/), a model static website generator.
+- The site uses [Docusaurus 2](https://docusaurus.io/), a modern static site generator.
## Setup
@@ -88,7 +88,7 @@ title: A doc with tags
```
The top section between two lines of --- is the Front Matter section.
-Here we define a couple of entries which tell Docusaurus how to handle the article:
+These entries tell Docusaurus how to handle the article:
- Title is the equivalent of the `` in a HTML document or `# ` in a Markdown article.
- Each document has a unique ID. By default, a document ID is the name of the document
@@ -106,7 +106,7 @@ You can easily route to other places by adding any of the following links:
You can use relative paths to index the corresponding files.
- Link to pictures or other resources.
If your article contains images, prefer storing them in `/static/img/docs/` and linking
- with absolute paths. We use language-aware folders:
+ with absolute paths. Language-aware folders are used:
- `/static/img/docs/common/` for shared images
- `/static/img/docs/en/` for English-only images
- `/static/img/docs/zh/` for Chinese-only images
@@ -202,6 +202,6 @@ If the previewed page is not what you expected, please check your docs again.
### Versioning
-For the newly supplemented documents of each version, we will synchronize to the latest version
+Newly supplemented documents of each version are synchronized to the latest version
on the release date of each version, and the documents of the old version will not be modified.
-For errata found in the documentation, we will fix it with every release.
+For errata found in the documentation, fixes are applied with every release.
diff --git a/versioned_docs/version-v2.5.1/contributor/contributing.md b/versioned_docs/version-v2.5.1/contributor/contributing.md
index fab6d31c..72f9676d 100644
--- a/versioned_docs/version-v2.5.1/contributor/contributing.md
+++ b/versioned_docs/version-v2.5.1/contributor/contributing.md
@@ -20,7 +20,7 @@ HAMi is a community project driven by its community which strives to promote a h
## Your First Contribution
-We will help you to contribute in different areas like filing issues, developing features, fixing critical bugs and
+Help is available for contributing in areas like filing issues, developing features, fixing critical bugs and
getting your work reviewed and merged.
If you have questions about the development process,
@@ -28,7 +28,7 @@ feel free to [file an issue](https://github.com/Project-HAMi/HAMi/issues/new/cho
## Find something to work on
-We are always in need of help, be it fixing documentation, reporting bugs or writing some code.
+Help is always welcome, whether fixing documentation, reporting bugs, or writing code.
Look at places where you feel best coding practices aren't followed, code refactoring is needed or tests are missing.
Here is how you get started.
@@ -40,18 +40,18 @@ For example, [Project-HAMi/HAMi](https://github.com/Project-HAMi/HAMi) has
[help wanted](https://github.com/Project-HAMi/HAMi/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22) and
[good first issue](https://github.com/Project-HAMi/HAMi/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
labels for issues that should not need deep knowledge of the system.
-We can help new contributors who wish to work on such issues.
+Maintainers can help new contributors who wish to work on such issues.
Another good way to contribute is to find a documentation improvement, such as a missing/broken link.
Please see [Contributor Workflow](#contributor-workflow) below for the workflow.
#### Work on an issue
-When you are willing to take on an issue, just reply on the issue. The maintainer will assign it to you.
+When you are willing to take on an issue, reply on the issue. The maintainer will assign it to you.
### File an Issue
-While we encourage everyone to contribute code, it is also appreciated when someone reports an issue.
+Code contributions are welcome, and bug reports are equally appreciated.
Issues should be filed under the appropriate HAMi sub-repository.
*Example:* a HAMi issue should be opened to [Project-HAMi/HAMi](https://github.com/Project-HAMi/HAMi/issues).
diff --git a/versioned_docs/version-v2.5.1/contributor/github-workflow.md b/versioned_docs/version-v2.5.1/contributor/github-workflow.md
index 8582a392..a362a3b5 100644
--- a/versioned_docs/version-v2.5.1/contributor/github-workflow.md
+++ b/versioned_docs/version-v2.5.1/contributor/github-workflow.md
@@ -110,7 +110,7 @@ in a few cycles.
## Push
-When ready to review (or just to establish an offsite backup of your work),
+When ready to review (or to establish an offsite backup of your work),
push your branch to your fork on `github.com`:
```sh
diff --git a/versioned_docs/version-v2.5.1/contributor/governance.md b/versioned_docs/version-v2.5.1/contributor/governance.md
index f49b23b7..aaf1e568 100644
--- a/versioned_docs/version-v2.5.1/contributor/governance.md
+++ b/versioned_docs/version-v2.5.1/contributor/governance.md
@@ -19,7 +19,7 @@ The HAMi and its leadership embrace the following values:
priority over shipping code or sponsors' organizational goals. Each
contributor participates in the project as an individual.
-* Inclusivity: We innovate through different perspectives and skill sets, which
+* Inclusivity: Innovation comes from different perspectives and skill sets, and this
can only be accomplished in a welcoming and respectful environment.
* Participation: Responsibilities within the project are earned through
diff --git a/versioned_docs/version-v2.5.1/contributor/ladder.md b/versioned_docs/version-v2.5.1/contributor/ladder.md
index 26ea756f..6d93ab1c 100644
--- a/versioned_docs/version-v2.5.1/contributor/ladder.md
+++ b/versioned_docs/version-v2.5.1/contributor/ladder.md
@@ -4,7 +4,7 @@ title: Contributor Ladder
This docs different ways to get involved and level up within the project. You can see different roles within the project in the contributor roles.
-Hello! We are excited that you want to learn more about our project contributor ladder! This contributor ladder outlines the different contributor roles within the project, along with the responsibilities and privileges that come with them. Community members generally start at the first levels of the "ladder" and advance up it as their involvement in the project grows. Our project members are happy to help you advance along the contributor ladder.
+This contributor ladder outlines the different contributor roles within the project, along with the responsibilities and privileges that come with them.
Each of the contributor roles below is organized into lists of three types of things. "Responsibilities" are things that a contributor is expected to do. "Requirements" are qualifications a person needs to meet to be in that role, and "Privileges" are things contributors on that level are entitled to.
@@ -45,7 +45,7 @@ Description: A Contributor contributes directly to the project and adds value to
* Invitations to contributor events
* Eligible to become an Organization Member
-A very special thanks to the [long list of people](https://github.com/Project-HAMi/HAMi/blob/master/AUTHORS.md) who have contributed to and helped maintain the project. We wouldn't be where we are today without your contributions. Thank you! 💖
+A very special thanks to the [long list of people](https://github.com/Project-HAMi/HAMi/blob/master/AUTHORS.md) who have contributed to and helped maintain the project.
As long as you contribute to HAMi, your name will be added to the [HAMi AUTHORS list](https://github.com/Project-HAMi/HAMi/blob/master/AUTHORS.md). If you don't find your name, please contact us to add it.
@@ -140,7 +140,7 @@ The current list of maintainers can be found in the [MAINTAINERS](https://github
New maintainers are added by consensus among the current group of maintainers. This can be done via a private discussion via Slack or email. A majority of maintainers should support the addition of the new person, and no single maintainer should object to adding the new maintainer.
-When adding a new maintainer, we should file a PR to [HAMi](https://github.com/Project-HAMi/HAMi) and update [MAINTAINERS](https://github.com/Project-HAMi/HAMi/blob/master/MAINTAINERS.md). Once this PR is merged, you will become a maintainer of HAMi.
+When adding a new maintainer, file a PR to [HAMi](https://github.com/Project-HAMi/HAMi) and update [MAINTAINERS](https://github.com/Project-HAMi/HAMi/blob/master/MAINTAINERS.md). Once this PR is merged, you will become a maintainer of HAMi.
## Removing Maintainers
diff --git a/versioned_docs/version-v2.5.1/developers/dynamic-mig.md b/versioned_docs/version-v2.5.1/developers/dynamic-mig.md
index fd22875b..9f71a993 100644
--- a/versioned_docs/version-v2.5.1/developers/dynamic-mig.md
+++ b/versioned_docs/version-v2.5.1/developers/dynamic-mig.md
@@ -10,8 +10,8 @@ This feature will not be implemented without the help of @sailorvii.
## Introduction
-The NVIDIA GPU build-in sharing method includes: time-slice, MPS and MIG. The context switch for time slice sharing would waste some time, so we chose the MPS and MIG. The GPU MIG profile is variable, the user could acquire the MIG device in the profile definition, but current implementation only defines the dedicated profile before the user requirement. That limits the usage of MIG. We want to develop an automatic slice plugin and create the slice when the user require it.
-For the scheduling method, node-level binpack and spread will be supported. Referring to the binpack plugin, we consider the CPU, Mem, GPU memory and other user-defined resource.
+The NVIDIA GPU built-in sharing methods include time-slice, MPS and MIG. The context switch for time-slice sharing wastes some time, so MPS and MIG are preferred. The GPU MIG profile is variable: the user could acquire a MIG device matching the profile definition, but the current implementation only defines the dedicated profile ahead of the user's requirement. That limits the usage of MIG. The goal is an automatic slice plugin that creates slices on demand.
+For the scheduling method, node-level binpack and spread will be supported. Referring to the binpack plugin, the scheduler considers CPU, memory, GPU memory, and other user-defined resources.
HAMi is done by using [hami-core](https://github.com/Project-HAMi/HAMi-core), which is a cuda-hacking library. But mig is also widely used across the world. A unified API for dynamic-mig and hami-core is needed.
## Targets
diff --git a/versioned_docs/version-v2.5.1/developers/scheduling.md b/versioned_docs/version-v2.5.1/developers/scheduling.md
index 02270146..7f8ce100 100644
--- a/versioned_docs/version-v2.5.1/developers/scheduling.md
+++ b/versioned_docs/version-v2.5.1/developers/scheduling.md
@@ -8,7 +8,7 @@ Current in a cluster with many GPU nodes, nodes are not `binpack` or `spread` wh
## Proposal
-We add a `node-scheduler-policy` and `gpu-scheduler-policy` to config, then scheduler to use this policy can impl node `binpack` or `spread` or GPU `binpack` or `spread`. and
+Add `node-scheduler-policy` and `gpu-scheduler-policy` to the config; the scheduler uses these policies to implement node-level `binpack` or `spread` and GPU-level `binpack` or `spread`. In addition,
use can set Pod annotation to change this default policy, use `hami.io/node-scheduler-policy` and `hami.io/gpu-scheduler-policy` to overlay scheduler config.
### User Stories
@@ -104,7 +104,7 @@ Node1 score: ((1+3)/4) * 10= 10
Node2 score: ((1+2)/4) * 10= 7.5
```
-So, in `Binpack` policy we can select `Node1`.
+So, in `Binpack` policy, the selected node is `Node1`.
#### Spread
@@ -124,7 +124,7 @@ Node1 score: ((1+3)/4) * 10= 10
Node2 score: ((1+2)/4) * 10= 7.5
```
-So, in `Spread` policy we can select `Node2`.
+So, in `Spread` policy, the selected node is `Node2`.
### GPU-scheduler-policy
@@ -147,7 +147,7 @@ GPU1 Score: ((20+10)/100 + (1000+2000)/8000)) * 10 = 6.75
GPU2 Score: ((20+70)/100 + (1000+6000)/8000)) * 10 = 17.75
```
-So, in `Binpack` policy we can select `GPU2`.
+So, in `Binpack` policy, the selected GPU is `GPU2`.
#### Spread
@@ -166,4 +166,4 @@ GPU1 Score: ((20+10)/100 + (1000+2000)/8000)) * 10 = 6.75
GPU2 Score: ((20+70)/100 + (1000+6000)/8000)) * 10 = 17.75
```
-So, in `Spread` policy we can select `GPU1`.
+So, in `Spread` policy, the selected GPU is `GPU1`.
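The node and GPU scores above follow the same usage-ratio formula. A small sketch (illustrative, not HAMi source code) reproduces the numbers from the examples:

```python
def node_score(used, requested, total):
    # ((used + requested) / total) * 10, as in the node examples above
    return (used + requested) / total * 10

def gpu_score(used_core, req_core, used_mem, req_mem, total_mem):
    # core-usage ratio plus memory-usage ratio, scaled by 10
    return ((used_core + req_core) / 100 + (used_mem + req_mem) / total_mem) * 10

print(node_score(1, 3, 4))                   # → 10.0 (Node1)
print(node_score(1, 2, 4))                   # → 7.5  (Node2)
print(gpu_score(20, 10, 1000, 2000, 8000))   # → 6.75  (GPU1)
print(gpu_score(20, 70, 1000, 6000, 8000))   # → 17.75 (GPU2)
```

`Binpack` selects the candidate with the higher score (more tightly packed); `Spread` selects the lower one.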
diff --git a/versioned_docs/version-v2.5.1/troubleshooting/troubleshooting.md b/versioned_docs/version-v2.5.1/troubleshooting/troubleshooting.md
index f4e9a15a..f99bfb8a 100644
--- a/versioned_docs/version-v2.5.1/troubleshooting/troubleshooting.md
+++ b/versioned_docs/version-v2.5.1/troubleshooting/troubleshooting.md
@@ -6,6 +6,6 @@ title: Troubleshooting
- Currently, A100 MIG can be supported in only "none" and "mixed" modes.
- Tasks with the "nodeName" field cannot be scheduled at the moment; please use "nodeSelector" instead.
- Only computing tasks are currently supported; video codec processing is not supported.
-- We change `device-plugin` env var name from `NodeName` to `NODE_NAME`, if you use the image version `v2.3.9`, you may encounter the situation that `device-plugin` cannot start, there are two ways to fix it:
+- The `device-plugin` env var name changed from `NodeName` to `NODE_NAME`; if you use image version `v2.3.9`, the `device-plugin` may fail to start. There are two ways to fix it:
- Manually execute `kubectl edit daemonset` to modify the `device-plugin` env var from `NodeName` to `NODE_NAME`.
- Upgrade to the latest version using helm, the latest version of `device-plugin` image version is `v2.3.10`, execute `helm upgrade hami hami/hami -n kube-system`, it will be fixed automatically.
diff --git a/versioned_docs/version-v2.5.1/userguide/cambricon-device/enable-cambricon-mlu-sharing.md b/versioned_docs/version-v2.5.1/userguide/cambricon-device/enable-cambricon-mlu-sharing.md
index ae498abe..40cbeda5 100644
--- a/versioned_docs/version-v2.5.1/userguide/cambricon-device/enable-cambricon-mlu-sharing.md
+++ b/versioned_docs/version-v2.5.1/userguide/cambricon-device/enable-cambricon-mlu-sharing.md
@@ -4,7 +4,7 @@ title: Enable cambricon MLU sharing
## Introduction
-**We now support cambricon.com/mlu by implementing most device-sharing features as nvidia-GPU**, including:
+**HAMi now supports cambricon.com/mlu by implementing most device-sharing features as nvidia-GPU**, including:
***MLU sharing***: Each task can allocate a portion of MLU instead of a whole MLU card, thus MLU can be shared among multiple tasks.
diff --git a/versioned_docs/version-v2.5.1/userguide/hygon-device/enable-hygon-dcu-sharing.md b/versioned_docs/version-v2.5.1/userguide/hygon-device/enable-hygon-dcu-sharing.md
index a90f4086..ef143119 100644
--- a/versioned_docs/version-v2.5.1/userguide/hygon-device/enable-hygon-dcu-sharing.md
+++ b/versioned_docs/version-v2.5.1/userguide/hygon-device/enable-hygon-dcu-sharing.md
@@ -4,7 +4,7 @@ title: Enable Hygon DCU sharing
## Introduction
-**We now support hygon.com/dcu by implementing most device-sharing features as nvidia-GPU**, including:
+**HAMi now supports hygon.com/dcu by implementing most device-sharing features as nvidia-GPU**, including:
***DCU sharing***: Each task can allocate a portion of DCU instead of a whole DCU card, thus DCU can be shared among multiple tasks.
diff --git a/versioned_docs/version-v2.5.1/userguide/metax-device/enable-metax-gpu-schedule.md b/versioned_docs/version-v2.5.1/userguide/metax-device/enable-metax-gpu-schedule.md
index 130992f4..5dc21839 100644
--- a/versioned_docs/version-v2.5.1/userguide/metax-device/enable-metax-gpu-schedule.md
+++ b/versioned_docs/version-v2.5.1/userguide/metax-device/enable-metax-gpu-schedule.md
@@ -2,7 +2,7 @@
title: Enable Metax GPU topology-aware scheduling
---
-**We now support metax.com/gpu by implementing topo-awareness among metax GPUs**:
+**HAMi now supports metax.com/gpu by implementing topo-awareness among metax GPUs**:
When multiple GPUs are configured on a single server, the GPU cards are connected to the same PCIe Switch or MetaXLink depending on whether they are connected
, there is a near-far relationship. This forms a topology among all the cards on the server, as shown in the following figure:
diff --git a/versioned_docs/version-v2.5.1/userguide/mthreads-device/enable-mthreads-gpu-sharing.md b/versioned_docs/version-v2.5.1/userguide/mthreads-device/enable-mthreads-gpu-sharing.md
index 9942c982..5e625e81 100644
--- a/versioned_docs/version-v2.5.1/userguide/mthreads-device/enable-mthreads-gpu-sharing.md
+++ b/versioned_docs/version-v2.5.1/userguide/mthreads-device/enable-mthreads-gpu-sharing.md
@@ -4,7 +4,7 @@ title: Enable Mthreads GPU sharing
## Introduction
-**We now support mthreads.com/vgpu by implementing most device-sharing features as nvidia-GPU**, including:
+**HAMi now supports mthreads.com/vgpu by implementing most device-sharing features as nvidia-GPU**, including:
***GPU sharing***: Each task can allocate a portion of GPU instead of a whole GPU card, thus GPU can be shared among multiple tasks.
diff --git a/versioned_docs/version-v2.5.1/userguide/nvidia-device/examples/specify-card-type-to-use.md b/versioned_docs/version-v2.5.1/userguide/nvidia-device/examples/specify-card-type-to-use.md
index 397e984f..946f0e50 100644
--- a/versioned_docs/version-v2.5.1/userguide/nvidia-device/examples/specify-card-type-to-use.md
+++ b/versioned_docs/version-v2.5.1/userguide/nvidia-device/examples/specify-card-type-to-use.md
@@ -24,4 +24,4 @@ spec:
nvidia.com/gpu: 2 # requesting 2 vGPUs
```
-> **NOTICE:** * You can assign this task to multiple GPU types, use comma to separate,In this example, we want to run this job on A100 or V100*
+> **NOTICE:** *You can assign this task to multiple GPU types; use a comma to separate them. In this example, the job targets A100 or V100.*
diff --git a/versioned_docs/version-v2.6.0/contributor/adopters.md b/versioned_docs/version-v2.6.0/contributor/adopters.md
index 8c521c76..0d87e5eb 100644
--- a/versioned_docs/version-v2.6.0/contributor/adopters.md
+++ b/versioned_docs/version-v2.6.0/contributor/adopters.md
@@ -1,12 +1,12 @@
# HAMi Adopters
-So you and your organisation are using HAMi? That's great. We would love to hear from you! 💖
+HAMi is used in production by the organisations listed below.
## Adding yourself
[Here](https://github.com/Project-HAMi/website/blob/master/src/pages/adopters.mdx) lists the organisations who adopted the HAMi project in production.
-You just need to add an entry for your company and upon merging it will automatically be added to our website.
+Add an entry for your company; it will be added to the website once the PR merges.
To add your organisation follow these steps:
@@ -25,4 +25,4 @@ To add your organisation follow these steps:
6. Push the commit with `git push origin main`.
7. Open a Pull Request to [HAMi-io/website](https://github.com/Project-HAMi/website) and a preview build will turn up.
-Thanks a lot for being part of our community - we very much appreciate it!
+Thanks to all adopters for being part of the community!
diff --git a/versioned_docs/version-v2.6.0/contributor/contribute-docs.md b/versioned_docs/version-v2.6.0/contributor/contribute-docs.md
index f095e61a..6b2ba2d5 100644
--- a/versioned_docs/version-v2.6.0/contributor/contribute-docs.md
+++ b/versioned_docs/version-v2.6.0/contributor/contribute-docs.md
@@ -9,14 +9,14 @@ the `Project-HAMi/website` repository.
## Prerequisites
- Docs, like codes, are also categorized and stored by version.
- 1.3 is the first version we have archived.
+ 1.3 is the first archived version.
- Docs need to be translated into multiple languages for readers from different regions.
The community now supports both Chinese and English.
English is the official language of documentation.
-- For our docs we use markdown. If you are unfamiliar with Markdown,
+- The docs use markdown. If you are unfamiliar with Markdown,
please see [https://guides.github.com/features/mastering-markdown/](https://guides.github.com/features/mastering-markdown/) or
[https://www.markdownguide.org/](https://www.markdownguide.org/) if you are looking for something more substantial.
-- We get some additions through [Docusaurus 2](https://docusaurus.io/), a model static website generator.
+- The site uses [Docusaurus 2](https://docusaurus.io/), a modern static site generator.
## Setup
@@ -88,7 +88,7 @@ title: A doc with tags
```
The top section between two lines of --- is the Front Matter section.
-Here we define a couple of entries which tell Docusaurus how to handle the article:
+These entries tell Docusaurus how to handle the article:
- Title is the equivalent of the `` in a HTML document or `# ` in a Markdown article.
- Each document has a unique ID. By default, a document ID is the name of the document
@@ -106,7 +106,7 @@ You can easily route to other places by adding any of the following links:
You can use relative paths to index the corresponding files.
- Link to pictures or other resources.
If your article contains images, prefer storing them in `/static/img/docs/` and linking
- with absolute paths. We use language-aware folders:
+ with absolute paths. Language-aware folders are used:
- `/static/img/docs/common/` for shared images
- `/static/img/docs/en/` for English-only images
- `/static/img/docs/zh/` for Chinese-only images
@@ -202,6 +202,6 @@ If the previewed page is not what you expected, please check your docs again.
### Versioning
-For the newly supplemented documents of each version, we will synchronize to the latest version
+Newly supplemented documents for each version are synchronized to the latest version
on the release date of each version, and the documents of the old version will not be modified.
-For errata found in the documentation, we will fix it with every release.
+For errata found in the documentation, fixes are applied with every release.
diff --git a/versioned_docs/version-v2.6.0/contributor/contributing.md b/versioned_docs/version-v2.6.0/contributor/contributing.md
index 32af0a21..42a0f175 100644
--- a/versioned_docs/version-v2.6.0/contributor/contributing.md
+++ b/versioned_docs/version-v2.6.0/contributor/contributing.md
@@ -20,7 +20,7 @@ HAMi is a community project driven by its community which strives to promote a h
## Your First Contribution
-We will help you to contribute in different areas like filing issues, developing features, fixing critical bugs and
+Help is available for contributing in areas like filing issues, developing features, fixing critical bugs and
getting your work reviewed and merged.
If you have questions about the development process,
@@ -28,7 +28,7 @@ feel free to [file an issue](https://github.com/Project-HAMi/HAMi/issues/new/cho
## Find something to work on
-We are always in need of help, be it fixing documentation, reporting bugs or writing some code.
+Help is always welcome, whether it is fixing documentation, reporting bugs, or writing code.
Look at places where you feel best coding practices aren't followed, code refactoring is needed or tests are missing.
Here is how you get started.
@@ -40,18 +40,18 @@ For example, [Project-HAMi/HAMi](https://github.com/Project-HAMi/HAMi) has
[help wanted](https://github.com/Project-HAMi/HAMi/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22) and
[good first issue](https://github.com/Project-HAMi/HAMi/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
labels for issues that should not need deep knowledge of the system.
-We can help new contributors who wish to work on such issues.
+Maintainers can help new contributors who wish to work on such issues.
Another good way to contribute is to find a documentation improvement, such as a missing/broken link.
Please see [Contributor Workflow](#contributor-workflow) below for the workflow.
### Work on an issue
-When you are willing to take on an issue, just reply on the issue. The maintainer will assign it to you.
+When you are willing to take on an issue, reply on the issue. The maintainer will assign it to you.
## File an Issue
-While we encourage everyone to contribute code, it is also appreciated when someone reports an issue.
+Code contributions are welcome, and bug reports are equally appreciated.
Issues should be filed under the appropriate HAMi sub-repository.
*Example:* a HAMi issue should be opened to [Project-HAMi/HAMi](https://github.com/Project-HAMi/HAMi/issues).
diff --git a/versioned_docs/version-v2.6.0/contributor/github-workflow.md b/versioned_docs/version-v2.6.0/contributor/github-workflow.md
index 8582a392..a362a3b5 100644
--- a/versioned_docs/version-v2.6.0/contributor/github-workflow.md
+++ b/versioned_docs/version-v2.6.0/contributor/github-workflow.md
@@ -110,7 +110,7 @@ in a few cycles.
## Push
-When ready to review (or just to establish an offsite backup of your work),
+When ready to review (or to establish an offsite backup of your work),
push your branch to your fork on `github.com`:
```sh
diff --git a/versioned_docs/version-v2.6.0/contributor/governance.md b/versioned_docs/version-v2.6.0/contributor/governance.md
index f49b23b7..aaf1e568 100644
--- a/versioned_docs/version-v2.6.0/contributor/governance.md
+++ b/versioned_docs/version-v2.6.0/contributor/governance.md
@@ -19,7 +19,7 @@ The HAMi and its leadership embrace the following values:
priority over shipping code or sponsors' organizational goals. Each
contributor participates in the project as an individual.
-* Inclusivity: We innovate through different perspectives and skill sets, which
+* Inclusivity: Innovation comes from different perspectives and skill sets, and this
can only be accomplished in a welcoming and respectful environment.
* Participation: Responsibilities within the project are earned through
diff --git a/versioned_docs/version-v2.6.0/contributor/ladder.md b/versioned_docs/version-v2.6.0/contributor/ladder.md
index 26ea756f..6d93ab1c 100644
--- a/versioned_docs/version-v2.6.0/contributor/ladder.md
+++ b/versioned_docs/version-v2.6.0/contributor/ladder.md
@@ -4,7 +4,7 @@ title: Contributor Ladder
This docs different ways to get involved and level up within the project. You can see different roles within the project in the contributor roles.
-Hello! We are excited that you want to learn more about our project contributor ladder! This contributor ladder outlines the different contributor roles within the project, along with the responsibilities and privileges that come with them. Community members generally start at the first levels of the "ladder" and advance up it as their involvement in the project grows. Our project members are happy to help you advance along the contributor ladder.
+This contributor ladder outlines the different contributor roles within the project, along with the responsibilities and privileges that come with them.
Each of the contributor roles below is organized into lists of three types of things. "Responsibilities" are things that a contributor is expected to do. "Requirements" are qualifications a person needs to meet to be in that role, and "Privileges" are things contributors on that level are entitled to.
@@ -45,7 +45,7 @@ Description: A Contributor contributes directly to the project and adds value to
* Invitations to contributor events
* Eligible to become an Organization Member
-A very special thanks to the [long list of people](https://github.com/Project-HAMi/HAMi/blob/master/AUTHORS.md) who have contributed to and helped maintain the project. We wouldn't be where we are today without your contributions. Thank you! 💖
+A very special thanks to the [long list of people](https://github.com/Project-HAMi/HAMi/blob/master/AUTHORS.md) who have contributed to and helped maintain the project.
As long as you contribute to HAMi, your name will be added to the [HAMi AUTHORS list](https://github.com/Project-HAMi/HAMi/blob/master/AUTHORS.md). If you don't find your name, please contact us to add it.
@@ -140,7 +140,7 @@ The current list of maintainers can be found in the [MAINTAINERS](https://github
New maintainers are added by consensus among the current group of maintainers. This can be done via a private discussion via Slack or email. A majority of maintainers should support the addition of the new person, and no single maintainer should object to adding the new maintainer.
-When adding a new maintainer, we should file a PR to [HAMi](https://github.com/Project-HAMi/HAMi) and update [MAINTAINERS](https://github.com/Project-HAMi/HAMi/blob/master/MAINTAINERS.md). Once this PR is merged, you will become a maintainer of HAMi.
+When adding a new maintainer, file a PR to [HAMi](https://github.com/Project-HAMi/HAMi) and update [MAINTAINERS](https://github.com/Project-HAMi/HAMi/blob/master/MAINTAINERS.md). Once this PR is merged, you will become a maintainer of HAMi.
## Removing Maintainers
diff --git a/versioned_docs/version-v2.6.0/developers/dynamic-mig.md b/versioned_docs/version-v2.6.0/developers/dynamic-mig.md
index fd22875b..9f71a993 100644
--- a/versioned_docs/version-v2.6.0/developers/dynamic-mig.md
+++ b/versioned_docs/version-v2.6.0/developers/dynamic-mig.md
@@ -10,8 +10,8 @@ This feature will not be implemented without the help of @sailorvii.
## Introduction
-The NVIDIA GPU build-in sharing method includes: time-slice, MPS and MIG. The context switch for time slice sharing would waste some time, so we chose the MPS and MIG. The GPU MIG profile is variable, the user could acquire the MIG device in the profile definition, but current implementation only defines the dedicated profile before the user requirement. That limits the usage of MIG. We want to develop an automatic slice plugin and create the slice when the user require it.
-For the scheduling method, node-level binpack and spread will be supported. Referring to the binpack plugin, we consider the CPU, Mem, GPU memory and other user-defined resource.
+NVIDIA GPUs have built-in sharing methods: time-slicing, MPS, and MIG. Because the context switch in time-slice sharing wastes time, MPS and MIG are preferred. MIG profiles are variable and users could request a MIG device by profile definition, but the current implementation only defines dedicated profiles ahead of user requests, which limits the usage of MIG. The goal is an automatic slice plugin that creates slices on demand.
+For the scheduling method, node-level binpack and spread will be supported. Referring to the binpack plugin, the scheduler considers CPU, memory, GPU memory, and other user-defined resources.
HAMi is done by using [hami-core](https://github.com/Project-HAMi/HAMi-core), which is a cuda-hacking library. But mig is also widely used across the world. A unified API for dynamic-mig and hami-core is needed.
## Targets
diff --git a/versioned_docs/version-v2.6.0/developers/scheduling.md b/versioned_docs/version-v2.6.0/developers/scheduling.md
index d80d3957..04b8c784 100644
--- a/versioned_docs/version-v2.6.0/developers/scheduling.md
+++ b/versioned_docs/version-v2.6.0/developers/scheduling.md
@@ -8,7 +8,7 @@ Current in a cluster with many GPU nodes, nodes are not `binpack` or `spread` wh
## Proposal
-We add a `node-scheduler-policy` and `gpu-scheduler-policy` to config, then scheduler to use this policy can impl node `binpack` or `spread` or GPU `binpack` or `spread` or `topology-aware`. The `topology-aware` policy only takes effect with Nvidia GPUs.
+A `node-scheduler-policy` and a `gpu-scheduler-policy` are added to the scheduler config. Using these policies, the scheduler can apply node-level `binpack` or `spread`, and GPU-level `binpack`, `spread`, or `topology-aware` scheduling. The `topology-aware` policy only takes effect with NVIDIA GPUs.
User can set Pod annotation to change this default policy, use `hami.io/node-scheduler-policy` and `hami.io/gpu-scheduler-policy` to overlay scheduler config.
@@ -105,7 +105,7 @@ Node1 score: ((1+3)/4) * 10= 10
Node2 score: ((1+2)/4) * 10= 7.5
```
-So, in `Binpack` policy we can select `Node1`.
+So, in `Binpack` policy, the selected node is `Node1`.
#### Spread
@@ -125,7 +125,7 @@ Node1 score: ((1+3)/4) * 10= 10
Node2 score: ((1+2)/4) * 10= 7.5
```
-So, in `Spread` policy we can select `Node2`.
+So, in `Spread` policy, the selected node is `Node2`.
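The node scores above can be sketched as a small calculation. This is a hypothetical illustration of the formula implied by the examples (a node's score is its allocated-plus-requested resources over capacity, times 10), not HAMi's actual code:

```python
# Hypothetical sketch of the node scoring implied by the examples above;
# not HAMi's actual implementation.
def node_score(used, requested, capacity):
    # score = ((requested + used) / capacity) * 10
    return (used + requested) / capacity * 10

scores = {
    "Node1": node_score(used=3, requested=1, capacity=4),  # ((1+3)/4) * 10 = 10
    "Node2": node_score(used=2, requested=1, capacity=4),  # ((1+2)/4) * 10 = 7.5
}

binpack_choice = max(scores, key=scores.get)  # Binpack prefers the fuller node
spread_choice = min(scores, key=scores.get)   # Spread prefers the emptier node
```

Under this reading, `Binpack` selects `Node1` and `Spread` selects `Node2`, matching the results above.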
### GPU-scheduler-policy
@@ -148,7 +148,7 @@ GPU1 Score: ((20+10)/100 + (1000+2000)/8000)) * 10 = 6.75
GPU2 Score: ((20+70)/100 + (1000+6000)/8000)) * 10 = 17.75
```
-So, in `Binpack` policy we can select `GPU2`.
+So, in `Binpack` policy, the selected GPU is `GPU2`.
#### Spread
@@ -167,7 +167,7 @@ GPU1 Score: ((20+10)/100 + (1000+2000)/8000)) * 10 = 6.75
GPU2 Score: ((20+70)/100 + (1000+6000)/8000)) * 10 = 17.75
```
-So, in `Spread` policy we can select `GPU1`.
+So, in `Spread` policy, the selected GPU is `GPU1`.
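The GPU scores above follow the same pattern, combining core and memory utilization. The formula below is reconstructed from the example numbers and is a sketch, not HAMi's actual code:

```python
# Hypothetical sketch of the GPU scoring implied by the examples above
# (formula reconstructed from the numbers; not HAMi's actual implementation):
# score = ((coreUsed + coreReq) / 100 + (memUsed + memReq) / memTotal) * 10
def gpu_score(core_used, core_req, mem_used, mem_req, mem_total):
    return ((core_used + core_req) / 100 + (mem_used + mem_req) / mem_total) * 10

# Example figures from the document: a request of 20% core and 1000 MB memory
gpu1 = gpu_score(core_used=10, core_req=20, mem_used=2000, mem_req=1000, mem_total=8000)
gpu2 = gpu_score(core_used=70, core_req=20, mem_used=6000, mem_req=1000, mem_total=8000)
```

Here `gpu1` evaluates to 6.75 and `gpu2` to 17.75; `Binpack` picks the higher-scoring GPU2 and `Spread` the lower-scoring GPU1.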
#### Topology-aware
@@ -231,7 +231,7 @@ gpu2 score: 100 + 200 + 200 = 500
gpu3 score: 200 + 100 + 200 = 500
```
-Therefore, when a **Pod requests only one GPU**, we randomly select either **gpu0** or **gpu1**.
+Therefore, when a **Pod requests only one GPU**, the scheduler randomly selects either **gpu0** or **gpu1**.
###### More than one GPU
@@ -253,4 +253,4 @@ For example: If a Pod requests 3 GPUs, take **gpu0, gpu1, gpu2** as an example.
(gpu1, gpu2, gpu3) totalScore: 200 + 100 + 200 = 500
```
-Therefore, when a **Pod requests 3 GPUs**, we allocate **gpu1, gpu2, gpu3**.
+Therefore, when a **Pod requests 3 GPUs**, the scheduler allocates **gpu1, gpu2, gpu3**.
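The multi-GPU case can be sketched as scoring every candidate combination by the sum of its internal pairwise link scores. The link values below are made-up placeholders (the full score matrix is not given here), and selecting the highest-scoring combination is an assumption for illustration:

```python
# Illustrative sketch only: the pairwise link scores are hypothetical
# placeholders, and the max-score selection rule is an assumption.
from itertools import combinations

link = {  # hypothetical symmetric link scores between GPU pairs
    frozenset({"gpu0", "gpu1"}): 300,
    frozenset({"gpu0", "gpu2"}): 100,
    frozenset({"gpu0", "gpu3"}): 200,
    frozenset({"gpu1", "gpu2"}): 200,
    frozenset({"gpu1", "gpu3"}): 100,
    frozenset({"gpu2", "gpu3"}): 200,
}

def combo_score(combo):
    # Total score of a GPU set = sum of link scores over all pairs inside it
    return sum(link[frozenset(pair)] for pair in combinations(combo, 2))

all_combos = list(combinations(["gpu0", "gpu1", "gpu2", "gpu3"], 3))
best = max(all_combos, key=combo_score)
```

With these placeholder values, `combo_score(("gpu1", "gpu2", "gpu3"))` is 500, matching the arithmetic shown above for that combination.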
diff --git a/versioned_docs/version-v2.6.0/faq/faq.md b/versioned_docs/version-v2.6.0/faq/faq.md
index 6a9b72f4..86677e1e 100644
--- a/versioned_docs/version-v2.6.0/faq/faq.md
+++ b/versioned_docs/version-v2.6.0/faq/faq.md
@@ -42,7 +42,7 @@ A vGPU is a logical instance of a physical GPU created using virtualization, all
4. **Design Intent**
The design of vGPU aims to **allow one GPU to be shared by multiple tasks**, rather than letting one task occupy multiple vGPUs on the same GPU. The purpose of vGPU overcommitment is to improve GPU utilization, not to increase resource allocation for individual tasks.
-## HAMi's `nvidia.com/priority` field only supports two levels. How can we implement multi-level, user-defined priority-based scheduling for a queue of jobs, especially when cluster resources are limited?
+## HAMi's `nvidia.com/priority` field only supports two levels. How can you implement multi-level, user-defined priority-based scheduling for a queue of jobs, especially when cluster resources are limited?
**TL;DR**
diff --git a/versioned_docs/version-v2.6.0/troubleshooting-copy/troubleshooting.md b/versioned_docs/version-v2.6.0/troubleshooting-copy/troubleshooting.md
index f4e9a15a..f99bfb8a 100644
--- a/versioned_docs/version-v2.6.0/troubleshooting-copy/troubleshooting.md
+++ b/versioned_docs/version-v2.6.0/troubleshooting-copy/troubleshooting.md
@@ -6,6 +6,6 @@ title: Troubleshooting
- Currently, A100 MIG can be supported in only "none" and "mixed" modes.
- Tasks with the "nodeName" field cannot be scheduled at the moment; please use "nodeSelector" instead.
- Only computing tasks are currently supported; video codec processing is not supported.
-- We change `device-plugin` env var name from `NodeName` to `NODE_NAME`, if you use the image version `v2.3.9`, you may encounter the situation that `device-plugin` cannot start, there are two ways to fix it:
+- The `device-plugin` env var name changed from `NodeName` to `NODE_NAME`. If you use image version `v2.3.9`, `device-plugin` may fail to start; there are two ways to fix it:
- Manually execute `kubectl edit daemonset` to modify the `device-plugin` env var from `NodeName` to `NODE_NAME`.
- Upgrade to the latest version using helm, the latest version of `device-plugin` image version is `v2.3.10`, execute `helm upgrade hami hami/hami -n kube-system`, it will be fixed automatically.
diff --git a/versioned_docs/version-v2.6.0/troubleshooting/troubleshooting.md b/versioned_docs/version-v2.6.0/troubleshooting/troubleshooting.md
index f4e9a15a..f99bfb8a 100644
--- a/versioned_docs/version-v2.6.0/troubleshooting/troubleshooting.md
+++ b/versioned_docs/version-v2.6.0/troubleshooting/troubleshooting.md
@@ -6,6 +6,6 @@ title: Troubleshooting
- Currently, A100 MIG can be supported in only "none" and "mixed" modes.
- Tasks with the "nodeName" field cannot be scheduled at the moment; please use "nodeSelector" instead.
- Only computing tasks are currently supported; video codec processing is not supported.
-- We change `device-plugin` env var name from `NodeName` to `NODE_NAME`, if you use the image version `v2.3.9`, you may encounter the situation that `device-plugin` cannot start, there are two ways to fix it:
+- The `device-plugin` env var name changed from `NodeName` to `NODE_NAME`. If you use image version `v2.3.9`, `device-plugin` may fail to start; there are two ways to fix it:
- Manually execute `kubectl edit daemonset` to modify the `device-plugin` env var from `NodeName` to `NODE_NAME`.
- Upgrade to the latest version using helm, the latest version of `device-plugin` image version is `v2.3.10`, execute `helm upgrade hami hami/hami -n kube-system`, it will be fixed automatically.
diff --git a/versioned_docs/version-v2.6.0/userguide/cambricon-device/enable-cambricon-mlu-sharing.md b/versioned_docs/version-v2.6.0/userguide/cambricon-device/enable-cambricon-mlu-sharing.md
index d5fe450d..cd176afa 100644
--- a/versioned_docs/version-v2.6.0/userguide/cambricon-device/enable-cambricon-mlu-sharing.md
+++ b/versioned_docs/version-v2.6.0/userguide/cambricon-device/enable-cambricon-mlu-sharing.md
@@ -4,7 +4,7 @@ title: Enable cambricon MLU sharing
## Introduction
-**We now support cambricon.com/mlu by implementing most device-sharing features as nvidia-GPU**, including:
+**HAMi now supports cambricon.com/mlu by implementing most of the device-sharing features available on NVIDIA GPUs**, including:
***MLU sharing***: Each task can allocate a portion of MLU instead of a whole MLU card, thus MLU can be shared among multiple tasks.
@@ -38,7 +38,7 @@ cnmon set -c 1 -smlu on
These two parameters represent enabling the dynamic smlu function and setting the minimum allocable memory unit to 256 MB, respectively. You can refer to the document from device provider for more details
-* Deploy the cambricon-device-plugin you just specified
+* Deploy the cambricon-device-plugin configured above
```
kubectl apply -f cambricon-device-plugin-daemonset.yaml
diff --git a/versioned_docs/version-v2.6.0/userguide/enflame-device/enable-enflame-gcu-sharing.md b/versioned_docs/version-v2.6.0/userguide/enflame-device/enable-enflame-gcu-sharing.md
index 3f3eecbf..4176d149 100644
--- a/versioned_docs/version-v2.6.0/userguide/enflame-device/enable-enflame-gcu-sharing.md
+++ b/versioned_docs/version-v2.6.0/userguide/enflame-device/enable-enflame-gcu-sharing.md
@@ -5,11 +5,11 @@ title: Enable Enflame GPU Sharing
## Introduction
-**We now support sharing on enflame.com/gcu(i.e S60) by implementing most device-sharing features as nvidia-GPU**, including:
+**HAMi now supports sharing on enflame.com/gcu (i.e. S60) by implementing most of the device-sharing features available on NVIDIA GPUs**, including:
***GCU sharing***: Each task can allocate a portion of GCU instead of a whole GCU card, thus GCU can be shared among multiple tasks.
-***Device Memory and Core Control***: GCUs can be allocated with certain percentage of device memory and core, we make sure that it does not exceed the boundary.
+***Device Memory and Core Control***: GCUs can be allocated a certain percentage of device memory and cores; HAMi ensures the allocation does not exceed that boundary.
***Device UUID Selection***: You can specify which GCU devices to use or exclude using annotations.
@@ -17,7 +17,7 @@ title: Enable Enflame GPU Sharing
## Prerequisites
-* Enflame gcushare-device-plugin >= 2.1.6 (please consult your device provider, gcushare has two components: gcushare-scheduler-plugin and gcushare-device-plugin, we only need gcushare-device-plugin here )
+* Enflame gcushare-device-plugin >= 2.1.6 (please consult your device provider; gcushare has two components, gcushare-scheduler-plugin and gcushare-device-plugin, and only gcushare-device-plugin is needed here)
* driver version >= 1.2.3.14
* kubernetes >= 1.24
* enflame-container-toolkit >=2.0.50
diff --git a/versioned_docs/version-v2.6.0/userguide/hygon-device/enable-hygon-dcu-sharing.md b/versioned_docs/version-v2.6.0/userguide/hygon-device/enable-hygon-dcu-sharing.md
index 292040f3..f4dd8aed 100644
--- a/versioned_docs/version-v2.6.0/userguide/hygon-device/enable-hygon-dcu-sharing.md
+++ b/versioned_docs/version-v2.6.0/userguide/hygon-device/enable-hygon-dcu-sharing.md
@@ -4,7 +4,7 @@ title: Enable Hygon DCU sharing
## Introduction
-**We now support hygon.com/dcu by implementing most device-sharing features as nvidia-GPU**, including:
+**HAMi now supports hygon.com/dcu by implementing most of the device-sharing features available on NVIDIA GPUs**, including:
***DCU sharing***: Each task can allocate a portion of DCU instead of a whole DCU card, thus DCU can be shared among multiple tasks.
diff --git a/versioned_docs/version-v2.6.0/userguide/iluvatar-device/enable-illuvatar-gpu-sharing.md b/versioned_docs/version-v2.6.0/userguide/iluvatar-device/enable-illuvatar-gpu-sharing.md
index 1de8daac..c1b5acc3 100644
--- a/versioned_docs/version-v2.6.0/userguide/iluvatar-device/enable-illuvatar-gpu-sharing.md
+++ b/versioned_docs/version-v2.6.0/userguide/iluvatar-device/enable-illuvatar-gpu-sharing.md
@@ -5,7 +5,7 @@ title: Enable Illuvatar GPU Sharing
## Introduction
-**We now support iluvatar.ai/gpu(i.e MR-V100、BI-V150、BI-V100) by implementing most device-sharing features as nvidia-GPU**, including:
+**HAMi now supports iluvatar.ai/gpu (i.e. MR-V100, BI-V150, BI-V100) by implementing most of the device-sharing features available on NVIDIA GPUs**, including:
***GPU sharing***: Each task can allocate a portion of GPU instead of a whole GPU card, thus GPU can be shared among multiple tasks.
diff --git a/versioned_docs/version-v2.6.0/userguide/metax-device/metax-gpu/enable-metax-gpu-schedule.md b/versioned_docs/version-v2.6.0/userguide/metax-device/metax-gpu/enable-metax-gpu-schedule.md
index da0b5726..0cdf6ff6 100644
--- a/versioned_docs/version-v2.6.0/userguide/metax-device/metax-gpu/enable-metax-gpu-schedule.md
+++ b/versioned_docs/version-v2.6.0/userguide/metax-device/metax-gpu/enable-metax-gpu-schedule.md
@@ -4,7 +4,7 @@ title: Enable Metax GPU topology-aware scheduling
## Introduction
-**we now support metax.com/gpu by implementing topo-awareness among metax GPUs**
+**HAMi now supports metax.com/gpu by implementing topology awareness among Metax GPUs**
When multiple GPUs are configured on a single server, the GPU cards are connected to the same PCIe Switch or MetaXLink depending on whether they are connected
, there is a near-far relationship. This forms a topology among all the cards on the server, as shown in the following figure:
diff --git a/versioned_docs/version-v2.6.0/userguide/metax-device/metax-sgpu/enable-metax-gpu-sharing.md b/versioned_docs/version-v2.6.0/userguide/metax-device/metax-sgpu/enable-metax-gpu-sharing.md
index 0b8d9132..4b55189b 100644
--- a/versioned_docs/version-v2.6.0/userguide/metax-device/metax-sgpu/enable-metax-gpu-sharing.md
+++ b/versioned_docs/version-v2.6.0/userguide/metax-device/metax-sgpu/enable-metax-gpu-sharing.md
@@ -5,7 +5,7 @@ translated: true
## Introduction
-**we now support metax.com/gpu by implementing most device-sharing features as nvidia-GPU**, device-sharing features include the following:
+**HAMi now supports metax.com/gpu by implementing most of the device-sharing features available on NVIDIA GPUs**, including the following:
***GPU sharing***: Each task can allocate a portion of GPU instead of a whole GPU card, thus GPU can be shared among multiple tasks.
diff --git a/versioned_docs/version-v2.6.0/userguide/mthreads-device/enable-mthreads-gpu-sharing.md b/versioned_docs/version-v2.6.0/userguide/mthreads-device/enable-mthreads-gpu-sharing.md
index eda503ad..03e383cf 100644
--- a/versioned_docs/version-v2.6.0/userguide/mthreads-device/enable-mthreads-gpu-sharing.md
+++ b/versioned_docs/version-v2.6.0/userguide/mthreads-device/enable-mthreads-gpu-sharing.md
@@ -4,7 +4,7 @@ title: Enable Mthreads GPU sharing
## Introduction
-**We now support mthreads.com/vgpu by implementing most device-sharing features as nvidia-GPU**, including:
+**HAMi now supports mthreads.com/vgpu by implementing most of the device-sharing features available on NVIDIA GPUs**, including:
***GPU sharing***: Each task can allocate a portion of GPU instead of a whole GPU card, thus GPU can be shared among multiple tasks.
diff --git a/versioned_docs/version-v2.6.0/userguide/nvidia-device/examples/specify-card-type-to-use.md b/versioned_docs/version-v2.6.0/userguide/nvidia-device/examples/specify-card-type-to-use.md
index dd7239fd..a05bf560 100644
--- a/versioned_docs/version-v2.6.0/userguide/nvidia-device/examples/specify-card-type-to-use.md
+++ b/versioned_docs/version-v2.6.0/userguide/nvidia-device/examples/specify-card-type-to-use.md
@@ -24,4 +24,4 @@ spec:
nvidia.com/gpu: 2 # requesting 2 vGPUs
```
-> **NOTICE:** * You can assign this task to multiple GPU types, use comma to separate,In this example, we want to run this job on A100 or V100*
\ No newline at end of file
+> **NOTICE:** *You can assign this task to multiple GPU types, separated by commas. In this example, the job targets A100 or V100.*
\ No newline at end of file
diff --git a/versioned_docs/version-v2.7.0/contributor/adopters.md b/versioned_docs/version-v2.7.0/contributor/adopters.md
index 8c521c76..0d87e5eb 100644
--- a/versioned_docs/version-v2.7.0/contributor/adopters.md
+++ b/versioned_docs/version-v2.7.0/contributor/adopters.md
@@ -1,12 +1,12 @@
# HAMi Adopters
-So you and your organisation are using HAMi? That's great. We would love to hear from you! 💖
+HAMi is used in production by a growing number of organisations.
## Adding yourself
[Here](https://github.com/Project-HAMi/website/blob/master/src/pages/adopters.mdx) lists the organisations who adopted the HAMi project in production.
-You just need to add an entry for your company and upon merging it will automatically be added to our website.
+Add an entry for your company; it will be added to the website once the PR merges.
To add your organisation follow these steps:
@@ -25,4 +25,4 @@ To add your organisation follow these steps:
6. Push the commit with `git push origin main`.
7. Open a Pull Request to [HAMi-io/website](https://github.com/Project-HAMi/website) and a preview build will turn up.
-Thanks a lot for being part of our community - we very much appreciate it!
+Thanks to all adopters for being part of the community!
diff --git a/versioned_docs/version-v2.7.0/contributor/contribute-docs.md b/versioned_docs/version-v2.7.0/contributor/contribute-docs.md
index 58f4b0d4..b463684c 100644
--- a/versioned_docs/version-v2.7.0/contributor/contribute-docs.md
+++ b/versioned_docs/version-v2.7.0/contributor/contribute-docs.md
@@ -9,14 +9,14 @@ the `Project-HAMi/website` repository.
## Prerequisites
- Docs, like codes, are also categorized and stored by version.
- 1.3 is the first version we have archived.
+ 1.3 is the first archived version.
- Docs need to be translated into multiple languages for readers from different regions.
The community now supports both Chinese and English.
English is the official language of documentation.
-- For our docs we use markdown. If you are unfamiliar with Markdown,
+- The docs use markdown. If you are unfamiliar with Markdown,
please see [https://guides.github.com/features/mastering-markdown/](https://guides.github.com/features/mastering-markdown/) or
[https://www.markdownguide.org/](https://www.markdownguide.org/) if you are looking for something more substantial.
-- We get some additions through [Docusaurus 2](https://docusaurus.io/), a model static website generator.
+- The site uses [Docusaurus 2](https://docusaurus.io/), a modern static site generator.
## Setup
@@ -88,7 +88,7 @@ title: A doc with tags
```
The top section between two lines of --- is the Front Matter section.
-Here we define a couple of entries which tell Docusaurus how to handle the article:
+These entries tell Docusaurus how to handle the article:
- Title is the equivalent of the `` in a HTML document or `# ` in a Markdown article.
- Each document has a unique ID. By default, a document ID is the name of the document
@@ -106,7 +106,7 @@ You can easily route to other places by adding any of the following links:
You can use relative paths to index the corresponding files.
- Link to pictures or other resources.
If your article contains images, prefer storing them in `/static/img/docs/` and linking
- with absolute paths. We use language-aware folders:
+ with absolute paths. Language-aware folders are used:
- `/static/img/docs/common/` for shared images
- `/static/img/docs/en/` for English-only images
- `/static/img/docs/zh/` for Chinese-only images
@@ -202,6 +202,6 @@ If the previewed page is not what you expected, please check your docs again.
### Versioning
-For the newly supplemented documents of each version, we will synchronize to the latest version
+Newly supplemented documents for each version are synchronized to the latest version
on the release date of each version, and the documents of the old version will not be modified.
-For errata found in the documentation, we will fix it with every release.
+For errata found in the documentation, fixes are applied with every release.
diff --git a/versioned_docs/version-v2.7.0/contributor/contributing.md b/versioned_docs/version-v2.7.0/contributor/contributing.md
index fab6d31c..72f9676d 100644
--- a/versioned_docs/version-v2.7.0/contributor/contributing.md
+++ b/versioned_docs/version-v2.7.0/contributor/contributing.md
@@ -20,7 +20,7 @@ HAMi is a community project driven by its community which strives to promote a h
## Your First Contribution
-We will help you to contribute in different areas like filing issues, developing features, fixing critical bugs and
+Help is available for contributing in areas like filing issues, developing features, fixing critical bugs and
getting your work reviewed and merged.
If you have questions about the development process,
@@ -28,7 +28,7 @@ feel free to [file an issue](https://github.com/Project-HAMi/HAMi/issues/new/cho
## Find something to work on
-We are always in need of help, be it fixing documentation, reporting bugs or writing some code.
+Help is always welcome, whether it is fixing documentation, reporting bugs, or writing code.
Look at places where you feel best coding practices aren't followed, code refactoring is needed or tests are missing.
Here is how you get started.
@@ -40,18 +40,18 @@ For example, [Project-HAMi/HAMi](https://github.com/Project-HAMi/HAMi) has
[help wanted](https://github.com/Project-HAMi/HAMi/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22) and
[good first issue](https://github.com/Project-HAMi/HAMi/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
labels for issues that should not need deep knowledge of the system.
-We can help new contributors who wish to work on such issues.
+Maintainers can help new contributors who wish to work on such issues.
Another good way to contribute is to find a documentation improvement, such as a missing/broken link.
Please see [Contributor Workflow](#contributor-workflow) below for the workflow.
#### Work on an issue
-When you are willing to take on an issue, just reply on the issue. The maintainer will assign it to you.
+When you are willing to take on an issue, reply on the issue. The maintainer will assign it to you.
### File an Issue
-While we encourage everyone to contribute code, it is also appreciated when someone reports an issue.
+Code contributions are welcome, and bug reports are equally appreciated.
Issues should be filed under the appropriate HAMi sub-repository.
*Example:* a HAMi issue should be opened to [Project-HAMi/HAMi](https://github.com/Project-HAMi/HAMi/issues).
diff --git a/versioned_docs/version-v2.7.0/contributor/github-workflow.md b/versioned_docs/version-v2.7.0/contributor/github-workflow.md
index 8582a392..a362a3b5 100644
--- a/versioned_docs/version-v2.7.0/contributor/github-workflow.md
+++ b/versioned_docs/version-v2.7.0/contributor/github-workflow.md
@@ -110,7 +110,7 @@ in a few cycles.
## Push
-When ready to review (or just to establish an offsite backup of your work),
+When ready to review (or to establish an offsite backup of your work),
push your branch to your fork on `github.com`:
```sh
diff --git a/versioned_docs/version-v2.7.0/contributor/governance.md b/versioned_docs/version-v2.7.0/contributor/governance.md
index f49b23b7..aaf1e568 100644
--- a/versioned_docs/version-v2.7.0/contributor/governance.md
+++ b/versioned_docs/version-v2.7.0/contributor/governance.md
@@ -19,7 +19,7 @@ The HAMi and its leadership embrace the following values:
priority over shipping code or sponsors' organizational goals. Each
contributor participates in the project as an individual.
-* Inclusivity: We innovate through different perspectives and skill sets, which
+* Inclusivity: Innovation comes from different perspectives and skill sets, and this
can only be accomplished in a welcoming and respectful environment.
* Participation: Responsibilities within the project are earned through
diff --git a/versioned_docs/version-v2.7.0/contributor/ladder.md b/versioned_docs/version-v2.7.0/contributor/ladder.md
index 50a277f9..a3eea679 100644
--- a/versioned_docs/version-v2.7.0/contributor/ladder.md
+++ b/versioned_docs/version-v2.7.0/contributor/ladder.md
@@ -6,7 +6,7 @@ This docs different ways to get involved and level up within the project. You ca
## Contributor Ladder
-Hello! We are excited that you want to learn more about our project contributor ladder! This contributor ladder outlines the different contributor roles within the project, along with the responsibilities and privileges that come with them. Community members generally start at the first levels of the "ladder" and advance up it as their involvement in the project grows. Our project members are happy to help you advance along the contributor ladder.
+This contributor ladder outlines the different contributor roles within the project, along with the responsibilities and privileges that come with them.
Each of the contributor roles below is organized into lists of three types of things. "Responsibilities" are things that a contributor is expected to do. "Requirements" are qualifications a person needs to meet to be in that role, and "Privileges" are things contributors on that level are entitled to.
@@ -47,7 +47,7 @@ Description: A Contributor contributes directly to the project and adds value to
* Invitations to contributor events
* Eligible to become an Organization Member
-A very special thanks to the [long list of people](https://github.com/Project-HAMi/HAMi/blob/master/AUTHORS.md) who have contributed to and helped maintain the project. We wouldn't be where we are today without your contributions. Thank you! 💖
+A very special thanks to the [long list of people](https://github.com/Project-HAMi/HAMi/blob/master/AUTHORS.md) who have contributed to and helped maintain the project.
As long as you contribute to HAMi, your name will be added [here](https://github.com/Project-HAMi/HAMi/blob/master/AUTHORS.md). If you don't find your name, please contact us to add it.
@@ -142,7 +142,7 @@ The current list of maintainers can be found in the [MAINTAINERS](https://github
New maintainers are added by consensus among the current group of maintainers. This can be done via a private discussion via Slack or email. A majority of maintainers should support the addition of the new person, and no single maintainer should object to adding the new maintainer.
-When adding a new maintainer, we should file a PR to [HAMi](https://github.com/Project-HAMi/HAMi) and update [MAINTAINERS](https://github.com/Project-HAMi/HAMi/blob/master/MAINTAINERS.md). Once this PR is merged, you will become a maintainer of HAMi.
+When adding a new maintainer, file a PR to [HAMi](https://github.com/Project-HAMi/HAMi) and update [MAINTAINERS](https://github.com/Project-HAMi/HAMi/blob/master/MAINTAINERS.md). Once this PR is merged, you will become a maintainer of HAMi.
## Removing Maintainers
diff --git a/versioned_docs/version-v2.7.0/developers/dynamic-mig.md b/versioned_docs/version-v2.7.0/developers/dynamic-mig.md
index fd22875b..9f71a993 100644
--- a/versioned_docs/version-v2.7.0/developers/dynamic-mig.md
+++ b/versioned_docs/version-v2.7.0/developers/dynamic-mig.md
@@ -10,8 +10,8 @@ This feature will not be implemented without the help of @sailorvii.
## Introduction
-The NVIDIA GPU build-in sharing method includes: time-slice, MPS and MIG. The context switch for time slice sharing would waste some time, so we chose the MPS and MIG. The GPU MIG profile is variable, the user could acquire the MIG device in the profile definition, but current implementation only defines the dedicated profile before the user requirement. That limits the usage of MIG. We want to develop an automatic slice plugin and create the slice when the user require it.
-For the scheduling method, node-level binpack and spread will be supported. Referring to the binpack plugin, we consider the CPU, Mem, GPU memory and other user-defined resource.
+The NVIDIA GPU built-in sharing methods include time-slice, MPS, and MIG. Because the context switch for time-slice sharing wastes time, MPS and MIG are preferred. The GPU MIG profile is variable and the user can request a MIG device by profile definition, but the current implementation only defines dedicated profiles ahead of user requests. That limits the usage of MIG. The goal is an automatic slice plugin that creates slices on demand.
+For the scheduling method, node-level binpack and spread will be supported. Referring to the binpack plugin, the scheduler considers CPU, memory, GPU memory, and other user-defined resources.
HAMi is done by using [hami-core](https://github.com/Project-HAMi/HAMi-core), which is a cuda-hacking library. But mig is also widely used across the world. A unified API for dynamic-mig and hami-core is needed.
## Targets
diff --git a/versioned_docs/version-v2.7.0/developers/kunlunxin-topology.md b/versioned_docs/version-v2.7.0/developers/kunlunxin-topology.md
index fb66c728..cd192d78 100644
--- a/versioned_docs/version-v2.7.0/developers/kunlunxin-topology.md
+++ b/versioned_docs/version-v2.7.0/developers/kunlunxin-topology.md
@@ -30,7 +30,7 @@ The selection process is shown below:
## Score
In the scoring phase, all filtered nodes are evaluated and scored to select the optimal one
-for scheduling. We introduce a metric called **MTF** (Minimized Tasks to Fill),
+for scheduling. The metric used is called **MTF** (Minimized Tasks to Fill),
which quantifies how well a node can accommodate future tasks after allocation.
The table below shows examples of XPU occupation and proper MTF values:
diff --git a/versioned_docs/version-v2.7.0/developers/scheduling.md b/versioned_docs/version-v2.7.0/developers/scheduling.md
index 02270146..7f8ce100 100644
--- a/versioned_docs/version-v2.7.0/developers/scheduling.md
+++ b/versioned_docs/version-v2.7.0/developers/scheduling.md
@@ -8,7 +8,7 @@ Current in a cluster with many GPU nodes, nodes are not `binpack` or `spread` wh
## Proposal
-We add a `node-scheduler-policy` and `gpu-scheduler-policy` to config, then scheduler to use this policy can impl node `binpack` or `spread` or GPU `binpack` or `spread`. and
+A `node-scheduler-policy` and a `gpu-scheduler-policy` are added to the config, so the scheduler can implement node `binpack` or `spread` and GPU `binpack` or `spread`, and
use can set Pod annotation to change this default policy, use `hami.io/node-scheduler-policy` and `hami.io/gpu-scheduler-policy` to overlay scheduler config.
### User Stories
@@ -104,7 +104,7 @@ Node1 score: ((1+3)/4) * 10= 10
Node2 score: ((1+2)/4) * 10= 7.5
```
-So, in `Binpack` policy we can select `Node1`.
+So, in `Binpack` policy, the selected node is `Node1`.
#### Spread
@@ -124,7 +124,7 @@ Node1 score: ((1+3)/4) * 10= 10
Node2 score: ((1+2)/4) * 10= 7.5
```
-So, in `Spread` policy we can select `Node2`.
+So, in `Spread` policy, the selected node is `Node2`.
### GPU-scheduler-policy
@@ -147,7 +147,7 @@ GPU1 Score: ((20+10)/100 + (1000+2000)/8000)) * 10 = 6.75
GPU2 Score: ((20+70)/100 + (1000+6000)/8000)) * 10 = 17.75
```
-So, in `Binpack` policy we can select `GPU2`.
+So, in `Binpack` policy, the selected GPU is `GPU2`.
#### Spread
@@ -166,4 +166,4 @@ GPU1 Score: ((20+10)/100 + (1000+2000)/8000)) * 10 = 6.75
GPU2 Score: ((20+70)/100 + (1000+6000)/8000)) * 10 = 17.75
```
-So, in `Spread` policy we can select `GPU1`.
+So, in `Spread` policy, the selected GPU is `GPU1`.
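The node-level scoring worked through above follows a single formula, `((used + requested) / capacity) * 10`. A minimal Python sketch of that calculation (function and variable names are assumptions for illustration, not HAMi's actual code):

```python
def node_score(used, requested, capacity):
    """Fraction of the node's GPUs in use after allocation, scaled to 10."""
    return (used + requested) / capacity * 10

# Numbers from the example above: two nodes with 4 GPUs each.
node1 = node_score(1, 3, 4)  # 10.0
node2 = node_score(1, 2, 4)  # 7.5

# Binpack prefers the fuller node (higher score); Spread the emptier one.
binpack_pick = "Node1" if node1 >= node2 else "Node2"
spread_pick = "Node2" if node1 >= node2 else "Node1"
```

With the example's numbers, `Binpack` picks `Node1` and `Spread` picks `Node2`, matching the tables above.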
diff --git a/versioned_docs/version-v2.7.0/faq/faq.md b/versioned_docs/version-v2.7.0/faq/faq.md
index 067ca2b8..64e8584b 100644
--- a/versioned_docs/version-v2.7.0/faq/faq.md
+++ b/versioned_docs/version-v2.7.0/faq/faq.md
@@ -42,7 +42,7 @@ A vGPU is a logical instance of a physical GPU created using virtualization, all
4. **Design Intent**
The design of vGPU aims to **allow one GPU to be shared by multiple tasks**, rather than letting one task occupy multiple vGPUs on the same GPU. The purpose of vGPU overcommitment is to improve GPU utilization, not to increase resource allocation for individual tasks.
-## HAMi's `nvidia.com/priority` field only supports two levels. How can we implement multi-level, user-defined priority-based scheduling for a queue of jobs, especially when cluster resources are limited?
+## HAMi's `nvidia.com/priority` field only supports two levels. How can multi-level, user-defined priority-based scheduling be implemented for a queue of jobs, especially when cluster resources are limited?
**TL;DR**
diff --git a/versioned_docs/version-v2.7.0/userguide/enflame-device/enable-enflame-gcu-sharing.md b/versioned_docs/version-v2.7.0/userguide/enflame-device/enable-enflame-gcu-sharing.md
index a8a3026f..d98f188b 100644
--- a/versioned_docs/version-v2.7.0/userguide/enflame-device/enable-enflame-gcu-sharing.md
+++ b/versioned_docs/version-v2.7.0/userguide/enflame-device/enable-enflame-gcu-sharing.md
@@ -5,11 +5,11 @@ title: Enable Enflame GPU Sharing
## Introduction
-**We now support sharing on enflame.com/gcu(i.e S60) by implementing most device-sharing features as nvidia-GPU**, including:
+**HAMi now supports sharing on enflame.com/gcu (i.e., S60) by implementing most of the same device-sharing features as NVIDIA GPUs**, including:
***GCU sharing***: Each task can allocate a portion of GCU instead of a whole GCU card, thus GCU can be shared among multiple tasks.
-***Device Memory and Core Control***: GCUs can be allocated with certain percentage of device memory and core, we make sure that it does not exceed the boundary.
+***Device Memory and Core Control***: GCUs can be allocated with a certain percentage of device memory and cores; HAMi ensures these limits are not exceeded.
***Device UUID Selection***: You can specify which GCU devices to use or exclude using annotations.
@@ -17,7 +17,7 @@ title: Enable Enflame GPU Sharing
## Prerequisites
-* Enflame gcushare-device-plugin >= 2.1.6 (please consult your device provider, gcushare has two components: gcushare-scheduler-plugin and gcushare-device-plugin, we only need gcushare-device-plugin here )
+* Enflame gcushare-device-plugin >= 2.1.6 (please consult your device provider; gcushare has two components, gcushare-scheduler-plugin and gcushare-device-plugin, and only gcushare-device-plugin is needed here)
* driver version >= 1.2.3.14
* kubernetes >= 1.24
* enflame-container-toolkit >=2.0.50
diff --git a/versioned_docs/version-v2.7.0/userguide/hygon-device/enable-hygon-dcu-sharing.md b/versioned_docs/version-v2.7.0/userguide/hygon-device/enable-hygon-dcu-sharing.md
index c0a950fa..5025b34c 100644
--- a/versioned_docs/version-v2.7.0/userguide/hygon-device/enable-hygon-dcu-sharing.md
+++ b/versioned_docs/version-v2.7.0/userguide/hygon-device/enable-hygon-dcu-sharing.md
@@ -4,7 +4,7 @@ title: Enable Hygon DCU sharing
## Introduction
-**We now support hygon.com/dcu by implementing most device-sharing features as nvidia-GPU**, including:
+**HAMi now supports hygon.com/dcu by implementing most of the same device-sharing features as NVIDIA GPUs**, including:
***DCU sharing***: Each task can allocate a portion of DCU instead of a whole DCU card, thus DCU can be shared among multiple tasks.
diff --git a/versioned_docs/version-v2.7.0/userguide/iluvatar-device/enable-illuvatar-gpu-sharing.md b/versioned_docs/version-v2.7.0/userguide/iluvatar-device/enable-illuvatar-gpu-sharing.md
index 9771b6bf..640107c0 100644
--- a/versioned_docs/version-v2.7.0/userguide/iluvatar-device/enable-illuvatar-gpu-sharing.md
+++ b/versioned_docs/version-v2.7.0/userguide/iluvatar-device/enable-illuvatar-gpu-sharing.md
@@ -5,7 +5,7 @@ title: Enable Illuvatar GPU Sharing
## Introduction
-**We now support iluvatar.ai/gpu(i.e MR-V100、BI-V150、BI-V100) by implementing most device-sharing features as nvidia-GPU**, including:
+**HAMi now supports iluvatar.ai/gpu (i.e., MR-V100, BI-V150, BI-V100) by implementing most of the same device-sharing features as NVIDIA GPUs**, including:
***GPU sharing***: Each task can allocate a portion of GPU instead of a whole GPU card, thus GPU can be shared among multiple tasks.
diff --git a/versioned_docs/version-v2.7.0/userguide/mthreads-device/enable-mthreads-gpu-sharing.md b/versioned_docs/version-v2.7.0/userguide/mthreads-device/enable-mthreads-gpu-sharing.md
index abbe2ca2..a491201a 100644
--- a/versioned_docs/version-v2.7.0/userguide/mthreads-device/enable-mthreads-gpu-sharing.md
+++ b/versioned_docs/version-v2.7.0/userguide/mthreads-device/enable-mthreads-gpu-sharing.md
@@ -4,7 +4,7 @@ title: Enable Mthreads GPU sharing
## Introduction
-**We now support mthreads.com/vgpu by implementing most device-sharing features as nvidia-GPU**, including:
+**HAMi now supports mthreads.com/vgpu by implementing most of the same device-sharing features as NVIDIA GPUs**, including:
***GPU sharing***: Each task can allocate a portion of GPU instead of a whole GPU card, thus GPU can be shared among multiple tasks.
diff --git a/versioned_docs/version-v2.7.0/userguide/nvidia-device/examples/specify-card-type-to-use.md b/versioned_docs/version-v2.7.0/userguide/nvidia-device/examples/specify-card-type-to-use.md
index 397e984f..946f0e50 100644
--- a/versioned_docs/version-v2.7.0/userguide/nvidia-device/examples/specify-card-type-to-use.md
+++ b/versioned_docs/version-v2.7.0/userguide/nvidia-device/examples/specify-card-type-to-use.md
@@ -24,4 +24,4 @@ spec:
nvidia.com/gpu: 2 # requesting 2 vGPUs
```
-> **NOTICE:** * You can assign this task to multiple GPU types, use comma to separate,In this example, we want to run this job on A100 or V100*
+> **NOTICE:** *You can assign this task to multiple GPU types, separated by commas. In this example, the job targets A100 or V100.*
diff --git a/versioned_docs/version-v2.8.0/contributor/adopters.md b/versioned_docs/version-v2.8.0/contributor/adopters.md
index 8c521c76..0d87e5eb 100644
--- a/versioned_docs/version-v2.8.0/contributor/adopters.md
+++ b/versioned_docs/version-v2.8.0/contributor/adopters.md
@@ -1,12 +1,12 @@
# HAMi Adopters
-So you and your organisation are using HAMi? That's great. We would love to hear from you! 💖
+HAMi is used in production by the organisations listed below.
## Adding yourself
[Here](https://github.com/Project-HAMi/website/blob/master/src/pages/adopters.mdx) lists the organisations who adopted the HAMi project in production.
-You just need to add an entry for your company and upon merging it will automatically be added to our website.
+Add an entry for your company; it will automatically be added to the website once the PR merges.
To add your organisation follow these steps:
@@ -25,4 +25,4 @@ To add your organisation follow these steps:
6. Push the commit with `git push origin main`.
7. Open a Pull Request to [HAMi-io/website](https://github.com/Project-HAMi/website) and a preview build will turn up.
-Thanks a lot for being part of our community - we very much appreciate it!
+Thanks to all adopters for being part of the community!
diff --git a/versioned_docs/version-v2.8.0/contributor/contribute-docs.md b/versioned_docs/version-v2.8.0/contributor/contribute-docs.md
index 179e55a8..c084d8f6 100644
--- a/versioned_docs/version-v2.8.0/contributor/contribute-docs.md
+++ b/versioned_docs/version-v2.8.0/contributor/contribute-docs.md
@@ -9,14 +9,14 @@ the `Project-HAMi/website` repository.
## Prerequisites
- Docs, like codes, are also categorized and stored by version.
- 1.3 is the first version we have archived.
+ 1.3 is the first archived version.
- Docs need to be translated into multiple languages for readers from different regions.
The community now supports both Chinese and English.
English is the official language of documentation.
-- For our docs we use markdown. If you are unfamiliar with Markdown,
+- The docs use Markdown. If you are unfamiliar with Markdown,
please see [https://guides.github.com/features/mastering-markdown/](https://guides.github.com/features/mastering-markdown/) or
[https://www.markdownguide.org/](https://www.markdownguide.org/) if you are looking for something more substantial.
-- We get some additions through [Docusaurus 2](https://docusaurus.io/), a model static website generator.
+- The site uses [Docusaurus 2](https://docusaurus.io/), a modern static website generator.
## Setup
@@ -88,7 +88,7 @@ title: A doc with tags
```
The top section between two lines of --- is the Front Matter section.
-Here we define a couple of entries which tell Docusaurus how to handle the article:
+These entries tell Docusaurus how to handle the article:
- Title is the equivalent of the `` in a HTML document or `# ` in a Markdown article.
- Each document has a unique ID. By default, a document ID is the name of the document
@@ -106,7 +106,7 @@ You can easily route to other places by adding any of the following links:
You can use relative paths to index the corresponding files.
- Link to pictures or other resources.
If your article contains images, prefer storing them in `/static/img/docs/` and linking
- with absolute paths. We use language-aware folders:
+ with absolute paths. Language-aware folders are used:
- `/static/img/docs/common/` for shared images
- `/static/img/docs/en/` for English-only images
- `/static/img/docs/zh/` for Chinese-only images
@@ -202,6 +202,6 @@ If the previewed page is not what you expected, please check your docs again.
### Versioning
-For the newly supplemented documents of each version, we will synchronize to the latest version
+Newly supplemented documents for each version are synchronized to the latest version
on the release date of each version, and the documents of the old version will not be modified.
-For errata found in the documentation, we will fix it with every release.
+For errata found in the documentation, fixes are applied with every release.
diff --git a/versioned_docs/version-v2.8.0/contributor/contributing.md b/versioned_docs/version-v2.8.0/contributor/contributing.md
index fab6d31c..72f9676d 100644
--- a/versioned_docs/version-v2.8.0/contributor/contributing.md
+++ b/versioned_docs/version-v2.8.0/contributor/contributing.md
@@ -20,7 +20,7 @@ HAMi is a community project driven by its community which strives to promote a h
## Your First Contribution
-We will help you to contribute in different areas like filing issues, developing features, fixing critical bugs and
+Help is available for contributing in areas like filing issues, developing features, fixing critical bugs and
getting your work reviewed and merged.
If you have questions about the development process,
@@ -28,7 +28,7 @@ feel free to [file an issue](https://github.com/Project-HAMi/HAMi/issues/new/cho
## Find something to work on
-We are always in need of help, be it fixing documentation, reporting bugs or writing some code.
+Help is always welcome, whether it is fixing documentation, reporting bugs, or writing code.
Look at places where you feel best coding practices aren't followed, code refactoring is needed or tests are missing.
Here is how you get started.
@@ -40,18 +40,18 @@ For example, [Project-HAMi/HAMi](https://github.com/Project-HAMi/HAMi) has
[help wanted](https://github.com/Project-HAMi/HAMi/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22) and
[good first issue](https://github.com/Project-HAMi/HAMi/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
labels for issues that should not need deep knowledge of the system.
-We can help new contributors who wish to work on such issues.
+Maintainers can help new contributors who wish to work on such issues.
Another good way to contribute is to find a documentation improvement, such as a missing/broken link.
Please see [Contributor Workflow](#contributor-workflow) below for the workflow.
#### Work on an issue
-When you are willing to take on an issue, just reply on the issue. The maintainer will assign it to you.
+When you are willing to take on an issue, reply on the issue. The maintainer will assign it to you.
### File an Issue
-While we encourage everyone to contribute code, it is also appreciated when someone reports an issue.
+Code contributions are welcome, and bug reports are equally appreciated.
Issues should be filed under the appropriate HAMi sub-repository.
*Example:* a HAMi issue should be opened to [Project-HAMi/HAMi](https://github.com/Project-HAMi/HAMi/issues).
diff --git a/versioned_docs/version-v2.8.0/contributor/github-workflow.md b/versioned_docs/version-v2.8.0/contributor/github-workflow.md
index 8582a392..a362a3b5 100644
--- a/versioned_docs/version-v2.8.0/contributor/github-workflow.md
+++ b/versioned_docs/version-v2.8.0/contributor/github-workflow.md
@@ -110,7 +110,7 @@ in a few cycles.
## Push
-When ready to review (or just to establish an offsite backup of your work),
+When ready to review (or to establish an offsite backup of your work),
push your branch to your fork on `github.com`:
```sh
diff --git a/versioned_docs/version-v2.8.0/contributor/governance.md b/versioned_docs/version-v2.8.0/contributor/governance.md
index f49b23b7..aaf1e568 100644
--- a/versioned_docs/version-v2.8.0/contributor/governance.md
+++ b/versioned_docs/version-v2.8.0/contributor/governance.md
@@ -19,7 +19,7 @@ The HAMi and its leadership embrace the following values:
priority over shipping code or sponsors' organizational goals. Each
contributor participates in the project as an individual.
-* Inclusivity: We innovate through different perspectives and skill sets, which
+* Inclusivity: Innovation comes from different perspectives and skill sets, and this
can only be accomplished in a welcoming and respectful environment.
* Participation: Responsibilities within the project are earned through
diff --git a/versioned_docs/version-v2.8.0/contributor/ladder.md b/versioned_docs/version-v2.8.0/contributor/ladder.md
index b9f51fbc..87eb3585 100644
--- a/versioned_docs/version-v2.8.0/contributor/ladder.md
+++ b/versioned_docs/version-v2.8.0/contributor/ladder.md
@@ -4,7 +4,7 @@ title: Contributor Ladder
This docs different ways to get involved and level up within the project. You can see different roles within the project in the contributor roles.
-Hello! We are excited that you want to learn more about our project contributor ladder! This contributor ladder outlines the different contributor roles within the project, along with the responsibilities and privileges that come with them. Community members generally start at the first levels of the "ladder" and advance up it as their involvement in the project grows. Our project members are happy to help you advance along the contributor ladder.
+This contributor ladder outlines the different contributor roles within the project, along with the responsibilities and privileges that come with them.
Each of the contributor roles below is organized into lists of three types of things. "Responsibilities" are things that a contributor is expected to do. "Requirements" are qualifications a person needs to meet to be in that role, and "Privileges" are things contributors on that level are entitled to.
@@ -45,7 +45,7 @@ Description: A Contributor contributes directly to the project and adds value to
* Invitations to contributor events
* Eligible to become an Organization Member
-A very special thanks to the [long list of people](https://github.com/Project-HAMi/HAMi/blob/master/AUTHORS.md) who have contributed to and helped maintain the project. We wouldn't be where we are today without your contributions. Thank you! 💖
+A very special thanks to the [long list of people](https://github.com/Project-HAMi/HAMi/blob/master/AUTHORS.md) who have contributed to and helped maintain the project.
As long as you contribute to HAMi, your name will be added [here](https://github.com/Project-HAMi/HAMi/blob/master/AUTHORS.md). If you don't find your name, please contact us to add it.
@@ -140,7 +140,7 @@ The current list of maintainers can be found in the [MAINTAINERS](https://github
New maintainers are added by consensus among the current group of maintainers. This can be done via a private discussion via Slack or email. A majority of maintainers should support the addition of the new person, and no single maintainer should object to adding the new maintainer.
-When adding a new maintainer, we should file a PR to [HAMi](https://github.com/Project-HAMi/HAMi) and update [MAINTAINERS](https://github.com/Project-HAMi/HAMi/blob/master/MAINTAINERS.md). Once this PR is merged, you will become a maintainer of HAMi.
+When adding a new maintainer, file a PR to [HAMi](https://github.com/Project-HAMi/HAMi) and update [MAINTAINERS](https://github.com/Project-HAMi/HAMi/blob/master/MAINTAINERS.md). Once this PR is merged, you will become a maintainer of HAMi.
### Removing Maintainers
diff --git a/versioned_docs/version-v2.8.0/developers/dynamic-mig.md b/versioned_docs/version-v2.8.0/developers/dynamic-mig.md
index ebe587f2..e11c2dee 100644
--- a/versioned_docs/version-v2.8.0/developers/dynamic-mig.md
+++ b/versioned_docs/version-v2.8.0/developers/dynamic-mig.md
@@ -9,8 +9,8 @@ This feature will not be implemented without the help of @sailorvii.
## Introduction
-The NVIDIA GPU build-in sharing method includes: time-slice, MPS and MIG. The context switch for time slice sharing would waste some time, so we chose the MPS and MIG. The GPU MIG profile is variable, the user could acquire the MIG device in the profile definition, but current implementation only defines the dedicated profile before the user requirement. That limits the usage of MIG. We want to develop an automatic slice plugin and create the slice when the user require it.
-For the scheduling method, node-level binpack and spread will be supported. Referring to the binpack plugin, we consider the CPU, Mem, GPU memory and other user-defined resource.
+The NVIDIA GPU built-in sharing methods include time-slice, MPS, and MIG. Because the context switch for time-slice sharing wastes time, MPS and MIG are preferred. The GPU MIG profile is variable and the user can request a MIG device by profile definition, but the current implementation only defines dedicated profiles ahead of user requests. That limits the usage of MIG. The goal is an automatic slice plugin that creates slices on demand.
+For the scheduling method, node-level binpack and spread will be supported. Referring to the binpack plugin, the scheduler considers CPU, memory, GPU memory, and other user-defined resources.
HAMi is done by using [hami-core](https://github.com/Project-HAMi/HAMi-core), which is a cuda-hacking library. But mig is also widely used across the world. A unified API for dynamic-mig and hami-core is needed.
## Targets
diff --git a/versioned_docs/version-v2.8.0/developers/kunlunxin-topology.md b/versioned_docs/version-v2.8.0/developers/kunlunxin-topology.md
index eddcf225..c0a150e7 100644
--- a/versioned_docs/version-v2.8.0/developers/kunlunxin-topology.md
+++ b/versioned_docs/version-v2.8.0/developers/kunlunxin-topology.md
@@ -30,7 +30,7 @@ The selection process is shown below:
## Score
In the scoring phase, all filtered nodes are evaluated and scored to select the optimal one
-for scheduling. We introduce a metric called **MTF** (Minimized Tasks to Fill),
+for scheduling. The metric used is called **MTF** (Minimized Tasks to Fill),
which quantifies how well a node can accommodate future tasks after allocation.
The table below shows examples of XPU occupation and proper MTF values:
diff --git a/versioned_docs/version-v2.8.0/developers/scheduling.md b/versioned_docs/version-v2.8.0/developers/scheduling.md
index 02f1cc9c..db9d5662 100644
--- a/versioned_docs/version-v2.8.0/developers/scheduling.md
+++ b/versioned_docs/version-v2.8.0/developers/scheduling.md
@@ -8,7 +8,7 @@ Current in a cluster with many GPU nodes, nodes are not `binpack` or `spread` wh
## Proposal
-We add a `node-scheduler-policy` and `gpu-scheduler-policy` to config, then scheduler to use this policy can impl node `binpack` or `spread` or GPU `binpack` or `spread`. and
+A `node-scheduler-policy` and a `gpu-scheduler-policy` are added to the config, so the scheduler can implement node `binpack` or `spread` and GPU `binpack` or `spread`, and
use can set Pod annotation to change this default policy, use `hami.io/node-scheduler-policy` and `hami.io/gpu-scheduler-policy` to overlay scheduler config.
### User Stories
@@ -105,7 +105,7 @@ Node1 score: ((1+3)/4) * 10= 10
Node2 score: ((1+2)/4) * 10= 7.5
```
-So, in `Binpack` policy we can select `Node1`.
+So, in `Binpack` policy, the selected node is `Node1`.
#### Spread
@@ -127,7 +127,7 @@ Node1 score: ((1+3)/4) * 10= 10
Node2 score: ((1+2)/4) * 10= 7.5
```
-So, in `Spread` policy we can select `Node2`.
+So, in `Spread` policy, the selected node is `Node2`.
### GPU-scheduler-policy
@@ -153,7 +153,7 @@ GPU1 Score: ((20+10)/100 + (1000+2000)/8000)) * 10 = 6.75
GPU2 Score: ((20+70)/100 + (1000+6000)/8000)) * 10 = 17.75
```
-So, in `Binpack` policy we can select `GPU2`.
+So, in `Binpack` policy, the selected GPU is `GPU2`.
#### Spread
@@ -175,4 +175,4 @@ GPU1 Score: ((20+10)/100 + (1000+2000)/8000)) * 10 = 6.75
GPU2 Score: ((20+70)/100 + (1000+6000)/8000)) * 10 = 17.75
```
-So, in `Spread` policy we can select `GPU1`.
+So, in `Spread` policy, the selected node is `GPU1`.
diff --git a/versioned_docs/version-v2.8.0/faq/faq.md b/versioned_docs/version-v2.8.0/faq/faq.md
index f5ee5c04..5817aa3c 100644
--- a/versioned_docs/version-v2.8.0/faq/faq.md
+++ b/versioned_docs/version-v2.8.0/faq/faq.md
@@ -41,7 +41,7 @@ A vGPU is a logical instance of a physical GPU created using virtualization, all
4. **Design Intent**
The design of vGPU aims to **allow one GPU to be shared by multiple tasks**, rather than letting one task occupy multiple vGPUs on the same GPU. The purpose of vGPU overcommitment is to improve GPU utilization, not to increase resource allocation for individual tasks.
-## HAMi's `nvidia.com/priority` field only supports two levels. How can we implement multi-level, user-defined priority-based scheduling for a queue of jobs, especially when cluster resources are limited?
+## HAMi's `nvidia.com/priority` field only supports two levels. How can multi-level, user-defined priority-based scheduling be implemented for a queue of jobs, especially when cluster resources are limited?
**TL;DR**
diff --git a/versioned_docs/version-v2.8.0/userguide/enflame-device/enable-enflame-gcu-sharing.md b/versioned_docs/version-v2.8.0/userguide/enflame-device/enable-enflame-gcu-sharing.md
index c95ca130..29a7095e 100644
--- a/versioned_docs/version-v2.8.0/userguide/enflame-device/enable-enflame-gcu-sharing.md
+++ b/versioned_docs/version-v2.8.0/userguide/enflame-device/enable-enflame-gcu-sharing.md
@@ -5,11 +5,11 @@ title: Enable Enflame GPU Sharing
## Introduction
-**We now support sharing on enflame.com/gcu(i.e S60) by implementing most device-sharing features as nvidia-GPU**, including:
+**HAMi now supports sharing on enflame.com/gcu (i.e. S60) by implementing most of the same device-sharing features as for NVIDIA GPUs**, including:
***GCU sharing***: Each task can allocate a portion of GCU instead of a whole GCU card, thus GCU can be shared among multiple tasks.
-***Device Memory and Core Control***: GCUs can be allocated with certain percentage of device memory and core, we make sure that it does not exceed the boundary.
+***Device Memory and Core Control***: GCUs can be allocated with a certain percentage of device memory and cores; HAMi ensures the allocation does not exceed these limits.
***Device UUID Selection***: You can specify which GCU devices to use or exclude using annotations.
@@ -17,7 +17,7 @@ title: Enable Enflame GPU Sharing
## Prerequisites
-* Enflame gcushare-device-plugin >= 2.1.6 (please consult your device provider, gcushare has two components: gcushare-scheduler-plugin and gcushare-device-plugin, we only need gcushare-device-plugin here )
+* Enflame gcushare-device-plugin >= 2.1.6 (please consult your device provider; gcushare has two components, gcushare-scheduler-plugin and gcushare-device-plugin, and only gcushare-device-plugin is needed here)
* driver version >= 1.2.3.14
* kubernetes >= 1.24
* enflame-container-toolkit >=2.0.50
diff --git a/versioned_docs/version-v2.8.0/userguide/hygon-device/enable-hygon-dcu-sharing.md b/versioned_docs/version-v2.8.0/userguide/hygon-device/enable-hygon-dcu-sharing.md
index a721ecde..c173ebcf 100644
--- a/versioned_docs/version-v2.8.0/userguide/hygon-device/enable-hygon-dcu-sharing.md
+++ b/versioned_docs/version-v2.8.0/userguide/hygon-device/enable-hygon-dcu-sharing.md
@@ -4,7 +4,7 @@ title: Enable Hygon DCU sharing
## Introduction
-**We now support hygon.com/dcu by implementing most device-sharing features as nvidia-GPU**, including:
+**HAMi now supports hygon.com/dcu by implementing most of the same device-sharing features as for NVIDIA GPUs**, including:
***DCU sharing***: Each task can allocate a portion of DCU instead of a whole DCU card, thus DCU can be shared among multiple tasks.
diff --git a/versioned_docs/version-v2.8.0/userguide/iluvatar-device/enable-illuvatar-gpu-sharing.md b/versioned_docs/version-v2.8.0/userguide/iluvatar-device/enable-illuvatar-gpu-sharing.md
index 3462da64..88e59bde 100644
--- a/versioned_docs/version-v2.8.0/userguide/iluvatar-device/enable-illuvatar-gpu-sharing.md
+++ b/versioned_docs/version-v2.8.0/userguide/iluvatar-device/enable-illuvatar-gpu-sharing.md
@@ -4,7 +4,7 @@ title: Enable Illuvatar GPU Sharing
## Introduction
-**We now support iluvatar.ai/gpu(i.e MR-V100, BI-V150, BI-V100) by implementing most device-sharing features as nvidia-GPU**, including:
+**HAMi now supports iluvatar.ai/gpu (i.e. MR-V100, BI-V150, BI-V100) by implementing most of the same device-sharing features as for NVIDIA GPUs**, including:
***GPU sharing***: Each task can allocate a portion of GPU instead of a whole GPU card, thus GPU can be shared among multiple tasks.
diff --git a/versioned_docs/version-v2.8.0/userguide/mthreads-device/enable-mthreads-gpu-sharing.md b/versioned_docs/version-v2.8.0/userguide/mthreads-device/enable-mthreads-gpu-sharing.md
index 4dbe403f..a7227a5c 100644
--- a/versioned_docs/version-v2.8.0/userguide/mthreads-device/enable-mthreads-gpu-sharing.md
+++ b/versioned_docs/version-v2.8.0/userguide/mthreads-device/enable-mthreads-gpu-sharing.md
@@ -4,7 +4,7 @@ title: Enable Mthreads GPU sharing
## Introduction
-**We now support mthreads.com/vgpu by implementing most device-sharing features as nvidia-GPU**, including:
+**HAMi now supports mthreads.com/vgpu by implementing most of the same device-sharing features as for NVIDIA GPUs**, including:
***GPU sharing***: Each task can allocate a portion of GPU instead of a whole GPU card, thus GPU can be shared among multiple tasks.
diff --git a/versioned_docs/version-v2.8.0/userguide/nvidia-device/examples/specify-card-type-to-use.md b/versioned_docs/version-v2.8.0/userguide/nvidia-device/examples/specify-card-type-to-use.md
index a2f0072a..0cda86a7 100644
--- a/versioned_docs/version-v2.8.0/userguide/nvidia-device/examples/specify-card-type-to-use.md
+++ b/versioned_docs/version-v2.8.0/userguide/nvidia-device/examples/specify-card-type-to-use.md
@@ -22,4 +22,4 @@ spec:
nvidia.com/gpu: 2 # requesting 2 vGPUs
```
-> **NOTICE:** *You can assign this task to multiple GPU types, use comma to separate,In this example, we want to run this job on A100 or V100*
+> **NOTICE:** *You can assign this task to multiple GPU types, separated by commas. In this example, the job targets A100 or V100.*