2 changes: 1 addition & 1 deletion docs/contributor/github-workflow.md
@@ -110,7 +110,7 @@ in a few cycles.

## Push

When ready to review (or just to establish an offsite backup of your work),
When ready to review (or to establish an offsite backup of your work),
push your branch to your fork on `github.com`:

```sh
2 changes: 1 addition & 1 deletion docs/contributor/governance.md
@@ -19,7 +19,7 @@ The HAMi and its leadership embrace the following values:
priority over shipping code or sponsors' organizational goals. Each
contributor participates in the project as an individual.

* Inclusivity: We innovate through different perspectives and skill sets, which
* Inclusivity: Innovation comes from different perspectives and skill sets, and this
can only be accomplished in a welcoming and respectful environment.

* Participation: Responsibilities within the project are earned through
10 changes: 5 additions & 5 deletions versioned_docs/version-v1.3.0/contributor/contributing.md
@@ -20,15 +20,15 @@ HAMi is a community project driven by its community which strives to promote a h

## Your First Contribution

We will help you to contribute in different areas like filing issues, developing features, fixing critical bugs and
Help is available for contributing in areas like filing issues, developing features, fixing critical bugs and
getting your work reviewed and merged.

If you have questions about the development process,
feel free to [file an issue](https://github.com/Project-HAMi/HAMi/issues/new/choose).

## Find something to work on

We are always in need of help, be it fixing documentation, reporting bugs or writing some code.
Help is always welcome - fixing documentation, reporting bugs, writing code.
Look at places where you feel best coding practices aren't followed, code refactoring is needed or tests are missing.
Here is how you get started.

@@ -40,18 +40,18 @@ For example, [Project-HAMi/HAMi](https://github.com/Project-HAMi/HAMi) has
[help wanted](https://github.com/Project-HAMi/HAMi/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22) and
[good first issue](https://github.com/Project-HAMi/HAMi/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
labels for issues that should not need deep knowledge of the system.
We can help new contributors who wish to work on such issues.
Maintainers can help new contributors who wish to work on such issues.

Another good way to contribute is to find a documentation improvement, such as a missing/broken link.
Please see [Contributor Workflow](#contributor-workflow) below for the workflow.

#### Work on an issue

When you are willing to take on an issue, just reply on the issue. The maintainer will assign it to you.
When you are willing to take on an issue, reply on the issue. The maintainer will assign it to you.

### File an Issue

While we encourage everyone to contribute code, it is also appreciated when someone reports an issue.
Code contributions are welcome, and bug reports are equally appreciated.
Issues should be filed under the appropriate HAMi sub-repository.

*Example:* a HAMi issue should be opened to [Project-HAMi/HAMi](https://github.com/Project-HAMi/HAMi/issues).
2 changes: 1 addition & 1 deletion versioned_docs/version-v1.3.0/contributor/governance.md
@@ -19,7 +19,7 @@ The HAMi and its leadership embrace the following values:
priority over shipping code or sponsors' organizational goals. Each
contributor participates in the project as an individual.

* Inclusivity: We innovate through different perspectives and skill sets, which
* Inclusivity: Innovation comes from different perspectives and skill sets, and this
can only be accomplished in a welcoming and respectful environment.

* Participation: Responsibilities within the project are earned through
6 changes: 3 additions & 3 deletions versioned_docs/version-v1.3.0/contributor/ladder.md
@@ -4,7 +4,7 @@ title: Contributor Ladder

This document describes different ways to get involved and level up within the project. The different roles within the project are described in the contributor roles below.

Hello! We are excited that you want to learn more about our project contributor ladder! This contributor ladder outlines the different contributor roles within the project, along with the responsibilities and privileges that come with them. Community members generally start at the first levels of the "ladder" and advance up it as their involvement in the project grows. Our project members are happy to help you advance along the contributor ladder.
This contributor ladder outlines the different contributor roles within the project, along with the responsibilities and privileges that come with them.

Each of the contributor roles below is organized into lists of three types of things. "Responsibilities" are things that a contributor is expected to do. "Requirements" are qualifications a person needs to meet to be in that role, and "Privileges" are things contributors on that level are entitled to.

@@ -45,7 +45,7 @@ Description: A Contributor contributes directly to the project and adds value to
* Invitations to contributor events
* Eligible to become an Organization Member

A very special thanks to the [long list of people](https://github.com/Project-HAMi/HAMi/blob/master/AUTHORS.md) who have contributed to and helped maintain the project. We wouldn't be where we are today without your contributions. Thank you! 💖
A very special thanks to the [long list of people](https://github.com/Project-HAMi/HAMi/blob/master/AUTHORS.md) who have contributed to and helped maintain the project.

As long as you contribute to HAMi, your name will be added [here](https://github.com/Project-HAMi/HAMi/blob/master/AUTHORS.md). If you don't find your name, please contact us to add it.

@@ -140,7 +140,7 @@ The current list of maintainers can be found in the [MAINTAINERS](https://github

New maintainers are added by consensus among the current group of maintainers. This can be done via a private discussion via Slack or email. A majority of maintainers should support the addition of the new person, and no single maintainer should object to adding the new maintainer.

When adding a new maintainer, we should file a PR to [HAMi](https://github.com/Project-HAMi/HAMi) and update [MAINTAINERS](https://github.com/Project-HAMi/HAMi/blob/master/MAINTAINERS.md). Once this PR is merged, you will become a maintainer of HAMi.
When adding a new maintainer, file a PR to [HAMi](https://github.com/Project-HAMi/HAMi) and update [MAINTAINERS](https://github.com/Project-HAMi/HAMi/blob/master/MAINTAINERS.md). Once this PR is merged, you will become a maintainer of HAMi.

## Removing Maintainers

@@ -47,4 +47,4 @@ Both approaches are equivalent. After reloading your shell, karmadactl autocompl
## Enable kubectl-karmada autocompletion
Currently, kubectl plugins do not support autocomplete, but it is already planned in [Command line completion for kubectl plugins](https://github.com/kubernetes/kubernetes/issues/74178).

We will update the documentation as soon as it does.
Documentation will be updated when support is added.
4 changes: 2 additions & 2 deletions versioned_docs/version-v1.3.0/developers/dynamic-mig.md
@@ -10,8 +10,8 @@ This feature will not be implemented without the help of @sailorvii.

## Introduction

The NVIDIA GPU build-in sharing method includes: time-slice, MPS and MIG. The context switch for time slice sharing would waste some time, so we chose the MPS and MIG. The GPU MIG profile is variable, the user could acquire the MIG device in the profile definition, but current implementation only defines the dedicated profile before the user requirement. That limits the usage of MIG. We want to develop an automatic slice plugin and create the slice when the user require it.
For the scheduling method, node-level binpack and spread will be supported. Referring to the binpack plugin, we consider the CPU, Mem, GPU memory and other user-defined resource.
The NVIDIA GPU built-in sharing methods include time-slice, MPS and MIG. The context switch for time-slice sharing wastes some time, so MPS and MIG are preferred. The GPU MIG profile is variable: a user can request a MIG device by profile definition, but the current implementation only defines dedicated profiles ahead of user requests, which limits the usage of MIG. The goal is an automatic slice plugin that creates slices on demand.
For the scheduling method, node-level binpack and spread will be supported. Referring to the binpack plugin, the scheduler considers CPU, memory, GPU memory, and other user-defined resources.
HAMi is implemented using [hami-core](https://github.com/Project-HAMi/HAMi-core), a CUDA-hacking library. But MIG is also widely used, so a unified API for dynamic-mig and hami-core is needed.

## Targets
10 changes: 5 additions & 5 deletions versioned_docs/version-v1.3.0/developers/scheduling.md
@@ -8,7 +8,7 @@ Current in a cluster with many GPU nodes, nodes are not `binpack` or `spread` wh

## Proposal

We add a `node-scheduler-policy` and `gpu-scheduler-policy` to config, then scheduler to use this policy can impl node `binpack` or `spread` or GPU `binpack` or `spread`. and
A `node-scheduler-policy` and a `gpu-scheduler-policy` are added to the scheduler config, so the scheduler can implement node-level `binpack` or `spread` and GPU-level `binpack` or `spread`.
Users can override the default policy per Pod with the `hami.io/node-scheduler-policy` and `hami.io/gpu-scheduler-policy` annotations.
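A minimal Pod sketch using these override annotations (the annotation values are assumed to be the policy names `binpack`/`spread`; the resource name follows the NVIDIA examples elsewhere in these docs):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-task
  annotations:
    # Per-Pod overrides of the scheduler's default policies.
    hami.io/node-scheduler-policy: "binpack"
    hami.io/gpu-scheduler-policy: "spread"
spec:
  containers:
    - name: cuda
      image: nvidia/cuda:11.6.2-base-ubuntu20.04
      resources:
        limits:
          nvidia.com/gpu: 1
```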

### User Stories
@@ -104,7 +104,7 @@ Node1 score: ((1+3)/4) * 10 = 10
Node2 score: ((1+2)/4) * 10 = 7.5
```

So, in `Binpack` policy we can select `Node1`.
So, in `Binpack` policy, the selected node is `Node1`.

#### Spread

@@ -124,7 +124,7 @@ Node1 score: ((1+3)/4) * 10 = 10
Node2 score: ((1+2)/4) * 10 = 7.5
```

So, in `Spread` policy we can select `Node2`.
So, in `Spread` policy, the selected node is `Node2`.
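The node-level arithmetic above can be sketched in Python (a hypothetical helper, not HAMi's actual implementation; which operands are "used" versus "requested" units is inferred from the worked examples):

```python
def node_score(used, requested, total):
    # Score = ((used + requested) / total) * 10, matching the worked examples above.
    return (used + requested) / total * 10

def select_node(scores, policy):
    # Binpack favors the most-utilized node; Spread favors the least-utilized one.
    best = max if policy == "binpack" else min
    return best(scores, key=scores.get)

scores = {
    "Node1": node_score(1, 3, 4),  # 10.0
    "Node2": node_score(1, 2, 4),  # 7.5
}

assert select_node(scores, "binpack") == "Node1"
assert select_node(scores, "spread") == "Node2"
```

Both policies use the same score; only the selection direction differs.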

### GPU-scheduler-policy

@@ -147,7 +147,7 @@ GPU1 Score: ((20+10)/100 + (1000+2000)/8000) * 10 = 6.75
GPU2 Score: ((20+70)/100 + (1000+6000)/8000) * 10 = 17.75
```

So, in `Binpack` policy we can select `GPU2`.
So, in `Binpack` policy, the selected GPU is `GPU2`.

#### Spread

@@ -166,4 +166,4 @@ GPU1 Score: ((20+10)/100 + (1000+2000)/8000) * 10 = 6.75
GPU2 Score: ((20+70)/100 + (1000+6000)/8000) * 10 = 17.75
```

So, in `Spread` policy we can select `GPU1`.
So, in `Spread` policy, the selected GPU is `GPU1`.
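The GPU-level arithmetic above can be sketched the same way (a hypothetical helper, not HAMi's actual implementation; operand roles are inferred from the worked examples):

```python
def gpu_score(used_core, req_core, used_mem, req_mem, total_mem):
    # Score = ((usedCore + reqCore) / 100 + (usedMem + reqMem) / totalMem) * 10
    return ((used_core + req_core) / 100 + (used_mem + req_mem) / total_mem) * 10

def select_gpu(scores, policy):
    # Binpack favors the most-utilized GPU; Spread favors the least-utilized one.
    best = max if policy == "binpack" else min
    return best(scores, key=scores.get)

scores = {
    "GPU1": gpu_score(20, 10, 1000, 2000, 8000),  # ~6.75
    "GPU2": gpu_score(20, 70, 1000, 6000, 8000),  # ~17.75
}

assert select_gpu(scores, "binpack") == "GPU2"
assert select_gpu(scores, "spread") == "GPU1"
```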
5 changes: 2 additions & 3 deletions versioned_docs/version-v1.3.0/faq/faq.md
@@ -25,9 +25,8 @@ It's worth noting that not all controllers are needed by Karmada, for the recomm
## Can I install Karmada in a Kubernetes cluster and reuse the kube-apiserver as Karmada apiserver?

The quick answer is `yes`. In that case, you can save the effort to deploy
[karmada-apiserver](https://github.com/karmada-io/karmada/blob/master/artifacts/deploy/karmada-apiserver.yaml) and just
share the APIServer between Kubernetes and Karmada. In addition, the high availability capabilities in the origin clusters
can be inherited seamlessly. We do have some users using Karmada in this way.
[karmada-apiserver](https://github.com/karmada-io/karmada/blob/master/artifacts/deploy/karmada-apiserver.yaml) and share the APIServer between Kubernetes and Karmada. In addition, the high availability capabilities in the origin clusters
can be inherited. Some users run Karmada this way.

There are some things you should consider before doing so:

2 changes: 1 addition & 1 deletion versioned_docs/version-v1.3.0/roadmap.md
@@ -6,7 +6,7 @@ title: Karmada Roadmap

This document defines a high level roadmap for Karmada development and upcoming releases.
Community and contributor involvement is vital for successfully implementing all desired items for each release.
We hope that the items listed below will inspire further engagement from the community to keep karmada progressing and shipping exciting and valuable features.
The items below are intended to inspire further community engagement to keep HAMi progressing and shipping exciting and valuable features.


## 2022 H1
@@ -6,6 +6,6 @@ title: Troubleshooting
- Currently, A100 MIG can be supported in only "none" and "mixed" modes.
- Tasks with the "nodeName" field cannot be scheduled at the moment; please use "nodeSelector" instead.
- Only computing tasks are currently supported; video codec processing is not supported.
- We change `device-plugin` env var name from `NodeName` to `NODE_NAME`, if you use the image version `v2.3.9`, you may encounter the situation that `device-plugin` cannot start, there are two ways to fix it:
- The `device-plugin` env var name changed from `NodeName` to `NODE_NAME`. If you use image version `v2.3.9`, `device-plugin` may fail to start; there are two ways to fix it:
- Manually execute `kubectl edit daemonset` to modify the `device-plugin` env var from `NodeName` to `NODE_NAME`.
- Upgrade to the latest version using Helm. The latest `device-plugin` image version is `v2.3.10`; execute `helm upgrade hami hami/hami -n kube-system` and it will be fixed automatically.
@@ -4,7 +4,7 @@ title: Enable cambricon MLU sharing

## Introduction

**We now support cambricon.com/mlu by implementing most device-sharing features as nvidia-GPU**, including:
**HAMi now supports cambricon.com/mlu by implementing most of the same device-sharing features as NVIDIA GPU**, including:

***MLU sharing***: Each task can allocate a portion of MLU instead of a whole MLU card, thus MLU can be shared among multiple tasks.

2 changes: 1 addition & 1 deletion versioned_docs/version-v1.3.0/userguide/configure.md
@@ -18,7 +18,7 @@ You can update these configurations using one of the following methods:
2. Modify Helm Chart: Update the corresponding values in the [ConfigMap](https://raw.githubusercontent.com/archlitchi/HAMi/refs/heads/master/charts/hami/templates/scheduler/device-configmap.yaml), then reapply the Helm Chart to regenerate the ConfigMap.

* `nvidia.deviceMemoryScaling:`
Float type, by default: 1. The ratio for NVIDIA device memory scaling, can be greater than 1 (enable virtual device memory, experimental feature). For NVIDIA GPU with *M* memory, if we set `nvidia.deviceMemoryScaling` argument to *S*, vGPUs split by this GPU will totally get `S * M` memory in Kubernetes with our device plugin.
Float type, by default: 1. The ratio for NVIDIA device memory scaling; can be greater than 1 (enables virtual device memory, an experimental feature). For an NVIDIA GPU with *M* memory, if `nvidia.deviceMemoryScaling` is set to *S*, the vGPUs split from this GPU will get `S * M` memory in total in Kubernetes with the HAMi device plugin.
* `nvidia.deviceSplitCount:`
Integer type, by default: equals 10. Maximum tasks assigned to a simple GPU device.
* `nvidia.migstrategy:`
@@ -4,7 +4,7 @@ title: Enable Hygon DCU sharing

## Introduction

**We now support hygon.com/dcu by implementing most device-sharing features as nvidia-GPU**, including:
**HAMi now supports hygon.com/dcu by implementing most of the same device-sharing features as NVIDIA GPU**, including:

***DCU sharing***: Each task can allocate a portion of DCU instead of a whole DCU card, thus DCU can be shared among multiple tasks.

@@ -2,7 +2,7 @@
title: Enable Metax GPU topology-aware scheduling
---

**We now support metax.com/gpu by implementing topo-awareness among metax GPUs**:
**HAMi now supports metax.com/gpu by implementing topo-awareness among metax GPUs**:

When multiple GPUs are configured on a single server, there is a near-far relationship among the cards depending on whether they are connected to the same PCIe Switch or MetaXLink. This forms a topology among all the cards on the server, as shown in the following figure:
@@ -4,7 +4,7 @@ title: Enable Mthreads GPU sharing

## Introduction

**We now support mthreads.com/vgpu by implementing most device-sharing features as nvidia-GPU**, including:
**HAMi now supports mthreads.com/vgpu by implementing most device-sharing features as nvidia-GPU**, including:

***GPU sharing***: Each task can allocate a portion of GPU instead of a whole GPU card, thus GPU can be shared among multiple tasks.

@@ -4,7 +4,7 @@ title: Enable dynamic-mig feature

## Introduction

**We now support dynamic-mig by using mig-parted to adjust mig-devices dynamically**, including:
**HAMi now supports dynamic-mig by using mig-parted to adjust mig-devices dynamically**, including:

***Dynamic MIG instance management***: Users don't need to operate on the GPU node (e.g. running `nvidia-smi -i 0 -mig 1`) to manage MIG instances; this is all handled by HAMi-device-plugin.

@@ -24,4 +24,4 @@ spec:
nvidia.com/gpu: 2 # requesting 2 vGPUs
```
> **NOTICE:** * You can assign this task to multiple GPU types, use comma to separate,In this example, we want to run this job on A100 or V100*
> **NOTICE:** *You can assign this task to multiple GPU types, separated by commas. In this example, the job targets A100 or V100.*
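One way to express the multi-type request, as a sketch (the `nvidia.com/use-gputype` annotation name is an assumption based on other HAMi examples; verify it against your HAMi version):

```yaml
metadata:
  annotations:
    # Comma-separated list of acceptable GPU types (assumed annotation name).
    nvidia.com/use-gputype: "A100,V100"
```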
14 changes: 7 additions & 7 deletions versioned_docs/version-v2.4.1/contributor/contribute-docs.md
@@ -9,12 +9,12 @@ the `Project-HAMi/website` repository.
## Prerequisites

- Docs, like codes, are also categorized and stored by version.
1.3 is the first version we have archived.
1.3 is the first archived version.
- Docs need to be translated into multiple languages for readers from different regions.
The community now supports both Chinese and English.
English is the official language of documentation.
- For our docs we use markdown. If you are unfamiliar with Markdown, please see [https://guides.github.com/features/mastering-markdown/](https://guides.github.com/features/mastering-markdown/) or [https://www.markdownguide.org/](https://www.markdownguide.org/) if you are looking for something more substantial.
- We get some additions through [Docusaurus 2](https://docusaurus.io/), a model static website generator.
- The docs use Markdown. If you are unfamiliar with Markdown, please see [https://guides.github.com/features/mastering-markdown/](https://guides.github.com/features/mastering-markdown/) or [https://www.markdownguide.org/](https://www.markdownguide.org/) if you are looking for something more substantial.
- The site uses [Docusaurus 2](https://docusaurus.io/), a modern static website generator.

## Setup

@@ -85,7 +85,7 @@ title: A doc with tags
## secondary title
```

The top section between two lines of --- is the Front Matter section. Here we define a couple of entries which tell Docusaurus how to handle the article:
The top section between two lines of --- is the Front Matter section. These entries tell Docusaurus how to handle the article:

- Title is the equivalent of the `<h1>` in an HTML document or `# <title>` in a Markdown article.
- Each document has a unique ID. By default, a document ID is the name of the document (without the extension) related to the root docs directory.
@@ -101,7 +101,7 @@ You can easily route to other places by adding any of the following links:
You can use relative paths to index the corresponding files.
- Link to pictures or other resources.
If your article contains images, prefer storing them in `/static/img/docs/` and linking
with absolute paths. We use language-aware folders:
with absolute paths. Language-aware folders are used:
- `/static/img/docs/common/` for shared images
- `/static/img/docs/en/` for English-only images
- `/static/img/docs/zh/` for Chinese-only images
@@ -187,5 +187,5 @@ If the previewed page is not what you expected, please check your docs again.

### Versioning

For the newly supplemented documents of each version, we will synchronize to the latest version on the release date of each version, and the documents of the old version will not be modified.
For errata found in the documentation, we will fix it with every release.
Newly supplemented documents for each version are synchronized to the latest version on that version's release date; documents for old versions are not modified.
For errata found in the documentation, fixes are applied with every release.