Merged
versioned_docs/version-v1.3.0/contributor/contributing.md (1 addition, 1 deletion)

````diff
@@ -71,7 +71,7 @@ This is a rough outline of what a contributor's workflow looks like:
 
 ## Creating Pull Requests
 
-Pull requests are often called simply "PR".
+Pull requests are often called PRs.
 HAMi generally follows the standard [github pull request](https://help.github.com/articles/about-pull-requests/) process.
 To submit a proposed change, please develop the code/fix and add new test cases.
 After that, run these local verifications before submitting pull request to predict the pass or
````
````diff
@@ -137,7 +137,7 @@ You can customize the mig configuration by following the steps below:
 ## Running MIG jobs
 
 MIG instance can now be requested by a container the same way as using `hami-core`
-simply by specifying the `nvidia.com/gpu` and `nvidia.com/gpumem` resource type.
+by specifying the `nvidia.com/gpu` and `nvidia.com/gpumem` resource type.
 
 ```yaml
 apiVersion: v1
```
````
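The `yaml` snippet in the hunk above is cut off by the diff view. For context, a complete Pod spec requesting GPU resources through these resource names might look like the following sketch; the pod and container names, image, and quantities are illustrative assumptions, not taken from the diff:

```yaml
# Hypothetical sketch: a Pod requesting one GPU instance plus device memory
# via the resource names mentioned in the hunk above. Names, image, and
# quantities are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: mig-demo
spec:
  containers:
    - name: cuda-container
      image: nvidia/cuda:12.4.0-base-ubuntu22.04
      command: ["sleep", "86400"]
      resources:
        limits:
          nvidia.com/gpu: 1       # number of GPU instances requested
          nvidia.com/gpumem: 8000 # device memory, commonly expressed in MiB
```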
````diff
@@ -75,7 +75,7 @@ Contributors mainly contribute documentation to the current version.
 
 It's important for your article to specify metadata concerning an article at the top of the Markdown file, in a section called **Front Matter**.
 
-For now, let's take a look at a quick example which should explain the most relevant entries in **Front Matter**:
+Here is a quick example that explains the most relevant entries in **Front Matter**:
 
 ```yaml
 ---
```
````
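The Front Matter example in the hunk above is also truncated by the diff view. As a sketch of what such a block typically contains (the key names follow common Docusaurus usage and the values are illustrative assumptions, not taken from the diff):

```yaml
---
# Hypothetical Front Matter sketch; keys and values are illustrative.
title: Contributing to HAMi Docs        # page title shown in the sidebar and browser tab
description: How to add a new documentation article.
---
```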
versioned_docs/version-v2.4.1/contributor/contributing.md (1 addition, 1 deletion)

````diff
@@ -71,7 +71,7 @@ This is a rough outline of what a contributor's workflow looks like:
 
 ## Creating Pull Requests
 
-Pull requests are often called simply "PR".
+Pull requests are often called PRs.
 HAMi generally follows the standard [github pull request](https://help.github.com/articles/about-pull-requests/) process.
 To submit a proposed change, please develop the code/fix and add new test cases.
 After that, run these local verifications before submitting pull request to predict the pass or
````
````diff
@@ -137,7 +137,7 @@ You can customize the mig configuration by following the steps below:
 ## Running MIG jobs
 
 MIG instance can now be requested by a container the same way as using `hami-core`
-simply by specifying the `nvidia.com/gpu` and `nvidia.com/gpumem` resource type.
+by specifying the `nvidia.com/gpu` and `nvidia.com/gpumem` resource type.
 
 ```yaml
 apiVersion: v1
```
````
````diff
@@ -77,7 +77,7 @@ Contributors mainly contribute documentation to the current version.
 
 It's important for your article to specify metadata concerning an article at the top of the Markdown file, in a section called **Front Matter**.
 
-For now, let's take a look at a quick example which should explain the most relevant entries in **Front Matter**:
+Here is a quick example that explains the most relevant entries in **Front Matter**:
 
 ```yaml
 ---
```
````
versioned_docs/version-v2.5.0/contributor/contributing.md (1 addition, 1 deletion)

````diff
@@ -71,7 +71,7 @@ This is a rough outline of what a contributor's workflow looks like:
 
 ## Creating Pull Requests
 
-Pull requests are often called simply "PR".
+Pull requests are often called PRs.
 HAMi generally follows the standard [github pull request](https://help.github.com/articles/about-pull-requests/) process.
 To submit a proposed change, please develop the code/fix and add new test cases.
 After that, run these local verifications before submitting pull request to predict the pass or
````
````diff
@@ -137,7 +137,7 @@ You can customize the mig configuration by following the steps below:
 ## Running MIG jobs
 
 MIG instance can now be requested by a container the same way as using `hami-core`
-simply by specifying the `nvidia.com/gpu` and `nvidia.com/gpumem` resource type.
+by specifying the `nvidia.com/gpu` and `nvidia.com/gpumem` resource type.
 
 ```yaml
 apiVersion: v1
```
````
````diff
@@ -77,7 +77,7 @@ Contributors mainly contribute documentation to the current version.
 
 It's important for your article to specify metadata concerning an article at the top of the Markdown file, in a section called **Front Matter**.
 
-For now, let's take a look at a quick example which should explain the most relevant entries in **Front Matter**:
+Here is a quick example that explains the most relevant entries in **Front Matter**:
 
 ```yaml
 ---
```
````
versioned_docs/version-v2.5.1/contributor/contributing.md (1 addition, 1 deletion)

````diff
@@ -71,7 +71,7 @@ This is a rough outline of what a contributor's workflow looks like:
 
 ## Creating Pull Requests
 
-Pull requests are often called simply "PR".
+Pull requests are often called PRs.
 HAMi generally follows the standard [github pull request](https://help.github.com/articles/about-pull-requests/) process.
 To submit a proposed change, please develop the code/fix and add new test cases.
 After that, run these local verifications before submitting pull request to predict the pass or
````
````diff
@@ -136,7 +136,7 @@ Please note that HAMi will identify and use the first MIG template that matches
 ## Running MIG jobs
 
 A MIG instance can now be requested by a container in the same way as `hami-core`,
-simply by specifying the `nvidia.com/gpu` and `nvidia.com/gpumem` resource types.
+by specifying the `nvidia.com/gpu` and `nvidia.com/gpumem` resource types.
 
 ```yaml
 apiVersion: v1
```
````
````diff
@@ -77,7 +77,7 @@ Contributors mainly contribute documentation to the current version.
 
 It's important for your article to specify metadata concerning an article at the top of the Markdown file, in a section called **Front Matter**.
 
-For now, let's take a look at a quick example which should explain the most relevant entries in **Front Matter**:
+Here is a quick example that explains the most relevant entries in **Front Matter**:
 
 ```yaml
 ---
```
````
versioned_docs/version-v2.6.0/contributor/contributing.md (1 addition, 1 deletion)

````diff
@@ -71,7 +71,7 @@ This is a rough outline of what a contributor's workflow looks like:
 
 ## Creating Pull Requests
 
-Pull requests are often called simply "PR".
+Pull requests are often called PRs.
 HAMi generally follows the standard [github pull request](https://help.github.com/articles/about-pull-requests/) process.
 To submit a proposed change, please develop the code/fix and add new test cases.
 After that, run these local verifications before submitting pull request to predict the pass or
````
versioned_docs/version-v2.6.0/faq/faq.md (1 addition, 1 deletion)

````diff
@@ -70,7 +70,7 @@ In summary, while HAMi's own priority serves a different, device-specific purpos
 **Currently Supported**:
 
 - **Volcano**: Can be integrated with Volcano by using the [`volcano-vgpu-device-plugin`](https://github.com/Project-HAMi/volcano-vgpu-device-plugin) under the HAMi project for GPU resource scheduling and management.
-- **Koordinator**: HAMi can also be integrated with Koordinator to provide end-to-end GPU sharing solutions. By deploying HAMi-core on nodes and configuring the appropriate labels and resource requests in Pods, Koordinator can leverage HAMi’s GPU isolation capabilities, allowing multiple Pods to share the same GPU and significantly improve GPU resource utilization.
+- **Koordinator**: HAMi can also be integrated with Koordinator to provide end-to-end GPU sharing solutions. By deploying HAMi-core on nodes and configuring the appropriate labels and resource requests in Pods, Koordinator can use HAMi’s GPU isolation capabilities, allowing multiple Pods to share the same GPU and significantly improve GPU resource utilization.
 
 For detailed configuration and usage instructions, refer to the Koordinator documentation:
 [Device Scheduling - GPU Share With HAMi](https://koordinator.sh/docs/user-manuals/device-scheduling-gpu-share-with-hami/)
````
````diff
@@ -136,7 +136,7 @@ Please note that HAMi will identify and use the first MIG template that matches
 ## Running MIG jobs
 
 A MIG instance can now be requested by a container in the same way as `hami-core`,
-simply by specifying the `nvidia.com/gpu` and `nvidia.com/gpumem` resource types.
+by specifying the `nvidia.com/gpu` and `nvidia.com/gpumem` resource types.
 
 ```yaml
 apiVersion: v1
```
````
````diff
@@ -77,7 +77,7 @@ Contributors mainly contribute documentation to the current version.
 
 It's important for your article to specify metadata concerning an article at the top of the Markdown file, in a section called **Front Matter**.
 
-For now, let's take a look at a quick example which should explain the most relevant entries in **Front Matter**:
+Here is a quick example that explains the most relevant entries in **Front Matter**:
 
 ```yaml
 ---
```
````
versioned_docs/version-v2.7.0/contributor/contributing.md (1 addition, 1 deletion)

````diff
@@ -71,7 +71,7 @@ This is a rough outline of what a contributor's workflow looks like:
 
 ## Creating Pull Requests
 
-Pull requests are often called simply "PR".
+Pull requests are often called PRs.
 HAMi generally follows the standard [github pull request](https://help.github.com/articles/about-pull-requests/) process.
 To submit a proposed change, please develop the code/fix and add new test cases.
 After that, run these local verifications before submitting pull request to predict the pass or
````
versioned_docs/version-v2.7.0/faq/faq.md (1 addition, 1 deletion)

````diff
@@ -70,7 +70,7 @@ In summary, while HAMi's own priority serves a different, device-specific purpos
 **Currently Supported**:
 
 - **Volcano**: Can be integrated with Volcano by using the [`volcano-vgpu-device-plugin`](https://github.com/Project-HAMi/volcano-vgpu-device-plugin) under the HAMi project for GPU resource scheduling and management.
-- **Koordinator**: HAMi can also be integrated with Koordinator to provide end-to-end GPU sharing solutions. By deploying HAMi-core on nodes and configuring the appropriate labels and resource requests in Pods, Koordinator can leverage HAMi’s GPU isolation capabilities, allowing multiple Pods to share the same GPU and significantly improve GPU resource utilization.
+- **Koordinator**: HAMi can also be integrated with Koordinator to provide end-to-end GPU sharing solutions. By deploying HAMi-core on nodes and configuring the appropriate labels and resource requests in Pods, Koordinator can use HAMi’s GPU isolation capabilities, allowing multiple Pods to share the same GPU and significantly improve GPU resource utilization.
 
 For detailed configuration and usage instructions, refer to the Koordinator documentation:
 [Device Scheduling - GPU Share With HAMi](https://koordinator.sh/docs/user-manuals/device-scheduling-gpu-share-with-hami/)
````
````diff
@@ -136,7 +136,7 @@ Please note that HAMi will identify and use the first MIG template that matches
 ## Running MIG jobs
 
 A MIG instance can now be requested by a container in the same way as `hami-core`,
-simply by specifying the `nvidia.com/gpu` and `nvidia.com/gpumem` resource types.
+by specifying the `nvidia.com/gpu` and `nvidia.com/gpumem` resource types.
 
 ```yaml
 apiVersion: v1
```
````
````diff
@@ -77,7 +77,7 @@ Contributors mainly contribute documentation to the current version.
 
 It's important for your article to specify metadata concerning an article at the top of the Markdown file, in a section called **Front Matter**.
 
-For now, let's take a look at a quick example which should explain the most relevant entries in **Front Matter**:
+Here is a quick example that explains the most relevant entries in **Front Matter**:
 
 ```markdown
 ---
```
````
versioned_docs/version-v2.8.0/contributor/contributing.md (1 addition, 1 deletion)

````diff
@@ -71,7 +71,7 @@ This is a rough outline of what a contributor's workflow looks like:
 
 ## Creating Pull Requests
 
-Pull requests are often called simply "PR".
+Pull requests are often called PRs.
 HAMi generally follows the standard [github pull request](https://help.github.com/articles/about-pull-requests/) process.
 To submit a proposed change, please develop the code/fix and add new test cases.
 After that, run these local verifications before submitting pull request to predict the pass or
````
versioned_docs/version-v2.8.0/faq/faq.md (1 addition, 1 deletion)

````diff
@@ -69,7 +69,7 @@ In summary, while HAMi's own priority serves a different, device-specific purpos
 **Currently Supported**:
 
 - **Volcano**: Can be integrated with Volcano by using the [`volcano-vgpu-device-plugin`](https://github.com/Project-HAMi/volcano-vgpu-device-plugin) under the HAMi project for GPU resource scheduling and management.
-- **Koordinator**: HAMi can also be integrated with Koordinator to provide end-to-end GPU sharing solutions. By deploying HAMi-core on nodes and configuring the appropriate labels and resource requests in Pods, Koordinator can leverage HAMi’s GPU isolation capabilities, allowing multiple Pods to share the same GPU and significantly improve GPU resource utilization.
+- **Koordinator**: HAMi can also be integrated with Koordinator to provide end-to-end GPU sharing solutions. By deploying HAMi-core on nodes and configuring the appropriate labels and resource requests in Pods, Koordinator can use HAMi’s GPU isolation capabilities, allowing multiple Pods to share the same GPU and significantly improve GPU resource utilization.
 
 For detailed configuration and usage instructions, refer to the Koordinator documentation:
 [Device Scheduling - GPU Share With HAMi](https://koordinator.sh/docs/user-manuals/device-scheduling-gpu-share-with-hami/)
````
````diff
@@ -136,7 +136,7 @@ Please note that HAMi will identify and use the first MIG template that matches
 ## Running MIG jobs
 
 A MIG instance can now be requested by a container in the same way as `hami-core`,
-simply by specifying the `nvidia.com/gpu` and `nvidia.com/gpumem` resource types.
+by specifying the `nvidia.com/gpu` and `nvidia.com/gpumem` resource types.
 
 ```yaml
 apiVersion: v1
```
````