# 🤖 Models - Guide

## What are Models?

Kiwi X External runs on AI. The AI learns from collected images, which helps the Aim Aligner become more accurate. Models can be manually created by users, then shared or modified to improve their capability. Essentially, the more images of enemy player skins a model is trained on, the more effective the model configuration will be. We also have a **'Collect Data While Playing'** option to gather training images automatically.

### **Terminology:**

**Image Training** - In AI, **"trained images"** refers to the examples a computer has learned from, enabling it to recognize things like objects or patterns.

**Labelling Images** - In AI, **"labelling images"** refers to the process of adding meaningful information (such as bounding boxes and class names) to images, specifically to train machine learning models.

**Creating Models** - An **ONNX** model file is like a ready-to-use cookbook for teaching computers to understand images. It contains all the instructions and knowledge an AI needs to recognize things in pictures. The file can be easily shared and used by different AI systems, making it convenient for developers building image-related applications. In the case of Kiwi X External, **.onnx** files are used to recognize the images its Aim Aligner needs to function.

### **This guide explains, step by step, how to create your own models.**

## Collecting Images:

**1)** Manual Method - Download images online, such as skins of enemy players. You can find model packs, take screenshots manually from in-game or from videos, and use other resources.

**2)** Auto Train Method **(Fastest)** - Use Kiwi X External's **'Collect Data While Playing'** feature with **'Aim Only On Trigger Button'** enabled. Each time the Trigger Button is clicked, a screenshot is captured. These images can be found inside Kiwi X External's **bin** > **images** folder.

<figure><img src="https://2400500621-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlBmafik244KULerYHQTQ%2Fuploads%2FQTFcr80EWzhuub9zvFGB%2F1.png?alt=media&#x26;token=27d96f5a-dadb-4992-b34c-0d48d7eeb7ce" alt=""><figcaption></figcaption></figure>

<figure><img src="https://2400500621-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FlBmafik244KULerYHQTQ%2Fuploads%2FqHK0EXfJ26MWscbN0Bv3%2F2.png?alt=media&#x26;token=eae624e8-6f7d-4350-9f02-fad8b65d44b6" alt=""><figcaption></figcaption></figure>

**Tip:** When training images, whether manually or automatically, it's important to collect **high-quality images**, **different skins with added cosmetics**, and as much detail as possible.

**For example:** different body positions **(running, jumping, crouching, etc.)**. It's also recommended to snapshot images at **different distances** and to **remove images that are irrelevant**. Collect and use images that will help train the **Aim Aligner** to lock onto enemies more accurately.
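
When collecting automatically, the same frame can easily be captured more than once. As a rough stdlib-only sketch of the "remove irrelevant images" cleanup (the function name and folder argument here are illustrative, not part of Kiwi X External):

```python
# Sketch: delete byte-identical duplicate screenshots from a folder of
# collected images. Auto-collection can capture the same frame twice.
import hashlib
from pathlib import Path

def dedupe_images(folder: str) -> list:
    """Remove exact-duplicate files, returning the names removed."""
    seen = set()
    removed = []
    for path in sorted(Path(folder).iterdir()):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in seen:
            path.unlink()          # drop the later duplicate
            removed.append(path.name)
        else:
            seen.add(digest)
    return removed
```

This only catches exact duplicates; blurry or irrelevant shots still need a manual pass.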

## Labelling Images:

**Step 1)** Visit [**https://www.makesense.ai** ](https://www.makesense.ai)and click **'Get Started'**

**Step 2)** Click **'Drop an image'** (The more **Collected Images** the better).

**Step 3)** Click **'Object detection'**

**Step 4)** Click **'Start project'**

**Step 5)** In the top left corner under **'Actions'** click **'Run AI Locally'**

**Step 6)** Select the **'YoloV5'** option and click **'Use model!'**

**Step 7)** Click the down arrow and select **'Yolov5n/COCO'** then click **'Use model!'**

**Step 8)** Click **'Select all'** and **Accept**. Deselect any labels that aren't relevant.

**Step 9)** Under **'Actions'** click **'Export Annotations'** and choose the first option.

**Step 10)** Save the **Model** onto your desktop for easier access.
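
The first export option produces YOLO-format `.txt` annotation files, one per image. Assuming that format, a minimal sketch of what each line encodes (the helper function below is illustrative):

```python
# Sketch: parse one line of a YOLO-format label file. Each line is:
#   <class_id> <x_center> <y_center> <width> <height>
# with coordinates normalized to 0-1 relative to the image size.
def parse_yolo_label(line: str):
    parts = line.split()
    class_id = int(parts[0])
    x_center, y_center, width, height = (float(v) for v in parts[1:])
    return class_id, x_center, y_center, width, height

print(parse_yolo_label("0 0.5 0.5 0.25 0.4"))  # → (0, 0.5, 0.5, 0.25, 0.4)
```

That example line describes a box centred in the image, a quarter of the image wide and 40% tall, for class index 0.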

## Training Images

### Required:

**Step 1) -** Install [**Python**](https://www.python.org/downloads/).

**Step 2)** Install Ultralytics

Run **CMD** as **Administrator**

Copy and paste the below command into **CMD** to install Ultralytics **(Must have Python first)**

```
pip install ultralytics
```

Note: If you open **CMD** and get the error **"'pip' is not recognized as an internal or external command, operable program or batch file."**, watch the tutorial [**here**](https://www.youtube.com/watch?v=jPRIzVZulhA) to fix the PATH error.

**Step 3)** Install PyTorch **(Optional)**

(This trains images on your **GPU** instead of your **CPU**, which can make the process up to **10X** quicker).

To install PyTorch, see the official page [**here**](https://pytorch.org/get-started/locally/) and choose the **CUDA** option.

To check that PyTorch is installed, open Python and run:

```
import torch
print(torch.cuda.is_available())
```

If the output is **True**, then **PyTorch** is successfully installed and running.

### Tutorial: Training Images

**Step 1)** Download[ **Image Training Pack**](https://cdn.discordapp.com/attachments/1207738021739495477/1215305408948998155/Image_Training.rar?ex=65fc445c\&is=65e9cf5c\&hm=f5e2ab9a7abc98515e98eba22dd6502a144218bc9ed028eff0541255d3a84f6f&)[.](https://mega.nz/file/zy5lzC5Y#Aw6LjMhSiTdJh486X9MzFwQyJEEk6aWzMpRCg5qdhoA)

(The pack contains a ready made folder structure and the tool required for training images).

**Step 2)** Open the **data.yaml** file and set the correct paths.

**Train Path** -  Go to **images > train > Copy Path**

Example **- C:\Users\User\OneDrive\Desktop\Image Training\images\train**

**Val Path -** Go to **images > val > Copy path**

Example **- C:\Users\User\OneDrive\Desktop\Image Training\images\val**
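
For reference, a typical Ultralytics **data.yaml** looks something like the sketch below. The class count and names here are hypothetical; they must match the labels you exported, and the paths must be the ones you copied above:

```yaml
# Hypothetical data.yaml - replace the paths with your own,
# and make 'names' match the label classes you exported
train: C:\Users\User\OneDrive\Desktop\Image Training\images\train
val: C:\Users\User\OneDrive\Desktop\Image Training\images\val
nc: 1              # number of classes (assumption: a single class)
names: ['enemy']   # class names, in label-index order
```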

**Step 3)** Upload the **collected images** inside the **images** folder under **train**.

**Step 4)** Upload the **labelled images** inside the **labels** folder under **train**.
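
Every training image needs a label `.txt` with the same base name. As a quick stdlib sketch (the function and folder arguments are illustrative) to spot unlabelled images before training:

```python
# Sketch: list images in an images folder that have no matching
# label file (same base name, .txt extension) in a labels folder.
from pathlib import Path

def find_unlabelled(images_dir: str, labels_dir: str) -> list:
    labelled = {p.stem for p in Path(labels_dir).glob("*.txt")}
    return sorted(p.name for p in Path(images_dir).glob("*.*")
                  if p.stem not in labelled)
```

Any file names this returns either need labelling or should be removed before you run the train command.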

**Step 5)** Run the following commands in **CMD** to train the images.

**Train Command**

{% code overflow="wrap" %}

```
yolo task=detect mode=train imgsz=640 data=data.yaml epochs=100 batch=16 name=putyourmodelnamehere
```

{% endcode %}

Replace **'putyourmodelnamehere'** with the name of the folder. By default this should be **'Image Training'**; make sure there are no added spaces and that the name isn't different.

Note: Increasing the epochs number lengthens training time, but more epochs can improve the model's accuracy.

After running this command, training will start and the results will be saved in a **'runs'** folder that it creates.

**Export Model Command**

```
yolo export model=best.pt format=onnx
```

Boom! That's it. You've now created your own model from scratch, and it's ready to use.

## How to use Model:

Drag and drop the exported model file into the **bin** > **models** folder (from Kiwi X External). It will then be saved under **'Local Models'** in the **Model Selector**.

Kiwi X External supports **'Hot Swapping'**: no reload of the app is required for the model to show.

## Use Models made by other users:

Join our [**Discord Server**](https://kiwiexploits.com/discord); we have channels where users can request models or share them within the community. We'll also post models ourselves in the '**#**✅**verified-models**' channel.

### Use Models made by Aimmy:

Aimmy's public Model directory can be used on Kiwi X External.

Link - [**https://github.com/Babyhamsta/Aimmy/tree/master/models**](https://github.com/Babyhamsta/Aimmy/tree/master/models)

Note: These models may have their own recommended configs, which can be found [**here**](https://github.com/Babyhamsta/Aimmy/tree/master/configs).

We don't own these models or configs, so we can't verify their accuracy. It's recommended to make your own, or to get them directly from our [**Discord Server**](https://kiwiexploits.com/discord), where they are shared between community members.

