Introduction
The Model Service is an end-to-end service provided by the GpuGeek platform, covering model creation, model push, model inference, and model deployment. In the Model Marketplace you can call any public model through the web interface or the API, or publish your own models to the platform. If you need more stable and efficient inference, you can choose model deployment.
Model Categories
Models are divided into public and private models. Public models are published by the platform or by individual users. You can browse all public models in the Model Marketplace; they cover multiple task types such as text dialogue, text-to-image, and text-to-video. Public models can be tried out in the web interface or called through the API, and billing is based on the number of calls or the call duration. Private models are not exposed to other users and can be used for your own debugging or deployment.
View Public Models
Visit the Model Marketplace page and use the task category tags on the left, or search by model name, to find the models you need. On a model's detail page you can quickly test the model or call it through the API, as sketched below. You can also bookmark a model so you can reach it later from the My Favorites page.
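As an illustration only, the following minimal Python sketch shows what calling a public model over HTTP might look like. The endpoint URL, the GPUGEEK_API_KEY environment variable, and the request payload shape are placeholders assumed for this example; the model's detail page describes the actual API address, authentication, and request format.

```python
import os
import requests

# Hypothetical endpoint and credential names for illustration only; see the
# model's detail page for the real API address and request format.
API_URL = "https://api.example.com/v1/models/<model-id>/infer"
API_KEY = os.environ["GPUGEEK_API_KEY"]  # assumed platform-issued API key

payload = {"prompt": "Write a haiku about GPUs."}  # example text-dialogue input

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json())
```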
Publish Your Own Model
You can create your own model on the Model Management page and push it to the platform with the provided publishing tools, running it on the hardware you specify. You can make your model public so that other users can call it; public models go through a backend review to ensure the information is complete. Alternatively, you can keep the model private, so that only you can call or deploy it.
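The sketch below outlines this register-then-upload flow in generic HTTP terms, purely as a hypothetical illustration: the base URL, endpoint paths, visibility field, and artifact format are all assumptions, and the actual workflow uses the platform's publishing tools described in Model Creation and Model Upload.

```python
import os
import requests

# Purely illustrative names; the real create/upload endpoints and fields are
# documented in Model Creation and Model Upload.
BASE_URL = "https://api.example.com/v1"
API_KEY = os.environ["GPUGEEK_API_KEY"]  # assumed platform-issued credential
headers = {"Authorization": f"Bearer {API_KEY}"}

# Step 1: register the model with basic metadata and the desired visibility.
model = requests.post(
    f"{BASE_URL}/models",
    headers=headers,
    json={"name": "my-text-model", "visibility": "private"},  # or "public"
    timeout=60,
).json()

# Step 2: upload the packaged model artifact (weights, config) for the record.
with open("model.tar.gz", "rb") as artifact:
    requests.post(
        f"{BASE_URL}/models/{model['id']}/files",
        headers=headers,
        files={"file": artifact},
        timeout=600,
    ).raise_for_status()
```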
For more information, see Model Creation and Model Upload.