Running AI and ML models on a local machine can be hell. We know that, and it motivated us to build a bridge between local and cloud.
Prerequisite: you should have a magicpoint.yaml file to use the bridge!
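The exact schema of magicpoint.yaml isn't shown here, so treat the following as a minimal sketch with hypothetical fields (name, python, requirements are assumptions, not the confirmed format):

# magicpoint.yaml -- hypothetical example; field names are assumptions
name: my-cool-app          # assumed: a display name for your app
python: "3.10"             # assumed: the Python version to run on the cloud
requirements:              # assumed: packages installed before serving
  - diffusers
  - torch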
Let's continue with an example. Suppose you have a magicpoint application like this:
from magicpoint import Magicpoint, Request, Response
from diffusers import StableDiffusionPipeline
import torch
import os

app = Magicpoint("MyCoolApp")

@app.init
def init():
    # Load Stable Diffusion once at startup and keep it in the shared context.
    model = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        revision="fp16",
        torch_dtype=torch.float16,
        cache_dir="./models",
        use_auth_token=os.environ["HUGGINGFACE_API_KEY"],
    ).to("cuda")
    context = {
        "model": model
    }
    return context

@app.background_task("/generate_async")
def generate_image_async(context, request):
    return

@app.training("/background")
def training(context: dict, request: Request) -> None:
    return

app.serve()
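The handler bodies above are intentionally left empty. How you read the incoming request depends on magicpoint's Request API, which isn't documented here, so the following is only a hedged sketch: it assumes request.json holds the client payload and that the handler can persist its output locally. The diffusers call itself (pipeline(prompt).images) is the real library API.

@app.background_task("/generate_async")
def generate_image_async(context, request):
    # Hypothetical: assumes request.json exposes the JSON payload.
    prompt = request.json.get("prompt", "an astronaut riding a horse")
    model = context["model"]            # the pipeline loaded in init()
    image = model(prompt).images[0]     # run Stable Diffusion inference
    image.save("./output.png")          # persist the generated image
    return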
If you are working on the training method, you can run the following command to serve it on the cloud:
magicpoint bridge example_app.py:training
The output will look like this:
[✓] Creating bridge...
[✓] Deploying magicpoint magic into the bridge...
[✓] Setup completed
Your bridge URL is ready: https://each.tech/bridge/eliminate-retirement-x1723xn
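Once the bridge is up, you can trigger the deployed method over HTTP. The route layout is an assumption here (the sketch appends the decorator's path, /background, to the bridge URL, and the payload fields are hypothetical):

import requests

# Hypothetical call; assumes the endpoint path is appended to the bridge URL.
resp = requests.post(
    "https://each.tech/bridge/eliminate-retirement-x1723xn/background",
    json={"epochs": 10},  # hypothetical payload your training handler might read
)
print(resp.status_code)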