llama-4-maverick-instruct

LLM

A 17-billion-parameter model with 128 experts


API Endpoint

POST https://lumozai.com/api/v4/run

Use this endpoint to run the model with your parameters. Authentication is required using your API key.
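
At the HTTP level, the request built by the sample below looks roughly like this (a minimal sketch showing only the headers the sample sets):

POST /api/v4/run HTTP/1.1
Host: lumozai.com
Accept: application/json
Authorization: Bearer YOUR_API_KEY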

Sample Code (PHP cURL)

<?php
// API endpoint URL
$apiUrl = 'https://lumozai.com/api/v4/run';

// Your API key
$apiKey = 'YOUR_API_KEY';

// Model parameters
$params = [
    'model_id' => 'llama-4-maverick-instruct',
    'prompt' => 'Your prompt here',
    'max_tokens' => '1024',
    'temperature' => '0.6',
];

// Initialize cURL session
$curl = curl_init();

// Set cURL options
curl_setopt_array($curl, [
    CURLOPT_URL => $apiUrl,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST => true,
    CURLOPT_POSTFIELDS => $params,
    CURLOPT_HTTPHEADER => [
        'Accept: application/json',
        'Authorization: Bearer ' . $apiKey
    ]
]);

// Execute the request
$response = curl_exec($curl);
$httpCode = curl_getinfo($curl, CURLINFO_HTTP_CODE);

// Check for errors
if (curl_errno($curl)) {
    echo 'cURL Error: ' . curl_error($curl);
} else {
    // Process the response
    $result = json_decode($response, true);
    
    if ($httpCode == 200 && isset($result['success']) && $result['success']) {
        // Handle successful response
        echo "Model output: ";
        print_r($result['data']['output']);
    } else {
        // Handle error response
        echo "Error: " . ($result['message'] ?? 'Unknown error');
    }
}

// Close cURL session
curl_close($curl);
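
Passing an array to CURLOPT_POSTFIELDS makes cURL send the fields as multipart/form-data. If the endpoint also accepts a JSON request body (an assumption; check the API reference for your account), the curl_setopt_array() step in the sample above could instead be written like this:

// Alternative: send the parameters as a JSON body.
// Assumption: the endpoint accepts Content-Type: application/json.
curl_setopt_array($curl, [
    CURLOPT_URL => $apiUrl,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST => true,
    CURLOPT_POSTFIELDS => json_encode($params),
    CURLOPT_HTTPHEADER => [
        'Accept: application/json',
        'Content-Type: application/json',
        'Authorization: Bearer ' . $apiKey
    ]
]);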

Request Parameters

Parameter     Type     Required    Description
model_id      string   Required    The ID of the model to run (llama-4-maverick-instruct)
api_key       string   Required    Your API key for authentication (sent as a Bearer token in the Authorization header)
prompt        string   Required    The input text prompt to send to the model
max_tokens    number   Optional    Maximum number of tokens to generate (the sample uses 1024)
temperature   number   Optional    Sampling temperature controlling output randomness (the sample uses 0.6)

Response Format

{
    "success": true,
    "data": {
        "output": "This is the text output from the model based on your input parameters."
    }
}
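
The sample code above assumes that a failed request returns "success": false together with a human-readable "message" field, so an error response presumably has roughly this shape (exact fields may vary):

{
    "success": false,
    "message": "Description of the error."
}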

Model Pricing

Usage Cost

$0.0023 per second of GPU time

Cost per second is a pricing model in which the total cost is calculated from the number of seconds of GPU time the model consumes. The longer the model runs, the higher the cost, so charges are directly proportional to the actual compute time used.
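
For example, at this rate a request that occupies the GPU for 10 seconds would cost 10 × $0.0023 = $0.023.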

Features Included:

  • Full API access
  • Web UI usage
  • Output in multiple formats
  • Commercial usage rights

Note: Pricing is subject to change. For bulk usage or custom packages, please contact us.