Next-Gen AI Desktop

Hala Nexus (STHT1)

Revolutionizing Business with AI Desktops

AI Desktop Workstation powered by AMD Ryzen AI Strix Halo. Run massive LLMs locally with up to 235B parameters. Unprecedented on-device AI performance.

Run up to 235B parameter models
Up to 2.5× better graphics performance
Multi-unit clustering
80 TOPS NPU
128GB Maximum Memory
235B Parameter Models
96GB Graphics Memory
Features

Powerhouse Features

Everything from personal workstations to team deployments.


80 TOPS NPU

Dedicated 80 TOPS NPU for AI processing. Run LLMs locally with low latency.


RDNA 3.5 Integrated Graphics

AMD integrated graphics with up to 96GB shared memory. Discrete-level performance for AI and creative workflows.


Dual M.2 NVMe Storage

2× M.2 2280 PCIe 4.0 ×4 NVMe SSD slots. SD 4.0 card reader with SDXC support. Expand storage for datasets and model caching.


Industrial Connectivity

WiFi 7 (802.11be), Bluetooth 5.0+, 2.5G Ethernet. Deploy anywhere, connect reliably.

Premium Mini PC Workstation

A New Class of AI Desktop

H13 combines high-power CPU cores, RDNA 3.5 graphics, and an industry-leading NPU into a single compact system. It is designed for developers, teams, and enterprises that need local, scalable, high-performance AI compute without relying on cloud infrastructure.


🚀Unprecedented AI Acceleration

  • 80 TOPS NPU — industry leading
  • Optimized for AI inference, Copilot+ PC, and LLM workloads
  • High power range (120W / 132W) for sustained performance

🖥Discrete-Level Graphics Performance

  • Built-in RDNA 3.5 graphics (up to 40CU)
  • Up to 2.5× graphics performance vs previous gen AI chips
  • Competitive with RTX 4070 mobile performance
  • Superior thermal design for sustained local compute

🧠Massive Unified Memory

  • Up to 128GB LPDDR5X (8533 MT/s)
  • Allocate up to 96GB as variable graphics memory
  • Ideal for large AI models and high-resolution workloads
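The memory claims above can be sanity-checked with rough arithmetic: a model's weight footprint is approximately parameters × bits-per-weight ÷ 8, ignoring KV cache, activations, and runtime overhead. A minimal sketch (quantization bit-widths and model sizes here are illustrative, not measured figures):

```python
def weights_gb(params_b: float, bits_per_weight: float) -> float:
    """Rough weights-only footprint in GB for a model with `params_b`
    billion parameters at the given quantization width. Ignores KV
    cache, activations, and runtime overhead."""
    return params_b * bits_per_weight / 8

# A 235B-parameter model at 4-bit quantization:
print(weights_gb(235, 4))   # 117.5 GB: fits in 128GB unified memory
# The same model at 3-bit quantization:
print(weights_gb(235, 3))   # 88.125 GB: within the 96GB graphics allocation
# A 70B-parameter model at 8-bit quantization:
print(weights_gb(70, 8))    # 70.0 GB
```

In other words, a 235B-parameter model fits in the 128GB unified pool only at aggressive (roughly 4-bit or lower) quantization, and needs around 3-bit to sit entirely inside the 96GB variable graphics allocation.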
Market Disruptor

Hala Nexus vs. The Giants

Why settle for cloud-taxed, proprietary hardware when you can own your data and your compute? Hala delivers industrial-grade local AI at a fraction of the workstation cost.

| Feature | Hala Nexus | Mac Studio (M2 Ultra) | Dell Precision 3280 |
| --- | --- | --- | --- |
| AI Performance (NPU) | 80 TOPS | 15.8 TOPS | 11 TOPS |
| Unified Memory | 128GB (LPDDR5X) | 128GB (Unified) | 128GB (Split VRAM) |
| Memory Bandwidth | 256 GB/s | 800 GB/s | 64 GB/s |
| Local LLM Cap | Up to 235B Params | High (Proprietary) | Limited (20GB VRAM) |
| Privacy / Stack | 100% Zero-Cloud | Apple Intelligence | Windows / Cloud-First |
| Starting Price | $2,899 | $3,999+ | $3,500+ |
  • Zero Cloud Data Tax
  • Local LLM Optimization
  • Superior Compute Value

Endless Possibilities With The Ultra-Powered H13


Edge AI Inference

Run AI models locally for real-time processing without cloud dependency

Local LLM Deployment

Deploy large language models on-premises for privacy and performance

AI-Powered Search

Search and retrieve information instantly with semantic capabilities

3D Rendering & Content

High-performance graphics processing for visualization and creation

Built For All, Built For Everyone

From personal to enterprise: one system for all. Designed for developers, teams, and enterprises that need local, scalable, high-performance AI compute without relying on cloud infrastructure.


AI Research

Experiment with large models, fine-tune locally, no vendor dependency.

  • LLM Training
  • Model Research
  • Local Deployment

Creator Studio

Video editing, 3D rendering, and AI-assisted production in one box.

  • Video Editing
  • 3D Rendering
  • VFX & AI

Gaming & Content

Local LLM AI, 2.5X graphics performance, and real-time content generation.

  • AAA Gaming
  • AI-Generated Content
  • 3D Modeling

Enterprise

Scalable clusters for regulated industries and high-security environments.

  • Data Sovereign
  • Multi-Unit Scale
  • Zero Trust

Key Specifications

See what innovators are building

AI Acceleration

80 TOPS

Dedicated 80 TOPS NPU engine for edge AI inference. Run 235B parameter models locally at high speeds.

Model Compatibility: Qwen3, Llama, DeepSeek, and any OSS LLM
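As an illustration of how model choice interacts with the memory budget, the sketch below picks the highest-precision quantization whose weights fit in a given allocation. The bits-per-weight figures are approximate effective values for common GGUF-style quantization formats, not official numbers, and the weights-only estimate ignores KV cache and runtime overhead:

```python
# Approximate effective bits per weight for common GGUF-style quantizations
# (illustrative values; actual footprints vary by model architecture).
QUANT_BITS = [("Q8_0", 8.5), ("Q5_K_M", 5.5), ("Q4_K_M", 4.5), ("Q3_K_M", 3.4)]

def best_quant(params_b: float, budget_gb: float):
    """Return the highest-precision quantization whose weights-only
    footprint (params * bits / 8) fits in `budget_gb`, else None."""
    for name, bits in QUANT_BITS:
        if params_b * bits / 8 <= budget_gb:
            return name
    return None

print(best_quant(70, 96))    # Q8_0: a 70B model fits at 8-bit in 96GB
print(best_quant(235, 128))  # Q3_K_M: 235B needs ~3-bit to fit in 128GB
print(best_quant(235, 96))   # None: 235B weights exceed a 96GB allocation
```

The same helper works for any open model: plug in its parameter count and the memory you plan to allocate, and it tells you the least-lossy quantization that fits.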

Memory

Up to 128GB

LPDDR5X @ 8533 MT/s

Graphics

RDNA 3.5

40CU Flagship Performance

Storage

2× M.2 2280

PCIe Gen 4×4 NVMe SSD

Processor

AMD Strix Halo

AI 395 · 16-core

Networking

1× 2.5G LAN

WiFi 7 · Bluetooth 5+ · USB4

Power

240W

20V/12A external adapter

Ready To Experience AI Dominance?

Run LLMs locally · Unlimited clustering · Zero-cloud data