Recommended hardware

Operating system

Nucleate is available on both Windows and macOS and runs on modern versions of each OS. There is no planned Linux release at this time.

In-app optimization

Too many options? No problem. In the “AI Engine” settings panel, the one-button “Optimize” sorts you out by detecting your OS and available hardware and recommending settings to match. You can also view persistent and peak hardware usage estimates in the same panel and adjust settings manually if desired.

Hardware recommendations

Windows

On Windows, an NVIDIA GPU is strongly recommended and provides the most flexible configuration support. If you have an AMD/Intel GPU or an NVIDIA GPU with inadequate VRAM, local transcription and summarization model settings will be optimized for the available hardware or will automatically fall back to CPU-only support. If running on CPU-only hardware, at least 16GB of RAM is recommended for local transcription or summarization.
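
As a rough illustration of that fallback behavior (an assumed sketch, not Nucleate’s actual implementation), the snippet below uses PyTorch to probe for an NVIDIA GPU and its VRAM, mirroring the thresholds in the table that follows:

```python
import torch  # assumes a PyTorch-based probe; Nucleate's own detection may differ


def pick_local_devices() -> dict:
    """Mirror the Windows table: NVIDIA GPUs run transcription on GPU at any
    VRAM size, while GPU summarization needs roughly 8GB of VRAM or more."""
    if not torch.cuda.is_available():  # no NVIDIA GPU (or AMD/Intel only)
        return {"transcription": "cpu", "summarization": "cpu"}
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    return {
        "transcription": "cuda",
        "summarization": "cuda" if vram_gb >= 8 else "cpu",
    }
```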

Windows hardware recommendations
| CPU | RAM (min) | GPU | VRAM | Transcription | Summarization | OpenAI support | Local speed |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Any | <16GB | - | - | Limited | Limited | Yes | Slow |
| Any | 16-128GB | - | - | CPU-only | CPU-only | Yes | Slow |
| Any | 16-128GB | AMD | - | CPU-only | CPU-only | Yes | Slow |
| Any | 16-128GB | Intel | - | CPU-only | CPU-only | Yes | Slow |
| Any | 16-128GB | NVIDIA | <8GB | GPU** | CPU-only | Yes | Slow |
| Any | 16-128GB | NVIDIA | 8-12GB | GPU | GPU | Yes | Medium |
| Any | 16-128GB | NVIDIA | 12-24GB | GPU | GPU | Yes | Fast |
ℹ️
**Use a “medium” or smaller faster-whisper model to stay comfortably within VRAM limits.
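
To make that footnote concrete, here is a hedged sketch using the faster-whisper Python package; the model names and the 8GB cutoff are illustrative assumptions rather than Nucleate’s exact internals:

```python
from faster_whisper import WhisperModel  # assumed backend for local transcription


def load_whisper_for_vram(vram_gb: float) -> WhisperModel:
    """Pick a model that fits: "medium" (or smaller) below ~8GB of VRAM,
    a larger model only when there is headroom for it."""
    size = "medium" if vram_gb < 8 else "large-v3"
    return WhisperModel(size, device="cuda", compute_type="float16")
```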

Mac

On Mac, Nucleate supports hardware acceleration via Apple Metal on M-series chips with unified memory and falls back to CPU-only support on older Intel-based Macs. Both Ollama and Whisper support native Metal acceleration and are strongly recommended. Faster-Whisper transcription does not currently support hardware acceleration on macOS and runs on CPU only.
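
The backend choice on macOS roughly follows the chip architecture. The sketch below is an assumed illustration (backend names taken from this page, selection logic simplified), not Nucleate’s actual code:

```python
import platform


def pick_mac_transcription_backend() -> tuple[str, str]:
    """Apple Silicon gets the Metal-accelerated "Whisper" backend; Intel Macs
    fall back to CPU-only Faster-Whisper (see the footnotes below)."""
    if platform.machine() == "arm64":  # M-series: unified memory + Metal
        return ("whisper", "metal")
    return ("faster-whisper", "cpu")  # Intel Macs: no Metal path, CPU only
```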

Mac hardware recommendations
| CPU | RAM (min) | GPU | VRAM | Transcription | Summarization | OpenAI support | Local speed |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Any | <16GB | - | - | - | - | Yes | N/A |
| Intel* | 16-128GB | - | - | CPU-only | CPU-only | Yes | Slow |
| M1 | 16-128GB | Metal | Shared | GPU** | GPU | Yes | Medium |
| M2 | 16-128GB | Metal | Shared | GPU** | GPU | Yes | Medium |
| M3 | 16-128GB | Metal | Shared | GPU** | GPU | Yes | Fast |
ℹ️

*Legacy Intel Macs cannot use the Whisper or diarization models, which require PyTorch 2.8.x; the last official PyTorch release for Intel hardware was 2.2.x.

**Metal acceleration requires using the “Whisper” backend.

OpenAI API

In-app, you can change the default summarization or transcription models to optionally use OpenAI’s GPT models, Whisper, or both. These are useful alternatives if you need faster processing or higher-quality summaries. However, these options require an internet connection and may incur usage costs via OpenAI.
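
For reference, the underlying API calls look roughly like the sketch below; the file name and model choices are illustrative assumptions, and Nucleate makes these calls for you in-app:

```python
from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY and internet access

client = OpenAI()

# Cloud transcription with OpenAI's Whisper API (usage billed by OpenAI).
with open("meeting.m4a", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

# Cloud summarization with a GPT model (usage billed by OpenAI).
summary = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": f"Summarize this meeting:\n{transcript.text}"}],
)
print(summary.choices[0].message.content)
```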