Independent Gemma 4 community site

Gemma 4: Try online now, or run locally with guides

A Gemma 4 playground with optional reasoning traces, plus run-locally and download guides (Ollama, LM Studio, llama.cpp, MLX, vLLM, Unsloth). Not official. Privacy-aware and transparent.

This is an independent community site. Studio currently requires signing in to chat.

Start here

What most people want to know first

Overview

  • Release date: March 31, 2026
  • Announcement post: April 2, 2026
  • Sizes: 31B, 26B, and E4B
  • Download, requirements, and local setup

If you are choosing a model

31B is the strongest, 26B is the balanced choice, and E4B is the easiest place to start for lighter local use.

If you want to try it now

Open Studio, or jump into the Ollama guide for the fastest local setup.

Choose a size

Which Gemma 4 model should you start with?

If you are unsure, start smaller and move up. The best model is the one that fits your hardware, speed needs, and everyday tasks.

31B

Choose 31B when output quality matters more than speed and you can afford bigger hardware or hosted inference.

Hardware: Large local GPUs or hosted inference.

Best for: Hard reasoning tasks, longer structured outputs, best-quality comparisons.

View page

26B

Start here if you want strong results without jumping straight to the largest model.

Hardware: Good local rigs or managed inference.

Best for: Coding help, research, daily assistant work.

View page

E4B

Choose E4B if you want the easiest local starting point.

Hardware: Laptops, edge devices, smaller test setups.

Best for: Prompt testing, prototypes, lightweight workflows.

View page

Run locally

Pick the setup path that matches how you work

Use Ollama for fast CLI setup, LM Studio for a desktop UI, llama.cpp for GGUF workflows, MLX for Apple silicon, vLLM for serving, and Unsloth for tuning.

Ollama

CLI-first local startup for fast experiments and repeatable demos.

LM Studio

Desktop workflow if you want visual download and model switching.

llama.cpp

Lean local runtime for GGUF workflows and lower-level tuning.

MLX

Apple silicon path for Mac-focused local Gemma 4 experiments.

vLLM

Serving path for production-grade hosted inference and APIs.

Unsloth

Fine-tuning-friendly route when you move from testing to adaptation.
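As a quick sketch of what the CLI paths above look like in practice: the commands below use the real Ollama, llama.cpp, and vLLM invocations, but every Gemma 4 model tag and file name shown is a placeholder assumption. Check the actual registry or download names from the guides before running anything.

```shell
# --- Ollama: fastest CLI path ---
# "gemma4" is a placeholder tag, not a confirmed registry name.
ollama pull gemma4
ollama run gemma4 "Which Gemma 4 size fits a 16 GB laptop?"

# --- llama.cpp: GGUF workflow ---
# Assumes you have already downloaded a GGUF build of the model.
llama-cli -m ./gemma4.gguf -p "Hello" -n 128

# --- vLLM: OpenAI-compatible serving ---
# "google/gemma4-26b" is a hypothetical model id.
vllm serve google/gemma4-26b
# Then query it like any OpenAI-style endpoint:
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "google/gemma4-26b", "messages": [{"role": "user", "content": "Hi"}]}'
```

The Ollama route is the least setup; llama.cpp gives you direct control over GGUF quantizations; vLLM is the path once you need a served API rather than a local chat.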

Try it online

Open Studio when you want to test ideas

Studio is the current logged-in chat experience. Use it to compare models, draft prompts, and turn notes into something you can actually run.

  • Ask which size fits your machine and tasks.
  • Draft an Ollama or LM Studio setup plan.
  • Turn research notes into prompts, scripts, or briefs.

Studio prompts

Prompt 1

Compare Gemma 4 31B and 26B for coding plus daily local use.

Prompt 2

Write an Ollama-first setup checklist for Gemma 4 on a Mac mini.

Prompt 3

Turn these notes into a cinematic video prompt and a short voiceover outline.

Create next

Use Gemma 4 to think first, then create

Once you have a clear prompt, brief, or workflow, move into video or music tools. The homepage stays focused on helping you choose and set up Gemma 4 first.

1. Choose the model or setup

Use Studio or the guides to decide what to run and how to run it.

2. Build the prompt or workflow

Turn the answer into a reusable prompt stack, script, checklist, or creative brief.

3. Create

Move that structured output into video or music generation when the direction is clear.

Turn an answer into a video brief

Use Studio outputs as a shot list, prompt stack, or short script before opening video tools.

Open Video Creator

Turn notes into a music brief

Convert mood, pacing, and scene notes into something ready for music generation.

Open Music Creator

About this site

Know what this site is before you rely on it

gemma-4.org is an independent community site. We link to official Gemma materials, publish practical guides, and offer a community Studio on top.

Identity

Independent community site. Not an official Google product.

Sources

Guides are based on official release notes, model cards, and integration docs.

Data

Studio uses account-based chat. Do not expect an anonymous playground.