A local, free, open source, generative AI stack for developers? Docker and friends aim to cover all the bases

Package includes Ollama, which lets you download a range of open source LLM packages that bundle weights, configuration and data into a single portable file


Software platform provider Docker has teamed up with a trio of open source partners to offer a freely available AI stack that it claims will let developers start building generative AI apps locally within minutes.

The offering (available on GitHub here) is being made available under the unrestricted Creative Commons Zero v1.0 Universal licence and includes some sample applications as “inspiration or a starting point.”

(These include, for example, one that lets you load PDFs into the free version of graph database Neo4j so you can ask questions about their content and have the LLM answer them using vector similarity search.)
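Stripped of the surrounding machinery, “vector similarity search” means ranking embedded chunks of the PDF by how close they sit to an embedded version of the question, then handing the nearest chunks to the LLM as context. A minimal sketch of the idea in plain Python (the tiny three-dimensional vectors and chunk names here are stand-ins; real embeddings come from a model and have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|) -- 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" for three PDF chunks and one question (illustrative only).
chunks = {
    "chunk_a": [0.9, 0.1, 0.0],
    "chunk_b": [0.1, 0.8, 0.3],
    "chunk_c": [0.0, 0.2, 0.9],
}
question = [0.85, 0.15, 0.05]

# Rank chunks by similarity to the question; the top hits become the
# context the LLM answers from.
ranked = sorted(chunks, key=lambda k: cosine_similarity(chunks[k], question),
                reverse=True)
print(ranked[0])  # chunk_a ranks first: it points closest to the question
```

In the actual sample app this ranking is done inside Neo4j against stored embeddings rather than in application code, but the scoring principle is the same.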

At its heart, the GenAI stack is a set of Docker containers orchestrated by Docker Compose (a tool for running multi-container Docker applications), including a management tool for local open source LLMs (Ollama), a free graph database (Neo4j), and sample applications built with LangChain.
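A Docker Compose file for that kind of arrangement might look something like the sketch below. To be clear, this is an illustrative assumption, not the stack’s actual compose file — the service names and the `app` build path are invented, though `ollama/ollama`, port 11434, and Neo4j’s 7474/7687 ports are the real defaults:

```yaml
# Illustrative sketch only -- not the GenAI stack's real compose file.
services:
  ollama:
    image: ollama/ollama        # local LLM runner
    ports:
      - "11434:11434"           # Ollama's default API port
    volumes:
      - ollama_models:/root/.ollama
  neo4j:
    image: neo4j:5              # Neo4j Community Edition
    ports:
      - "7474:7474"             # HTTP browser interface
      - "7687:7687"             # Bolt protocol for drivers
  app:
    build: ./app                # hypothetical LangChain application
    depends_on:
      - ollama
      - neo4j

volumes:
  ollama_models:
```

Running `docker compose up` against a file like this brings all three services up together, which is the “start building locally within minutes” pitch in practice.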

(Ollama, which is now available as an official Docker-sponsored open-source image, lets you download a range of LLM packages — bundling weights, configuration and data into a single portable file, although be warned that this will eat up disk space — for local builds or experimentation if you don’t fancy feeding the ChatGPT/Bard et al beast with your enterprise data. It even supports GPU acceleration inside Docker containers on NVIDIA hardware for those planning this…)
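As a sketch of what that looks like in practice — the invocation below follows the official image’s documented usage, though “llama2” is just an example model name and the container/volume names are arbitrary:

```shell
# Run the official Ollama image with NVIDIA GPU access (requires the
# NVIDIA Container Toolkit on the host); illustrative invocation only.
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama

# Pull a model package -- weights, configuration and data in one file.
# Expect multi-gigabyte downloads, hence the disk-space warning above.
docker exec -it ollama ollama pull llama2
```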

Docker GenAI stack: I could build it myself...

Cynics might argue that they could get all of these things free and stick ‘em together themselves anyway, but the companies contend that they’ve packaged everything up tidily for easy consumption as a one-stop shop.

They've also included supporting tools and templates.

Neo4j’s CPO Sudhir Hasbe told The Stack: “We’ve heard from large enterprises across the Fortune 2000 that ‘LLMs are cool tech but I want to play with it locally and in the context of my own enterprise data…’”

Doing this on-premises initially is a priority for most, he suggests, and a much easier lift for developers to get approved as they start experimenting with what generative AI might achieve for their organisation: “We wanted to make it super easy for any developer, free of cost, to get in a few clicks the full generative AI stack on your own desktop,” says Hasbe.

The former Google executive adds: “It’s not taking anything out of your enterprise environment, it’s sitting on your machine; if you’re disconnected you can still run it. It’s customisable so if you want to use GPT-4 APIs instead of Ollama you can do that; use our fully managed offering in the cloud instead of Neo4j Community Edition, or use a different database, you can do…”

Neo4j, like 99% of database providers, has now added vector search capabilities to its database offering in a bid to capture generative AI application workloads early. For its part, the graph database specialist holds that by turning the unstructured data many enterprises want to feed into their LLMs into knowledge graphs, which users can then query using natural language, organisations will be able to “ground their LLMs against factual set of patterns and criteria to prevent hallucinations”.

Have a play here and share your thoughts with The Stack.