We've cut through the hype. Now you can benefit from what actually works.
We've spent 18+ months cutting through the fluff, the hype, and the marketing to figure out exactly where AI can be used as a tool, not a gimmick. The result? Battle-tested workflows and agentic processes that deliver real value, running on hardware you own and control, with full data sovereignty and no spiralling API costs.
Our value: We don't build or train AI models. We build intelligent workflows that harness the best available models. Model-agnostic by design, our processes work with any current or future LLM.
The GPU-powered machine that runs AI models. Think of it as the engine that powers everything. On its own, it does nothing useful for your business.
From R30K (purchase) or R1,500/mo (rent)
The software that makes the engine useful: chatbots, document processing, automation, etc. This is where the actual business value comes from.
From R10K per workflow (see our development pricing)
Key insight: Hardware without workflows is just an expensive box. Workflows run on hardware. You need both.
We develop specific workflows for your business, deploy them on hardware (yours or rented), and manage everything. You get a working product.
Working production code, not a drop-in app. Our code routines, workflow examples, and hands-on mentoring so your team can build exactly what's shown here. Hardware guidance included.
This page is for: Business owners, IT managers, and dev teams looking to implement AI without sending sensitive data to public platforms.
Get in touch
Think of an AI model as a digital brain. It has "baked-in" instincts from its training: it can talk, it knows things, it can reason and hold a conversation. But on its own, it's an empty vessel, like cloning a brilliant person's brain without their specific personality or purpose. The raw capability is there, but it needs direction.
What makes AI useful is the software built around it. When you chat with ChatGPT, Grok, Claude, or Gemini, you're not talking directly to the AI model. You're using software that those companies built to harness their "digital brain" in a specific way: a chat interface, with certain rules, behaviours, and capabilities.
We do the same thing, but with open-source "brains". Instead of relying on proprietary models locked behind APIs, we use freely available AI models and build our own software and workflows around them. This means we control the entire process: how the AI is used, what data it sees, where it runs, and what it does. And because the "brain" is separate from the workflows we build, we can swap it out at any time for a newer, better model, keeping the same functionality with improved results. The result is AI that works for your business, on your terms, and can get better over time as newer models become available.
When you use Copilot, ChatGPT, or third-party AI plugins, your sensitive data (including client statements, source code, or internal communications) is sent to servers you don't control. Multiple data leaks have already occurred. Learn more
Public AI charges per query. A few users experimenting? Cheap. But roll it out to your whole team or integrate it into your products? Costs spiral quickly. Private AI has fixed hardware costs—use it as much as you want without per-query fees.
There is no 100% clear POPIA-compliant path through these "black hole" platforms. Private AI runs all inference on your premises—no public endpoints, no uncontrolled data transfer, full compliance with local data governance.
We don't build or train AI models. We build intelligent workflows that leverage the best available models. The value is in the process, not the underlying architecture.
Our agentic processes work with any LLM: open-source models for local deployment (Llama, Qwen, Gemma, Mistral) or enterprise APIs (OpenAI, Anthropic, Google) when appropriate. Switch models mid-process, by task type, or as better models emerge. No lock-in, no rebuilding.
The AI landscape moves fast, with new models releasing monthly. Our workflows can be easily adapted to new models, with minimal changes required, if any. When a better model drops, you swap it in and immediately benefit. The workflow is the asset; models are interchangeable.
Thousands of production-ready LLMs are available under permissive licenses (Apache 2.0, MIT), all free for commercial use. Your organisation chooses which models suit your needs. We prefer Google, Qwen, and Meta models, but any compatible model works.
Why this matters: You're not buying a product tied to one AI provider's roadmap. You're getting workflows that harness the entire open-source AI ecosystem, including today's models and tomorrow's breakthroughs.
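To make "model-agnostic" concrete, here is a minimal Python sketch of the pattern (function and class names are illustrative only; our actual codebase is C#, but the idea is identical in any stack): the workflow logic stays fixed while the model is injected as a plain function, so swapping models changes one argument, not the workflow.

```python
from typing import Callable

# A "model" is just a function from prompt to text. Any backend that
# satisfies this signature can be plugged in: a local open-source model,
# an enterprise API, or (here) a trivial stub.
Model = Callable[[str], str]

def summarise_ticket(ticket_text: str, model: Model) -> str:
    """A workflow step: the logic stays fixed, the model is injected."""
    prompt = f"Summarise this support ticket in one sentence:\n{ticket_text}"
    return model(prompt)

# Two interchangeable stubs standing in for real model backends.
def local_model_stub(prompt: str) -> str:
    return "[local] " + prompt.splitlines()[-1][:40]

def hosted_api_stub(prompt: str) -> str:
    return "[api] " + prompt.splitlines()[-1][:40]

if __name__ == "__main__":
    ticket = "Printer on floor 3 jams on duplex jobs."
    # Swapping models changes one argument, not the workflow.
    print(summarise_ticket(ticket, local_model_stub))
    print(summarise_ticket(ticket, hosted_api_stub))
```

When a better model ships, only the injected backend changes; everything built on top of `summarise_ticket` keeps working.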
These are example workflows that can be developed to run on your private AI hardware. Each requires development work—either by us (Option A) or by your team with our guidance (Option B).
Pricing depends on complexity and integrations. Full development pricing details →
Our approach links AI directly to your live SQL databases, so when something changes, the AI knows instantly. No out-of-sync 'memory,' no missed updates.
Process multi-page PDFs or scanned images (job cards, invoices, expense slips). AI extracts, classifies, and organises structured data for ERP ingestion or expense tracking.
"What happened today?" → AI analyses system-wide logs and returns a natural-language summary of key events, errors, and trends with remediation steps.
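The live-data pattern behind these workflows can be sketched in a few lines of Python, using an in-memory SQLite table as a stand-in for a production database (all names here are illustrative, not our actual code): the workflow queries the database at question time, so the model always sees current rows rather than a stale snapshot.

```python
import sqlite3

def answer_stock_question(question: str) -> str:
    """Sketch: instead of relying on the model's stale 'memory', the
    workflow fetches live rows at question time and hands them to the
    model (stubbed here) as context for phrasing the answer."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE stock (item TEXT, qty INTEGER)")
    conn.execute("INSERT INTO stock VALUES ('widget', 12)")
    rows = conn.execute("SELECT item, qty FROM stock").fetchall()
    conn.close()
    # A real workflow would pass `rows` to the LLM as grounding context.
    return f"Live data for '{question}': {rows}"
```

Because the query runs on every request, an update to the table is reflected in the very next answer.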
These are actual workflows we've built and proven in production environments, not concepts or demos. Each example below represents a pattern we've refined with real clients, one your team can replicate.
R15K–R250K if we build (depending on volume and workflow complexity)
An IT company scans physical job cards, and AI extracts technician notes, parts used, time logged, and customer details. Data flows directly into billing and CRM systems.
Scan receipts and expense slips, and AI classifies, organises, and digitises the data. Automatically extract vendor, amount, category, and VAT for expense tracking or reimbursement workflows.
Feed AI your raw data exports (system uptime, sales figures, support tickets), and AI generates formatted reports with insights, trends, and executive summaries.
Technicians dump raw event notes, installation logs, and worksheets. AI transforms them into consistent, professional reports formatted for management review.
Upload contracts or agreements, and AI extracts key terms, parties, dates, renewal clauses, and obligations. Surface critical details without reading 50-page documents.
Paste long email chains, and AI extracts action items, commitments, deadlines, and key decisions. Get the substance without re-reading 30 messages.
R10K–R100K if we build (depending on integration and customisation)
Managers paste messy braindumps. AI extracts action items, assigns priorities, and formats into clear task lists ready for staff, saving hours of back-and-forth clarification.
Type rough thoughts, and AI transforms them into polished emails matching your authentic writing style. Recipients never know it started as a 3am voice note.
Before pasting into CRMs, ticketing systems, or client portals, AI cleans up grammar, formatting, and tone. Consistent professional output across all platforms.
"What happened this week?" → AI analyses logs, tickets, and communications to produce executive-level summaries for quick catch-ups, handovers, and archiving.
Translate between English and Afrikaans (or other languages) in any web app or document. Flip between languages on demand with surprisingly good accuracy, all processed locally.
Feed AI your project details, specs, and constraints, then discuss ideas, explore options, and think through problems. Your AI sounding board that knows your context.
R25K–R100K if we build (depending on data sources and security requirements)
Critical: This type of sensitive data should never be exposed to public AI platforms. Private AI makes secure conversational access possible.
"What's my login for the Azure portal?" → AI retrieves your personal passwords, API keys, and service logins from secure data stores. Conversational access to your own sensitive credentials, never logged, never exposed externally.
"What's the admin password for Client X's server?" → AI retrieves your clients' credentials, financial info, and sensitive data from secure stores. Manage the data you're entrusted with via natural conversation, with full data sovereignty.
Combine notes, credentials, and instructions in one chat, and AI steps you through complex processes dynamically. "Walk me through deploying to Azure" pulls in your saved notes, API keys, and procedures.
R15K–R75K if we build (depending on training scope and materials)
New employees practice client calls, support queries, and sales conversations with AI that plays the customer. Realistic scenarios using your company's actual scripts and protocols.
Feed AI your reference docs, SOPs, and product specs. AI generates structured training materials, quizzes, and onboarding guides, always current with your latest documentation.
AI tests staff on company material with dynamic questions, simulates client conversations, and provides scoring with real-time correction. Training that adapts to each employee.
R15K–R100K if we build (depending on complexity and integrations)
AI chats with clients on your behalf, answering FAQs, providing status updates, handling routine queries. Trained on your company's knowledge base and brand voice.
Chatbots and assistants that flip to any language on-the-fly based on user preference. Strong support for Afrikaans and African languages, so you can serve your entire customer base in their preferred language.
AI pre-populates answers to client queries, then routes them to a human review queue. Approved responses release to the client, or hold for manual follow-up. Can save hours per employee, per week, depending on query volume.
Custom scoping if we build (based on your business model and requirements)
Your clients and their employees will use AI somewhere, so why not on your hardware? Offload your customers' AI usage to your own infrastructure. Charge a fee for AI features in your software, keep data local, and create a scalable revenue stream with excellent ROI.
Position your business as privacy-conscious and a technical leader. Offer your clients secure, private AI offloading, a genuine differentiator when everyone else is sending data to public clouds. Market the security, own the narrative.
Everyone wants the "AI" label, but most offerings are superficial. Deliver actual, measurable AI capabilities to your products and clients: automation that works, insights that matter, assistants that help. Substance over hype.
ROI varies by implementation. Each use case above represents potential time savings and efficiency gains based on typical workflows. During consultation, we'll help you identify which workflows could offer the highest impact for your specific operations.
Real implementations with measured results. Case studies show credential lookup (85% faster), report writing (70% time saved), ticket triage (60% faster), and more, with exact ROI figures and implementation costs.
Access all these production-ready AI capabilities through the route that best matches your organisation's requirements.
Turnkey AI deployment. We build it; you run it.
Skip 18 months of R&D. Learn to build AI into your systems.
This is working production code, not a drop-in magic app. You receive our actual code routines, workflow examples, architectural patterns, plus hands-on mentoring and consulting to help your team build exactly what's demonstrated on this page—using our provided code or adapting the patterns to your own stack. Hardware guidance included; hardware acquired separately.
We transfer 18 months of hard-won AI engineering knowledge to your team: working code examples, proven architectural patterns, and the expertise to avoid costly dead ends. Your team uses this foundation to build whatever AI capabilities they need.
Code examples, architectural blueprints, and the knowledge to build your own AI capabilities:
Example scenario: A mid-sized company with manual document processing consuming 300+ staff hours monthly could automate this workflow within 6 weeks of implementation—freeing their team to focus on high-value work. In such a scenario, the investment could pay for itself in under 4 months.
We don't just hand you code and walk away. We help you master the hardware side too:
Included consulting hours are for guiding your team: implementation support, debugging workflow results, process optimisation, and knowledge transfer. Your team does the building; we provide expert guidance.
Need us to build something specific? Custom development can be quoted separately after IP purchase. We build it, you own it, it becomes part of your codebase.
Every engagement is scoped to your needs, so let's talk about what makes sense for your team.
AI performance scales with hardware. Start affordably. Scale linearly as required.
Two options: Purchase hardware outright (you own it), or rent from us at a monthly fee (we host and manage it). Either way, your data stays private and under your control.
Buying R30K hardware gives you the engine to run AI models. It does not include chatbots, document processing, or any specific features. Those are workflows that run on the hardware—built by us (Option A) or by your team with our guidance (Option B). See our workflow development pricing →
We host and manage the hardware; you use it. Zero upfront investment, still fully private.
SMME / internal / light client use
Multi-client, faster models
High-throughput, large-context agentic workflows
An AI model is like a digital brain: it can talk, reason, and hold conversations based on its training. But on its own, it's an empty vessel that needs software to make it useful. ChatGPT, Grok, and Claude are examples of software built around AI models. We do the same, but with open-source models that run on hardware you control. Read the full explanation above
Behind ChatGPT and Copilot sits hardware with AI processing capabilities; those products are essentially software running on that hardware. We set up your own hardware with highly capable open-source AI models, then build agentic workflows on top of these systems. The result: equivalent AI capabilities running entirely on infrastructure you own and control, with no per-query costs and no data leaving your premises.
Key distinction: We don't build or train AI models. We build intelligent workflows that leverage the best available models. Our processes are model-agnostic and can target any current or future LLM, whether open-source (for local deployment) or enterprise APIs (when appropriate).
We can walk you through the entire code pipeline and demonstrate exactly where your data goes, because we control the entire process. There are no external API calls, no cloud dependencies, and no third-party data processors. Your data stays on your hardware, period.
We'll help you structure solid Terms & Conditions, similar to what "big AI" uses, to appropriately manage liability. We like to position AI-generated content with a friendly nudge: "This was done by AI, which is nice, but might be a good idea to double-check with an actual human overlord." For more structured agentic workflows, we implement validation layers that can measure AI responses and make deterministic decisions to re-infer, abandon, or flag for human review when outputs seem incorrect.
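The validation-layer idea described above can be sketched in a few lines of Python (the field names and checks are illustrative, not our production code): deterministic rules inspect a structured AI response and decide whether to accept it, re-infer, or flag it for a human.

```python
from enum import Enum

class Decision(Enum):
    ACCEPT = "accept"
    RETRY = "retry"            # re-run inference, possibly with a nudge
    HUMAN_REVIEW = "human_review"

def validate_extraction(response: dict, retries_left: int) -> Decision:
    """Deterministic checks on a structured AI response, e.g. an
    extracted invoice. If a check fails, retry while the budget allows;
    otherwise flag for a human rather than passing bad data along."""
    checks_ok = (
        isinstance(response.get("total"), (int, float))
        and response.get("total", -1) >= 0
        and bool(response.get("vendor"))
    )
    if checks_ok:
        return Decision.ACCEPT
    return Decision.RETRY if retries_left > 0 else Decision.HUMAN_REVIEW
```

The key design choice is that the gatekeeping logic is ordinary deterministic code, so its behaviour is testable and auditable even though the upstream output came from a model.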
Our codebase is primarily C#, but the principles and workflows are stack-agnostic. The architecture is mostly Web API-based, with business logic offloaded to your existing tech stack. Your developers can easily replicate our proven workflows in their own preferred languages and frameworks.
No per-query or API costs: that's the whole point. Ongoing costs include: optional consultation hours when you need our input (R950/hr), and hardware power/administration (either outsourced to Brainwave or handled in-house, similar to existing server management structures). Unlike cloud AI, your costs don't scale with usage.
If it's your own software: almost certainly yes. If it's closed-source software with integration/API capabilities: likely yes. If it's fully closed with no integration points: probably not, but that's rare. A brief consultation call will clarify exactly what's possible with your specific systems.
Scaling is straightforward. Scaling up: upgrade to more powerful hardware and repurpose your initial equipment for smaller workloads. Scaling out: add more of the hardware you already have; the architecture supports horizontal scaling across as many nodes as you need. No architectural changes required.
We prefer models from established providers like Google (Gemma), Alibaba (Qwen), Meta (Llama), and Mistral, but our workflows can target virtually any model. The open-source LLM ecosystem is massive and growing rapidly: Hugging Face alone hosts over 1 million models, with hundreds of new LLMs released monthly. New models consistently improve on benchmarks, and our model-agnostic architecture means you can swap in better models as they emerge, with minimal workflow changes required, if any.
Most open-source models come with commercially permissive licenses (Apache 2.0, MIT, or similar), meaning your organisation can deploy them freely without licensing fees or usage restrictions.
Companies like Meta, Google, and Alibaba release open-source models for several strategic reasons:
The result: you get access to world-class AI models at zero licensing cost, backed by billions in R&D investment.
Model-agnostic means our workflows aren't locked to any specific AI model. The value lies in the workflows and agentic processes we build, not the underlying model. This gives you:
Model licensing: Free. Open-source models under Apache 2.0, MIT, or similar licenses have zero licensing fees, commercial use included.
Your actual costs:
At scale, this cost model is dramatically cheaper than API-based alternatives. A single query to GPT-4 might cost approximately R0.50; multiply that by thousands of daily requests and the savings become substantial.
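A quick back-of-the-envelope illustration (every figure here is an assumption chosen for the arithmetic; real per-query prices vary by model and token count, and your volumes will differ):

```python
# Illustrative assumptions only.
cost_per_query = 0.50           # Rand, assumed average API cost per query
queries_per_day = 2_000         # e.g. a team making heavy daily use
working_days_per_month = 22

api_cost_per_month = cost_per_query * queries_per_day * working_days_per_month
print(f"API cost: R{api_cost_per_month:,.0f}/month")  # R22,000/month

rental_cost_per_month = 1_500   # entry-level private hardware rental
print(f"Private AI: R{rental_cost_per_month:,}/month, flat, at any volume")
```

Under these assumptions the fixed rental is roughly 7% of the per-query bill, and the gap only widens as usage grows.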
Your data is never stored on the AI hardware. The entire process is stateless by design. Here's how it works:
Each request starts fresh. Your data travels over the connection, gets processed, and the response returns. After that, the state is destroyed on the hardware side. The next request starts a new, clean process and carries only the data and instructions needed for that specific inference trip. There is no persistent "memory" and no stored copy of your data on the AI server.
The key point: Unless conversations or data are explicitly logged on the AI hardware, there is no physical way for your data to remain on the AI server after a call completes. We control the entire process, including whether anything is logged on the AI hardware or not.
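A simplified Python sketch of what "stateless by design" means for an inference endpoint (the handler and stub names are hypothetical, not our actual code): everything the call needs arrives in the request, nothing is written to disk, and nothing survives after the response is returned.

```python
def run_model(full_prompt: str) -> str:
    # Stub standing in for the actual local LLM inference call.
    return f"processed {len(full_prompt)} characters"

def handle_inference(request_payload: dict) -> dict:
    """Stateless inference handler (sketch). The caller supplies any
    context the model needs; nothing is persisted between calls."""
    prompt = request_payload["prompt"]
    context = request_payload.get("context", "")
    response = run_model(f"{context}\n{prompt}")
    # All local variables go out of scope when this returns, so no copy
    # of the caller's data remains on the inference side.
    return {"response": response}
```

Two identical calls are completely independent: the second request sees nothing from the first unless the caller sends it again.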
Compare this to public AI: When you use ChatGPT or similar services, your data is sent to servers you don't control, processed somewhere you can't verify, and you have no visibility into what happens to it. With private AI, you control exactly where your data goes: to physical hardware that you or we own and manage.
Physical security: If someone were to steal the AI inference hardware, they would get the AI model (which is freely available anyway) but none of your business data. Your sensitive information remains in your existing systems, databases, and secure storage, protected by your existing security measures.
We don't develop AI models. We develop code, workflows, and agentic processes that leverage existing open-source models to deliver AI-powered results.
What you own:
The models themselves: Open-source models (Llama, Qwen, Gemma, etc.) are freely available under permissive licenses. You don't "own" them, but you don't need to. They're free to use commercially, and your real value lies in the workflows built on top of them.
We know this is a lot of information, and you may want to make sure we actually know what we're talking about. How about asking some of the most intelligent AI models out there to help you make sense of it?
Each link opens a major AI platform with a pre-loaded prompt. Feel free to ask follow-up questions to dig deeper.
Gemini requires you to paste the prompt manually. Use the copy icon above.
I'd love to show you what's possible. Book a 45-minute session and let's solve a problem together.
No slides. No fluff. Just working AI, on your terms.