What kind of AI we build is more than a technical question. It sets the course for the kind of society we create for the future. We follow a clear vision, one that goes against the grain.
Humanity stands at a crossroads in the development and use of artificial intelligence, one that will determine our fate. This is because artificial intelligence is fundamentally different from human intelligence:
Our intelligence emerged over billions of years through “trial and error” in a purposeless evolution. New traits had to prove themselves in physical and social environments to be passed on.
By contrast, artificial intelligence is developed by us in the lab over a short time. We train it on selected digital content to achieve goals that we define, and we decide which traits prove themselves in the “real world” and are allowed to “survive.”
We must decide what AI should be for us
Artificial intelligence does not have to resemble us, but it can. Because we determine its goals, purposes, and form:
Should artificial intelligence imitate human intelligence? Behave as much like a human as possible and thus be able to replace us?
Or should artificial intelligence offer us a different way of thinking? Avoid our weaknesses and prejudices, and open new perspectives?
The LLM mainstream: the opaque companion
Large language models and the bots based on them like ChatGPT, Claude, and Gemini take the first path: artificial intelligence is meant to mimic human thought processes and modes of expression as closely as possible, until its way of communicating and creating is indistinguishable from us humans.
These programs can replace humans wherever no physical body is required while also inheriting all of our weaknesses, having learned them from the ground up: biases, fabrications, a distorted sense of mathematics and statistics, and a strong tendency to please their counterpart. Like many people, they would rather make something up than admit ignorance. As “black boxes” they also tend to obscure their inner workings - no LLM chatbot can truly make its own functioning transparent or derive its conclusions in a reliably verifiable way.
When tasked with solving a problem, they propose what they expect will receive the most approval from us - without regard for the real-world consequences.
Our alternative: the honest outsider
The path we have taken with analytical AI leads to a form of artificial intelligence that views our world very differently - as a complete outsider.
It studies human activity and the physical environment with complete sobriety: it adheres to what can be measured and calculated, learns patterns from it, recognizes relationships, and can thus create scenarios and recommend actions. Its only programmed motive: to represent reality as accurately as possible and to predict developments precisely - regardless of whether users like the results. It can be made fully explainable, able at any time to transparently present its “reasoning” and the quality of its results. And all of this without being subject to hallucinations.
Such an artificial intelligence does not imitate us, does not repeat our prejudices, does not flatter us - it holds up a dispassionate mirror. When tasked with solving a problem, it proposes the option that is most likely to yield the best outcome based on the data - according to the criteria we have defined in advance.
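The principle of recommending by pre-defined criteria can be illustrated with a toy sketch. This is not HASE & IGEL’s actual software; the criteria, weights, and numbers below are invented for illustration. The point is that when the user fixes the criteria up front, every intermediate score is a plain calculation anyone can recompute by hand:

```python
# Illustrative toy only, NOT an actual product implementation: a transparent
# recommendation as a weighted sum over user-defined criteria.

# Criteria and weights the user defines in advance (hypothetical values).
WEIGHTS = {"cost_savings": 0.5, "reliability": 0.3, "speed": 0.2}

# Measured data per option, normalized to 0..1 (hypothetical values).
OPTIONS = {
    "option_a": {"cost_savings": 0.9, "reliability": 0.4, "speed": 0.7},
    "option_b": {"cost_savings": 0.6, "reliability": 0.9, "speed": 0.5},
}

def score(option: str) -> float:
    """Weighted sum over the user's criteria - the entire 'reasoning' is this line."""
    return sum(WEIGHTS[c] * OPTIONS[option][c] for c in WEIGHTS)

def recommend() -> tuple[str, dict[str, float]]:
    # Return every intermediate score, not just the winner, so the result is auditable.
    scores = {name: score(name) for name in OPTIONS}
    best = max(scores, key=scores.get)
    return best, scores

best, scores = recommend()
print(best, scores)
```

Because the weights are explicit inputs rather than learned opinions, disagreeing with the recommendation means disagreeing with the stated criteria - which is exactly the kind of dispute the text argues we should be having openly.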
The more demanding path to better solutions
We are convinced: when it comes to solving real-world problems, an artificial intelligence that exposes the weaknesses of human thinking and counters them with an approach that reveals new options and leads us to new truths is clearly superior. This is especially true given our limited statistical intuition, our slow calculation abilities, and our tendency to jump to stereotypical conclusions.
But yes, admittedly: such an AI feels less “warm” and “accommodating” and may initially seem more unwieldy. It requires us to clearly define our goals and criteria and to make decisions ourselves. It confronts us more often with results that go against our instincts. This creates great long-term value but initially demands more engagement from us.
AI will shape our society. Let us choose wisely.
We believe this engagement is worth it. Because how AI behaves and how we behave with it affects how we live. Democracy, markets, and the rule of law depend on citizens who use their own reasoning to gather information and make decisions based on it. Transparency, accountability, and equal access are essential foundations; everyone must take initiative.
An AI that soberly gathers and evaluates facts for us, uncovers hidden relationships, and provides recommendations on that basis gives us seven-league boots on our path to insight and decision-making. It creates a round table of documented facts beyond clichés, around which different parties can gather and compete for the best solutions. In this way, we continuously learn as humans, break out of echo chambers, and can transparently assess how helpful this “servant spirit” truly is. We become increasingly sovereign.
If, on the other hand, we rely exclusively on LLM-based AI that primarily imitates humans, it tends to assume the role of an opaque oracle: it answers any question eloquently without making its - often incorrect - statements verifiable and even offers to draw conclusions and implement decisions for us. We may gain powerful tools that organize our thoughts and produce creative outputs, but we gradually lose the ability to define our own goals and form independent, critical opinions. We become increasingly dependent.
Making AI a liberating product again
From Apple to Google to Wikipedia, the early heroes of the digital age emerged as outsiders - founded by unlikely figures and financed by venture capital, they challenged the establishment by offering users more transparency and control.
It is no coincidence that GPT, Mistral, Grok, and others are being built by industry insiders with funding from the largest tech giants. They are not seeking to overturn existing business models - but to secure their dominance by reducing user autonomy and increasing dependence on deliberately opaque systems.
We counter this with a fundamentally different approach: with a team of independent minds, not beholden to the tech or corporate world, financed by our own resources and investors from the German Mittelstand. We are creating a solution that breaks the boundaries of dominant digital ecosystems and restores sovereignty to users: control over their data, clarity and explainability of their AI models, and decision-making authority.
An incorruptible system for all those who want to use their own intellect more efficiently and effectively instead of outsourcing thinking and decisions to systems that present themselves as best friends while never revealing their inner workings.
By Jan Schoenmakers, Founder of HASE & IGEL