Posts in: AI

If you have to deal with what is called #AI in a professional environment these days, you will often come across the term #RAG. Retrieval-Augmented Generation systems are intended to address many of the weaknesses of Large Language Models (#LLM).

These are the core elements:

🔍 Retriever

  • Fetches relevant documents from external knowledge bases.
  • Utilizes vector representations for efficient text search.
  • Employs methods like keyword-based search, semantic search, or vector search to find pertinent information.

🧩 Augmentation

  • Integrates retrieved data into the model’s input.
  • Filters and structures information for relevance.
  • Prepares data to optimize the generation process.

💡 Generator

  • The Large Language Model (LLM) processes both the query and augmented information.
  • Generates responses conditioned on the retrieved data.
  • Delivers the final, synthesized response of the RAG system.
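The three stages above can be sketched in a few lines of code. This is a minimal, illustrative sketch only: the retriever uses naive word overlap as a stand-in for real keyword, semantic, or vector search, and `generate()` is a placeholder where an actual LLM call would go. All function and variable names are my own, not from any particular library.

```python
def tokenize(text):
    """Lowercase words with basic punctuation stripped."""
    return {w.strip(".,!?").lower() for w in text.split()}

def retrieve(query, documents, k=2):
    """Toy retriever: rank documents by word overlap with the query."""
    q = tokenize(query)
    ranked = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return [d for d in ranked if q & tokenize(d)][:k]

def augment(query, retrieved):
    """Integrate the retrieved data into the model's input."""
    context = "\n".join(f"- {d}" for d in retrieved)
    return f"Context:\n{context}\n\nQuestion: {query}"

def generate(prompt):
    """Placeholder for the LLM that conditions its answer on the prompt."""
    return f"[LLM answer conditioned on]:\n{prompt}"

docs = [
    "RAG combines retrieval with generation.",
    "Edge AI runs models directly on devices.",
    "Vector search finds semantically similar text.",
]
query = "What is RAG?"
print(generate(augment(query, retrieve(query, docs))))
```

A production system would replace `retrieve` with embedding-based vector search and `generate` with a real model call, but the data flow – retrieve, augment, generate – stays the same.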

RAG systems help create more accurate, contextual and efficient solutions. This makes them more useful in many areas. They are just one step in the increasingly meaningful application of machine intelligence to real-world use cases.

[Image: An abstract digital visualization featuring the letters RAG formed by interconnected nodes and lines, resembling a network or neural connections, set against a gradient background transitioning from pink to purple and blue.]

💡 A few weeks ago, I posted about #OpenWashing in #AI. What is OpenWashing? 🚨 It's the practice of labeling content as open source to signal transparency, while the actual openness… leaves much to be desired.

🌟 Now, the Open Source Initiative (OSI) has released version 1.0 of its Open Source AI Definition 📜, outlining what it truly means for AI to be #OpenSource.

🔑 Built on the four freedoms of open source software (#OSS) – use, understand, modify, and redistribute – this definition extends these principles to AI systems, including their code, weights, parameters, and documented training data.

📢 The initiative's goal? Promote transparency 🌍 and enable reproducibility 🔄, ensuring AI systems can be independently reconstructed and adapted. 💡 Open source AI should allow users to build on and adapt models, fostering innovation and trust.

โš–๏ธ While the OSI cannot enforce compliance, it plans to publicly name and shame AI models that falsely claim open source status. This move could influence policy, including the EU’s AI Act๐Ÿ‡ช๐Ÿ‡บ, which introduces exemptions and obligations for open-source AI.

✨ Will this definition shape global legislative approaches? Only time will tell. ⏳


📖 Read the OSI's definition here: The Open Source AI Definition – 1.0

[Image: A road sign displaying the words "four freedoms" stands against a backdrop of a futuristic, illuminated tunnel.]

🌟 Rethinking AI: From Misconceptions to Clarity 🌟

The term 'artificial intelligence' (AI) still causes a lot of confusion, so we should consider rebranding it as 'machine intelligence'.

What we call AI doesn’t “understand” in the same way as humans do. It recognises and creates patterns based on probabilistic models. It’s important to understand this difference because it helps us have realistic expectations and make the most of machine intelligence.

Even though this might seem like hair-splitting, let's embrace the nuances and educate others about the true nature of AI. In doing so, we can help build a more informed and innovative community, driving forward the impressive advances that machine intelligence offers.

If we can cut through the noise and get to the heart of what AI really is, it becomes much easier to work with it in a meaningful way. This clarity doesn't ignore the challenges, such as high resource consumption and potential negative impacts, e.g. on the workforce. But it helps us talk more clearly and sensibly about what we can use it for and what problems we might run into.

#MachineIntelligence #Innovation #FutureOfWork

[Image: A vibrant digital art piece depicts swirling waves of interconnected particles and light, creating a dynamic flow of energy.]

While the large and resource-intensive #LargeLanguageModels #LLMs are grabbing headlines, it's likely that 'tiny AI' #tinyAI is already subtly enhancing your daily life.

That's what I posted almost one year ago.

๐ŸŒ Working in innovation across low-resource environments globally, I constantly see the impact that access to cutting-edge technology has in creating opportunities. One technology stirring both excitement and challenge is AI ๐Ÿค–. In this context, the direction I believe holds an underrated promise for responsible real-world impact is open source, light models, and small/edge/embedded solutions.

Why this? In resource-constrained environments, three closely related factors determine the practical and responsible use of AI: hardware requirements, model size, and openness. Large AI models and high-demand systems require significant resources, whereas small, edge, or embedded devices offer a more efficient alternative. Lightweight models provide accessible, efficient solutions, which is particularly important where infrastructure is a limiting factor. And open source, a concept that has yet to be fully defined for AI, is essential for the responsible use of the technology.

EdgeAI - AI performed directly on devices - is an interesting opportunity. Here’s why:

🔹 Data Sovereignty: By keeping data local, we protect sensitive information and align with data sovereignty principles.
🔹 Energy Efficiency: With reduced reliance on cloud computing, we can cut down on energy costs and make solutions more sustainable 🌱.
🔹 Low Latency: Faster data processing leads to more responsive and effective solutions, crucial in many critical applications.

Therefore, the focus of AI use case development in resource-constrained environments must also be on small applications. This is no less of a technological challenge! Building a future with Edge AI means integrating smart sensors, advanced processors, and optimised data management, supported by smart devices at the edge. The potential of this flavour of AI, too, is transformative!
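One concrete technique behind "light models" for edge devices is weight quantization: storing parameters as 8-bit integers instead of 32-bit floats cuts model size roughly fourfold. The sketch below is illustrative only (not from the post, and not tied to any specific framework); it shows symmetric int8 quantization of a weight vector with a single scale factor.

```python
def quantize(weights):
    """Map float weights to int8 values plus one shared scale factor."""
    # Symmetric quantization: the largest magnitude maps to +/-127.
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Approximately restore the original floats at inference time."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.05, 0.99]
q, scale = quantize(weights)
restored = dequantize(q, scale)
# Every restored weight is within one quantization step of the original.
assert all(abs(a - b) <= scale for a, b in zip(weights, restored))
```

Real toolchains add per-channel scales, zero points, and calibration, but the size/accuracy trade-off at the heart of edge deployment is already visible here.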

See examples of possible applications in this EU-supported programme: https://edge-ai-tech.eu/

💡 #AI #Innovation #EdgeAI #OpenSource #SustainableTech #TechForGood

[Image: A small robot greets a much larger robot with a screen displaying code, set against a city street backdrop.]

German IT magazine #ct tested the usual suspects among LLMs, focusing on how they handle German-language content and on their energy footprint. The results speak for themselves: the French LLM #Mistral delivered impressive results, almost matching the quality of the market leader ChatGPT while consuming far less energy. You can download this model and run it on-premises with just four #H100 cards (although they still cost 30,000 EUR each). A comparable Meta #LLama model requires four times that number. The more cards you use, the higher your energy consumption, which leads to a cost of 5.40 EUR per 1 million tokens for Mistral versus 34.40 EUR for Llama (!) just to cover the electricity bill (and with lower quality than Mistral, at least in German). Read the complete article here (German, paywall).
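The per-token figures quoted from the c't test scale linearly with volume, which makes the gap easy to see at realistic workloads. A quick back-of-the-envelope sketch (the EUR-per-million-token figures are as cited above; the monthly token volume is a hypothetical example of mine):

```python
# Electricity cost in EUR per 1 million tokens, as quoted from the c't test.
COST_PER_MTOK = {"Mistral": 5.40, "Llama": 34.40}

def electricity_cost(model, tokens):
    """Electricity cost in EUR for processing a given number of tokens."""
    return COST_PER_MTOK[model] * tokens / 1_000_000

monthly_tokens = 50_000_000  # hypothetical monthly workload
for model in COST_PER_MTOK:
    print(f"{model}: {electricity_cost(model, monthly_tokens):.2f} EUR/month")
```

At any volume, Mistral comes out roughly 6.4 times cheaper (34.40 / 5.40) on electricity alone, before even counting the fourfold difference in H100 cards.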

Having a super-smart LLM (Large Language Model) ready for a new task, however, requires some specific knowledge to be included. So instead of a complete overhaul of the model, you just tweak it with …