
Ol’ McDonald Had Components EA AI O

Prof. Ben Ettlinger

2026-03-16


An accessible overview of AI stack architecture, exploring the key layers, interoperability, and the continued importance of enterprise architecture in AI systems.

AI Foundation & Infrastructure

…And he built AI with his components EA AI O.



EA, AI, and the Modern Technology Stack


Artificial Intelligence introduces a level of complexity that requires clear architectural thinking. Just as enterprise architecture helped organizations understand how applications interacted across an ecosystem, AI now demands structured diagrams to explain how its many components work together.



Why AI Requires an Architecture


AI systems are not single pieces of software; they are interconnected systems composed of multiple software components. These components ingest data, prepare it, analyze it, and return insights rapidly. Understanding how these parts interact is essential for building, integrating, and governing AI solutions.



The point here is that AI, no matter the flavor, is complex, with many components. As time progresses and AI advances, it will become even more complex, with many more components.



As Dr. Kion Ahadi notes, “AI is not a tool; it is a system.” Like other complex technologies, AI relies on a stack—a layered set of software and infrastructure that enables data processing, model training, and intelligent output.



Core Components of an AI Stack


1. Framework / Orchestration Layer

Coordinates all components, manages workflows, and ensures that tools, models, and data sources communicate effectively.


2. Database / Data Store

Holds raw data, curated datasets, embeddings, and model outputs. Serves as the central repository for all information used in training and inference.


3. Data Ingestion Tools

Extract data from external systems—APIs, files, applications, or cloud sources—and bring it into the AI environment. Often one of the most technically challenging steps.


4. Data Curation & Preparation Tools

Clean, normalize, segment, and structure data so it can be used for analysis. This includes labeling, deduplication, formatting, and quality checks.


5. Analytical Data Layer

Stores curated and structured data in a form optimized for model training and querying. This is the “ready-to-use” data zone for data scientists.


6. Model Development Environment

Provides the platform for building, testing, and training algorithms. Supports experimentation, versioning, and evaluation of models such as Large Language Models (LLMs).


7. Model Execution / Inference Layer

Runs trained models at scale, processes user queries, and generates predictions or insights in real time.


8. Search / Query Interface

Translates user questions into mathematical operations, retrieves relevant data or embeddings, and converts model outputs into human-readable responses.


9. Optional Quality-Review Layer

Validates outputs for accuracy, safety, or compliance before presenting results to users.
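The nine components above can be sketched, very roughly, as stages in a single pipeline. The sketch below is purely illustrative: every function name is made up for this post, and each stage is reduced to a few lines, where a real stack would use dedicated products (ingestion tools, a vector store, a training platform, an inference server).

```python
# Toy sketch of the AI-stack components as pipeline stages.
# All names are illustrative, not part of any real framework.

def ingest(sources):
    """3. Data ingestion: pull raw records from external sources."""
    return [rec for src in sources for rec in src]

def curate(records):
    """4. Curation: deduplicate and normalize raw records."""
    seen, clean = set(), []
    for rec in records:
        norm = rec.strip().lower()
        if norm and norm not in seen:
            seen.add(norm)
            clean.append(norm)
    return clean

def analytical_layer(records):
    """5. Analytical layer: store curated data in a query-ready form."""
    return {i: rec for i, rec in enumerate(records)}

def run_inference(store, query):
    """7/8. Inference + query interface: naive keyword 'retrieval'."""
    return [rec for rec in store.values() if query.lower() in rec]

def quality_review(results):
    """9. Optional quality review: drop obviously empty output."""
    return [r for r in results if r]

def orchestrate(sources, query):
    """1. Orchestration: coordinate every stage end to end."""
    store = analytical_layer(curate(ingest(sources)))  # 2/5: data store
    return quality_review(run_inference(store, query))

# Usage: two "external systems" feeding one query.
sources = [["Corn yields UP ", "corn yields up"], ["Soy prices flat"]]
print(orchestrate(sources, "corn"))  # ['corn yields up']
```

Even at this toy scale, notice that the orchestrator is the only function that knows about every other stage: that is exactly the coordination role described in component 1.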


Typical AI Stack

AI Stack Models


AI stacks are commonly represented in 4- to 7-layer models, ranging from hardware at the bottom to applications at the top. Some architects propose 8-layer frameworks.


https://medium.com/@jeevitha.m/the-8-layer-agentic-ai-architecture-illustrated-with-autogen-examples-d07f66f320f2


Rahul Agarwal’s widely shared diagram illustrates component classes and product options available at each layer.


https://youtu.be/RRKwmeyIc24


https://www.ibm.com/think/topics/ai-stack


Typical AI Technology Stack (Layered Model)

Interoperability Across AI Stacks


Just as traditional enterprise systems communicate through protocols, AI stacks must also interoperate.


  • Model Context Protocol (MCP) for structured data access
  • Agent-to-Agent (A2A) protocols for messaging
  • HTTPS / REST / gRPC for cross‑system communication
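As a concrete taste of what "shared protocols" look like on the wire: MCP messages are JSON-RPC 2.0, so a request is just a small, well-defined JSON envelope. The sketch below builds and parses one with only the standard library; the method name is illustrative, and a real client would also handle transport, sessions, and responses.

```python
import json

# Sketch: protocols like MCP exchange JSON-RPC 2.0 envelopes.
# The method name below is illustrative, not a guaranteed endpoint.

def make_request(req_id, method, params=None):
    """Build a JSON-RPC 2.0 request as a wire-ready JSON string."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

wire = make_request(1, "tools/list")
decoded = json.loads(wire)
print(decoded["jsonrpc"], decoded["method"])  # 2.0 tools/list
```

The value of the protocol is that both sides agree on this envelope, so a stack built on one framework can call tools exposed by a stack built on an entirely different one.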


Example: Multi-Stack AI Architecture


  • A local organizational AI stack with orchestrators, safety layers, LLMs, tools, and vector stores
  • External cloud-based AI stacks providing additional models or plugins
  • Partner stacks offering specialized domain models
  • Shared protocols enabling communication between them
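One way to picture how these stacks cooperate is a capability-based router: the orchestrator prefers the local stack, and falls through to a cloud or partner stack when the local one lacks the needed capability. Everything below, stack names, capabilities, and endpoints alike, is hypothetical.

```python
# Hypothetical multi-stack router. All names and endpoints are made up.
STACKS = {
    "local":   {"capabilities": {"chat", "search"}, "endpoint": "https://ai.internal/api"},
    "cloud":   {"capabilities": {"vision", "chat"}, "endpoint": "https://vendor.example/api"},
    "partner": {"capabilities": {"legal-domain"},   "endpoint": "https://partner.example/api"},
}

def route(capability, prefer="local"):
    """Return the endpoint of a stack offering the capability,
    preferring the named stack when it qualifies."""
    if capability in STACKS[prefer]["capabilities"]:
        return STACKS[prefer]["endpoint"]
    for stack in STACKS.values():
        if capability in stack["capabilities"]:
            return stack["endpoint"]
    raise LookupError(f"no stack offers {capability!r}")

print(route("chat"))          # local stack handles it
print(route("legal-domain"))  # falls through to the partner stack
```

A real deployment would advertise capabilities through a shared protocol rather than a hard-coded table, but the governance question is the same one EA has always answered: which system is authoritative for which function.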


Why Enterprise Architecture Still Matters


AI systems are expanding in complexity, with more components, more data sources, and more interactions. The same reasons that made enterprise architecture essential for traditional IT now apply to AI:


  • Clarity
  • Governance
  • Interoperability
  • Scalability
  • Security

AI requires architectural discipline—whether labeled EA, AA, or EA-AI-O—to remain understandable, governable, and interoperable as it evolves.


For the same reasons enterprise architecture was needed for traditional IT, enterprise architecture is required for AI.


The bottom line is that AI needs EA, whether you call it AA or EA AI O. In fact, Ol’ McDonald may have AI running his farm already: “And on his farm he had AI, EA AI O.”



What Does the “O” Stand For?


Oh! What does the “o” stand for in the title? It’s the first letter of the enterprise architecture tool I used for quite a long time.







Glossary

More insights on AI architecture:
https://thinkata.com/news/insights/ai-architecture/


  • AI framework – AI framework is a structured, open-source or proprietary software ecosystem providing essential tools, libraries, and pre-built components designed to simplify, accelerate, and standardize the development and deployment of artificial intelligence systems.
  • AI model – AI model is a computer program trained on vast datasets to recognize patterns, make predictions, or generate content by mapping inputs to outputs. These models, foundational to AI systems, learn through algorithms to perform tasks like image recognition, text generation, or decision-making. Key types include supervised, unsupervised, and generative models.
  • AI protocols – AI protocols (or AI Agent Protocols) are standardized communication frameworks that define how artificial intelligence agents, tools, and data sources interact with each other safely, effectively, and at scale. They act as a common language allowing disparate, siloed AI systems to collaborate and share context, regardless of the underlying framework or provider.
  • AI stack – AI stack is a layered collection of technologies, frameworks, and infrastructure used to build, train, deploy, and manage AI applications from start to finish.
  • Enterprise Architecture (EA) – Enterprise Architecture (EA) is a strategic, conceptual blueprint that aligns an organization's business strategy, processes, information, and IT infrastructure to achieve goals effectively. It serves as a mapping and governing framework to optimize technology investments, improve operational efficiency, and manage business transformation.
  • Interoperability – Interoperability is the ability of computer systems or software to exchange and make use of information; for example, interoperability between devices made by different manufacturers.
  • Large Language Model (LLM) – Large Language Model (LLM) is a type of AI trained on massive datasets using deep learning, specifically transformer architectures, to understand and generate human-like text. By analyzing patterns in vast amounts of data, LLMs predict the next word in a sequence to perform tasks like translation, summarization, and content creation.
  • MLOps (Machine Learning Operations) – MLOps (Machine Learning Operations) is a set of practices that automates and streamlines the entire lifecycle of machine learning models—from development and training to deployment and monitoring.
  • Natural Language Processing (NLP) – Natural Language Processing (NLP) is a branch of artificial intelligence (AI) that enables computers to understand, interpret, generate, and manipulate human language (text or speech). It combines computational linguistics, machine learning, and deep learning to bridge the gap between human communication and machine understanding.
  • Predictive analytics – Predictive analytics is a branch of advanced analytics that utilizes historical data, statistical modeling, data mining, and machine learning to forecast future outcomes.
  • Scalability – Scalability in computing is a system's ability to handle growing workloads—such as increased data, users, or traffic—by adding resources without compromising performance, stability, or requiring major redesigns.
  • Vector database – Vector database is a specialized system designed to store, index, and query high-dimensional vector embeddings (numerical representations of data) for fast, semantic similarity searches. Unlike traditional databases, they find "nearest neighbors"—items conceptually similar to a query—making them essential for AI, recommendation engines, and GenAI/RAG applications.
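The vector-database entry above comes down to one core operation: nearest-neighbor search over embeddings. The toy sketch below does that with cosine similarity over hand-made 3-dimensional vectors; real embeddings come from a model and have hundreds or thousands of dimensions, and real vector databases use specialized indexes rather than a linear scan.

```python
import math

# Toy nearest-neighbor search, the core of a vector database.
# Vectors are hand-made; real embeddings come from a model.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

STORE = {
    "tractor":  (0.9, 0.1, 0.0),
    "combine":  (0.8, 0.2, 0.1),
    "database": (0.0, 0.1, 0.9),
}

def nearest(query_vec):
    """Return the stored item most similar to the query vector."""
    return max(STORE, key=lambda k: cosine(query_vec, STORE[k]))

print(nearest((0.88, 0.1, 0.02)))  # tractor
```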

