Artificial Intelligence

[Image: bastion_ai.png]

We have a dedicated AI channel curating a collection of frameworks and tools, including Langflow, Langchain, and Ollama, along with various vendor client SDKs, all aimed at streamlining AI development and deployment. We have also packaged Model Context Protocol (MCP) servers into machine images available in cloud marketplaces, and our upcoming releases will gain AI capabilities of their own, making them more intelligent and responsive.
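As a concrete illustration of the local-model side of that stack, here is a minimal Python sketch of calling a locally running Ollama server through its `/api/generate` HTTP endpoint. The model name is an assumption; the port (11434) is Ollama's default for a standard install.

```python
import json
import urllib.request

# Ollama's local server listens on port 11434 by default.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model, prompt, stream=False):
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": stream}

def generate(model, prompt, url=OLLAMA_URL):
    """Send a one-shot (non-streaming) generation request to a local
    Ollama server and return the generated text."""
    data = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With `stream` set to false, the server returns a single JSON object whose `response` field holds the full completion, which keeps the client trivially simple.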

We’re collaborating with our cloud vendors to define the architecture of our AI appliances, which will be tailored to specific AI workloads. We’re evaluating the potential of offering GPU-heavy machines optimized for running Large Language Models (LLMs); as a distribution vendor, we can readily configure the drivers, accelerators, and AI stacks needed to exploit that hardware. However, unless demand for such high-capacity workloads proves consistent, it may be more cost-effective to rely on specialized LLM providers.

For users with limited resources, such as those on desktops or laptops, a CPU-only setup is appealing. This matches the reality of most users and developers, who often lack the hardware for local AI processing and can instead configure their systems to call various vendor AI services.
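Pointing a CPU-only machine at a remote service can be as simple as resolving a configured vendor name to an endpoint. A minimal sketch, where the vendor names, endpoint URLs, and the `AI_VENDOR` variable are all hypothetical illustrations rather than our actual configuration scheme:

```python
import os

# Illustrative endpoint table; real deployments would load this
# from packaged configuration rather than hard-code it.
VENDOR_ENDPOINTS = {
    "openai": "https://api.openai.com/v1",
    "anthropic": "https://api.anthropic.com/v1",
    "local": "http://localhost:11434",
}

def resolve_endpoint(env=None):
    """Pick an AI backend from configuration, defaulting to a remote
    vendor so a CPU-only machine needs no local model at all."""
    env = os.environ if env is None else env
    vendor = env.get("AI_VENDOR", "openai")
    try:
        return VENDOR_ENDPOINTS[vendor]
    except KeyError:
        raise ValueError(f"unknown AI vendor: {vendor}")
```

Keeping the vendor choice in an environment variable lets the same system image run against a local model on capable hardware and fall back to a hosted service everywhere else.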

We’re exploring a concept similar to Llama Stack, in which both we and our clients can publish executor or agent scripts. Each script performs a specific AI task, integrates with our marketplace offerings, and lets users choose their preferred mix of AI vendors. We will create and manage the scripts that coordinate with our infrastructure; users can extend the same mechanism to their other resources.
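One way such executor scripts could be coordinated is a small registry keyed by task and vendor, with a dispatcher that routes requests to the user's preferred vendor mix. Everything below (the decorator, task names, and vendor labels) is a hypothetical sketch under those assumptions, not our actual implementation:

```python
# Registry mapping (task, vendor) pairs to executor functions.
EXECUTORS = {}

def executor(task, vendor):
    """Decorator registering a function as the executor for a task
    when routed through a given vendor."""
    def register(fn):
        EXECUTORS[(task, vendor)] = fn
        return fn
    return register

@executor("summarize", "local")
def summarize_local(text):
    # Placeholder body: a real executor would call a local model here.
    return text[:60]

def run(task, vendor, payload):
    """Dispatch a task to the executor registered for the chosen vendor."""
    try:
        fn = EXECUTORS[(task, vendor)]
    except KeyError:
        raise LookupError(f"no executor for {task!r} via {vendor!r}")
    return fn(payload)
```

Because registration is just a dictionary entry, clients could publish their own executors for new tasks or vendors without touching the dispatcher itself.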