online event

Building an Open Future:

Core Components for the AI-Driven Enterprise

June 24, 2025 - July 15, 2025 | 12:00 PM - 12:15 PM (UTC-04:00) Eastern Time (US & Canada)

OVERVIEW

No single company has an accurate map of the future. The most accurate and resilient vision of what’s ahead is an open one, rooted in collaboration and empowered by AI. Your AI enterprise will consist of many moving parts, but at its core it must be grounded in a few key principles: the democratization of data through accessible training tools that lower barriers to entry; the use of methods that enhance resource efficiency; robust lifecycle management of models at enterprise scale; and the flexibility to extend your AI capabilities wherever your mission leads. Together, these pillars form a foundation for a sustainable, scalable, and adaptable AI practice.

6/24 - Session 1: Lowering Barriers with InstructLab

  • Tuning generative AI models is cost-prohibitive, demanding highly skilled teams to tune models on mission-centric data at scale across hybrid cloud and edge environments.
  • InstructLab offers an open source methodology for tuning LLMs of your choice, easing the burden of creating synthetic training data while using far fewer computing resources and lowering costs (a brief illustrative sketch follows below).
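
To make the synthetic-data angle concrete, here is a minimal, hypothetical sketch of the kind of skill seed file InstructLab's taxonomy-driven workflow starts from. The field names, schema version, and example content are assumptions for illustration; consult the InstructLab taxonomy documentation for the exact format your release expects.

```python
# Illustrative only: a hypothetical InstructLab-style skill seed file (qna.yaml).
# Field names and the schema version are assumptions, not the authoritative format.
import yaml  # requires PyYAML

seed = {
    "version": 2,  # assumed schema version
    "created_by": "your-github-handle",
    "task_description": "Summarize incident reports in two sentences.",
    "seed_examples": [
        {
            "question": "Summarize: A patrol unit responded to a traffic collision "
                        "at 5th and Main; no injuries were reported.",
            "answer": "A patrol unit handled a minor collision at 5th and Main. "
                      "No injuries were reported.",
        },
    ],
}

# InstructLab picks up seed files like this from its taxonomy tree and uses them
# to generate synthetic training data before tuning the base model.
with open("qna.yaml", "w") as f:
    yaml.safe_dump(seed, f, sort_keys=False)
```

A handful of human-written seed examples like these is the starting point; the synthetic data generation and tuning steps that follow are what keep the compute and labeling burden low.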

7/1 - Session 2:  Driving Efficiency with vLLM

  • Serving LLMs is resource-intensive, requiring costly hardware to meet the scalability and speed that agencies demand.
  • vLLM (virtualized LLMs) lets agencies “do more with less,” offering LLM inferencing and serving with greater efficiency and scale, and up to 24x higher throughput (see the sketch below).
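
As a rough illustration of what that efficiency looks like in practice, the sketch below batches prompts through vLLM's offline Python API. The model identifier is a placeholder, a GPU with enough memory for the chosen model is assumed, and actual throughput depends on your hardware.

```python
# Minimal vLLM offline-inference sketch. The model id is a placeholder and a
# suitably sized GPU is assumed; swap in whichever model your agency has approved.
from vllm import LLM, SamplingParams

prompts = [
    "Summarize the benefits of open source AI for public sector missions.",
    "List three considerations for serving LLMs at the edge.",
]
sampling = SamplingParams(temperature=0.2, max_tokens=128)

llm = LLM(model="your-org/your-approved-model")  # placeholder model id
for output in llm.generate(prompts, sampling):   # batches requests under the hood
    print(output.outputs[0].text)
```

The throughput gains vLLM reports come largely from continuous batching and its PagedAttention approach to managing KV-cache memory, which is how the same hardware can serve many more concurrent requests.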

7/8 - Session 3:  Managing Model Lifecycles with OpenShift AI

  • Cross-functional AI teams want self-service access to workspaces on available or GPU-accelerated computing resources, integrated with their choice of tools, so they can collaborate and get to production quickly without the toil and friction of current approaches.
  • OpenShift AI provides a platform that streamlines and automates the MLOps lifecycle with pipelines for generative and predictive AI models, from data acquisition and preparation through model training and fine-tuning to model serving and monitoring, with consistency from the edge through hybrid clouds (a minimal pipeline sketch follows below).
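
OpenShift AI's data science pipelines build on Kubeflow Pipelines, so a pipeline definition looks roughly like the hypothetical sketch below. The component bodies, names, and artifact locations are placeholders rather than a real training workflow; the compiled package would then be imported into a pipeline server on the platform.

```python
# Hypothetical minimal pipeline using the Kubeflow Pipelines (kfp) v2 SDK, the
# project OpenShift AI's data science pipelines build on. All names and component
# bodies are placeholders for illustration.
from kfp import compiler, dsl

@dsl.component
def prepare_data() -> str:
    # A real component would pull and clean mission data here.
    return "s3://example-bucket/prepared-data"  # placeholder artifact location

@dsl.component
def train_model(data_uri: str) -> str:
    # A real component would fine-tune and register a model here.
    print(f"training on {data_uri}")
    return "example-model-v1"  # placeholder model reference

@dsl.pipeline(name="example-mlops-pipeline")
def example_pipeline():
    data = prepare_data()
    train_model(data_uri=data.output)

if __name__ == "__main__":
    # Produces a pipeline package that a pipeline server can run on demand or on
    # a schedule, giving the lifecycle the repeatability described above.
    compiler.Compiler().compile(example_pipeline, "example_pipeline.yaml")
```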

7/15 - Session 4:  Inferencing at the Edge

  • Agencies demand actionable intelligence where the mission occurs, but managing a distributed network of AI-enabled edge devices in constrained or disconnected environments brings significant operational complexity.
  • Red Hat enables AI inferencing and serving in disconnected, resource-constrained environments by providing lightweight, flexible platforms that allow models to run locally without relying on cloud connectivity, through containerized deployments, efficient resource usage, and secure, automated updates (a brief client sketch follows below).
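
To give a flavor of what running locally means for application code, the hypothetical sketch below queries an OpenAI-compatible endpoint served on the edge device itself, for example a containerized vLLM server. The URL, port, model name, and prompt are all assumptions.

```python
# Hypothetical client for a model served locally on an edge device through an
# OpenAI-compatible API (for example, a containerized vLLM server on this host).
# The endpoint URL, API key, and served-model name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

response = client.chat.completions.create(
    model="local-mission-model",  # placeholder served-model name
    messages=[{"role": "user",
               "content": "Classify this sensor reading: 47.2 C and rising."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```

Because the request never leaves the device, the same client code keeps working in fully disconnected environments.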
     

Virtual event details

Time: 12:00 PM - 12:15 PM (UTC-04:00) Eastern Time (US & Canada)

Date: June 24, 2025 - July 15, 2025

Any questions? Please email kmccabe@redhat.com


Speakers

Michael Hardee

Chief Architect, Law Enforcement and Justice, Red Hat

Michael Hardee is a career changer who started his journey in technology in 2010. Fifteen years later, he is serving as the Chief Architect for Law Enforcement and Justice at Red Hat. Over the course of his career he has supported public sector missions from the data center to the cloud, and now the edge, automating everything along the way.

Sompop Noiwan

Application Strategist, Department of Homeland Security, Red Hat

Sompop Noiwan brings 25+ years of experience enabling public sector missions across State & Local and Federal agencies in various roles across application and infrastructure ecosystems. He champions solutions that maximize the value of existing IT assets while embracing modern, innovative technologies and architectural patterns that minimize adoption risk and accelerate value to meet mission demands.

Dan Domkowski

AI Specialist, Red Hat

Dan is an AI Specialist mapping Red Hat AI capabilities to the mission and business needs of Red Hat customers worldwide. Prior to Red Hat, he held roles in product management, consulting services, and data analytics for SIs, software ISVs, and government alike, including several years in the US Intelligence Community as a computer network operations specialist.
Dan holds a MS in Computer Science with a concentration in AI from The George Washington University and a BA in International Relations from Syracuse University.
 
© 2025 Red Hat, Inc.