
Booz Allen: AI Insights for Federal Innovators

Tech at Full Speed

VELOCITY V3. 2025 | Bill Vass

The modern flywheel effect explained

Artificial intelligence (AI), from traditional machine learning (ML) to generative AI (GenAI), is the great accelerant because of its versatility. It can augment human productivity by automating rote tasks that would otherwise consume precious time and effort. It can ingest data and use that information to continuously strengthen its capabilities. Most importantly, it can optimize the performance of other technologies.

In the article "AI & Everything: A Future of Limitless Possibilities," which ran in the second issue of Velocity, Booz Allen President and CEO Horacio Rozanski correctly pointed out that AI's true potential will be unleashed when it is paired with other technologies to drive transformational outcomes. This desired future state is now well within reach. A convergence of software and hardware is making it possible to build intelligent ecosystems that enable federal agencies to unlock the full potential of AI and other technologies.

The Infrastructure to Power AI

Early in my career, I worked on a program that endeavored to use neural networks to teach very early generation autonomous subs how to navigate on their own. That project failed to achieve its objective, but it clued me in to an important point about AI. The foundational vector and scalar math we were using for the navigation neural network was basically the same as the foundation for today's AI systems. What we didn't have was the requisite data storage capacity and compute power to adequately train the neural networks to achieve the desired results.

Today, the compute power and data storage capacities available to federal agencies are exponentially greater, and advances in GenAI have opened new pathways to innovation. To return to the challenge of self-navigating machines: in November 2024, MIT Technology Review reported that a trio of researchers had used GenAI models and a physics simulator to teach a robotic dog to go upstairs and climb over a box without first training the robot on real-world data. This is one example of how AI, when smartly paired with other technologies, can radically redefine the art of the possible.

To scale AI so that it can be applied to the biggest challenges our nation faces, from managing the national debt to outpacing near-peer adversaries in technological supremacy, the government should look beyond the algorithms and AI models and invest in the infrastructure that will enable AI to flourish. Just as the construction of networks of fiber optic cables helped create the foundation for the internet boom, taking the following steps will position agencies to unleash the full power of AI.

  • Prioritize data: Connect all enterprise networks and devices (e.g., satellites, drones) to collect and store as much operational technology (OT) data as possible. Additionally, start extending the volume of information technology (IT) data that is saved. Both OT and IT data can and should be used for AI model training and simulation. The cleaner and more relevant the data you feed a model, the denser its parameter sets will be, and the greater its accuracy and capabilities.
  • Leverage software-defined environments: Apply software engineering practices to transform hardware-dependent systems into dynamic, software-defined environments. Certain organizations have used software-defined processes to automate complex tasks in enterprise resource planning systems. Now, it's time to extend this concept to all aspects of physical and virtual deployments.
  • Use digital twins: Use digital twins to create realistic virtual environments that can host simulations and testing, then use AI to optimize the performance of these systems.

This paradigm can be understood in terms of something I call the modern technology flywheel. Imagine a frictionless enterprise, where every piece of information that's collected feeds into a virtual machine. That virtual machine uses digital twins to train AI models through thousands or even millions of simulated outcomes. It then pushes those models out to an edge device or back into the cloud where the models learn from real-world deployments and feed those insights back into the real and synthetic data environments. This sequence repeats itself over and over. Each turn generates more data that can be used to improve AI and the system software, and that new data optimizes the next turn of the wheel. By harnessing this intricate but achievable paradigm, government agencies can achieve the flexibility, scale, and acceleration essential to unleashing AI and tech at full speed.
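
To make the cycle concrete, here is a minimal Python sketch of successive turns of the flywheel. Everything in it is illustrative: the function names (collect_telemetry, run_simulations, train_model, deploy_and_observe) are hypothetical stand-ins, and the "model" is just a learned baseline rather than a real AI system.

    import random

    def collect_telemetry(n=100):
        # Stand-in for OT/IT data collection: simulated sensor readings.
        return [random.gauss(10.0, 2.0) for _ in range(n)]

    def run_simulations(data, n_scenarios=1000):
        # Stand-in for the digital twin: perturb real readings into synthetic scenarios.
        return [random.choice(data) + random.gauss(0.0, 0.5) for _ in range(n_scenarios)]

    def train_model(samples):
        # Stand-in for model training: here the "model" is just a learned baseline.
        return sum(samples) / len(samples)

    def deploy_and_observe(baseline, live_data):
        # Stand-in for edge deployment: flag anomalies and return them as feedback.
        return [x for x in live_data if abs(x - baseline) > 3.0]

    feedback = []
    for turn in range(3):
        real = collect_telemetry() + feedback          # real data plus prior feedback
        synthetic = run_simulations(real)              # the twin expands scenario coverage
        baseline = train_model(real + synthetic)       # AI learns from both
        feedback = deploy_and_observe(baseline, collect_telemetry())
        print(f"turn {turn}: baseline={baseline:.2f}, feedback samples={len(feedback)}")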

The individual components that drive the flywheel (real and synthetic data, software-defined environments, digital twins) have matured on different paths, which is why its outsized potential wasn't fully recognized until recently. Its interdependencies are also tremendously complex. But each time the flywheel turns, the performance gets better and the enterprise benefits, growing more efficient, more innovative, and less vulnerable to dislocation.

Let's look at each part of the process:

Connect and Collect

Today, vast amounts of OT data roll in around the clock, from internet of things (IoT) sensor signals to logistics and supply chain updates, much of it transmitted via Wi-Fi. It is easier than ever to use electronic data interchange (EDI) to integrate these diverse sources. In addition, private 5G networks and LoRaWAN, a low-power network for wireless devices, offer seamless connectivity under demanding conditions, while new satellite systems provide low-latency, high-bandwidth connections for remote sites at the extreme edge. With capabilities like these, you can continuously connect and collect data everywhere, from exabyte to potentially zettabyte scale. IT systems that run on premises and in the cloud also generate streams of data (e.g., logfiles) that are often deleted too soon. This data can now be saved economically and should be added to the larger pool of data that can be used for simulation, testing, and training of AI models.
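
As one way to picture the "connect and collect" posture, the sketch below (standard-library Python, with hypothetical device and file names) appends both OT readings and IT log events to a single durable store rather than discarding them. A production pipeline would sit behind a message broker and object storage, but the principle is the same: save everything at collection time.

    import json
    import random
    import time
    from pathlib import Path

    STORE = Path("ot_it_lake.jsonl")   # stand-in for durable cloud or object storage

    def ingest(record: dict) -> None:
        # Append every record as-is; nothing is filtered out at collection time.
        record["ingested_at"] = time.time()
        with STORE.open("a") as f:
            f.write(json.dumps(record) + "\n")

    # OT data: a simulated sensor reading from an edge device.
    ingest({"source": "ot", "device": "pump-7", "vibration_hz": random.gauss(50, 3)})

    # IT data: a log event that would typically be rotated away and deleted.
    ingest({"source": "it", "system": "erp", "event": "batch_job_completed", "duration_s": 42})

    print(f"{STORE} holds {sum(1 for _ in STORE.open())} records for simulation and training")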

Leverage the Cloud

But what do you do with all this potential insight? With the cloud, enterprises can cost-effectively store it forever, augmenting what's collected with synthetic data for known gaps in data or for scenarios that can't be tested in the real world. Military units, for example, cannot run thousands of different combat scenarios, but agencies can generate "representative" classified data in synthetic form to train autonomous systems for the future battlefield. This same approach allows civilian agencies to simulate crisis response scenarios, test critical infrastructure resilience, and evaluate policy impacts at scale. With real and synthetic data merging in pipelines, enterprises can assemble the dense parameter sets required to build full-fidelity simulations of real-world environments.
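
To illustrate how real and synthetic data can merge in a single training pipeline, here is a toy Python sketch. The "grid load" numbers and the surge scenario are invented for illustration; real programs would use physics-based simulators or generative models rather than simple distribution sampling, but the pattern of filling a known gap with synthetic samples is the same.

    import random
    import statistics

    # Real observations cover only normal operating conditions.
    real_load = [random.gauss(100.0, 5.0) for _ in range(500)]

    mu = statistics.mean(real_load)
    sigma = statistics.stdev(real_load)

    # Synthetic data fills a known gap: a surge scenario never observed in the field.
    synthetic_surge = [random.gauss(mu * 1.8, sigma * 2.0) for _ in range(500)]

    training_set = real_load + synthetic_surge
    print(f"training set: {len(training_set)} samples "
          f"({len(real_load)} real, {len(synthetic_surge)} synthetic)")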

Invest in Software-Defined Systems

Given how software continues to "eat" hardware, agencies that don't think enough about software-defined environments risk falling behind. The imperative for federal agencies is to convert as many enterprise processes and systems as possible into software-defined systems. This switch lets developers send over-the-air updates, which extend their systems' lifespans and increase adaptability by enabling remote deployment of new features, security patches, and performance improvements. Further, by facilitating real-time data processing and connectivity, software-defined environments enhance performance and allow the seamless incorporation of AI algorithms and ML models, resulting in intelligent automation and decision-making capabilities. This approach accelerates innovation cycles, reduces development costs, and allows for greater customization to meet diverse user needs while empowering systems to learn and adapt over time.
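
The sketch below is a deliberately simplified illustration of the over-the-air idea: a device's behavior lives in a configuration record that a remotely pushed manifest can update, with no hardware change. The version scheme, field names, and the ai_inference feature flag are hypothetical; real OTA systems add signing, rollback, and staged rollout.

    import json

    # In a software-defined system, behavior lives in configuration, not fixed firmware.
    device_state = {"version": (1, 2, 0), "features": {"ai_inference": False}}

    def apply_ota_update(state: dict, manifest: dict) -> dict:
        # Apply an over-the-air manifest only if it is newer than the running version.
        if manifest["version"] > state["version"]:
            return {
                "version": manifest["version"],
                "features": {**state["features"], **manifest.get("features", {})},
            }
        return state

    # A manifest pushed remotely: turns on an AI feature without touching hardware.
    update = {"version": (1, 3, 0), "features": {"ai_inference": True}}
    device_state = apply_ota_update(device_state, update)
    print(json.dumps({"version": ".".join(map(str, device_state["version"])),
                      "features": device_state["features"]}, indent=2))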

Simulate and Test

Once an agency has software-defined itself as much as possible, it can feed its real and synthetic data into the cloud and run the system in simulation using a digital twin. With the storage and processing power of the cloud, organizations can operationalize full-fidelity twins for the first time with full physics and 3D ray-traced photorealistic representation. This combines the logical digital twin, the process digital twin, and the 3D structural twin into one complete representation, enabling organizations to analyze scenarios in a risk-free virtual environment.
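
A digital twin does not have to be elaborate to show the pattern. The toy Python sketch below stands in for a process twin of a cooling loop: the heat and cooling numbers are invented, and a real twin would couple full physics with 3D geometry, but the workflow of sweeping scenarios virtually and picking the best setting is the one described above.

    def twin_step(temp_c: float, cooling_power: float) -> float:
        # One step of a simplified thermal twin: constant process heat minus cooling.
        heat_in = 5.0
        return temp_c + heat_in - cooling_power

    def run_scenario(cooling_power: float, steps: int = 100) -> float:
        # Run the twin for one candidate control setting; return the peak temperature.
        temp_c = peak = 20.0
        for _ in range(steps):
            temp_c = twin_step(temp_c, cooling_power)
            peak = max(peak, temp_c)
        return peak

    # Sweep many control settings in simulation; no physical asset is ever at risk.
    results = {c * 0.5: run_scenario(c * 0.5) for c in range(1, 20)}
    best = min(results, key=results.get)
    print(f"best cooling setting found in the twin: {best} (peak {results[best]:.1f} C)")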

Networks of full-fidelity digital twins can model and optimize sprawling ecosystems. A digital twin of a power grid, for example, could simulate the impact of various energy policies and potential cyber threats for an entire nation. It's a paradigm-shifting advantage that can compress years of testing into minutes or enable the processing of a massive volume of combinations in hours. In the past, frontline military units had no realistic way to simulate battle conditions with zero risk to warfighters and at reasonable cost. Now, they can rehearse and train as they fight.

Optimize with AI

Advanced ML algorithms bring OT and simulation environments together in real time. ML merged with high-performance computing allows for the analysis of billions of possible combinations that humans could never evaluate through conventional means. It is at this point in the flywheel that we start to see how AI is learning from the digital twin simulations and building billions of parameters in the model as part of its training. Enhancements are continuously fed back into the system. This overall process then creates the AI and GenAI models that are pushed to production.
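
As a minimal illustration of learning from simulation, the Python sketch below fits a one-parameter model to examples produced by a stand-in twin. The throttle/fuel-burn relationship is invented, and production systems would train deep models with billions of parameters, but the loop of simulate, fit, and feed back is the one described here.

    import random

    def simulate(throttle: float) -> float:
        # Stand-in digital twin: fuel burn rises with throttle, plus sensor noise.
        return 2.0 + 3.5 * throttle + random.gauss(0.0, 0.2)

    # The twin generates labeled examples at a scale no field test could match.
    X = [random.uniform(0.0, 1.0) for _ in range(10_000)]
    y = [simulate(x) for x in X]

    # Fit a one-parameter model by ordinary least squares (closed form).
    mean_x = sum(X) / len(X)
    mean_y = sum(y) / len(y)
    slope = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(X, y)) / sum(
        (xi - mean_x) ** 2 for xi in X)
    intercept = mean_y - slope * mean_x

    print(f"learned model: fuel_burn ~ {intercept:.2f} + {slope:.2f} * throttle")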

Push to Production

Now you can push the digital twins to production and operate them at the edge, on premises, and in the cloud. The flywheel, revolving through technologies that have now matured to the point where they can be integrated and set in motion, continues to improve enterprise performance. As you collect more data from virtual and real-world operations, you can feed it back into the system to help turn the flywheel again and again.
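
One small sketch of the production end of the loop: a pushed model runs lightweight inference at the edge while every prediction and its eventual outcome are queued for the cloud, so the next training round can learn from real-world deployments. The file name, model coefficients, and "observed" signal are all hypothetical.

    import json
    import random
    from pathlib import Path

    FEEDBACK = Path("edge_feedback.jsonl")   # shipped back to the cloud for retraining

    def edge_infer(slope: float, intercept: float, reading: float) -> float:
        # Lightweight inference at the edge using the model pushed from the cloud.
        return intercept + slope * reading

    for _ in range(5):
        reading = random.uniform(0.0, 1.0)
        predicted = edge_infer(3.5, 2.0, reading)
        observed = 2.0 + 3.5 * reading + random.gauss(0.0, 0.2)   # what actually happened
        with FEEDBACK.open("a") as f:
            f.write(json.dumps({"reading": reading,
                                "predicted": predicted,
                                "observed": observed}) + "\n")

    print(f"{sum(1 for _ in FEEDBACK.open())} feedback records queued for the next turn")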

With a self-renewing, AI-enabled system like this, the potential for transforming core missions is also limitless. A defense agency, for example, could use globally distributed digital twins to run supply chain simulations, integrating IoT sensor data and edge computing to test demand scenarios and prepare for any crisis adversaries create. For government organizations tasked with protecting national security, delivering citizen services, and managing critical infrastructure, the modern flywheel offers unique advantages that align with their complex mission requirements.

Benefits of the Modern Flywheel

The modern technology flywheel combines real and synthetic data, software-defined environments, digital twins, and AI and ML to produce enterprise-level advantages.

Modularity and Flexibility

Breaks the rigid dependency on hardware infrastructure by creating AI systems that can be adapted, upgraded, or optimized through software layers, allowing AI models to be easily updated or reconfigured without needing specific, costly hardware changes.

Continuous Improvement

Provides AI models with streams of new data to learn from so that they can continuously evolve, staying aligned with the latest advancements and demands.

Edge-to-Cloud Harmony

Enables AI models to perform real-time inference at the edge (e.g., in satellites, IoT devices) while offloading heavier training tasks to cloud systems. This harmonized approach ensures both quick, responsive decision making and the ability to process larger datasets in powerful, centralized locations.

Safety and Security Enhancements

Provides the dynamic updates required to ensure robustness against new threats, biases, or vulnerabilities. As AI is deployed in critical applications, such as autonomous vehicles or national security, the ability to patch and enhance AI systems via software becomes essential for maintaining safety and reliability.

Flying Toward the Future

To realize the advantages of the modern flywheel, the right foundation must be in place. That foundation starts with data, which is the lifeblood of AI and digital twins. Agencies that have more specialized datasets at their disposal will be better positioned to create realistic simulations of mission environments, and perhaps one day, build their own AI models. In addition, software-defining assets of all sizes and scales, from networks to warships, enables the construction of systems that can be adapted, upgraded, or optimized through software layers without needing specific, costly hardware changes. It's also critical to invest in security. Connected systems introduce new attack vectors, which is why agencies should use layered encryption and comprehensive firewalls. Scalability should be supported through multiple, physically separated availability zones, with a unified set of application programming interfaces (APIs) from edge to cloud. Given the speed and scale of cyberattacks, AI will play a critical role in identifying threats, monitoring attack vectors, and proactively hunting threats.

With these layers of support, federal agencies are positioned to use the modern flywheel to help AI drive mission-critical outcomes faster. To innovate in the 21st century, you need more than just a set of disparate technology tools. It's no longer enough to scale a traditional AI application for a narrowly defined use case. Instead, it's about integrating multiple modern AI-enabled systems to create a perpetual, self-renewing source of improvement. Blending emerging technologies across phases of data collection, simulation, training, and deployment provides the key to achieving exponential leaps in efficiency, innovation, and mission success.

Where mission success often has national implications, this technology integration isn't just about efficiency; it's about maintaining strategic advantage and ensuring continuous delivery of critical services. The modern flywheel approach enables federal organizations to rapidly adapt to emerging threats, scale services to meet citizen needs, and maintain technological superiority in a complex global environment.

A Primer on Digital Twins

A logical digital twin focuses on the functional and behavioral aspects of a physical system's software and control logic rather than its physical properties. It replicates the decision-making processes, algorithms, and workflows to allow for simulation, analysis, and optimization of operations. Think of a digital twin of a smart grid's operational logic that models how electricity is distributed and optimized based on demand patterns. By modeling the logical components, this type of digital twin enables developers and engineers to test software updates, control strategies, and system integrations in a risk-free virtual environment. It's possible and advisable to make logical digital twins of software-defined environments, such as enterprise resource planning systems, and of systems consisting of physical components and operational technology (OT).

A process digital twin models the operational processes of a physical system or workflow. It simulates the sequence of actions, interactions, and transformations within a process, allowing for real-time monitoring, analysis, and optimization. Think of a digital twin of an assembly line that simulates the interaction of machines, workers, and materials. By mirroring the behavior of the actual process, this type of digital twin enables engineers and operators to predict outcomes, identify inefficiencies, and test modifications.

A 3D structural twin models the physical geometry and structural characteristics of a real-world object or system in three dimensions. It captures detailed information about the shape, materials, and mechanical properties of the object or system, allowing for simulation and analysis of structural behavior under various conditions. By providing a virtual environment to test stress, strain, loadbearing capacity, and other physical interactions, a 3D structural twin enables engineers and designers to predict performance, optimize designs, and identify potential issues before they occur in the physical counterpart. Think of a digital twin of a bridge, simulating stresses under various conditions to predict maintenance needs.
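
As a bare-bones illustration of the structural-twin idea, the Python sketch below applies the textbook deflection formula for a simply supported beam across a range of loads and flags cases that exceed a serviceability limit. The girder properties, the L/800 limit, and the load sweep are all assumptions for illustration; a real structural twin would rely on detailed finite element models rather than a single closed-form equation.

    def midspan_deflection(load_n: float, span_m: float, e_pa: float, i_m4: float) -> float:
        # Textbook simply supported beam with a midspan point load: d = P*L^3 / (48*E*I).
        return load_n * span_m ** 3 / (48.0 * e_pa * i_m4)

    E_STEEL = 200e9      # Young's modulus of steel, Pa (assumed)
    I_GIRDER = 8.0e-3    # second moment of area, m^4 (hypothetical girder)
    SPAN = 30.0          # span length, m (hypothetical)

    for load_kn in (100, 500, 1000, 2000):
        d = midspan_deflection(load_kn * 1e3, SPAN, E_STEEL, I_GIRDER)
        status = "inspect" if d > SPAN / 800.0 else "ok"
        print(f"load {load_kn:>5} kN -> midspan deflection {d * 1000:6.1f} mm ({status})")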

Key Takeaways

  • To innovate in the 21st century, the federal government needs to do more than scale a traditional AI application for a narrowly defined use case. Instead, agencies should invest in integrating multiple modern AI-enabled systems that can unlock the full potential of AI and other technologies.
  • To build these intelligent systems, agencies must collect and store as much operational technology and information technology data as possible, convert hardware-dependent processes into software-controlled environments, and invest in digital twins.
  • This achievable technical paradigm will enable federal organizations to rapidly adapt to emerging threats, scale services to meet citizen needs, and maintain technological superiority in a complex global environment.

Meet the Author

Bill Vass is Booz Allen's chief technology officer. He previously served as vice president of engineering at Amazon Web Services and was also in the Senior Executive Service at the U.S. Department of Defense.

References

Horacio Rozanski, "AI & Everything: A Future of Limitless Possibilities," Velocity, Vol. 2, 2023.

Rhiannon Williams, "Generative AI Taught a Robot Dog to Scramble Around a New Environment," MIT Technology Review, November 12, 2024.
