Booz Allen

Insights from the 2025 Space+AI Summit

Hear from the Experts

Introduction and Formal Welcome

Introduction

Eric Jones, Principal, Aerospace, Booz Allen

Formal Welcome

Tom Pfeifer, President, National Security Sector, Booz Allen

Video Transcript

Readiness to proceed to launch. ACS? Go. EPM? Go. SRPO? SRPO is go.

Welcome to everyone here in the room and to everyone joining us online; I understand we have over 800 virtual registrants. You could have been anywhere else in the world today, but you're here with us, and thank you for taking the time to participate. Now let's have some fun. My name is Eric Jones. I'm a principal at Booz Allen and director of Space Mission Infrastructure and Security in our NASA group, and I'll be your MC today. I can already tell you we've got a fantastic lineup, so sit back, relax, and let's speak some AI in space. Before I get to that, a couple of housekeeping items to keep us on track for the day. First, please silence your cell phones. Second, bathrooms are located down the hall to the right. Third, in the lobby where you walked in, we have demos for you today; we have a lot of technology on display, and I want to give you a quick rundown of what you'll see. You'll see our integrated intelligent space domain awareness and space traffic management team, a virtual space ground systems team, Gen AI at the edge, space-based domain awareness processing, and, last but not least, our newest Brilliant Swarms capability. Finally, if you have any questions, there are many of us from Booz Allen in the audience who can help direct traffic, so find somebody with a Booz Allen badge and they will be more than happy to answer questions for you. Now, without further ado, I'm happy to introduce Mr. Tom Pfeifer, an executive vice president here at Booz Allen and the president of our national security sector. In this role, Tom oversees our work across the intelligence community in areas like cyber, intel, space, and artificial intelligence. Tom has been with Booz Allen for more than 30 years, supporting many of the critical missions of our country's great intelligence community. Please welcome to the stage Mr. Tom Pfeifer.

Thank you so much, Eric, and good afternoon. Yes, I have been here a long time; I actually started in 1984, when the space shuttle program was just getting started. It's been a long career, but I've always been tremendously interested in this domain. Let me start by framing my role here. Number one, it's a welcome, so I hope all of you feel welcome. A big part of it is a thank-you; there's a little bit of fear, and a lot of redemption. And then I'll introduce our keynote speaker. So let's start with the welcome: a hundred people in the room, five hundred online. I'm really glad you're here, and glad you're here to collaborate with the whole group. You'll be listening to many distinguished speakers and industry experts; the panels are going to be fantastic, the sidebar conversations are going to be great, and the demos and exhibits will be fantastic. So, truly, welcome to the whole group. Now, let's go to a dark place for just a second. As Eric mentioned, I live in the national security sector, and I've held a clearance for over 40 years, so forgive me. The stakes are high. They have never been higher, they're real, and the status quo is not good enough.
We all know that years ago, the Air Force and the military ran a war exercise to see what a war without space would look like. That exercise didn't last three days; it barely lasted three minutes, because we are so dependent on space capabilities. And that was before the Space Force was created. Fast forward another 12 years, and our adversaries, near-peers or peers, have advanced their capabilities. We do what we can, but the threat is very real. There are some 25,000 objects out there, around 10,000 of them active satellites; most are benign, but some are nefarious, and we need to address and resolve those threats. In my world of national security, that is the job: address and resolve those threats. But we also realize the threat extends to the natural environment. We need better capabilities in space for sensing and addressing natural disasters. The folks who work for me supporting NGA have helped with many national disasters, taking tradecraft built for one purpose and repurposing it for disaster recovery. So it's a bigger problem. I also think about our competitors. Some of our near-peer adversaries have two or three times our population, and they have a stated aspiration to be number one in AI by 2030. About 73 percent of their college students major in STEM; in the United States it's probably closer to 20 or 23 percent. So we have an issue there, and I want to encourage everybody to continue to invest in STEM; it's very important for us, and we're going to need it from a national standpoint. One more fear item, and then I'll get to redemption; it does get better. The Department of Homeland Security identifies 16 critical infrastructure sectors, and every one of them is reliant on some aspect of space-based capability, if nothing else PNT. I won't list all their vulnerabilities, but both the public and private sectors rely on space-enabled capabilities, and AI will play a bigger and bigger role in that, because there simply are not enough humans to do what needs to be done. One last thing: our adversaries are unhappy with the current world order and want to change it, and in my world, which may not be all of your worlds, the school of thought is fight tonight. Be ready to fight tonight. So we're in a hurry, pulling together every capability we can. Eric talked a little about some of the things Booz Allen is investing in, and we're always looking for partners and colleagues to make it better. Now let's climb our way out of the hole and get a little brighter. Let's talk about this group. I'm thrilled with all the people here, willing and able to address and resolve the challenges we face. I'm so proud that we've had a Space Force since 2019; that capability was essential, and before it, it was very difficult for the Air Force to share those capabilities.
I'm not very political here, but I think its creation really focused our need for that kind of capability. Unfortunately, some of our adversaries were weaponizing space long before we recognized it, so we're still catching up. I'm also very proud of commercial space and how far it has come. As a young man growing up in this world, that didn't exist; it was government funded across the board. But think about all the capabilities across the board now, all the open-source and intelligence capabilities. The capacity we have to collect is phenomenal; there's actually more data than we know what to do with, and that's where AI comes in. So I'm very excited about what we do here. It's a coalition of the willing. There are probably 700 people on this call or in this room right now, a coalition of the willing to do what needs to be done. If you do a good job, the money will come, but mostly it's: do a good job, defend this nation, and try to put all the piece parts together. Eric mentioned a lot of capabilities we have. I won't get into all of them, but Brilliant Swarms is a clever approach Booz Allen is taking for missile defense: a proliferated constellation of CubeSats that are both sensors and effectors with lethality of their own. So that's my welcome, in its parts: a thank-you, a welcome, a little fear, a little redemption, with more redemption coming on these panels. And now I'd like to introduce our keynote speaker, who needs no introduction because he is the former president of the Lake Hiawatha Chess Club. Garrett Reisman, please come to the stage.

Avoiding a Space Odyssey: Ethically Applying AI to Critical Space Missions

Panelists shared insights on ethical frameworks and governance strategies to ensure the responsible application of AI technologies in space.

Moderator

Heather Pringle, Ph.D., Maj Gen, USAF (Ret.), and CEO, Space Foundation

Panelists

Dave Prakash, Principal, Chief Technology Office, Booz Allen

Seth Whitworth, Acting Deputy S6 (DCSO for Cyber and Data), USSF

Ramsay Brown, CEO, Mission Control AI

Video Transcript

I really appreciated hearing his thoughts on the freedom to experiment and the freedom to learn, and on so much that we could do. Your life's journey has been an inspiration to hear today, and we truly appreciate you coming out and kicking off our Space and AI Summit. Thinking back to my first introduction to NASA: I came from the national security segment and was brought in on a project to map Artemis. I said, what exactly does that mean? They said, we want you to look at the cyber vulnerabilities end to end, from pre-launch through launch. So a team and I got together and got started. I can remember, as we did that, we drew a lot of things on the board and thought we were done, and Karen Fields so kindly looked at me and said, "Eric, I know in the national security segment you send things up, but here at NASA we have to bring them back." It was a learning lesson for me that everything we did had to work in reverse, because there were human lives at stake, astronauts such as Mr. Reisman. That was a real learning experience for me. But now it looks like we have everyone settled for our panel, so we're ready to get into our first panel, which is focused on ethically applying AI to critical space missions. I'll turn it over to our moderator, Heather Pringle, the CEO of the Space Foundation and a retired general in the United States Air Force. First, I'd like to say, ma'am, thank you for your service. The stage is yours.

Thank you, and thanks to Garrett for that little blurb. It truly is an honor and a pleasure to be here, and I'm really grateful you could all join me today. Artificial intelligence comes with a whole series of opportunities as well as challenges. We're really excited about what it can do to accelerate us, but the challenges have been in the news and in the press, along with the most dire consequences, and that's really where ethics can be brought in to help govern this process. That's what we'll explore today. With me on stage I have three very distinguished individuals, and I'm going to introduce them briefly before we get into the conversation. Immediately to my left is Mr. Seth Whitworth, the acting deputy chief of space operations for cyber and data. He's at the headquarters just across the street, charged with equipping Space Force guardians with innovative technologies to digitally transform the Space Force and gain asymmetric advantage over our adversaries. He oversees the development of policy and strategy shaping digital infrastructure, machine learning and artificial intelligence, and digital workforce requirements. He also serves as a reservist in the Air Force, and prior to his time in the Pentagon he was part of a consulting company helping local and state governments with digital transformation. In the middle we have Dr. Dave Prakash, the director of AI governance at Booz Allen. He developed the processes, policies, and infrastructure to ensure Booz Allen develops and delivers AI safely and responsibly, for itself and others, and he leads efforts to help clients develop their own AI governance structures and develop and deploy AI responsibly.
Your background includes medicine, as a physician, and you also joined the Air Force as a pilot, serving for 12 years, so thank you as well for your service. You have a BA in chemistry from Johns Hopkins, an MD from Upstate Medical University in Syracuse, and a master's in business from Stanford University. So thank you, Dave, for being here. And at the end we have Mr. Ramsay Brown, CEO of Mission Control AI, a San Francisco-based artificial intelligence company building the synthetic workforce and warfighter. He leads a small, mostly human team in realizing and securing tomorrow's advanced agentic autonomy today. Mission Control serves the US, UK, and European commercial and defense markets, delivering synthetic workers: end-to-end autonomous digital twins for any knowledge-worker role. Ramsay is also a senior research associate at Cambridge University in the UK, where he focuses on cyborgization and posthuman economics, and he holds an MS in computational neuroscience from the University of Southern California. So welcome to the three of you. Thank you all for being here.

The first question goes to Ramsay, and it's really about why the topic of ethics in AI is important today. What makes this moment unique? Ethics in AI is not new; it's been around, and we've been discussing it for a while. Why is it unique to discuss today?

I think it's an interesting question, and it's an interesting point in time to be having this conversation, February 2025, compared to even three or four months ago, because we're sitting only a few blocks away from the Pentagon, which is going to be one of the few places not touched by the reversal of Executive Order 14110. That was the Biden administration's de facto AI ethics and AI safety order, which imposed a trustworthy-AI mandate across the departments and agencies on the CFO list, with the exception of the national security community. That has now been reversed. Only a few months ago, we could have sat on this stage with relative confidence and relative security about not only the importance of AI ethics as a concept and a practice, but about its status as a national-level priority, and that is not the case anymore. And yet, as each day passes, the capabilities of machine intelligence systems get deeper, more competent, broader, capable of doing new things, and further along in deployment, with the rubber actively hitting the road in real commercial and mission contexts. So this conversation is actually gaining importance even as it appears to be disappearing from our federal discourse. The national security community is one of the exceptions here, and gets to make up its own rules about what AI ethics and AI safety should be. Second, I think this is particularly interesting because, as we're coming to find, most of the AI ethics conversations that the United States and its allies have had have been largely whiteboard exercises. We ourselves are not actively engaged in first-party hot conflict in which we would have to assumption-test our decisions about where ethics and machine intelligence have to intersect in warfare. We haven't had that rubber hit the road yet, so to speak. And yet right now there are people making decisions today
about the code they're going to write to train a neural network that will deploy on a small Arduino chip, on a balsa-wood drone, that will fly over Russia or Ukraine as a semi-autonomous loitering munition that may make decisions ending in the destruction of human life. That is a hard conversation to have, and it is fundamentally a conversation that butts up against our assumptions about AI ethics, and we're watching it from the sidelines. And yet we spend so much time thinking about this topic. So I find it a really compelling thing to think about now, because it is still at the center of what our focus needs to be, and we're watching all of these things play out in real time, in real life, in ways that are no longer whiteboard conversations. We need to come to find what we really stand for and believe in when it comes to AI ethics, particularly for strategic engagement, before we find ourselves in positions where our tires finally do hit the road and we have to see whether our assumptions about AI ethics meet the light of day of hot conflict. We aren't there yet, and we should hope never to find ourselves there, but in the event that we do, we're going to want to know what we actually believe.

So what I hear you saying is that, on the one hand, the commercial market might be helpful in determining where some of those ethical lines are, playing it out with the end user, but for national security purposes we truly want to determine where the lines are before we get into something that is more of a hot conflict.

That would be great, and if we have to find those things out on a whiteboard, in panels, and in nice workshops, so be it. AI ethics remains everybody's favorite golf-clap TED Talk concept until we admit that there are real tradeoffs here, and people are going to have to become comfortable with those tradeoffs: do we make that decision or not, because it butts up against an ethical precept we hold about our virtues and values? We have to determine what is in line and what is out of line, and we haven't had that moment yet. We hope never to have that moment, but the more time we spend working on that reality, the better prepared we'll be to make good decisions that still adhere this technology to our virtues, should that time come.

Thanks, Ramsay. So let's talk a little about how you define ethics in AI, and who determines what is ethical and what is not. I'll start with you, Dave.

Right. There is no single definition. To talk about ethics is a philosophical debate about right and wrong, good and bad, what's fair and unfair, just and unjust, and that's very much context-driven. You can make the case that in China, for example, a very different set of experiences and a different environment will drive a different baseline for what's ethical. It varies by country, it varies by religion, and even in this country I would argue we're having a lot of debates about what's ethical and what's not. I would go so far as to say I don't think we could get everyone in this room to align on a single set of AI ethical principles, because we're all going to have our own opinions. So, starting with that framework,
there's a lot of gray area. When we apply it to AI, the way I think about it is: what are the things that are less debatable? Is an AI system accurate? Is it resilient? Is it reliable? Is it safeguarded against adversarial attack? Is there accountability and traceability? Those are the things I think about as some of the standards for ethical AI. But part of your question was who determines what's ethical AI. On the one hand, as Ramsay said, these are all great concepts in the void of peacetime. But I always look at it this way: every problem is fundamentally a people problem. So for ethical AI, let me add some other context. I've been part of a lot of conversations with regulators and politicians, especially on AI in healthcare, which is where I first cut my teeth in the AI space, and people said, wait, you're building robot doctors; they're going to kill people. There was this fear of AI, and I think this whole conversation about ethical AI is a response to the fear of what AI can do when it goes wrong. But I would argue it's not the AI, not the code, not the technology you need to be concerned about so much as the people building it, the incentives and motivations of the people designing it, because AI is just math, just numbers. It doesn't have an opinion on right or wrong. The real concern is who's building it. If you're looking to purchase or develop AI and you're looking for a partner, you might want to ask: what's your partner's history? Do they have a history of exploiting privacy for profit? Do they have a history of deploying AI known to cause harm to certain segments of the population? If the answer is yes, how confident can you be that the AI they develop or deploy will be ethical or match your particular principles and values? So we can have the conversation about ethics, but ultimately it's the motives and incentives of the people designing it that have the greatest impact.

Thank you, Dave. And Seth, any thoughts on how to define ethics in AI and who determines it? From a military standpoint, as part of the service, what if adversaries take a different approach to ethical AI?

Yeah, absolutely. This is for sure something we discuss a lot in the small building down the road, but it goes back to the question of how you define ethics, and for which component. From a military perspective, we're not starting from ground zero. We operate under the law of armed conflict. We operate under a national security strategy and a national defense strategy that advocate for a rules-based order. So how do we apply those principles to the use of the technology? AI, Gen AI, all of those components are just the combination of people, process, and technology, so we're not starting from ground zero when we evaluate what our ethical use of these tools looks like. We have a good, strong foundation from which to evaluate and apply them. The DoD has gone out and defined the responsible uses of AI, and a lot of it has already been touched on: we want to make sure it's traceable and accountable.
We need to ensure it's governable. Getting back to the earlier point, we want to make sure we're not releasing AI that our adversaries could potentially use against us. Is it cyber-secure? Is it meeting all its marks? And then we really have to get after the people. There's this scary thing in the world right now called AI, and maybe that's based on Hollywood and the movies, but if you just look at it as people, process, and technology, the question becomes how we build, train, and equip a workforce to leverage those tools and capabilities, from the lowest level of "I need to write a report" all the way up to the use of lethal force leveraging AI tools. It's about the people using it, the people building it, and the partners along the way who make that happen.

Well, I think the human aspect of this is an important one. If we're talking about humans in the loop, or humans interacting with this, what have you seen, Seth?

Yeah. The space domain in particular is one that very few of us experience in person, although you heard earlier from some who have. We're going to continue to experience it through data, perceived through sensors and other capabilities, and that massive amount of data is where we can really start to leverage AI to make sense of it. But again, we all go to this question of what's the worst-case scenario, what's the Skynet. I was curious what AI thought the most terrifying thing about humans would be, so I went to ChatGPT and asked, what are you afraid of when you think about humans? And the response it gave was exactly what I'm talking about. It said: I'm afraid of bad data. I'm afraid of not being able to explain a decision I made, and I'm afraid of not being able to understand that process. So if we boil this all down to the ingestion and evaluation of data and start from there, we can enable guardians to go out and train the models, and on the tail end we can train them on what to look out for when a model goes awry, because I have yet to see a perfect piece of technology that doesn't have an error of some sort. So how do we change our focus and our TTPs to make sure we're looking out for those things ahead of time? How are we evaluating models as they're being built, so we know where to evaluate further down the line?
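What follows is a minimal sketch, in Python, of the "start from the data" idea described above: records are screened before they ever reach a model, and anything suspect is quarantined with a human-readable reason. The field names, staleness window, and value range are illustrative assumptions, not any real Space Force schema.

```python
from datetime import datetime, timedelta, timezone

# Illustrative thresholds and field names (assumptions, not a real schema).
REQUIRED_FIELDS = {"timestamp", "sensor_id", "value"}
MAX_STALENESS = timedelta(hours=1)
VALUE_RANGE = (-1e6, 1e6)


def validate_record(record: dict) -> list[str]:
    """Return the reasons a record should be quarantined (empty list = clean)."""
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        return [f"missing fields: {sorted(missing)}"]
    issues = []
    age = datetime.now(timezone.utc) - record["timestamp"]
    if age > MAX_STALENESS:
        issues.append(f"stale by {age}")
    low, high = VALUE_RANGE
    if not low <= record["value"] <= high:
        issues.append(f"value {record['value']} outside [{low}, {high}]")
    return issues


def gate(batch: list[dict]) -> tuple[list[dict], list[tuple[dict, list[str]]]]:
    """Split a batch into clean records and (record, reasons) quarantine pairs."""
    clean, quarantined = [], []
    for record in batch:
        issues = validate_record(record)
        if issues:
            quarantined.append((record, issues))
        else:
            clean.append(record)
    return clean, quarantined
```

The design choice worth noting is that the gate never silently drops data: every rejected record carries its reasons, which is exactly the traceability and explainability the panel keeps returning to.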
Dave, you have a lot of experience with human-computer interaction and thoughts on this. How has the role of humans with the technology been evolving?

Sure. What we're getting at is nothing new or specific to AI. We've been talking about human-systems integration and human factors for decades across many other industries, and we really need to take the lessons learned from how we build a car, or how we create systems for people to monitor hydroelectric dams. One of the key areas here is automation; I think a lot of what AI is going to do is automation. And one day, when it's fully trustable to completely automate some processes, I think we can step back. But for the near future there's going to be a period of human-machine teaming, and the question is how we design systems so they're best adapted to the unique characteristics of humans. That's human factors. With automation I always think about flying and autopilot systems. You've all seen the news about complex aircraft that are now on autopilot 95 percent of the time, and when something goes wrong, pilots sometimes struggle to intervene when intervention is necessary. There have been studies in the human-factors world for years showing that as the frequency with which someone interacts with a system goes down, the less likely that person is to intervene when intervention is necessary. This is the train engineer who sees a fully automated train suddenly malfunction and thinks, wait a minute, the machine's going to fix itself, and waits until the train goes off the tracks and something catastrophic happens. I like to say I flew a Cold War bomber with an autopilot system that tried to kill me at least once a week, so I never trusted the system and I never got complacent. But I think all of you have experienced this. How many times have you navigated somewhere using your GPS, somewhere you've been ten times, and each time you get in the car you think, wait a minute, I need to follow the GPS? That's automation bias; that's the complacency that kicks in when you're no longer paying attention the way you used to. Many of you remember writing directions on the back of an envelope and keeping a map in your glove box; by the time you finally made it there, the route was so ingrained in your brain that you didn't need them. I think this is what we're going to see with AI: as long as automation is still dependent on humans, we really need to understand the characteristics of humans. Another example is self-driving cars. There's a reason we don't do Level 3 automation: we can't trust people to put their phones down and intervene when the computer-vision model in a car can't tell the difference between sunlight glare and a white tractor-trailer, and someone dies. Someone has. We're not paying enough attention to the human element in the teaming today. And when we think about space applications, long-endurance spaceflight, where you're isolated, potentially for a couple of years at a time, and you're using LLMs disconnected from the internet, we really need to think about where it can go awry and what kind of testing is involved so that astronauts can have confidence in the system. Astronauts in isolation for two years are a different subject from an astronaut at ground speed zero in a controlled environment. We really need to think about the human element and design AI to address those human factors.

So we've started with a near-term, today-focused discussion, but Ramsay, you're thinking well past that. Where do you see the future of human-AI interaction, and what is the importance of ethics in all of that?

Sure. And I just want to submit to Dave my Waymo account as counter-evidence. I live in San Francisco.
I take a Level 3.5, probably fully autonomous, Jaguar about four times a week. I hail it on my phone, it politely greets me by name when I open the door because the Bluetooth got close enough, and then, like magic, the thing takes off through the suburban war zone that is downtown San Francisco and fails to strike any of the pedestrians, urban ne'er-do-wells, potholes, Muni buses, or random ad hoc things in its environment, while I sit there fumbling around on Twitter on my phone. It is wild how fast our baselines drift around trust in autonomous systems. The things that seem impossibly complicated for an AI system to solve, it now appears we have the data to suggest otherwise. We've been doing this as a field for the better part of half a century, and in modern terms, say from the 2012 cloud revolution to today, we've got a good decade and a half of cloud progress under our belt, which allowed for greater data density and on-demand compute. That's a lot of what kicked off the current trend and revolution, and why we're having a panel like this now as opposed to five years from now or five years ago; we're finally at this part of the maturity curve. What we've found time and time again is that the things we said were categorically off the table for AI, especially on autonomy, are entirely downstream of how willing we are to be patient or to spend the money and the cost of electricity. Consider the things we said GPT-2 couldn't do or shouldn't do, and I mean those as distinct. Couldn't do, in the sense of "man, this chatbot is trash, it barely produces legible English." And then the risk community looking at this trash chatbot and saying, "this thing is actually so good we should not publicly release its weights; that would be a national security concern." That was four or five iterations of language-model capability ago. We looked at an absolutely garbage chatbot and confidently said it would probably take decades to get to something that even remotely generated passable human text, and it turned out decades was really months. And then as we got from GPT-3 to ChatGPT to 4 to 4o to o1 and o3, we're now finding we are probably in a good, hard, fast takeoff toward really capable generalized machine intelligence. Everything I just said would have sounded asinine two years ago. You would have booed me off this stage and sent me back to the loony bin that is California if I'd said it in 2023. And yet we find that with training runs, with our willingness to sink cost into the electricity and the GPU watt-hours that go into training these systems, and with the quality of data we put against them, things that previously seemed science fiction become science fact and then become logistics. I think about the old four-star quote: amateurs talk strategy; experts talk logistics. So what's the interesting question around logistics for the future here? To me, the interesting future of how humans and sufficiently advanced autonomous machines interact, downstream and apropos of nothing of safety risk,
of employment risk, of some of the long tail we consider the really hard AI-safety problems, containment, control, alignment, corrigibility, the "holy cow, we invented something as smart as us, if not smarter, and it runs at industrial scale." How many of our critical systems can we still count on, ranging from culture and psychological-warfare containment through to who runs the critical infrastructure, the power grid and the things that fall under DHS that keep the country safe? Apropos of none of those things: what do the good outcomes look like, and how do humans and machines interact? The interesting question to me becomes one of taste. There are some things at which Dave Prakash, no matter what, is going to be better than the most sophisticated AI system, because they are downstream of the unique collected experience of his life, things that are very, very hard to trap in tokens and latent space, that you can't get into the training data properly, for which your unique lived experience gives you a strategic edge on particular taste-driven decisions. The way we end up operating with these machines in that not-very-far near future is that so many of the daily things that today seem like "we could barely trust a machine to do that" become as rudimentary as the machines that route the packets carrying this event to the people listening on Zoom right now; we treat them as low-level abstraction layers. As more and more of our lives go that way, the last things that you and I, or Seth, or Heather are doing are the things that involve taste, the human discernment and judgment that no one but Dave, no one but Seth, no one but Heather could do. That to me is the interesting question about the near and even farther-afield future: as more and more just becomes logistics with these things, where do we still have our edge? And that edge lies in the things that no one but Heather could accomplish.

I think that's a really great jumping-off point to discuss applications. How do you see that being applied in the Space Force? Where is AI most effective? Where do you want the humans, and where are you going to want more agentic, autonomous activity, and how are you doing it today? And, if you would: there was a memo that once put a pause on artificial intelligence in the Space Force, and then you rescinded it so that AI could continue to be developed. Can you roll that all into an understanding for us?

Well, "pause"? You didn't hear about a pause, did you? For sure, I'd love to touch on it, but I think we need to go back for a quick second to the risk conversation, because it's a really unique one when we start talking about space. I think back to Lieutenant Whitworth, who operated a satellite. It was me to one satellite, and the partner next to me sat at one satellite, and we operated those satellites individually. That made sense at the time, because the DoD had the largest constellation on orbit and we were doing just fine. That changed very quickly, exponentially, as more commercial providers started launching more equipment and the DoD itself pivoted to more resilient and proliferated architectures. No longer can I have one guardian flying one satellite; there are just going to be too many satellites and not enough guardians.
And so we have to evaluate the risk of where we can pull guardians out, and where that trust level is. Because unlike the other domains, if there's a conflict on land, when it ends you can clean up the land; when there's a conflict at sea, you can clean up the sea. Space is not one we can actively go up and clean out right now. So what happens if there is kinetic conflict in space and we can no longer use or leverage that domain? We have to prevent that first-mover, first-strike capability. So where can we start to leverage some of these automations, and what trust level is required so that I can have one guardian reviewing 10, 20, a thousand satellites? I don't know what that number is; those are the conversations we're having. And there is a trust question about the domain becoming unavailable because a mistake by the AI caused a chain reaction. So we're exploring all of those capabilities, and we're looking heavily to industry. Thankfully, we can look to constellations already on orbit that are leveraging AI capabilities for this functionality now. If you go out and talk to guardians, I think they would say they want a chat-like capability. We've been doing AI and ML in space under different contexts and names for a while, and space domain awareness is probably our number one priority right now. As we bring in more and more data from more and more providers, how can we churn through it, analyze it, and make useful decisions on that space? So we're targeting space domain awareness, and we're targeting some of the more autonomous pieces where we can pull people out. And then guardians constantly ask me how they can have more time to focus on operations. I want to use that unique experience a guardian gained doing whatever operation they were doing, experience I can't train into a model right now, to war-game and practice and exercise and test. So can I simplify their OPR and EPR writing with a chat-like capability? Can I simplify their summarization of reports? Lieutenant Whitworth used to look through a green book to see what happened the day before, reading line by line through each little contact. Can AI sum that up in one sentence, "yesterday there were no anomalies, you're free to proceed," save me 30 minutes, and let me go focus on the mission? Yeah, absolutely, and those are the capabilities we're looking to bring. We want to enable our program offices to work with industry to bring in amazing COTS and GOTS solutions. We want to ensure we're doing that through a cyber-secure, thoroughly reviewed, and trusted process, but we also want to enable the functionality to target the more, I don't want to say mundane, because those things are very important, but the places where we can free up that time.

I appreciate that, and guardians are not quiet when they want a capability that helps them go faster, so I'm glad you're listening to them.
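As a hedged illustration of the "green book" triage described above, the sketch below shows the rule-based skeleton of that one-sentence handover summary. In practice a language model might generate the narrative text; the satellite names, fields, and anomaly flag here are hypothetical.

```python
def summarize_shift(contacts: list[dict]) -> str:
    """Reduce a day of satellite contact-log entries to a one-line handover."""
    anomalies = [c for c in contacts if c.get("anomaly")]
    if not anomalies:
        return f"{len(contacts)} contacts reviewed; no anomalies. Free to proceed."
    flagged = "; ".join(f"{c['satellite']}: {c['notes']}" for c in anomalies)
    return (
        f"{len(contacts)} contacts reviewed; "
        f"{len(anomalies)} anomalies flagged: {flagged}"
    )


# Example: the kind of entries an operator once read line by line.
log = [
    {"satellite": "SAT-1", "anomaly": False, "notes": "nominal pass"},
    {"satellite": "SAT-2", "anomaly": True, "notes": "late acquisition of signal"},
]
print(summarize_shift(log))
```

The point of the sketch is the shape of the labor savings: the operator reads one sentence instead of the whole log, and anything flagged still carries enough detail to chase down.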
I realize I didn't answer your pause question, which was a good question. I don't want to dwell on the past, which is probably why I didn't answer it, but yes, it happened. Leadership at the time decided to take a strategic pause. It was not a ban; it was a pause. This was before RAG-like capabilities existed, there were a whole lot of unknowns, and we were championing ourselves as the innovation service, and there was fear that data would leak, and we didn't know those pieces. So yes, we took a strategic pause. We have done so much since that first memo went out. We held a generative AI challenge where we said, let's get after the training aspect, let's get after the industry aspect, how can we bring people in? We've targeted more use-case scenarios: what do you want to use Gen AI or AI capabilities for, and how do we get after those pieces? So while we took a pause, we learned a whole lot along the way, and we were able to work with the Department of the Air Force and the Department of Defense to re-establish some of those guidelines and ensure we were moving forward securely, in a way that didn't hamper innovation.

Well, I think that's why forums like this are so important, because you mentioned fear, and there's a lot of information, whiteboarding, and experimentation that can increase human understanding and adoption of a technology so it can be employed in a way that is relevant. In some sense, what you're saying is that the only thing stopping us from employing it more rapidly is ourselves and our own understanding. So, where is AI failure not an option in space? Ramsay or Dave, do you have thoughts on this?

You want to go first? Sure. The moniker of my organization, Mission Control AI, comes from the old belief that failure is not an option. When we look at what AI failure is made of, in short-term, medium-term, long-term, and long-tail contexts, even outside of space, in any domain, these become clear, and they run from ethics, compliance, responsibility, and governance, through "we've got to have a talk about the economy at some point," to "holy cow, I don't want to be turned into a paper clip," which is a bad inside joke from the AI field. It is very hard to encode quiet, implicit human virtues and values into machines that have to always do the right thing, even when the right thing is nebulous, we don't know what we're asking for, and no one really understands what good and evil are anyway. Those are all hard things to figure out on land, on Earth. When I think about them in the domain of space, I think AI actually has a lot to learn from something very unique that space has had to figure out, particularly around the advanced edge of AI that we work on: autonomy. If for whatever reason astronauts are incapable of direct communication with Earth, with ground control or mission control, about what should be going on at any given point in time or what the correct course of action would be, there have been protocols and ways of thinking developed around accountability and decision-making in more autonomous and disconnected situations. And to the point Dave made earlier about really long-haul situations, where we might go quite far from Earth, or far from communication, or into incredibly communication-degraded environments, someone needs to be able to make the decision that is the good, accountable, just, right, and effective decision, in a strategic context or even a peaceful one. That's a real problem space has to contend with as a domain.
We're now finding we have to do that in AI too, as we continue to build machines that are increasingly autonomous and we give them greater and greater time horizons of autonomy. It is one thing for one of our synthetic workers to wake up, look at my email inbox, and make some triage decisions automatically about what I should be alerted to; that whole operation takes a few moments, and I get a nice ping or text message: hey, you're actually all good this morning, keep going. What happens when those operations last a few minutes, where I say, look, I want you to prep a little background for the panel talk I'm going to give on Wednesday, can you help me prepare for this, and a synthetic worker goes out and performs deep research to brief me on what I need to know? What happens when those start expanding out to hours and days and weeks of autonomous operation with no human supervising in the loop? These are problems not dissimilar to the kinds of things space has had to figure out, and I think there's going to be a lot of lateral knowledge transfer and overlap from the human-factors studies that happen around space to the human-factors studies that need to happen around AI. And what I can guarantee you is these problems will inevitably converge, because space is very cold and very hot, there's not a lot of atmosphere whatsoever, there's an incredibly high amount of radiation at certain points in time and none at others, and it is generally a terribly inhospitable place for fleshy things to be, which is why we spend so much time building really wonderful apparatuses that help us survive up there, and eventually on other planets and celestial objects. All of this is to say: if you really made me place a long bet about the future of space and AI, it's that space becomes a place mostly inhabited by AI, because it is so inhospitable to life, whether that's entirely digital systems or increasingly embodied ones. We already find there is at least one planet entirely inhabited by robots: Mars is inhabited entirely by robots, and we already send autonomous things up there. The interesting questions come as the work we're doing today, on determining degrees of freedom for behaviors and how systems achieve their goals using things downstream of language models, starts trickling into what ends up in space. We will see that some of the problems space has already figured out about making good decisions in degraded communication environments actually benefit AI when it has to go to space. And Dave, to your point, if you've got a language model up there that doesn't have the ability to pull resources or get information, it has to be relatively clever, and the ways we think about resilience, trust, and accountability should learn from what space had to figure out even before AI entered the formula.
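One way to make those "time horizons of autonomy" concrete is as an escalating oversight policy: the longer-running and less reversible a task, the more human supervision it earns. Below is a minimal sketch with entirely illustrative thresholds, not anyone's doctrine.

```python
from enum import Enum


class Oversight(Enum):
    AUTONOMOUS = "run without review"
    NOTIFY = "run, then report to a human"
    CHECKPOINT = "pause for human approval at milestones"


def oversight_for(expected_minutes: float, reversible: bool) -> Oversight:
    """Longer, less reversible tasks earn tighter human supervision."""
    if expected_minutes < 1 and reversible:
        return Oversight.AUTONOMOUS       # e.g., triaging an inbox
    if expected_minutes < 60 and reversible:
        return Oversight.NOTIFY           # e.g., preparing a research brief
    return Oversight.CHECKPOINT           # hours-to-weeks, or irreversible


print(oversight_for(0.5, reversible=True))    # Oversight.AUTONOMOUS
print(oversight_for(45, reversible=True))     # Oversight.NOTIFY
print(oversight_for(2880, reversible=False))  # Oversight.CHECKPOINT
```

The thresholds are placeholders; the design point is that oversight is a function of task duration and reversibility rather than a blanket rule.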
I'll just make one quick comment on that. The question was what's critical in AI, and in my research, doing AI for supply chain, or even trying to think about the minimum mission-required equipment in a B-52, what I've learned is that what's critical is whatever you don't have when you need it most. And I think that concept couldn't be more true than in pretty much any environment related to space efforts.

Absolutely. Well, we have time for one more question, and I'd love to ask each and every one of you: what advice, or what change, would you like to see from government leaders, congressional policymakers, or others to further advance artificial intelligence ethics and space applications? Seth?

Absolutely. Going back to people, process, and technology, and really focusing on the people and the process, it would be to enable ways to get after the training and knowledge piece, but for me specifically, to get after unlocking industry. How do we enable industry to go solve these problems? A lot of our adversaries do their work in a vacuum, without the benefit of the competition in these systems that really spurs the innovation we're going to need moving forward. So focus on the training, demystify AI, and enable the technology to be developed and leveraged.

Thanks. I would say, you know, I think we're all here because we want to accelerate AI development and deployment in this domain. And sometimes we think, hey, there are too many restrictions, too many regulations, and that's just slowing us down. I would argue there's a counterpoint: if we can set expectations and standards for what's required from developers, you'll actually accelerate AI deployment in the long run. What I mean is, right now the theme is "move quickly with AI," and that's great if you're talking about the next YouTube video that needs to show up on your phone. But what we're talking about here is something high impact with no room for failure, and in those environments it's not just about being first; it's about being first and being right, the first time, each and every time. So my advocacy is for AI governance not as a source of friction or tedium, but as a way to build confidence. It's not about moving fast and hoping nothing goes wrong. It's not about being so paralyzed with fear and trepidation of all the things that can go wrong with AI that you water down the capabilities, so that guardians are burdened with using the AI, then doing the manual process to double-check it, making no labor savings at all. It's the third option, where AI governance is a seat belt, not a speed bump, to accelerate delivery of AI for mission-critical applications.

Can I say one quick thing? Because you hit on my favorite topic. I work in policy, guidance, and oversight, so sorry, I don't have any money, but what I do have is this continued idea that governance should come in the form of enablement, not control. Where can we implement governance and policy that enables people, within guardrails and guidelines, to get after these ideas? As soon as governance becomes about controlling and stopping, the mission fails. We need to get after the enablement piece.

How would you do that? Very carefully. No, it is a very nuanced answer, and to a degree it depends. I strongly believe we need to bring more people to the table in all of these conversations.
It's not just about what I believe, or what my leaders or senior leaders believe; it's about pulling in the guardian who's doing the actual work, pulling in the industry that's building the tools, pulling in the organizations trying to push this forward. We all have a unique talent we can bring, so we need to bring it to the fight and come unified with one voice. And that gets back to our earlier point about why this conversation is important right now. If the burden is on industry to figure out where those ethical lines are, and you want to do it well before the rubber hits the road or the conflict gets hot, then we need to make sure we're defining our requirements appropriately and setting industry up for success. We have just as much of an onus to define what we're looking for to get after those pieces.

Is that hard to do in AI? It is. I think it's hard to do in a lot of these emerging technologies, but it can and needs to be done.

OK, well, more of that, and we wish you a lot of luck there, Seth. So, Ramsay, any thoughts or advice for congressional policymakers, the government, industry?

For everyone, really. I think we're lucky that at this point in time we have a little bit of institutional knowledge built up about progress studies, the study of human technological progress. There are people who focus only on this: how do we understand the last thousand years of what drove human civilization forward? What are the mechanisms? What are the trends? One of the funny things that comes out of progress studies is that our notion of technology adoption is totally warped by hyperscalers, compared with earlier general-purpose technologies like the steam engine. The first commercially available steam engine in the United Kingdom sold something like 20 units in roughly the first 50 years of its existence. In the first half-century of there being a steam engine available for purchase on Earth, they could not sell two dozen of them. No one knew what to do with a steam engine. From the perspective of today's highly technological society that seems nuts, "of course you do this and this and this," but people just stared at it, saying, well, I don't know, do you know what it should do? Fast forward to today, and you hear the proponents of AI technologies note that ChatGPT, and now I guess DeepSeek, have been the most rapidly adopted technologies ever, going from zero to many millions of people who've tried them, with the velocity picking up each time. And yet you chat with people, and their answer is, oh yeah, I played with it once or twice. I touched ChatGPT once. I tried it; it was OK. I asked it some impossibly hard problem a friend goaded me into trying, and it didn't get it right, so obviously this field is a farce, and I can go back to situation normal. Go play. That is the imperative, and that would be my recommendation for everyone: whether you are a private citizen, the private sector, the federal sector, defense, or space, go get your hands on the technologies and start fiddling with them.
Even if you do not think of yourself as a computers person, or a data person, or an innovator kind of person, if you don't have that self-efficacy story of "I'm the type of person who goes and fiddles with new tech for the heck of it," you don't need to. There's no programming involved in this one. It will gladly take your Visa or American Express and charge you $20 for the good version that has an IQ of 153. Go start playing, and as you play, imagine what it means to have a general-purpose mind available to you at industrial scale, and how that would change how you do things. That hands-on play, the "look, I just fiddled with it, and after 45 minutes I got the aha moment of, oh wait, what if I showed it this? I wonder what it would do if I tried that," that's what's needed, at national scale and within our different branches: to look at this, find pathways, to your point about enablement, and say to guardians, just get your hands on the stuff, and here are the pathways to getting your hands on the stuff. Because no one is going to tell you from the top what you should and shouldn't be doing with this to be efficacious at it. We'll tell you what you shouldn't do with it to not break the rules, but we won't tell you what to do with it to win. You have to figure that out, and the only way you figure it out is play. You've just got to go play.

I think that's great advice, and if you can get that into the hands of guardians, there's no stopping what you can accomplish. So, audience, if you would, please thank the panelists for their time and expertise today. Thank you very much. That was wonderful.

Thank you, Heather, Ramsay, and Dave, for such really interesting perspectives on such an intrinsically important topic. One of the things I find intriguing about the discussions specific to space and AI is the terms that are starting to emerge. From last year's discussion to this year's, we're hearing conversations about guardians and who they are, about astronauts, and even the machines are beginning to take on a character of their own in this conversation. As this language continues to evolve and AI and ethics keep being discussed, I'm keeping my focus on these actors that are emerging, especially in the space industry. It's truly fascinating how the conversation keeps evolving, and I love how we're focusing on how AI can be an enabler for those actors to get more fascinating things done. So, great job with that. At this point we're going to take a 30-minute break. As I previously mentioned, the bathrooms are down the hall to the right, and our demos are set up right in the entryway. Once again, you can learn from our technologists about integrated intelligent space domain awareness and space traffic management, virtual space ground systems, Gen AI at the edge, space-based domain awareness processing, and our brilliant new swarms capability. I'll see you back here in 30 minutes.

Live Long and Analyze: AI Breakthroughs for Intelligent Space Domain Awareness, Advanced Mission Management, and Space Control

Panelists explored the latest AI innovations driving intelligent mission management and enhanced space domain awareness.

Moderator

Jim Shell, Owner, Novarum Tech, LLC

Panelists

Pat Biltgen, Vice President, Chief Technology Office, ÎÞÓÇ´«Ã½

Nate Hamet, Cofounder and CEO, Quindar

Brien Flewelling, Director of Strategic Program Development, ExoAnalytic Solutions

Click Expand + to view the video transcript

All right, we are ready to kick off our second panel. Welcome back, everyone. Before we get started, I would like to thank our host, the Air & Space Forces Association, for graciously allowing us to have our event here today. I hope you enjoyed the break, but I wanted to make sure we paid homage to our host. Now we're going to get ready for our second panel: Live Long and Analyze, AI breakthroughs for intelligent space domain awareness, advanced mission management, and space control. Our moderator is Mr. Jim Shell, who owns Novarum Tech and is an expert in space domain awareness, space situational awareness, and orbital debris. Jim, the stage is yours. All right, thank you. I guess the mic is good; I got a thumbs up on the comms. OK, what ÎÞÓÇ´«Ã½ doesn't know is that I am a bit of an AI skeptic. Yes, I know a lot about space domain awareness, but the overlay of that data-rich environment and how AI applies to it has me scratching my head a bit, so we're going to explore that today, and I'm honored to have these panelists. Dr. Pat Biltgen, and we'll go from this end coming down, is vice president of space mission engineering at ÎÞÓÇ´«Ã½. He has a background in aerospace engineering, complex systems design, activity-based intelligence, and AI, and last year he published his second book, AI for Defense and Intelligence. Nate Hamet is the cofounder and CEO of Quindar, where they are revolutionizing satellite operations with intelligent, automated software designed to unify hybrid and proliferated fleets. Prior to that he was the lead software engineer for OneWeb C2, where he helped architect the ground system that controls that mega constellation today. He also worked at Lockheed Martin as a certified test conductor in assembly, test, and launch operations for the MUOS constellation. He holds a master's and a bachelor's in space engineering from the University of Michigan. Thank you, Nate. And last but not least, Brien Flewelling is the director of strategic program development at ExoAnalytic Solutions. ExoAnalytic is a private US company that tracks the position and behavior of satellites using the oldest and largest commercial network of privately funded and maintained optical telescopes, providing real-time space situational awareness data products and services to government and commercial customers. OK, gentlemen, help me with my skepticism. Let's start this off. SDA lends itself to this very well, right? A very data-rich environment. But here's my question: where does AI start and stop? Where does just employing good physics do the job, versus machine learning, versus AI, versus automation? Could you help me understand the lines between these different areas? I'll open it up; who wants to go? Well, Jim, I'll start. First of all, I want to thank you and the other panelists, Brien and Nate, for being here today. You made a special trip to be part of this event with us, so thank you for doing that, and for all you do for the community, especially educating people about space domain awareness. I think a lot of people are really educated, and horrified, by the kinds of things you show us happening in space every day, so thank you for that thought leadership. Your question is a good one, and a lot of people are skeptical about AI.
In the last panel, they talked about how ChatGPT was the fastest-growing app ever, and I think that's both good and bad. I think ChatGPT really opened up a big domain for all of us. It was the thing that motivated me to go write the book: hey, let's catch this wave. But I think it was also put out in the wild way too soon, because it caused a lot of skepticism. It makes stuff up, it isn't always right, it violates its guardrails, and people are misusing it in crazy ways. And for people like you who are steeped in the physics, you'd say: but it can't do physics, or it can't do physics as well as we can. If we're trying to do a correlation function, or a pattern of life, or looking for anomalies, there are ways of solving those problems with physics. And Jim, I'm not going to disagree that these can be solved with physics. But there was a comment in the last panel that in every domain where we've said "it can't do that," the decades we predicted turned out to be more like two months. So I too am a skeptic that it will do all those things, and we do have clients who say: this is how you solve the problem, we know the physics, we know the ideal rocket equation, we know how orbits work. But in almost every domain, medicine, self-driving cars, and so on, when you put enough data into the system, there's an emergent behavior we don't quite understand. So I think there's a possibility this domain could be enhanced by AI. I think the promise is that generative AI technologies grounded in physics can help solve this one-guardian-one-satellite problem, and they can help solve the problem of "I have terabytes of data I can't even load onto a computer." So I know you're skeptical. I'm skeptical sometimes too, and I work on it every day. But I think we're right at the very beginning of something, maybe at that place where we've only sold the first dozen steam engines. That's great. Nate, Brien, please. We need to start with: what is the problem we have today, and does AI, does machine learning, help solve it? AI is really a goal, let's make machines intelligent, whereas machine learning is a tool: take a bunch of data and train on it to get the results we're looking for. Then, in the space industry and the SDA industry, where can we use intelligence? For space traffic control of assets in space. For analyzing nefarious objects: why is that object moving so much, and what is it actually doing versus what was put out in public? For predictive solutions about what's going on aboard a satellite: if there's degradation in a subsystem or a component, how can we use machine learning on the history of that satellite, or a sister satellite, to solve the problems humans are there for? Humans sit in front of a console, look at limits, the Christmas tree of green, red, and yellow, and act when something pops up. But in the age of proliferation, as we've said, one guardian to many satellites just doesn't work. So what are the day-to-day tasks they're actually doing?
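To ground Nate's telemetry example, here is a minimal sketch of limit-free anomaly screening: learn each channel's normal behavior from a satellite's own history (or a sister satellite's) and flag departures, instead of relying on fixed red/yellow/green limits. The channel count, threshold, and random stand-in data are all illustrative assumptions, not Quindar's implementation.

```python
import numpy as np

def fit_baseline(history: np.ndarray):
    """Learn per-channel mean/std from telemetry history instead of fixed limits."""
    return history.mean(axis=0), history.std(axis=0) + 1e-9

def anomaly_scores(frame: np.ndarray, mean: np.ndarray, std: np.ndarray) -> np.ndarray:
    """Z-score each channel of a new telemetry frame against the learned baseline."""
    return np.abs(frame - mean) / std

# Illustrative use: 10,000 historical frames across 24 telemetry channels.
history = np.random.normal(size=(10_000, 24))   # stand-in for real telemetry
mean, std = fit_baseline(history)
frame = history[-1]
flagged = np.where(anomaly_scores(frame, mean, std) > 4.0)[0]
print("channels to review:", flagged)           # only these reach a human
```

The point of the shape, not the statistics: the operator sees a short list of channels worth reviewing rather than a wall of limit lights.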
And if this were 1958 and we were recreating the space industry and the tools we use, we would 100% use modern-day technology. But what we need to do with modern technology is solve the actual problems: as we proliferate, how many people can we really assign to dozens or hundreds of satellites? Or, as satellites send down less information about what's wrong, how do we move more of the anomaly prediction on board, at the edge, onto the spacecraft? There are a lot of tools in AI and ML, but where we always need to start is: what is the problem we're actually trying to solve, and is this solving that problem? OK, that's great. I want to come at this from more of a human analogy. Most of you probably have someone in your life who does some DIY work at home, and as hard as they work, they're just not that skilled with their tools. You walk into their house and think: I'm glad you spent all that money on that, but man, your results vary. I'm not going to see this on House Hunters anytime soon. We're still looking for the craftsmen of AI. There are salesmen out there who will sell you something whether they understand it completely or not, so you're going to get hallucinations pitched as the best thing since sliced bread. But you asked how this applies to space domain awareness, and at ExoAnalytic that's what we do. We consider ourselves craftsmen for how to organize data for applications like AI, or in the future autonomy, or to inform the guardian who needs to make that split-second decision to support their mission. From a data standpoint, if you want to be a craftsman with this, you need to understand which tool to use when, exactly how much pressure to apply, and in what conditions to use it. So you start with, in our case, optical telescopes. We point them at the sky where we believe spacecraft are, or are supposed to be. We take new measurements in the form of imagery and reduce those to detections. That takes an algorithm, and every time we use an algorithm, we need to account for that. Perhaps you could train an AI on the results of that algorithm: I had an image, I got these answers, I'd like to recreate that process. I don't understand the process perfectly, but my AI is an oversized computational machine, so I could use this tool for that. The cost is that I might need a nuclear power plant's worth of energy to train that process to get the same answer I got out of a very efficient algorithm I've had for two decades. So is that the place to apply AI? Maybe, if you don't have a solution, but the cost might be that you're spending more power than you need on that part of the problem. Scalpel or sledgehammer: you need to figure that out at each step of the chain. Now the detections go into something called orbit determination. As I watch an object multiple times, I'd like to be able to describe the motion of that spacecraft or piece of space debris. We do that through the process of orbit determination, and the first question is: from the new data, is the orbit I'm getting the same as the orbit I had? And if it's not, is that because the object may have maneuvered, or is there some other explanation for that data?
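Brien's first orbit-determination question (is the orbit I'm getting the same as the orbit I had?) reduces to a residual test: propagate the prior solution, compare it to where new tracking puts the object, and flag the inconsistency. A toy sketch, assuming a simple position-difference gate rather than the full covariance-based test an operational system would use:

```python
import numpy as np

def maneuver_flag(predicted_pos_km, observed_pos_km, gate_km=5.0):
    """Compare where the old orbit solution says the object should be against
    where new data puts it. A residual well beyond the expected uncertainty
    suggests a maneuver, or a sensor/processing issue worth an analyst's look."""
    residual = np.linalg.norm(
        np.asarray(observed_pos_km) - np.asarray(predicted_pos_km))
    return residual > gate_km, residual

# Illustrative: prediction from the catalog orbit vs. a fresh observation.
flag, res = maneuver_flag([7000.0, 0.0, 0.0], [7004.2, 1.1, -0.3])
print(f"residual = {res:.2f} km, possible maneuver: {flag}")
```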
As my analyst figures that out, he may write notes, called annotations, saying yes, there in fact was a maneuver, or there was this change in stability, or whatever it is I'm monitoring. As I accumulate the detections, the description of that space object's motion and behavior, and the analyst's notes, I now have what is called an expert-labeled dataset. And if I've organized it to the point where a machine can interpret it at speed and scale, now I can empower you to make the decisions you need to across a fleet of spacecraft, where before we were using a slide rule and a pencil and could only do it one object at a time. There's an analogy I love from the movie A River Runs Through It. The child brings an essay to his father, who marks it up and hands it back, and all they want is to go fishing. So he writes it again, and his father marks it up a couple more times and says: it's great, again, half as long. AI is different. You've been doing your job, and it's about to be replaced by automation. Why? Because the thing you're being asked to do is needed a trillion times over. We want to fight the whole space war in the next 10 minutes, before we're under pressure to make the next decision. The kinds of cognitive load we want our computation to carry are that much more significant on these shorter timescales. That's now the bar. And if, after 30 years of working this problem, we're still trying to achieve "can I do initial orbit determination followed by some statistics," guys, we're out of a job. Big tech or somebody is going to put you out of it; they're going to figure it out with some other math, apply a bigger computer, and move on, because industry demands that we move at that speed and scale. So there are places where it's going to be applied; I just hope the folks employed to do so are the craftsmen we need to take it to the next level. OK, I'm still a skeptic, but maybe coming around a little bit. This next question is again an open one. When we look at the commercial market for AI-enabled information products, it requires that a customer really know what they're after. I've seen a tension, in particular with the US government as a customer, between selling raw data products and selling services that add information content to raw data. AI is absolutely adding information content. How is the value of that appreciated? How does it need to be understood by prospective customers, and how is it marketable? I think that boils down to trust. We've sold data to various customers, and even the word "data" is loaded. Do I want obs data? Do I want state vector data? Do I want data that describes the history? Do I want just your scared-straight briefing from the Space Power conference, where everybody looked and said: oh my God, what are China and Russia doing this week? It depends on the question that needs answering. And then I think there is a lack of appreciation for the fact that data is the integral of the infrastructure, from sensing to networking to power, needed to generate it, plus the analytics and the team, maybe the human supervising team, that helps do the curation.
It's not something that came out of nowhere. And if the data came off your cell phone, there's a whole data-exchange, privacy, and ethics concern today about whether you're being appropriately compensated for your data. So it is marketable, but we have to make sure customers understand that data is the product of a very complex process. It isn't that we bought some AI one day and it just does all the projects and no one needs to run any checks and balances against it. It is the sum total of an organization's expertise and resources that makes sure it isn't just data for a use case, especially if that use case is high impact, with high risk or consequence for being incorrect. We call it a hallucination, which almost sounds cute. Well, I don't want to hallucinate if the job is to intercept a missile. If my data is going to support Golden Dome, then it had better be right. The last panel said: be first, be right. That's absolutely the coin of the realm. Yeah, and I'll take the time-and-accuracy point. How does it save time? At Quindar we've created a chatbot that is essentially natural-language query. The conversations we see our users have on console are: hey, I got this CDM, this conjunction message. What are the mission rules? Do I need to maneuver? Who is the other party? Maybe it's a friendly, maybe they have active propulsion; who is going to be maneuvering? So they ask the system things like: what is the probability of collision? Or, taking it to the next level with the agentic approach: how can we actually act on the hundreds of conjunction messages a day a proliferated constellation sees, or on conjunctions that are constantly changing while you're doing orbit raising? These are the discussions people are having on console. So how can AI/ML provide a solution that fast-forwards that decision and is accurate enough, whether you act on it or just have a summarizing discussion, versus taking a day to do all that analysis, beyond the human power and resources needed to produce it in the time frame that exists? Another example, taking the data route: if you get an image down, how can you surface the information you're looking for, or information you don't have, so the software understands it and presents it to you? How can you bring in threat intelligence, using natural language processing across different sources, to understand the threat picture for today and over time, and make the right decision about what to do? These are the conversations you have every day on console. What are we doing today? What did we do yesterday? How did things go? And as the previous panel said, if you can summarize all of that into "everything was great, don't worry about it," that's the end goal for Quindar: not screens for commanding and situational awareness, but the information you want, and everyone wants that information in a different form factor. So how can you present it to the user? Chatbots are really good at that.
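The triage logic behind the kind of conjunction chatbot Nate describes might look like the sketch below. The fields, thresholds, and mission rules here are hypothetical placeholders, not Quindar's actual rules: the idea is that most conjunction messages are summarized away, and only the risky, near-term ones reach an operator.

```python
from dataclasses import dataclass

@dataclass
class ConjunctionMessage:          # simplified stand-in for a CDM
    secondary: str
    time_to_tca_hours: float       # time to closest approach
    miss_distance_km: float
    prob_collision: float

def triage(cdm: ConjunctionMessage,
           pc_threshold: float = 1e-4,
           miss_threshold_km: float = 1.0) -> str:
    """Apply mission rules so a human only sees what needs a human."""
    if cdm.prob_collision >= pc_threshold or cdm.miss_distance_km <= miss_threshold_km:
        return "ESCALATE: assess maneuver options"
    if cdm.time_to_tca_hours < 24:
        return "WATCH: re-screen on next update"
    return "OK: everything was great, don't worry about it"

print(triage(ConjunctionMessage("OBJECT-12345", 18.0, 0.4, 3e-4)))
```

A natural-language front end then just maps "do I need to maneuver for this CDM?" onto calls like this and phrases the answer.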
So are generative user interfaces: people will say, I want a dashboard, but a configurable dashboard. That's the solution we're seeing from our customers, though it's a bit like the old Henry Ford line that users would just ask for faster horses; this is a faster-horses moment. Yeah, and Jim, I don't know where we are in the story anymore. There is an element of AI doing things it shouldn't be able to do, and to Brien's point, we don't know if it's true yet, but it's weird that it looks like it can. There was a recent study, one of the first ever with this result, comparing radiologists alone, radiologists with AI, and AI alone performing the same function, and for the first time the AI beat both the radiologist and the radiologist paired with the AI. And you say: no, no, the theory says the combination is better than both. Yes, but the radiologists spent more time questioning the AI's decisions than going with them. There are tremendous human, mechanized workflows throughout the federal government, especially in the IC. Sometimes they call it tradecraft: this is my tradecraft. To Brien's point about craftsmen and craftspeople, you say, show me how you do it, and they say: I take this from Excel, I copy this over here, I color-code these columns, and then I sort it. This is what you do every day? The most expert person in the whole world does this thing, and then the AI goes and gets the same answer. We're not yet comfortable with that. And you mentioned analytics as a service. We're starting to see the government acquire analytics as a service, or results as a service. NGA released a large contract called Luno where they said they would buy analytic results, enriched products, as one of the kinds of things they purchase. But we still say: I want to see the data inside there, because I want to know how it made that decision. We still don't trust it, Brien, to your point. It's like that scene in A Few Good Men: I want the data. You can't handle the data. How did you make this decision? Oh, I used all of the observations and all of history. Please show them to me. Well, do you have a 900-petabyte flash drive I can put them on? So we're still at this place where we don't know what to make of it. We're confused that sometimes it seems better than us. Why does that Tesla drive better than me? I don't know, but it does. And I don't want to sound mean to all of us here on the stage, but I remember the World Wide Web coming out when I was in high school, and the kids of this generation say things like: I know, my parents were born in the 1900s. So I think the problem is largely going to be solved by the AI natives growing up with this as part of their lives, while we have a weird, uneasy feeling about it and will get over it when they turn us into batteries and plug us into the machine. Well, thanks. Now I'm feeling really old and scared. OK, you're feeling better; I appreciate that.
I'm just trying to be a little edgy, since Brien literally insulted my home-improvement DIY skills; I feel like I can go out on a limb. All right, we've already touched on some elements of our remaining question bank, but I think we can tease out some elaborations. So, Brien: is space domain awareness data ready to support responsible application of AI? What needs to change for better solutions, for AI that enables space control to be effectively trained and applied? It's not ready yet. There's more data, more precise data, and more diversity in data, but the data engineering and data organization needed to truly feed those autonomous space control and warfighting systems effectively, I think we're in that process now. I think we're becoming those craftsmen. I think we're building the trust we need with our government customers, but the rate of adoption just needs to accelerate, and the threat gets a vote; so does industry. There have been articles in the last year about the number of autonomous maneuvers already happening at scale, and it doesn't have to be China: if somebody hacked a mega constellation and wanted to shut down every launch window, it would be conceptually very easy to do. It's the pace of change up there. Every time there's a state change, there needs to be a catalog update to the model of motion for that spacecraft, and we need to generate those at speed and scale. The only way we do that is to move away from the ideal of having a guardian or a subject matter expert in the loop, and toward enabling these things to happen autonomously. So we need to promote ourselves. Congratulations, we all get a promotion: we get to train these processes and use the tools available to us, hopefully responsibly, ethically, and efficiently, and without a nuclear power plant if we don't need one. But we need to use them, or we're not going to keep up with the rate at which things are scaling. It's not good enough to be six or seven years late to SPD-3, solving the problem of 10 years ago on a new computer system, because that's not going to be ready to handle the space traffic this year, let alone next year, or what's coming from the people already planning. They have told the FCC how many more spacecraft they plan to fly. What is our plan to scale the commensurate amount of information we have to collect, process, and understand to support our decision making? And if the answer is "I don't know," then I welcome our AI leaders, because they're the only ones who can solve the problem that's coming for space. Yes, a lot is coming, and Nate, I know you can highlight some of this: Starlink, the Chinese mega constellations, and all these other launches. Why is this challenge of applying AI to space missions so hard? I know you have a great background in this. Yeah, I've helped build and operate a mega constellation, so I know firsthand what it takes, and for what is coming, for the proliferation of space, and for our adversaries proliferating and trying to gain the high ground, it's a culture change.
An attitude of speed has to be built into the culture. If our speed is blocked or inhibited by bureaucracy, by continuing resolutions, or by a misunderstanding of whether we can or can't use AI, especially in the government, that's going to slow us down. Hesitation like that is one of the reasons it's so difficult to sustain what we're trying to do today. We need to be using modern technology. With existing systems, like in airspace, TRL-9 is the best thing you can tell a customer, and on the software side we still have to tell customers: we're TRL-9, we're not sending anything to space, it's all on the ground, and we push a change and it's fixed in minutes with DevSecOps pipelines. But there's that attitude of "it needs to be TRL-9 before you can command this satellite," and the same mindset surrounds how we manage mega constellations. One thing that's really challenging about a mega constellation is staffing. That's not the solution, but it's the reflexive answer: hey, we need to staff up. You'll be constrained just by buildings. Then there are the tools we use, back to the earlier comment about what the industry runs today. I'm sure you all know of customers, or servers you're running, still on Windows Server 2008, because it works, and you're paying Microsoft, which no longer even updates that software and its security, directly to keep it up and running, because that's what the fleet uses. But that technology does not scale; it was not meant for today's proliferation. So where we need a cultural shift: we have a problem, and we have competition. As a startup, there are two things we're always watching. One is making sure we're building the business, staying alive, and continuing to build momentum. At the same time, we have to do it efficiently, because otherwise our competitors will, and then we'll be obsolete. You're either ahead of the game or you're forced to change. I think that cultural shift in today's attitude will help us adopt AI in the near future, and that's what we're building at Quindar, showing that we have these products rather than waiting for funding: whether it's the chat, or predictive analytics, or working out how to optimize and dynamically retask when the vignettes differ per customer and we have to reprioritize because a ground asset is out. A lot of that cultural change, I think, will shift the industry toward where we need to be versus where we are today. Yeah, and that's a good segue to something near and dear to your heart, Pat: the staffing of ground stations. We've already talked about the Apollo program and the room full of people; of course, crewed flight is a bit of a different animal. But as the US government goes to proliferated architectures, what does this mission management need to look like?
Jim, that's a great question, if you have four or five hours; I brought some slides. OK, so a couple of big things there, Jim. When Seth was talking, I wanted to get out my phone and start waving it with the flashlight on, because he had such a great perspective on so many issues: hey, we can't be operating one satellite with one guardian. That is so true. But when you see these mission control stations, one of the things I think is bonkers is that we still call out "FIDO, go; engines, go," out loud, with people looking at screens. Why don't they all just push a button, or just ask, are we ready to go? It's an algorithm. First, we have to get to an environment where we decide the split between what happens on the ground, to your point about rolling changes, and what happens in space. A lot of people say they're going to do onboard processing, but there are tremendous challenges: how much processing you can get, how much power you can generate, how you get the heat off the spacecraft, and then how you roll updates and coordinate them. We do have some burning platforms, like the recent push toward Golden Dome with the Missile Defense Agency, and the idea that the timelines are going to drive us to automation: sense-making from multiple sensors happening very, very quickly, at scale, across a constellation that maybe doesn't even know whether the processing is happening on board itself, on board another space-based cloud node, or on the ground. That, to me, is the big breakthrough. Instead of saying I have a ground station and a spacecraft, you actually can't tell the difference. When you do something on your phone, you don't know whether it's computing on the phone or in the cloud, and we're going to have to get there with a space-ground operating system that decides how to distribute the processing. The second thing is the combining of multimodal sensor data. I have observations coming in: Brien, maybe some are from your telescopes, maybe some are my organic sensors on board, or I'm getting tips and cues from some other system. When they agree, it's a math problem to put them together. When they disagree, how do you know which one to trust, when the adversary may be actively injecting things into each of them, which they might do if I follow along with Tom's despair comments? And then there's the point where, when the SHTF (Google it), no one is going to be able to talk to anybody, everything will be jammed, and you'll just have whatever you have. That, to me, is a series of problems that will take us through the next decade. It's exciting that we'll have the opportunity to solve them, but don't just sprinkle AI on it. How would AI help us with the data fusion problem? With the multi-sensor orchestration problem? With the decision making?
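Pat's fusion point, that agreement is a math problem while disagreement is the hard part, can be made concrete with a consistency gate before blending two estimates. A minimal sketch, assuming Gaussian errors and an illustrative chi-square gate; a real system would also vet which source is degraded rather than simply refusing to fuse:

```python
import numpy as np

def fuse_if_consistent(x1, p1, x2, p2, gate=9.0):
    """Fuse two position estimates (each with covariance) only if they are
    statistically consistent; otherwise decline to blend, since one source
    may be degraded, spoofed, or injected and needs separate vetting."""
    x1, x2 = np.asarray(x1), np.asarray(x2)
    s = p1 + p2                                  # covariance of the difference
    d = x2 - x1
    mahalanobis2 = d @ np.linalg.solve(s, d)     # chi-square test statistic
    if mahalanobis2 > gate:
        return None                              # they disagree: investigate, don't average
    return x1 + p1 @ np.linalg.solve(s, d)       # covariance-weighted fusion

est = fuse_if_consistent([7000.0, 0.0, 0.0], np.eye(3) * 1.0,
                         [7000.8, 0.2, 0.1], np.eye(3) * 2.0)
print("fused:", est)
```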
And the last thing I'll say in this regard: one of the most exciting things I've read in the last couple of months was an interview with Frank Kendall as he was outgoing as Air Force secretary. I don't have the exact quote memorized, but he said something like what I just said: things are going to happen so fast, you're going to have to have AI in the loop. That was the first senior official I've ever heard say that. I worked an IC program where they said we're always going to have a human in the loop, and I said, I understand that's a requirement; I'll do what you told me to do. Then it evolved to: well, we're going to have a human on the loop, to confirm and just watch the thing run. And by the way, that's my worst nightmare. Because there's this massive multi-billion-dollar system that a bunch of super smart people have engineered over time, all designed to work together in network-centric warfare, and there's one person on Christmas Eve, the lowest-ranked person stuck with the least amount of leave, with a big red button that says "turn off United States." Something crazy happens on the screen, they're being injected and confused, there's a lot going on, they've had too much eggnog, and they push the button and the whole thing turns off. And that's how the war ends. It's like the next book I'm writing, Jim, spoiler alert. Or if there's anyone from Netflix in the audience: that was the pitch. OK, wow. Any final thoughts on that? I want to pile on that just quickly. For space domain awareness, take a simple model. If you're one sensor, whether on the ground or in space, watching another object in orbit, your goal is to understand that object's orbit. The thing you need is geometric diversity before you can converge that orbit. What does that mean? It means I have to wait for the object to move enough for me to be confident in converging its position and velocity vector and how they change. I don't have that time to trade in the way we're talking about this problem, which means it's better to have more than one sensor looking from multiple places, or multiple modalities, because I don't need to give up that all-precious time in the problem. I need it left over to think, to make a decision, or to transmit a cue to somebody else. So favoring architectures that combine off-board sensor data with onboard sensor data is where we're going to need to go, whether that's civil, IC, DOD, or Space Force, which means we're all going to be talking to each other in order to collaboratively navigate the evolving hazard and threat population that is now in space. Which is a paradigm shift. It used to be that you could design your own ground segment, your link segment, your space segment, and run your system, pun intended, as though it were in a vacuum. Those days are over. You are part of an ecosystem of collaborative space systems that must navigate the domain the same way you would on land, at sea, or in the air. Yeah, and to follow up on that, and to Pat's point: I think one of the myths about satellite operations, spacecraft operations, and mission management is that it's about the satellite. It's about the ground network, the cyber, the ISPs, and the software in control of all of that, and treating each of those as another node or asset helps you find which route to take in order to task.
That could be across domains; that could be finding a different antenna; that could be finally using crosslinks when they come to be. In the end these are just TCP/IP addresses, and you're creating a virtual mesh network of how to route around. Our vision is that satellites are flying servers in space. Think of Netflix: thousands and thousands of AWS servers, with a handful of people on call, and that's just to keep them up and running; those aren't the people who deployed the Zoom application that makes this conversation work. That's where we enable operators and guardians to focus on payload management while mission management focuses on how we connect these nodes and find a path. They don't care which ISP this goes over, which data center, which antenna, or whether there are failovers; that's our job. What we have to present to the guardians and our users is uptime, so they can communicate the objective to their end users. Yeah, and Brien, I very much agree with your point about geometric diversity and those timelines. Not to sound like too much of a fanboy, but the network you've built is, from a nerd standpoint, a wonder of the world. The fact that you can get all that data from those diverse perspectives; honestly, I don't know how you came up with the idea to deploy it, but it provides such a unique capability. If you had tried to pitch it to someone as "I want a contract to go do this," they would have said you're nuts, you're never going to put telescopes in all those places. And you just did it. That makes people think a different way: if I can route this information and combine it with other things I have, I can do unexpected things. And when you take what you've done with observations and processing and combine it with other capabilities, like a proliferated communication network such as Starlink, then you can play some trade games: what if I had a couple of these things, plus a bunch of little classified toys nobody knows about? That's a different paradigm from the way the government has always tried to acquire systems, which says I've got to do it all myself, while commercial is sometimes already there. So another enabler, Jim, over the next couple of years might be public-private partnerships (hard to say without embarrassing yourself), the same way we got Bob and Doug to the space station. Those constructs could give us resiliency, because I think the adversaries are going to go after the nodes they know we have, and we know that they know we have them. You always need a backup plan in your pocket, and I think one of the big solutions there is the kind of collaboration we've been talking about: I need geometric diversity, and I also need network path diversity, because I do believe there's going to be a major comms blackout, SHTF. But if you can say, I've got a backup plan that can operate through that, I think that's how we're going to enable success.
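Nate's "flying servers" framing and Pat's network-path-diversity point amount to route selection over a contact graph: when a node is jammed or down, recompute the path instead of paging a human. A minimal sketch using Dijkstra's algorithm over a made-up topology; the node names and link costs are illustrative only:

```python
import heapq

def best_path(graph, src, dst, down=frozenset()):
    """Cheapest path through a contact graph; 'down' models failed or jammed
    nodes, so losing an antenna just means routing around it."""
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:                          # first pop of dst is optimal
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return list(reversed(path)), d
        for nxt, cost in graph.get(node, {}).items():
            nd = d + cost
            if nxt in down or nd >= dist.get(nxt, float("inf")):
                continue
            dist[nxt], prev[nxt] = nd, node
            heapq.heappush(heap, (nd, nxt))
    return None, float("inf")

# Toy topology: ops center, two antennas, a relay constellation, one spacecraft.
graph = {
    "ops-center": {"antenna-A": 1, "antenna-B": 2, "relay-net": 3},
    "antenna-A": {"sat-42": 2},
    "antenna-B": {"sat-42": 4},
    "relay-net": {"sat-42": 2},   # crosslink path
}
print(best_path(graph, "ops-center", "sat-42"))                      # nominal
print(best_path(graph, "ops-center", "sat-42", down={"antenna-A"}))  # failover
```

The same shape extends to path diversity: precompute a second, disjoint route so a comms blackout on the primary does not require any human decision at all.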
OK, thank you. So let's touch on the final closing topic of trust, specifically trust when it comes to space control decisions. Does a human need to be on the loop, or in the loop? It sounds like you want them out of the room. But seriously: for the most vital functions, and for protecting the capabilities enabled by space for the US, what does that look like? Is a person required? Is a person a hindrance? What does it look like in the decision space that may have to be navigated in the near future? There are going to be some functions where you'll have to take people out of the loop, and that has become a less controversial view. It's a view I was afraid to admit for a long time because it was very unpopular, but the self-driving cars will work when you take all the people off the road. It's not that the Teslas can't drive; it's that they don't know how to react to the people, who are unpredictable. But if the Tesla could say, I need to take that exit and I want to be in that lane in six minutes, can everybody make me a hole, because I've got a lady having a baby in the back seat and I'm trying to get her to the hospital: those are things the machines can figure out. I mean, we teach teenagers how to drive. They're not very good at it, and the computers are certainly way better. So we are going to have to identify some functions, and I don't think it's firing nuclear missiles, probably not; there will be some things where there's a person in the loop for policy, and even for our comfort. But as a society we're going to have to get used to some things being done in a completely automated way. Do you yell at traffic lights? I know they're not AI, but you go, oh, I can't believe that just changed. Maybe you yell because they won't change, but there's a controller in there making the lights change, and it's very rare that you see two cars barrel into each other because the lights were both green. It's not people over there flipping switches. It was, a really long time ago: when the first traffic lights started, they were people, traffic directors and such. And stop signs were invented at some point; somebody said, we ought to put some kind of guardrail here where two roads intersect. So we're going to have to think about the functions that become automated and just let go; we're going to have to be comfortable with letting that go. And I don't know a lot about the exact disciplines of space control, and maybe Nate, you have thoughts on that, but for some parts we're going to have to say: this is automated. For keeping that asset up and running, you shouldn't need a person in the room. For making certain decisions, like detecting whether this is an asteroid or a nuclear missile, whether we're under attack, and what we should do, you might want some guardrails in place. But that goes back to: why are people in the room today, and what is the problem they are solving? Why is that solution there?
When was it implemented? Sometimes it helps just to trace the background: why is this person here, and what is the job they're doing? What can we do with good physics, with ML, with automation, with artificial intelligence, whatever the right solution really is? A lot of this comes down to the day-to-day tasks: how do we keep this asset up and running? How do we automate the antennas we use, which are archaic and not as automated as they need to be across a lot of infrastructures, and why is that person there? Automating those processes gives us time back, time to understand, especially as we proliferate, what the architecture looks like, what the mission and mission objectives are, and, for the critical tasks, what guardrails are in place and what we can automate out. Especially for the uptime of the bus on the spacecraft side and the uptime of the ground system, you need high availability, and high availability is not having two people, or multiple people, on console. If a critical warfighting decision could be analogized to playing a single game of chess, and you have to win, how do we make sure that every human who ends up on the loop is at least as good as Magnus Carlsen? Or do you want the AI playing that game for you? This is an AI conference, so you've probably followed some of the progression of MuZero and AlphaGo and how they won those bigger games. For the things we can bound as closed games, or understand well enough to trust that the AI performs at least as well as the Magnus Carlsen of the particular chess game we can train, you might be willing to trust that system. But we need to rapidly get to the point where, whether it's a human or an AI making that decision, it gets implemented, because again, it's the speed and the scale that will drive the policy here. We need to understand this problem and derive policies because we've seen this conflict before, if nothing else in modeling and simulation, before the emergent behavior occurs, and then make the best decisions we can. It shouldn't be out of pride, or out of "this is the way we've always done it." If you have available compute, and it's your supercomputer against theirs, and it's down to this one game, this one match, I want the one that has seen more possible games, more possible moves, and makes the wisest possible decision, whether for near-term prioritization of goals or for strategic ones. And that needs to be a dialogue with the folks making that decision. It's not our commercial company's responsibility; we can inform on the technology and how we might get there, and support in the roles we play, but that is the dialogue that needs to be had. And if we assume we have the convenience of time, I think that's a strategy for failure. So, we're nearing the end of our time. Any final thoughts, any save rounds? Did we bring you back from the edge, Jim? You started as a skeptic. Well, I heard I may be getting obsolete, but I think physics is still important, so I'm going to rest on that truth. Jim, you're not obsolete.
China has 10 of you, and we need 1,000 to keep up with the way the problem is growing. Thanks for making me feel better, Brien. Jim, we should try to make JimGPT and see what it comes up with. We should train on your posts and say: hey, this vehicle just moved in space; what would Jim think it's doing? Let's see how close it gets. How long do you have? We could do a fly-off. We could do the chess game: real Jim Shell versus virtual Jim Shell. We could sell tickets; we could pay for the whole thing by selling tickets, I think. All right, gentlemen, thank you, and let's give our panelists a round of applause, please. So thank you so much, Jim, Brien, Nate, and Pat. Those were great anecdotes, great stories, and great conversations around those topics. And to bring a little reality to some of the things Pat brought up: I took a note when he mentioned the World Wide Web. I remember, early in my days, we would deliver an entire stack all the way to the warfighter, and I had the honor of doing that early in my career. There was a tasking system that we developed, and we put that tasking server all the way down into Humvees. When we updated the software, we updated the hardware with it, went down range again, and swapped those things out. And when I first started my career, 9/11 happened, so I began my work during active theater operations; we were at war at that time. I remember one of the times I went down to visit the 82nd Airborne to replace a server. They had just gotten back from theater and were wheeling this Humvee out, and I came with a big notebook of everything I needed to do to that server. Before I got to it, a young captain pulled me to the back and said, hey, let me show you something real quick. He took a leaf blower and blew a lot of sand in my face, and he said: you know what, this server is taking up too much space. We need to do something else with the space in this Humvee. Can you do something different with this server? We took that back, and it was the advent of putting tasking on the World Wide Web, or rather a closed network those warfighters could access, so we could take that server away. Now they could go to a website and get access to tasking. From then on, the capabilities just kept getting better, and now we're at the point of talking about how AI can enable and continue to mature what the warfighter uses. I learned from that experience, and we had a saying: if you're not ahead of the game, the game will catch up with you, and then you'll have to make a decision inside the game. So I look at these opportunities to make decisions about AI, and the conversations we're having today have been truly enlightening; I've seen this firsthand throughout my career. Great job to that last panel, a truly engaging conversation.

May the Facts Be with You: AI at the Edge for Faster Processing of Onboard Data

Panelists considered how cutting-edge AI solutions enable real-time data processing and decision making at the edge of space.

Moderator

Tobias Naegele, Vice President, Strategic Communications, AFA

Panelists

Dr. Omar Hatamleh, Ph.D., Chief Artificial Intelligence Officer, NASA Goddard Space Flight Center

Dan Wald, Principal, Aerospace, ÎÞÓÇ´«Ã½

Clint Crosier, Maj Gen, USAF/USSF (Ret.) and Director, Aerospace and Satellite Solutions, AWS

Click Expand + to view the video transcript

Now it looks like our panel three is ready to go. The title of this panel is May the Facts Be with You: AI at the Edge for Faster Processing of Onboard Data, and it will be moderated by Tobias Naegele. Thank you so much to our moderator, and we also want to give a special round of applause to Mr. Naegele and the Air & Space Forces Association for being such gracious hosts. You're not only the moderator; you're also the vice president of strategic communications here at the Air & Space Forces Association, so thank you for your hospitality today and for agreeing to moderate. Now I turn it over to you. Thank you very much; that's a very nice introduction. It's kind of fun to be at someone else's party at our house. So, we're talking about AI at the edge and what that means. Autonomy, or some kind of autonomy, has been in outer space as long as humans have been in outer space; beginning with Sputnik, everything since then has had some level of autonomy. But when we talk about edge, we can mean lots of different things, and I thought I'd start... and I realize I've already blown my opening, because I'm supposed to introduce you first, so let me do that. Maj Gen Clint Crosier is director of aerospace and satellite solutions at Amazon Web Services. He has been called the architect of the Space Force; he really was one of the original people who helped stand it up, and it is now five years old. He left there and came to AWS to stand up their space business, so to speak. Dr. Omar Hatamleh is chief artificial intelligence officer at NASA Goddard. He created the AI strategy for NASA, not just NASA Goddard, I believe. Am I correct? Yeah, we led it for NASA writ large. So you've got the right guy from NASA here. And Dan Wald is a principal at ÎÞÓÇ´«Ã½ for artificial intelligence, and he led the team that developed and deployed the first generative AI large language model in space, on the ISS. That's right. So again, I think it's a great panel to discuss these things. Clint, let's start with you: just define what we mean by edge, to level set the conversation. Sure, and thanks again to ÎÞÓÇ´«Ã½ for inviting us and holding the event today, and thanks to AFA for letting us party in your house, as you said. I think that's a great place to start, because edge means a lot of different things to a lot of different people. I'll tell you how we think about it at AWS. At AWS we have more than 100 data centers around the globe, in dozens of different countries, and our data centers are exactly what you would imagine: very large multi-acre campuses with acres and acres of storage and compute, large power generation, and everything else. The way we think about edge is this: if you're operating inside one of those data centers, or plugged directly into one, you have access to everything you would ever want or need, essentially unlimited capacity, since all of our data centers are connected. You're living in the cloud, with every capability you'd ever want.
If you're not operating physically inside the data center, or directly plugged into it, we consider you at the edge. There are a lot of different flavors of being at the edge, a near edge or a far edge, and a lot of different ways to think about it, but if you're not inside the data center, we consider it an edge capability. What we learned a long time ago, a decade ago, is that it's much more efficient, cost-effective, and innovative for us to bring the cloud to our customers than to have customers bring their data to the cloud. So we're in the business of pushing the cloud, pushing the edge, wherever we can. And when it comes to this group of people and the customers we all represent, we're talking about edge computing pushed all the way out: operating in space. OK, so Omar, you're looking at a very extensive edge as you make your way from the space station to Mars. Talk a little bit about how that works. So the edge is going to be essential for us, especially the farther we get from Earth; the signal lag makes it imperative that we be able to work at the edge. Right now we're working on several projects. There's a product called BrainStack, where we're looking at multiple compute systems, GPUs, TPUs, neuromorphic computers, taking them off the shelf, and making sure they work properly in low Earth orbit. Because if you go through conventional methods, it can take a long time, a lot of effort, and a lot of money to build something radiation-hardened, and by the time you finish, there might be something more modern, so yours might already be obsolete. So we need to experiment with off-the-shelf components and see how they work and what their performance degradation is under cosmic radiation. But when we eventually go to Mars, it's also very important that we send autonomous systems ahead of humans, ahead of astronauts. We need systems that can build habitats with 3D-printed materials from in situ resources, and systems that can produce fuels, creating hydrogen and oxygen, the fuel for astronauts to come back. All of that has to be done autonomously, with a very high degree of autonomy. But even so, you're not talking about a little bit of compute power; you're talking about a ton of compute power. It could be specialized and narrow. You don't have to have one large language model specialized in multiple things; you could have smaller ones specialized for specific, concrete tasks, which makes them much more effective without the bigger infrastructure. Take something as simple as medicine. Somebody gets sick, and depending on the orbits it could take up to 40 minutes between sending a signal from Mars and getting an answer back from Earth. So we're also creating something called Doctor in the Box. Imagine, if you have medical issues, you can interact with these systems, which are trained specifically on medical domains. We don't need that system to also be an expert in structural design or thermal components. That's why I say it has to be components fine-tuned for specific domains; that will be much more effective.
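Omar's point about lag is straightforward to quantify: whether a decision can wait for Earth depends on the round-trip light time at that moment, which is why a Doctor in the Box has to live on board. A minimal sketch; the 10-minute deadline and the distances are illustrative, and Mars ranges roughly from about 0.4 to 2.7 AU from Earth:

```python
AU_KM = 149_597_870.7      # kilometers per astronomical unit
C_KM_S = 299_792.458       # speed of light, km/s

def round_trip_minutes(distance_au: float) -> float:
    """One-way light time, doubled, in minutes."""
    return 2 * distance_au * AU_KM / C_KM_S / 60

def where_to_decide(deadline_min: float, distance_au: float) -> str:
    # If Earth can't answer before the deadline, a small onboard
    # domain-specific model has to make the call.
    return ("onboard specialist model"
            if deadline_min < round_trip_minutes(distance_au) else "ask Earth")

print(f"RTT when Mars is near:  {round_trip_minutes(0.52):.1f} min")
print(f"RTT when Mars is far:   {round_trip_minutes(2.5):.1f} min")
print(where_to_decide(deadline_min=10, distance_au=2.5))
```

At 2.5 AU the round trip is over 40 minutes, matching the figure Omar cites; any medical or safety decision on a shorter fuse must be made locally.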
So Omar, you were talking about basically commercial products. Dan, you've done some work in that area. How easy or difficult is it to take a modern commercial GPU, with its big chip and very fine lithography geometry, and make it safe and reliable to work in space? When I was writing about this stuff back in the 80s, to date myself, that was not even viewed as a possibility; it had to be special materials, custom made, until very recently. That's right. And I think our biggest challenge is the same challenge Clint was talking about earlier: the big data centers are wonderful if we can afford them, if we can afford the size, weight, power, and cosmic radiation hardening. Bigger is always better until it doesn't fit, and when we're launching things into space, every kilogram counts. Every watt counts. So my biggest focus is really about how we walk down the wattage. We're lucky that right now our test beds are in low Earth orbit, which is largely shielded from the harshest of the radiation coming in from the sun. But we are indeed working through some of those challenges today, through partnerships with other startups and cosmic shielding companies, to take the advances of commercial off-the-shelf GPUs, which are the engine behind AI acceleration, parallelizing all of that processing, and use them right on the edge. So instead of, you know, a gigawatt or a kilowatt or 100 watts, 10 watts down to 1 watt is what we're really looking for, for specialized use cases. I think that's really powerful, because when you think about commercial off the shelf, or COTS, it's always a go-to strategy to employ COTS as much as you can. But think about this, and I love the example from the first panel today, where the gentleman talked about the steam engine only selling 20 copies in its first 50 years or so: even what's considered COTS today was purpose-built 10 or 15 years ago. So there are things that appear purpose-built today that we are going to turn into COTS products. By that I mean, even when you think about generative AI and AI at AWS: we recognized the enormous power and efficiency requirements for computer chips, so we went out and designed and built our own, Trainium and Inferentia, optimized for cost and performance for AI, ML, and gen AI capabilities. That was a purpose-built capability, optimized for a specific mission, that is now considered COTS. We're going to go through the same evolution in space. You know, the old joke in any acquisition program was: do you want it fast, cheap, or effective? You can pick any two of the three. Every acquisition person knew that joke. Now we have to balance a trinity of computational power, energy efficiency, and rad hardening; those are the three things we've got to balance on top of the standard SWaP. So we've got to use as much COTS as we can, great, but the space environment,
being so very different, limited in our bandwidth capability and our connectivity capability, means we're going to have to develop, and we're doing this right now today at AWS, some purpose-built things optimized for advanced AI/ML and generative AI on orbit that may not exist today but will be COTS tomorrow. That's part of what we're driving at AWS with partners like Booz right now. So you're going to have data centers that don't need to be connected? So we have edge capability right now today that operates both in a connected fashion and in a disconnected fashion. We built the software such that if I can make a connection back to a data center, ET phone home, I can do all my orchestration and data sharing and synchronization and everything else; but if I go into a denied environment, I've got enough compute, storage, and analytical capability on the edge device to continue to operate successfully over a period of time, even in a denied environment. And by the way, ÎÞÓÇ´«Ã½ has developed an interesting capability called Edge Extend, a hardware device that runs AWS software and has been purpose-built specifically for that disconnected operational environment. So Omar, when you go to Mars, we're talking long distance. You're going to want to phone home somewhere in between. Are you going to have AWS data centers partway there, or will all your compute power have to be resident on site? Yeah, so because of the signal issues, we have to have systems that are self-sufficient at the edge, trained for certain things. But it also tells you that all these domains, all these fields, are moving at such a fast exponential pace that whatever we see today as top state of the art could be obsolete almost immediately. So we need to design things with an element of foresight: where is technology heading? Not what we have today; from certain signals in research papers, patents, and investments, we can get an idea of where technology is actually headed, and then start designing things based on what they're going to look like in a different time frame. I think that's going to be essential: not basing everything on what we have today, even when you're solving problems and challenges. We need to adopt a different mentality that has elements of foresight, futuristic elements, to be able to resolve the issues there. And can I piggyback on that, by the way? Because you asked about a data center in space, and we have to be a little careful when we say data center. Go back to what I talked about earlier, the multi-acre campus; that's not what we mean when we talk about data centers in space. With the size, weight, and power, you simply can't do that. So how do you build a capability optimized for the size, weight, and power available on orbit, or on the moon, or on Mars, and build it such that it has enough storage capability, enough analytical capability, and enough compute power to handle that specific mission? I wouldn't call it a data center, but it certainly is a cloud edge computing capability that allows you to push the edge wherever you need it in orbit, and then you can work your way backwards to the big cloud if you need to.
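A minimal sketch of the connected/disconnected pattern Clint outlines: compute locally, queue results, and synchronize home only when a link exists. The link check and the processing step below are stand-ins, not the Edge Extend product:

```python
# Hypothetical store-and-forward edge node: keeps operating while denied,
# drains its outbox when a connection home becomes available.

import time
from collections import deque

class EdgeNode:
    def __init__(self) -> None:
        self.outbox: deque[str] = deque()  # results awaiting a link home

    def link_available(self) -> bool:
        # Stand-in for a real comm check (ground-station pass, relay in view).
        return False

    def process(self, observation: str) -> None:
        # Local compute: the node works even in a disconnected environment.
        result = f"analyzed({observation}) at t={time.time():.0f}"
        self.outbox.append(result)

    def sync(self) -> int:
        """Drain the outbox when a connection exists; return items sent."""
        if not self.link_available():
            return 0
        sent = len(self.outbox)
        self.outbox.clear()  # stand-in for transmit + acknowledgment
        return sent

node = EdgeNode()
node.process("hyperspectral frame 001")
node.process("hyperspectral frame 002")
print(f"sent home: {node.sync()}, still queued: {len(node.outbox)}")
```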
You've got an edge device on Mars, you've got edge computing capability on the moon, then you've got an orbital capacity in space, and then you're reaching back to the Earth. There are people working on these kinds of capabilities right now today. So those successive layers become growth capacity, right? One of the biggest problems you have in space is that you deploy something that's, you know, years away; you're not going to be able to send up spare parts. So obviously you have to have redundancy, but you also have to have growth. You're going to have things you didn't anticipate, capabilities you're going to want to add. 100%. And, you know, if you look at space hardware, it usually takes a long time to get certified properly, especially when you're talking about a mission to Mars; you cannot resupply on short notice. So we need to look into things that can self-sustain, self-develop, and self-upgrade in terms of what they're working on. And I was looking at how everything is evolving here in terms of IQ and the large language models. They estimated Claude, for example, at the beginning of Claude 3.0, at an IQ level of about 100, and then they estimated that in May we're going to have OpenAI's 4.5 at about 155. And the community is saying that it's going to be doubling every 5.9 to 6 months. Just to give you some context, it's incredible. Einstein, for example, is estimated to have had an IQ of 162, 163, and an average person might be 110; approximately a 40-point difference. Look how much difference that made: he was able to come up with the theory of relativity and incredible advances in the quantum field. So imagine now you have a system at 1,000 or 2,000 or 3,000 IQ levels. How do we keep evolving the resources and capabilities we have in line with the advances we have on Earth? We need to find that bridge to be able to keep making progress and advancing at the same speed we have here on Earth. Well, even Einstein had a collection of experts that he compared against and worked with. And so one of the bright spots that I see in the future is really a collection of experts. You talked about a doctor in a box; we've got a technician in a box out there on the demo floor that you can go take a look at afterwards. How many more are really necessary? And really the answer is: as many as we can send up there. With each one of those getting smaller and smaller, how many of these boxes can we send up? How many of them need to be at 1,000 IQ versus 160 versus 110? And, you know, bigger is always better until it doesn't fit; that's really something to keep coming back to, and the collection of experts can sometimes be greater than the sum of the parts. I think digital experts will be something we will be leveraging substantially. Take something as simple as writing a research paper: before you send it out for peer review, you can have a digital expert trained on tens of thousands of research papers in that domain, and it critiques the paper the way one of the best experts in the world would, before you even submit it.
So I think the domain of digital experts is going to cut across multiple organizations. Can I say something a little embarrassing? My new favorite prompt is two words: proofread. OK, so: lots of technology, lots of opportunities. And it's really fun to spend all this time talking about all the cool things that are possible, but when you actually have to acquire something, our acquisition systems are not built for acquiring things that change. Everything that has ever been made slower, that has taken longer to get there, has been because of requirements: because it wasn't in the original requirements, because we had to go back and change the language, because that's not what was agreed on. How do we get over that and still move at optimum speed? I'll take this first, and I imagine, Omar, you've got a lot to say on this as well. I sometimes introduce myself as a recovering systems engineer. I spent the first 10 years of my career diving deep into that, so requirements creep is something I fought against diligently for years, and sometimes lost against. What I see today, especially with generative AI, but with all sorts of AI, including the computer vision model we've got in the back for us to take a look at as well, is: how do we trust it? We touched on this in the ethics panel too. We have a nondeterministic system that's right most of the time, more often than people, and yet we can't necessarily hit all the requirements in the same way we would in designing a traditional software system. So one of the things I think we need to be focusing on is: what are the guardrails to ensure no harm occurs? Because if you're right 80% of the time, does that mean you're wrong 20% of the time, or does that mean you're doing nothing 20% of the time, and what is OK? I think there's a lot of open space for us to rethink those systems, and we really do need to rethink them, because designing a nondeterministic system is fundamentally different from designing a deterministic system. You're going to have to help an ignoramus: define deterministic and nondeterministic. I'll touch on that and then pass the talking stick. A deterministic system is basically a set of rules or algorithms; these have been around forever. It's if statements, a whole collection of rules and statements that say: if this passes, go down that path; otherwise, go down this path. A nondeterministic system uses something called inference to reason and come up with the right answer based on all of the context: all of the training information it has, the information from the prompts, the information it has about you, the pre-prompts, the embeddings we might have for RAG, and of course agentic AI capability. Fundamentally different. We get questions all the time: can you guarantee I'll get the same response next time? No. OK, so how do you change the acquisition process? Oh, well, changing culture, we all know that's really easy.
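To make Dan's distinction concrete, here is a toy contrast. The telemetry thresholds are invented, and random.choices stands in for temperature-based token sampling in a real model:

```python
# Deterministic vs. nondeterministic behavior, in miniature.

import random

def deterministic_router(telemetry: float) -> str:
    # Classic rule-based logic: same input, same output, every time.
    if telemetry > 0.9:
        return "SAFE_MODE"
    elif telemetry > 0.5:
        return "THROTTLE"
    return "NOMINAL"

def nondeterministic_advisor(telemetry: float) -> str:
    # Inference-style behavior: a plausible answer drawn from a distribution.
    # (Toy proxy for an LLM sampling with nonzero temperature.)
    options = ["SAFE_MODE", "THROTTLE", "NOMINAL"]
    weights = [telemetry, 1 - abs(telemetry - 0.5), 1 - telemetry]
    return random.choices(options, weights=weights, k=1)[0]

print([deterministic_router(0.7) for _ in range(3)])      # identical answers
print([nondeterministic_advisor(0.7) for _ in range(3)])  # may vary run to run
```

The second function can be "right most of the time" yet cannot promise the same response twice, which is exactly the property that breaks requirement-by-requirement verification.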
So you're acquiring systems today for NASA. Let me speak, and I feel qualified to say what I'm about to say when I tell you that I was the deputy programmer and budget chief at Air Force Space Command, the chief programmer and budget chief at Air Force Global Strike Command, and chief of requirements for the US Air Force. The JCIDS requirements system and the PPBS, the planning, programming, and budgeting system within the Department of Defense, is the most perfect system you could ever design, if you have an unlimited amount of money and an unlimited amount of time to execute it. And that worked in the 70s and 80s: we didn't have a near-peer competitor, there was no pacing threat, and we seemed to have more money than we do today. But when you have a near-peer competitor, when you have limited resources and limited funding, and now a time element about staying ahead of the adversary, that system just doesn't work anymore. We have to develop a new acquisition, requirements, and development process that takes advantage of things like DevOps and spiral development, and we have to procure software as a service rather than hardware as a platform or a weapons system. Those are key changes we have to make. But can we procure software as a service for something we put on Mars that's supposed to support life? Somebody's going to ask: where's the guarantee? Yes, if you procure it as a service rather than as a deterministic output. And I'll turn it over to you, but you can't define all the requirements up front. You have to define the service or the capability you're looking for up front, and then allow the software systems to spiral to keep up with the changing requirements of the mission. Yeah, we need to differentiate between the conventional way of certifying things and what's happening today with this incredibly fast evolution. It's very complicated, and obviously for a reason: when you're dealing with human lives, it has to be rigorous, and there's no room for mistakes. But we need to look at other ways to expedite the process; maybe that can make a big difference. And just to jump on what Dan was saying about computer vision: it's essential to have computer vision on the edge as well. When we send systems to build habitats with 3D printing, whatever happens at the base could affect the entire integrity of those habitats and structures. Computer vision can determine whether there is actually a mistake or an error at the base and self-correct automatically, rather than continuing the build on a flawed base, and that could save enormous resources. So we need to look at computer vision on the edge too. And then we get to something much more essential that we need to evaluate in the future. We're talking about robots and everything else, but humanoid robots will be something we'll be able to send to Mars, and these humanoid robots will be working hand in hand with humans. So we also need to look into the ethical implications. We typically look at everything being ethical, fair, transparent, accountable, safe, secure, interpretable, and so on, but let me give you a scenario so you can appreciate how complex these systems might be from an ethical perspective, not only from a technological perspective.
So you have a humanoid robot with on-edge systems that enable it to handle incredibly complex tasks. This humanoid robot goes out with a couple of astronauts on a mission, they get into an accident, and both astronauts are injured, with equal chances of survival. Which one do you choose? Which one does the humanoid robot select and carry back to the base? And also, we talk about Asimov's laws, which we have abided by forever, that a robot may not injure a human and so on. But what if humanoid robots with on-edge systems serve as surgeons on the surface of a distant planet? The fact that it's making an incision in a person, that's harming a person, and that goes completely against Asimov's laws. So even the most fundamental, basic laws that we have abided by for the long term need to be reevaluated and reassessed for the next evolution of these technological advances. And that robot is going to be operating on the edge, so it is an edge device by itself; not just that, it has to be autonomous or semi-autonomous. I'll piggyback on what you said, but take a step back. As was described in the previous panel, AI, ML, and generative AI have become table stakes for our future on-orbit systems and capabilities. We have reached the limit of human capacity to digest petabytes and petabytes of data in real time and make any sort of intelligent decisions about them. We've culminated, so we must further embrace AI, ML, and generative AI capabilities for the future. Table stakes, non-negotiable. Right now today, NASA has already started porting many of their technical manuals into AWS generative AI capabilities, such that in certain parts of NASA you can use a RAG chatbot right now and say: give me all the specs on a human lander capability, and modify X or Y by mass or payload, and the system will come back and provide all of that, with recommendations. Right now today, at NASA, underway. Think about the famous scene in Apollo 13: Houston, we have a problem. Remember, they walk into the conference room and dump out of a cardboard box all the pieces of hardware available on the capsule, and they have to build a recovery system out of only what's there. Now port that into the future, and Houston, I have a problem becomes Houston, I have a solution, because you've got autonomous capability on the surface of Mars, and you program into your gen AI capability: here are all the in-situ resources I have, here's the storage and compute capability I have, now generate me three courses of action to solve the problem I'm facing. And gen AI systems will bring back courses of action that solve whatever challenge you're facing without any human involvement at all. Humans will make the ultimate decision where necessary, fine, but that's the power of where we're at, and where the future will be, and what advanced cloud computing capability, edge devices all the way out to Mars, will enable us to do.
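The manual chatbot Clint describes is retrieval-augmented generation (RAG). A toy sketch, assuming a keyword-overlap score in place of embeddings, a stub in place of the model call, and invented spec snippets:

```python
# Toy RAG loop: retrieve relevant manual snippets, then answer grounded in
# them. Corpus, scoring, and generate() are illustrative stand-ins only.

MANUALS = {
    "lander-mass": "Human lander spec: dry mass 12,000 kg, payload 800 kg.",
    "lander-power": "Human lander spec: solar array output 15 kW nominal.",
    "rover-thermal": "Rover thermal spec: radiator sized for 2 kW rejection.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank snippets by naive keyword overlap (embedding stand-in)."""
    terms = set(query.lower().split())
    scored = sorted(
        MANUALS.values(),
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(query: str, context: list[str]) -> str:
    # Stand-in for the LLM call: the answer is grounded in retrieved context.
    return f"Q: {query}\nGrounded in: {' | '.join(context)}"

print(generate("what is the lander payload mass",
               retrieve("what is the lander payload mass")))
```

The point of the pattern is that the model answers from retrieved documents rather than from memory alone, which is what makes a disconnected "technician in a box" feasible.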
So what does that do to mission control? Everybody who's ever watched a movie or watched a launch has seen mission control: the countdown, the room full of all those people. Will we not need them, because this is all going to be native control up there and just, you know, a three-person help desk back home? It's going to be an evolutionary process. If you look at how many people were responsible for each console in mission control in the Apollo era, then how many people we had for the space shuttle, and how many we'll have for the next vehicle: the more technology advances, the more we're going to be able to make autonomous, self-sufficient decisions without having humans in the loop. The amount of technological advancement is going to require fewer people to be involved in the whole cycle; how many fewer will be determined in the future. But if you allow me to jump on what Clint was saying: with large language models, we have to determine where they're good and where they're not. They're excellent at, for example, pattern recognition and mathematical solutions, but they're really, really bad at social intelligence, at abstract reasoning, at complex reasoning. And they keep evolving, so we need to understand when we can use these systems to our benefit, and when they have a lag or a challenge in achieving results, so we know where to use them best and where we cannot depend on them. Yeah, so I didn't really like your robot doing surgery very much; that didn't excite me. Just really quickly: it's got to be a hybrid, because think about it, what did you tell me, it takes 40 minutes round trip for a signal to go from Mars to the Earth and back again? That's just a pain. So the answer is going to be: where are those situations where human life is at risk, or catastrophic failure of the mission is at risk, and if the window is less than that 40-minute or hour round trip, that's likely where you're going to have to default to an autonomous capability to make the decision for you. And that, by the way, could be true of satellites on orbit here in low Earth orbit or GEO around the Earth. Think about a catastrophic collision that's about to take place. Can you use AI to do automated collision avoidance maneuvers, and can you do it in a time frame shorter than transmitting all the data to the Earth and having somebody in a command center manufacture a satellite maneuver? The answer is yes. With the capabilities we have, you can do that on orbit, with AI and gen AI software, faster and potentially more effectively than what you can do commanding and controlling from the ground. I'd love to jump in on that. I live in Cape Canaveral, and I can tell you I can feel in my bones the exponential acceleration of launches, and I don't mean that figuratively: literally, my chest shakes at 2 in the morning sometimes, and it's SpaceX or whoever launching. And what this means for our mission control, our help centers, is that we're scaling at speeds we've never seen before, and it's never going to move this slow again.
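A back-of-the-envelope check on the 40-minute figure, plus the autonomy rule Clint sketches: if the hazard window is shorter than the communications round trip plus ground reaction time, the decision has to be made on board. The conjunction numbers and the 10-minute ground-reaction assumption are invented for the demo:

```python
# Light-time arithmetic and a simple authority-allocation rule.

C_KM_S = 299_792.458  # speed of light, km/s

def round_trip_minutes(distance_km: float) -> float:
    return 2 * distance_km / C_KM_S / 60

# Earth-Mars distance varies from roughly 54.6 million to 401 million km.
for d in (54.6e6, 401e6):
    print(f"{d / 1e6:.0f}M km -> round trip {round_trip_minutes(d):.1f} min")
# ~6 min at closest approach, ~45 min near maximum: hence "up to 40 minutes".

def decide_authority(time_to_hazard_s: float, comm_round_trip_s: float,
                     ground_reaction_s: float = 600) -> str:
    """Default to onboard autonomy when the ground loop cannot close in time."""
    if time_to_hazard_s < comm_round_trip_s + ground_reaction_s:
        return "ONBOARD: autonomous avoidance maneuver"
    return "GROUND: human-in-the-loop command"

print(decide_authority(time_to_hazard_s=900, comm_round_trip_s=2400))   # onboard
print(decide_authority(time_to_hazard_s=7200, comm_round_trip_s=120))   # ground
```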
And so these are the things that really need new ways of thinking in order to accelerate, to scale, to parallelize ourselves, and to augment our operational capacity, our ability to make critical decisions with all the information we have. Space domain awareness: you just talked about 5,000 objects today. What is it going to be next year, 30,000? The year after that, 50,000, 60,000? It's growing exponentially, and it's already too hard. So what do we do as we move forward? How do we maintain custody of tracks, understand what's going on throughout the system, and then act on it? Some of this was great conversation in the previous panel. One of the discussions that came up earlier was the point about manned-unmanned teaming. My colleagues at the Mitchell Institute talk a lot about manned-unmanned teaming, because some of this capability isn't necessarily ready to be fully automated, and we're not necessarily emotionally or ethically ready to unleash totally automated weapons. But for ethical issues deep in space, it's not always going to be an automated solution; it could be an automated set of options. To me, ethics is really all about whether we can trust it. And bringing us back to the edge compute topic: can we trust what we're seeing, when the satellites are looking at something and sending down basically 4% or 5% of the data instead of 100% of it, saving us 95% or 96%? When we're talking about the scale of things, we may not be saving any time at all; we may simply be actually achieving the mission. So ethics to me is trust: can we achieve the mission we set out to do? Can we do what we say? That's what ethics really means to me. And this could actually go into multiple dimensions, because we're talking about making the best decision, but when these systems start getting maybe orders of magnitude more intellectually capable than humans, they will probably make decisions that we think are not the right decisions, but that in fact end up being the best decisions possible. The way I think about it sometimes: imagine taking a quantum mechanics book and trying to explain it to a 5-year-old. That's the gap we're going to have in the future between the incredible intellectual capability of these systems and humans, with our limitations. So will there be a disconnect between what these systems are telling us and what makes sense to us? Logically, one would think there will be. I mean, that 5-year-old might be smarter than you, but you still won't believe that he is, right? It's going to take a while for him to prove himself. I think we've kind of peaked here, and maybe this is the right time to go through a last thought from each of you and wrap up; we'll just come in this direction. Clint, let's start with you. Yeah, thank you. At AWS, I'm honored and privileged that we've been thinking about this for some time. We established the aerospace and satellite business unit at AWS four and a half years ago.
I was privileged to be asked to lead it as I retired from the US Air Force and US Space Force, and we've been thinking about this problem for a while, to the point that two years ago we did an on-orbit demonstration, TRL level 9, where we built a purpose-built hardware and software capability, put it on a satellite launched by our friends at D-Orbit, and ran a series of experiments for a few months on hyperspectral data. We determined that by doing on-orbit pre-processing of data, rather than having to push it to the Earth and process it there, we reduced overall bandwidth requirements by 42% while achieving 100% mission accomplishment. So what it did is allow you to do more mission. It didn't necessarily save any money, but it allowed you to do more mission for the same expenditure. That's where we're going for the future. And at AWS, our team, in our new partnership with ÎÞÓÇ´«Ã½, by the way, we just signed a strategic collaboration agreement together, and ÎÞÓÇ´«Ã½ is now part of the AWS Gen AI Innovation Center, is continuing to team on cloud computing edge capabilities. It won't surprise anybody if I tell you to keep an eye on this space, because the space team at Booz and the space team at AWS, along with a bunch of mission partners, are continuing to push the envelope on where we need to go next for on-orbit edge computing. So stand by, more to follow. OK, Omar, you're surrounded, so you get the next word. OK, so I think the edge will eventually be an ecosystem of edges, and it's going to be agentic systems, where you actually have multiple things, robotic and non-robotic systems and IT components, with agentic systems controlling a whole ecosystem of products that are on the edge. I think that's something we're starting to see right now, it's going to be much more amplified in the future, and we will definitely be leveraging capabilities like these for longer missions as well. All right, Dan, you get to wrap it up. All right. I think that's a wonderful place to be. One thing I've noticed throughout my entire career is that space technology has a funny history of being extremely useful here on the ground. As much as we're talking about sending agents to Mars, which is wonderful but might be a little hard for some people to relate to, what is that going to do for us on the ground? A lot, and this is why we do it; this is why we get up every day and say, I'm going to look at the heavens. And my last thought is just that, and I want to invite you all out to the lobby to check out our four demos; two of them are very relevant to edge AI. We've got an RSO demo, RSOs are resident space objects: can we see satellites in a starry field? Not the whole mission, just the image collection, and we were able to reduce that data stream by 94% in some of our studies. That didn't include any of the operational satellite systems, so that's where a lot of the extra 40% might exist. As well as our technician in a box, a generative AI RAG that fits inside a box completely disconnected from the internet. When you log into ChatGPT, it works great; turn on airplane mode and you will watch it fail. This is what we're talking about: how do we make that work in airplane mode? So constraints drive innovation. That's what you specialize in, all three of you, constraints.
That's a Jeff Bezos quote. It's a good one. Go innovate. That's right. All right, thank you very much, another great discussion. Thank you, Tobias, Clint, Omar, and Dan. As I think about talking about the edge, one specific project that ÎÞÓÇ´«Ã½ is working on with NASA is the lunar terrain vehicle, the pressurized rover, in the effort to get to Mars. But there's another edge device that kind of encapsulates this whole day, when I talked about actors: the spacesuit itself becoming an actor. The spacesuit is an edge device of itself, and thinking about that, along with today's discussions, the living system that's working to support that astronaut, that actor, inside the spacesuit, the edge device, we've genuinely moved the conversation forward, and it's exciting to have had this panel together.
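The reduction figures quoted in the panel's closing round (42% bandwidth in the hyperspectral demonstration, 94% in the RSO study) both come from the same edge-filtering pattern: run a detector on board and downlink only the frames that contain something. A simulated sketch, with an assumed frame size and detection rate rather than real mission data:

```python
# Simulated onboard filtering: keep only frames the detector flags, then
# report the downlink reduction. Detector and frame stream are stand-ins.

import random

random.seed(7)

FRAME_BYTES = 8_000_000  # assumed raw frame size for the demo

def detector(frame_id: int) -> bool:
    # Stand-in for onboard computer vision; assume ~6% of frames show an RSO.
    return random.random() < 0.06

total_frames = 1000
kept = [f for f in range(total_frames) if detector(f)]

raw_bytes = total_frames * FRAME_BYTES
sent_bytes = len(kept) * FRAME_BYTES
print(f"frames kept: {len(kept)}/{total_frames}")
print(f"downlink reduced by {100 * (1 - sent_bytes / raw_bytes):.0f}%")
```

With a detection rate around 6%, the reduction lands in the mid-90% range, consistent with the RSO figure cited above; richer data products, like hyperspectral cubes that must be partially preserved, yield smaller but still substantial savings.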

Closing Remarks

Chris Bogdan, Executive Vice President, Aerospace, ÎÞÓÇ´«Ã½

Click Expand + to view the video transcript

Uh, so, well, folks, that brings us to the end of our programming. I hope you've enjoyed the discussions and the demonstrations today. Again, thank you to AFA for your support and for letting us host this event here. You know, I'm going to do a quick test: they said they had ChatGPT on my performance, so, will I ever be asked to host another Space+AI Summit again? We'll wait and see what the response is. But I'm just humbled to be on the stage with so many bright minds and so many great leaders in this space. This has been an awesome experience, and I'll be happy to do it anytime. But at this time, let me introduce our final speaker for the day: Chris Bogdan, an executive vice president at ÎÞÓÇ´«Ã½ and the leader of our space business. He is a retired United States Air Force general and recently, and this deserves applause, won the WashingtonExec Pinnacle Award for Space Executive of the Year. Chris, come to the stage. He called me a speaker; I'm just going to say thank you, because I know, first of all, for the folks here in person, I'm standing between you and snacks and drinks, so you're not going to want to listen too long, and those of you out in virtual land either have to use the restroom or go eat dinner or lunch, so I'm not going to spend a lot of time talking. I do want to thank the folks who came here in person; we appreciate that. And I do appreciate the folks in virtual land who took the time out of their day to listen to some really great people talk about a very, very important topic. I want to thank AFA for letting us use this space. I believe we are the inaugural symposium here; they just opened this place up a month or so ago, so thank you to our partners there. If you haven't looked around, it's pretty fantastic; take a look at the end. And virtual land, if you haven't seen the new AFA headquarters in Pentagon City, please come by and see it. It's a wonderful place for the advocacy of our Air Force. I want to thank our speakers and panelists. Even though there was some doom and gloom, there was a lot of humor and fun and entertainment, and when you can combine teaching people things, or learning things, or sharing good information, with something that's entertaining, I think you have hit a home run. And on top of that, we're going to end 10 minutes early. When was the last time you were at a symposium that ended early? So if the administration is listening, ÎÞÓÇ´«Ã½ knows how to go fast. And the last thank-you goes to the ÎÞÓÇ´«Ã½ team: Tom Pfeiffer, our leader and president of NSS, as well as my space team. Hillary Comeer, sitting in the back, was really the point lady for getting all of this done, and this is the second year she's done it, so she probably earns a free pass the next time we do this. And the rest of our ÎÞÓÇ´«Ã½ team, who are fantastic, and our folks outside doing the demos: we do have a great team at ÎÞÓÇ´«Ã½. It is a great place to work if you're looking for a job; we're always hiring. So I just want to close by saying this is an incredibly important topic. We're on the precipice of using AI in so many different ways and for so many different things, but it's a double-edged sword.
It can be wonderful and it can be dangerous, so we have to be thoughtful in how we apply it, how we build it, and how we use it. I hope today gave you some ideas to take back to your organizations, and ways of thinking about how AI can help you. Be advocates for it, but at the same time you need to be guardians, not the US Space Force kind of guardians, guardians of this technology, because we don't want it to get out of hand, to the point where we can't use it to make humanity better. So thank you very much again. Just remember, ÎÞÓÇ´«Ã½ got you out of here 10 minutes early, in a beautiful AFA space. Thank you very much, and have a good day.

Questions on how to accelerate outcomes for space? Contact us!



Solve Complex Space Challenges