![](https://www.militaryspot.com/wp-content/uploads/2025/02/navy-laser-drone.png)
FEBRUARY 13, 2025 – Lasers enable the U.S. Navy to fight at the speed of light. Armed with artificial intelligence (AI), shipboard defensive laser systems can make the rapid, accurate targeting assessments necessary for today’s complex, fast-paced operating environment, where drones have become an increasing threat.
To counter the rapidly mounting threats posed by the proliferation of inexpensive uncrewed aerial systems (UAS), or drones, Naval Postgraduate School (NPS) researchers and collaborators are applying AI to automate critical parts of the tracking system used by laser weapon systems (LWS). Improving target classification, pose estimation, aimpoint selection and aimpoint maintenance greatly increases the ability of an LWS to assess and neutralize a hostile UAS. Enhanced decision advantage is the goal.
The tracking system of an LWS follows a sequence of demanding steps to successfully engage an adversarial UAS. When conducted by a human operator, those steps can be time consuming, especially against numerous drones in a swarm. Add in the challenge of an adversary’s missiles and rockets traveling at hypersonic speeds, and mounting a proper defense becomes even more complicated, and more urgent.
Directed energy and AI are both considered DOD Critical Technology Areas. By automating and accelerating the sequence for targeting drones with an AI-enabled LWS, a research team from NPS, Naval Surface Warfare Center Dahlgren Division, Lockheed Martin, Boeing and the Air Force Research Laboratory (AFRL) developed an approach to have the operator on-the-loop overseeing the tracking system instead of in-the-loop manually controlling it.
“Defending against one drone isn’t a problem. But if there are multiple drones, then sending million-dollar interceptor missiles becomes a very expensive tradeoff because the drones are very cheap,” says Distinguished Professor Brij Agrawal, NPS Department of Mechanical and Aerospace Engineering, who leads the NPS team. “The Navy has several LWS being developed and tested. LWS are cheap to fire but expensive to build. But once it’s built, then it can keep on firing, like a few dollars per shot.”
To achieve this level of automation, the researchers generated two datasets that contained thousands of drone images and then applied AI training to the datasets. This produced an AI model that was validated in the laboratory and then transferred to Dahlgren for field testing with its LWS tracking system.
Funded by the Joint Directed Energy Transition Office (DE-JTO) and the Office of Naval Research (ONR), this research addresses advanced AI and directed energy technology applications cited in the CNO NAVPLAN.
During a typical engagement with a hostile drone, radar makes the initial detection and feeds the contact information to the LWS. The operator of the LWS uses its infrared sensor, which has a wide field of view, to start tracking the drone. Next, its high energy laser (HEL) telescope, with high magnification and a narrow field of view, continues the tracking as fast-steering mirrors maintain the lock on the drone.
With a video screen showing the image of the drone in the distance, the operator compares it to a target reference to classify the type of drone and identify its unique aimpoints. Each drone type has different characteristics, and its aimpoints are the locations where that particular drone is most vulnerable to incoming laser fire.
Along with determining the drone type and aimpoints, the operator must identify the drone’s pose, or orientation relative to the LWS, which is necessary for locating its aimpoints. The operator looks at the drone’s image on the screen to determine where to point the LWS and then fires the laser beam.
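In rough outline, that hand-off from radar cue to laser fire is a simple sequence of steps. The short Python sketch below is purely illustrative; every function, class and value in it is hypothetical and does not reflect any actual LWS software.

```python
# A purely illustrative sketch of the engagement sequence described above.
# All names and values are hypothetical, not real LWS software.

from dataclasses import dataclass

@dataclass
class RadarCue:
    bearing_deg: float
    elevation_deg: float
    range_m: float

def acquire_with_infrared(cue: RadarCue) -> dict:
    # Wide-field-of-view infrared sensor slews toward the radar cue and locks on.
    return {"cue": cue, "locked": True}

def hand_off_to_telescope(coarse: dict) -> dict:
    # Narrow-field-of-view HEL telescope takes over; fast-steering mirrors
    # keep the target centered in the frame.
    return {"image": "SWIR frame of the drone", "locked": coarse["locked"]}

def engage(cue: RadarCue) -> None:
    coarse = acquire_with_infrared(cue)       # 1. radar cue -> infrared track
    fine = hand_off_to_telescope(coarse)      # 2. infrared -> telescope track

    # 3. The steps the NPS team is automating with AI (placeholder values):
    drone_type = "Reaper"                     #    target classification
    pose = (0.0, 5.0, 180.0)                  #    pose estimation (roll, pitch, yaw)
    aimpoint = "most vulnerable spot"         #    aimpoint selection

    # 4. Fire and hold the beam on that spot (aimpoint maintenance).
    print(f"Engaging {drone_type} at {aimpoint}, pose {pose}, lock={fine['locked']}")

engage(RadarCue(bearing_deg=45.0, elevation_deg=10.0, range_m=8000.0))
```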
Long distances and atmospheric conditions between the LWS and the drone can adversely affect the image quality, making all these identifications more challenging and time consuming to conduct.
After all these preparations, the operator cannot simply move a computerized crosshair across the screen onto an aimpoint and press the fire button as if it were a kinetic weapon system, like an anti-aircraft gun or interceptor missile.
Though lasers travel at the speed of light, they don’t instantaneously destroy a drone the way lasers are depicted in sci-fi movies. The more powerful the laser, the more energy it delivers in a given time. To heat a drone enough to cause catastrophic damage, the laser must keep firing at it the entire time.
But there’s a catch. The laser beam must be continually held on the same spot. If the drone turns and the laser beam doesn’t adjust, the spot it was initially targeting will no longer heat up. Whatever new spot is hit by the laser beam will start to heat, but it might not be the aimpoint.
If the drone moves continuously, the laser beam will wander along its surface unless it is continuously re-aimed, and the laser’s energy will be distributed across a large area instead of being concentrated at a single point. The process of keeping the laser beam firing at one spot is called aimpoint maintenance.
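The arithmetic behind aimpoint maintenance is straightforward: the energy deposited on a spot is the laser’s power multiplied by the time the beam dwells there. The Python sketch below illustrates the idea with made-up numbers; the power level, dwell time and number of spots are not actual LWS parameters.

```python
# Illustrative arithmetic only; the power level, dwell time and spot count
# are made-up numbers, not actual LWS parameters.

laser_power_w = 30_000          # e.g., a 30-kilowatt-class laser
dwell_time_s = 3.0              # how long the beam keeps firing at the target

# If the beam holds on one spot, all the energy goes into that spot.
energy_one_spot_j = laser_power_w * dwell_time_s
print(f"Held on one aimpoint: {energy_one_spot_j / 1000:.0f} kJ delivered")

# If the drone maneuvers and the beam wanders over, say, 10 spots, each spot
# receives only a tenth of the energy, likely not enough at any one of them.
spots_visited = 10
energy_per_spot_j = energy_one_spot_j / spots_visited
print(f"Wandering over {spots_visited} spots: {energy_per_spot_j / 1000:.0f} kJ per spot")
```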
In 2016, construction of the High Energy Laser Beam Control Research Testbed (HBCRT) was completed by the NPS research team. The HBCRT was designed to replicate the functions of an LWS found aboard a ship, such as the 30-kilowatt, XN-1 Laser Weapon System operated on USS Ponce (LPD 15) from 2014 to 2017.
Early on, the HBCRT was utilized at NPS to study adaptive optics techniques to correct for aberrations from atmospheric conditions that degrade the quality of the laser beam fired from an LWS. Later, the addition of state-of-the-art deformable mirrors built by Northrop Grumman allowed NPS researchers to investigate further impacts of deep turbulence.
Over the years, 15 master’s and two Ph.D. degrees have been earned by NPS officer-students contributing interdisciplinary research into hardware and software related to the HBCRT. Investigations by U.S. Navy Ensigns Raymond Turner, who earned an MS in astronautical engineering in 2022, and Raven Heath, who earned an MS in aeronautical engineering in 2023, added to this work. Turner helped integrate AI algorithms into the HBCRT for aimpoint selection and maintenance, and Heath used deep learning to research AI-based target key-point estimation.
Now the HBCRT is also being used to create catalogs of drone images to make real-world datasets for AI training.
Built by Boeing, the HBCRT has a 30 cm diameter, fine-tracking HEL telescope and a coarse-tracking, mid-wavelength infrared (MWIR) sensor. The pair is called the beam director when coupled together on a large gimbal that swivels them in unison up-and-down and side-to-side.
“The MWIR is thermal,” says Research Associate Professor Jae Jun Kim, NPS Department of Mechanical and Aerospace Engineering, who specializes in optical beam control. “It looks at the mid-wavelength infrared signal of light, which is related to the heat signature of the target. It has a wide field of view. The gimbal moves to lock onto the target. Then the target is seen through the telescope, which has very small field of view.”
A 1-kilowatt laser beam (roughly a million times more powerful than a classroom laser pointer) can fire from the telescope. When the laser is used, its beam is generated by a separate external unit and directed into the telescope, which projects it onto the target. However, the laser isn’t required for the initial development of this research, which allows the work to be conducted easily inside a laboratory.
With a short-wavelength infrared (SWIR) tracking camera, the telescope can record images of a drone that is miles away. Replicating the view of a distant drone is necessary, but impossible to do directly in a small laboratory. To resolve this dilemma, researchers mounted miniature 3D-printed titanium drone models fabricated by AFRL into a range-in-a-box (RIAB).
Constructed on an optical bench, the RIAB accurately replicates a drone flying miles away from the telescope by using a large parabolic mirror and other optical components. This research used a miniature model of a Reaper drone. When a SWIR image is taken of the drone model by the telescope, it appears to the telescope as if it were seeing an actual full-sized Reaper drone.
The drone model is attached to a gimbal with motors that can change its pose along the three rotational flight axes of roll (x), pitch (y) and yaw (z). This allows the telescope to observe real-time changes in the direction the drone model faces.
Simply put, pose is the orientation of the drone that the telescope “sees” in its direct line of sight. Is the drone heading straight-on or flying away, diving or climbing, banking or cruising straight and level, or moving in some other way?
By measuring the angles about the x-, y- and z-axes for a drone model in a specific orientation, the pose of the drone can be precisely defined and recorded. This important measurement is called the pose label.
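As a rough illustration, a single entry of such a catalog can be thought of as an image paired with its three pose angles. The sketch below is a hypothetical representation only; the field names and values are not taken from the NPS datasets.

```python
# A minimal sketch of pairing an image with its pose label, assuming the pose
# is recorded as roll, pitch and yaw angles in degrees. Names are illustrative.

from dataclasses import dataclass
import numpy as np

@dataclass
class PoseSample:
    image: np.ndarray        # 256 x 256 SWIR frame of the drone model
    roll_deg: float          # rotation about the x-axis
    pitch_deg: float         # rotation about the y-axis
    yaw_deg: float           # rotation about the z-axis

sample = PoseSample(
    image=np.zeros((256, 256), dtype=np.float32),  # placeholder frame
    roll_deg=0.0, pitch_deg=15.0, yaw_deg=90.0,    # e.g., climbing, seen side-on
)
print(sample.roll_deg, sample.pitch_deg, sample.yaw_deg)
```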
The NPS researchers created two large representative datasets for AI training to produce the AI model for automating target classification, pose estimation, aimpoint selection and aimpoint maintenance. The AI training used convolutional neural networks with deep learning, a machine learning technique loosely inspired by the neural pathways of the human brain. A recent journal article in Machine Vision and Applications by NPS faculty Leonardo Herrera, Jae Jun Kim, and Brij Agrawal describes the datasets and AI training in detail.
Each piece of data in the dataset contained a 256×256-pixel image of a Reaper drone in a unique pose with its corresponding pose label. Lockheed Martin used computer generation to create the synthetic dataset, which contained 100,000 images. Created with the HBCRT and RIAB at NPS, the real-world dataset contained 77,077 images.
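For readers curious what a pose-estimating convolutional network looks like in code, the sketch below shows a deliberately tiny PyTorch model that maps a 256×256 image to three pose angles. It is a generic illustration of the technique, not the network described in the Machine Vision and Applications article.

```python
# A minimal, illustrative convolutional network for pose regression.
# Architecture and sizes are assumptions, not the NPS team's actual model.

import torch
import torch.nn as nn

class PoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),   # 256 -> 128
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),  # 128 -> 64
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 3)   # predict roll, pitch, yaw

    def forward(self, x):
        x = self.features(x)
        return self.head(x.flatten(1))

model = PoseNet()
batch = torch.zeros(8, 1, 256, 256)    # eight single-channel 256x256 frames
print(model(batch).shape)              # -> torch.Size([8, 3])
```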
“If we train on only clean pictures, it won’t work. That is a limitation,” says Agrawal. “We need a lot of data with different backgrounds, intensities of the sun, turbulence and more. That’s why when using AI, it takes a lot of work to create the data. And the more data you have, the higher the fidelity.”
For the AI model, three different AI training scenarios were generated and compared to determine which scenario performed the best. The first scenario only used the synthetic dataset, the second used both the synthetic and real-world datasets, and the third only used the real-world dataset.
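Conceptually, the three scenarios amount to training the same model on different combinations of the two datasets, as in the placeholder sketch below; the tiny random datasets here merely stand in for the real 100,000- and 77,077-image sets.

```python
# Sketch of the three training scenarios. The datasets are tiny random
# placeholders standing in for the synthetic and real-world image sets.

import torch
from torch.utils.data import TensorDataset, ConcatDataset

def fake_dataset(n):
    images = torch.rand(n, 1, 256, 256)   # placeholder drone images
    poses = torch.rand(n, 3) * 360.0      # placeholder roll/pitch/yaw labels
    return TensorDataset(images, poses)

synthetic = fake_dataset(100)    # stands in for the 100,000 synthetic images
real_world = fake_dataset(77)    # stands in for the 77,077 real-world images

scenarios = {
    "synthetic only": synthetic,
    "synthetic + real-world": ConcatDataset([synthetic, real_world]),
    "real-world only": real_world,
}

for name, dataset in scenarios.items():
    # In the actual study, a model is trained on each scenario and the
    # resulting errors are compared; here we just report the sizes.
    print(f"{name}: {len(dataset)} training samples")
```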
Because the size of the datasets and of their individual images demanded enormous computational power for the AI training, the researchers used an NVIDIA DGX workstation with four Tesla V100 GPUs. NPS operates numerous NVIDIA workstations, and in December 2024, to continue advancing AI-based technologies, NPS formed a partnership with NVIDIA to become one of its AI Technology Centers.
“Once we’ve generated a model, we want to test how good it is,” says Agrawal. “Assume you have a dataset with 100,000 data. We’ll train on 80,000 data and test on 20,000 data. Once it’s good with 20,000 data, we’re finished training it.”
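In code, the split Agrawal describes is a routine step; the sketch below shows one way it might be done with a placeholder dataset, with only the 80/20 ratio taken from his description.

```python
# A minimal sketch of an 80/20 train/test split on a placeholder dataset.

import torch
from torch.utils.data import TensorDataset, random_split

# 100,000 placeholder samples with 3 pose values each (features are dummies).
dataset = TensorDataset(torch.rand(100_000, 64), torch.rand(100_000, 3))

train_size = int(0.8 * len(dataset))      # 80,000 for training
test_size = len(dataset) - train_size     # 20,000 held out for testing
train_set, test_set = random_split(dataset, [train_size, test_size])

print(len(train_set), len(test_set))      # -> 80000 20000
```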
U.S. Navy Ensign Alex Hooker, a Shoemaker Scholar who recently earned his M.S. in astronautical engineering from NPS and is now a student naval aviator, contributed to testing the pose estimations of the AI model.
“A way to improve the reliability of the model at predicting the pose of a UAS in 3D space by taking 2D input images is detecting what’s called out of distribution data,” he says. “There are different ways to detect whether an image can be trusted or whether it is out of distribution.”
By feeding test images from the dataset into the existing AI model and comparing the model’s output poses to the images’ pose labels, Hooker could continually test and refine the AI model.
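One common family of out-of-distribution checks flags an input when the model’s confidence in its prediction falls below a threshold. The sketch below illustrates that generic idea with a placeholder classifier; it is not necessarily the specific method Hooker investigated.

```python
# A generic illustration of out-of-distribution detection by confidence
# thresholding; not necessarily the method used at NPS.

import torch
import torch.nn.functional as F

torch.manual_seed(0)
classifier = torch.nn.Linear(64, 5)   # placeholder "drone type" classifier
threshold = 0.6                       # illustrative confidence cutoff

def is_out_of_distribution(features: torch.Tensor) -> bool:
    """Flag an input whose top softmax probability is suspiciously low."""
    with torch.no_grad():
        probs = F.softmax(classifier(features), dim=-1)
    return probs.max().item() < threshold

print(is_out_of_distribution(torch.rand(64)))   # True or False for a random input
```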
Working now with Agrawal is NPS Space Systems Engineering student U.S. Navy Ensign Nicholas Messina, who graduated from the U.S. Naval Academy in aerospace engineering last year and is a Bowman Scholar headed for the Nuclear Navy career track after NPS.
“My thesis is a little bit of a sidestep in the way that I am working with artificial intelligence and optics, but Dr. Agrawal and Dr. Herrera have been great,” said Messina. “My research is specifically working on optical turbulence prediction and classification. I train my AI models off large image datasets and am working to improve accuracy in how the model predicts the wavefronts from a picture.”
One of the biggest challenges that has faced automated image-based drone identification and classification is pose ambiguity. This occurs when the pose of the actual drone in the distance is indistinguishable from one or more of its other poses.
Because an LWS views the 3D drone flying far away as 2D images in the infrared spectrum, the features of the drone’s shape effectively disappear into a silhouette. For example, the silhouette of a drone flying directly head-on would look the same as if it were flying away in the exact opposite direction.
The researchers solved pose ambiguity for the AI model by introducing radar cueing. Tracking data from a radar can reveal whether a drone is approaching, withdrawing or moving in some other way. For the AI training, the pose labels of the drone images were used to mimic real radar sensor output. The team also developed a separate method to simulate radar data and provide radar cueing during LWS operation when actual radar data is not available.
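A simplified way to picture how a radar cue breaks the tie: if the silhouette is consistent with two opposite headings, the radar’s range rate (closing or opening) selects the one that matches. The sketch below is an illustration of that reasoning only, not the team’s actual algorithm.

```python
# Simplified illustration of resolving pose ambiguity with a radar cue.
# Assumes two candidate yaws 180 degrees apart and a radar-reported range rate.

def resolve_yaw(candidate_yaws_deg, range_rate_m_s):
    """Pick the candidate pose consistent with the radar's range rate."""
    approaching = range_rate_m_s < 0           # range decreasing -> coming toward us
    for yaw in candidate_yaws_deg:
        # Treat yaw near 0 as flying toward the sensor, near 180 as flying away.
        heads_toward_sensor = abs(yaw % 360) < 90 or abs(yaw % 360) > 270
        if heads_toward_sensor == approaching:
            return yaw
    return candidate_yaws_deg[0]               # fall back if no candidate fits

# The silhouette alone cannot distinguish 0 deg (head-on) from 180 deg (tail-on);
# a closing range rate from the radar breaks the tie.
print(resolve_yaw([0.0, 180.0], range_rate_m_s=-40.0))   # -> 0.0
```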
Overall, the AI model from the scenario using only the real-world dataset performed best, producing the least error.
For the next phase of the research, the team transferred the AI model to Dahlgren for field testing on its LWS tracking system.
“Dahlgren has our model, which we trained on the dataset collected indoors on the HBCRT and complemented with synthetic data,” says Leonardo Herrera, who runs the AI laboratory at NPS and is a faculty associate in the Department of Mechanical and Aerospace Engineering. “They can collect live data using a drone and create a new dataset to train on top of ours. That’s called transfer learning.”
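In practice, transfer learning of this kind often means keeping the early layers of the pretrained network fixed and fine-tuning the rest on the newly collected data. The PyTorch sketch below illustrates that general recipe with placeholder data and a hypothetical checkpoint file; it is not Dahlgren’s actual pipeline.

```python
# Illustrative transfer learning: freeze a pretrained backbone and fine-tune
# the head on new field data. Model, data and checkpoint name are placeholders.

import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Conv2d(1, 16, 3, 2, 1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
head = nn.Linear(16, 3)                         # roll, pitch, yaw
model = nn.Sequential(backbone, head)
# model.load_state_dict(torch.load("nps_pretrained.pt"))  # hypothetical checkpoint

for param in backbone.parameters():             # keep the pretrained features fixed
    param.requires_grad = False

optimizer = torch.optim.Adam(head.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One fine-tuning step on a placeholder batch of newly collected images.
images, pose_labels = torch.rand(8, 1, 256, 256), torch.rand(8, 3)
loss = loss_fn(model(images), pose_labels)
loss.backward()
optimizer.step()
print(float(loss))
```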
Creating more data under additional conditions and of other drone types will also continue at NPS. Just because the AI model is already trained on a Reaper doesn’t mean it’s reliable for other drones. But even before the AI model can be deployed, it must first be integrated into Dahlgren’s tracking system.
“We now have the model running in real-time inside of our tracking system,” says Eric Montag, an imaging scientist at Dahlgren and leader of a group that developed an LWS tracking system currently used by the High Energy Laser Expeditionary (HELEX) system, an LWS mounted on a land-based demonstrator.
“Sometime this calendar year, we’re planning a demo of the automatic aimpoint selection inside the tracking framework for a simple proof of concept,” Montag adds. “We don’t need to shoot a laser to test the automatic aimpoint capabilities. There are already projects—HELEX being one of them—that are interested in this technology. We’ve been partnering with them and shooting from their platform with our tracking system.”
When field testing occurs, HELEX will start tracking from radar cues and use pose estimation to automatically select an aimpoint. The tracking system of HELEX will be semi-autonomous. So, instead of manually controlling aspects of the tracking system from in-the-loop, the operator will oversee it from on-the-loop.
Beyond LWS, this research opens possibilities for use throughout the fleet: tracking systems on other platforms could also benefit from this type of AI-enabled automation. At a time when shipboard defenses can be threatened by massive waves of drones, missiles and rockets, greater efficiency in determining friend or foe and engaging hostile threats could be a game-changer, speeding decision advantage.
From Dan Linehan