PART 1 - The need for edge processing

Resolving the competition for bandwidth will drive advances in AI, Machine Learning, Neural Networks (NN) and Deep Learning (DL) so profound that the partnership between humans and machines, on the battlefield, inside command centres and even in the streets, will be the most significant since the Industrial Revolution. These advances are not confined to the processor: they will change the way we interact with systems and equipment. The physical and systems design of that equipment will change dramatically, partly through its growing interaction with humans, but primarily through its ability to make determinations independently of servers and databases.

Here is the nature of the dilemma, and one of the primary drivers of AI development. The need for greater situational awareness through technical superiority has created a demand for near real-time information and answers to increasingly complex questions. This has led to exponential growth in data, and in the transmission of that data to human operators and centralised systems. As humans, we lack the time to digest it all. Monitoring this information is labour-intensive, and critical pieces are often missed. Added to this, our communication channels can be saturated at or near capacity, while the data lakes where this information is stored for historical analysis and safekeeping often sit idle, waiting to be mined for the insights they may offer. As demand for information grows and the competition for the specific information each user or customer needs intensifies, we will max out our networks and be saved only by prioritisation. That means critical information with lower mission or operational priority will simply not be delivered.
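The consequence of prioritisation on a saturated link can be sketched in a few lines. The following is a minimal, illustrative model, not a real tactical network stack: messages carry an assumed numeric priority and size, the link has a fixed capacity, and whatever does not fit is dropped regardless of its content.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Message:
    priority: int                       # lower value = higher mission priority
    size_kb: int = field(compare=False)
    payload: str = field(compare=False)

def transmit(messages, capacity_kb):
    """Send highest-priority messages first until the link saturates;
    anything left over is dropped, whatever its content."""
    heap = list(messages)
    heapq.heapify(heap)                 # orders by priority only
    sent, dropped = [], []
    while heap:
        msg = heapq.heappop(heap)
        if msg.size_kb <= capacity_kb:
            capacity_kb -= msg.size_kb
            sent.append(msg)
        else:
            dropped.append(msg)
    return sent, dropped

queue = [
    Message(1, 40, "contact report"),
    Message(3, 80, "routine sensor sweep"),
    Message(2, 60, "logistics update"),
]
sent, dropped = transmit(queue, capacity_kb=100)
# the lowest-priority traffic is the first casualty of a full link
```

Under these assumptions the routine sweep is dropped once higher-priority traffic fills the 100 kB link, which is exactly the failure mode described above: whatever ranks lowest is never delivered.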

These requirements are driving the direction of much AI development. Instead of transmitting every piece of data and having large numbers of human operators scan it for key information, why can’t we make effective decisions at the source? And why can’t we send only the data and insights that have mission or operational importance, delivering each user’s and customer’s priorities while aiding their situational awareness?

But “importance” implies a level of understanding and judgement about the activity being observed. For a piece of field equipment, such as the optical sensor on a surveillance camera monitoring a particular field of view, knowing what is important is a quantum leap, technologically. This is the first hurdle in applying AI within Intelligence, Surveillance and Reconnaissance (ISR): how can an optical device distinguish what is important from what is essentially just pixel values across a sensor? The solution spans three capability gaps.

The first gap has already been bridged in AI development. Optical sensors fitted with edge processors can now be trained to identify and track predefined objects of interest within their field of view, and even anomalies: undefined objects that may be of importance, or objects that were not there during the last scan. These edge processors analyse the sensor data and, once a particular object has been identified, notify the user or customer. The notification may be a simple predefined alert, or an image frame with descriptive data superimposed; it may even include a probability of correctness and a reason why the edge processor raised the alert, so that the user understands its background and nature.
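The shape of such a notification can be sketched as a small data structure. This is a hypothetical payload, assuming the on-sensor detector emits a class label, a confidence score and a frame reference; the field names and the anomaly rule are illustrative, not any fielded system's format.

```python
from dataclasses import dataclass

@dataclass
class EdgeAlert:
    object_class: str   # predefined class of interest, or "anomaly"
    confidence: float   # probability of correctness, 0.0 to 1.0
    reason: str         # why the edge processor raised this alert
    frame_id: int       # image frame attached to the notification

def make_alert(detection, known_classes):
    """Turn a raw on-sensor detection into a user-facing alert.
    Detections outside the trained class set are flagged as anomalies."""
    cls = detection["class"]
    if cls in known_classes:
        reason = f"matched trained object class '{cls}'"
    else:
        cls = "anomaly"
        reason = "object does not match any trained class"
    return EdgeAlert(cls, detection["score"], reason, detection["frame_id"])

alert = make_alert({"class": "vehicle", "score": 0.92, "frame_id": 417},
                   known_classes={"vehicle", "person", "weapon"})
```

The point of the `reason` field is the one made above: the alert carries not just a classification and a probability but an explanation the user can act on without seeing the raw sensor feed.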

The second gap we must bridge, to further reduce data throughput while increasing meaningful situational awareness, is ‘context’. If our sensor and edge processor can identify an object of interest within the field of view, then to determine its importance the processor must also be able to scan, identify and determine connections within the image frame or frames. If it can determine connections between objects across frames, a level of context can be derived. For example, if our edge processor sends an alert and images of a group of individuals with large tubes and plate-like objects within its field of view, that tells a limited story. But if it sends an alert that those same individuals are assembling the plates and tubes into an arrangement with an 89% probability of being a mortar position in the act of setup, that is a significant increase in situational awareness. And if it sends the image frames, a data overlay of coordinates, the probabilities and a reason for its assessment to a user or customer who is tending to other priorities, then that use of AI is of great value to a command or operations centre that would otherwise have human operators staring at a screen for hours or days on end.
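The simplest form of such context derivation can be sketched as pattern matching over the classes detected in a frame. This is a deliberately naive, rule-based sketch using the mortar example above; the pattern table and probabilities are placeholders, and a real edge processor would learn these relationships rather than look them up.

```python
from dataclasses import dataclass

@dataclass
class ContextAssessment:
    label: str
    probability: float
    reason: str

# Illustrative pattern table: co-occurring detection classes mapped to an
# assessed activity and a confidence. Values are invented for this example.
PATTERNS = [
    ({"person", "tube", "base_plate"}, "mortar position in setup", 0.89),
    ({"person", "tube"}, "possible indirect-fire equipment", 0.55),
]

def assess_context(frame_detections):
    """Return the strongest pattern matched by the classes present in the
    frame, or None if no pattern fires."""
    present = {d["class"] for d in frame_detections}
    best = None
    for required, label, prob in PATTERNS:
        if required <= present and (best is None or prob > best.probability):
            reason = f"detected {', '.join(sorted(required))} in proximity"
            best = ContextAssessment(label, prob, reason)
    return best

frame = [{"class": "person"}, {"class": "person"},
         {"class": "tube"}, {"class": "base_plate"}]
result = assess_context(frame)
```

With people, tubes and base plates all present, the first pattern fires and the assessment carries both the 89% figure and the reason, which is the difference between "objects in view" and a situational picture.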

The degree of complexity and accuracy of the ‘context’ will be determined by how advanced the AI within the edge processor is, but this leads to our third gap in reducing continuous communication traffic while increasing determinations at the edge: the network. The network I am alluding to is an array of independent edge processors of different kinds, embedded in assets such as UAS/RPAS (drone) optics, armoured vehicle turrets, vehicle-mounted sensors, surveillance equipment such as optical tracking towers, missile launch platforms, and helmet and body cameras, all the way down to the humble binocular or laser range finder.

Edge processors of varying complexity and tasking come together to form temporary networks, querying each other and identifying key data from other sensors to build a greater picture for their own independent missions. They seamlessly connect and share information about the environment they have just surveyed, each one focused not only on its own priorities but on the collective priorities of the network. The information each shares may be only a small fragment of a larger information packet, which in turn may add to the overall situational awareness or to a significant determination by the network. The network may also independently validate one node’s determination against the many, providing network-validated information.
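Validating "the one against the many" can be sketched as a simple quorum vote over peer determinations. This is an illustrative consensus rule, assuming each peer reports a class label for the same observed object; the 0.6 quorum threshold and the report format are invented for the example.

```python
from collections import Counter

def network_validate(determinations, quorum=0.6):
    """Validate one sensor's determination against the many: accept the
    majority class only if enough of the ad-hoc network agrees."""
    votes = Counter(d["class"] for d in determinations)
    label, count = votes.most_common(1)[0]
    agreement = count / len(determinations)
    return {"class": label,
            "agreement": agreement,
            "validated": agreement >= quorum}

reports = [
    {"sensor": "uas_optic",  "class": "bunker"},
    {"sensor": "tower_cam",  "class": "bunker"},
    {"sensor": "helmet_cam", "class": "vehicle"},
]
result = network_validate(reports)
```

Here two of three peers agree, meeting the quorum, so the network forwards a validated "bunker" determination; a real implementation would also weight votes by each sensor's confidence and vantage point.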

On the battlefield, this behind-the-scenes collaboration and information vetting could form targeting packets fed to gun turrets or Battle Management Systems, even while the vehicle and turret are pointing in a different direction. By the time a turret slews around, target packets consisting of direction (observer to target), munition recommendation (High Explosive, Delayed, Proximity, etc.), elevation, GPS coordinates and wind conditions (perhaps fed by a small environmental UAS overhead) have already been captured by the weapon system’s Fire Control Units, awaiting human activation. It isn’t hard to imagine edge processors aboard UAS flying overhead, identifying hardened defensive positions such as bunkers, soft- and hard-skinned vehicles and weapons systems by type, then handballing that information to a Main Battle Tank or anti-armour team for targeting, with no human intervention until the moment a weapons-firing decision is required.
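The targeting packet described above can be sketched as a structure staged in a fire-control queue. Field names, units and the queue mechanism are illustrative assumptions, not any real Fire Control Unit interface; the one deliberate design point, taken from the text, is that nothing leaves the queue without explicit human release.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TargetPacket:
    """Mirrors the packet described above; all names are illustrative."""
    bearing_mils: float    # observer-to-target direction
    elevation_mils: float
    gps: tuple             # (latitude, longitude)
    munition: str          # e.g. "HE", "Delayed", "Proximity"
    wind_mps: float        # perhaps fed by an environmental UAS overhead
    source_sensor: str

def stage_for_fire_control(packet, queue):
    """Stage a packet for the Fire Control Unit. The human_release flag
    stays False until an operator explicitly authorises firing."""
    queue.append({"packet": packet, "human_release": False})
    return queue

fcu_queue = []
stage_for_fire_control(
    TargetPacket(bearing_mils=3200.0, elevation_mils=45.0,
                 gps=(-33.86, 151.21), munition="HE",
                 wind_mps=4.2, source_sensor="uas_overhead"),
    fcu_queue,
)
```

Making the packet `frozen` reflects the intent that, once vetted by the network, a staged packet is immutable; only the release decision, kept outside the packet, remains in human hands.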