![](https://dronelife.com/wp-content/uploads/2023/12/Artificial-Intelligence-300x200.jpg)
CC0, via Wikimedia Commons
By: Dawn Zoldi (Colonel USAF Ret.)
ASTN Group™, creators of the Valmiz™ Augmentive Artificial Intelligence (AAI), recently captivated audiences at the 4,000 attendee-strong AI Asia Expo 2023 in the Philippines, the site of one of the company’s two international headquarters (the other is in the U.S.). At this exclusive, high-caliber event focused on the distinct needs and challenges of the Southeast Asian region, over 90 speakers from 15 countries engaged in strategic discussions around AI and its responsible integration across diverse sectors. Among them was Rommel Martínez, ASTN Group’s™ Chief Technology Officer, an AI researcher with over 24 years of experience in the tech industry and the brains behind Valmiz™, a ground-breaking multi-agent, human-centric AI. This article provides highlights from his presentations on the problems surrounding contemporary AI and his company’s cutting-edge AI solutions, which take novel approaches.
Martínez outlined the limitations of popular AI models such as typical machine learning (ML) and neural networks, including GPT systems.
“Modern AI systems have unpredictable behavior,” Martínez explained. “They’re known to hallucinate. There have been quite a few cases of accidents with self-driving cars, namely, Tesla and Cruise. There was also a case of a military drone that attacked its operator during a simulation.”
They’re “black boxes,” he said, which are not good at handling “black swans,” events that are highly improbable but still occur. Such black box systems also don’t allow users to examine data while using them.
Modern AI systems also cannot stand alone reliably. Most, if not all, of them have centralized operations. That means that if the key servers become unavailable, the key AI functionality becomes disabled or impaired.
They’re inefficient, too. It takes the “energy of a small city to train them,” Martínez said. He noted that OpenAI enlisted Kenyan workers, at a pay of less than $2 USD per hour, to actively comb, sift, and filter data for its popular generative AI.
Finally, these systems are not environmentally sound. The carbon footprint of ML systems in 2022, he said, reached 2020 metric tons.
As an alternative, Martínez presented Valmiz™, an Augmentive AI, a term he coined, which has been more than 20 years in the making. Augmentive AI provides toolkits that augment an organization’s existing ideas, workflows, and pipelines, drawing on knowledge and expertise from different knowledge domains, while putting a human at the center to supervise operations.
It combines features from knowledge bases, traditional databases, and symbolic AI.
Martínez explained the concepts behind his new approach to AI. (Nerd alert – this gets technical!)
When a piece of information is connected to other pieces of information, he said, they form a network. Each of those connecting nodes of information is, in turn, connected to more pieces of information. There is a point, a threshold, where an information branch has only a few connecting nodes relative to the starting node. When you collect this information together, it forms a compound object, a collective network that has both direct and indirect paths to the parent node.
Martínez refers to the amount of information that can be accessed from the center of this network, all the way to the edge, as the “information radius.” This radius sets a perimeter around what can be considered within the context of the central idea.
“Once we are able to compute the information radius of any idea, we are able to effectively contain and aggregate information into a single globular unit,” he said. “This unit can then interact with other such units to form super networks.”
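Valmiz™’s internals have not been published, but the idea can be illustrated with a minimal sketch. Assuming ideas are modeled as nodes in a graph of connected information (the function name, toy network, and graph representation below are this article’s own illustration, not ASTN code), the “information radius” of an idea can be read as the farthest hop distance reachable from it, and everything inside that perimeter as the aggregated unit:

```python
from collections import deque

def information_radius(graph, start):
    """Breadth-first search from `start`; returns the maximum hop
    distance to any reachable node (the idea's 'radius') and the
    set of nodes inside that perimeter."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, ()):
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return max(dist.values()), set(dist)

# A toy knowledge network: each idea points to related ideas.
network = {
    "mango": ["fruit", "farm"],
    "fruit": ["food"],
    "farm": ["truck"],
    "truck": [],
    "food": [],
}
radius, unit = information_radius(network, "mango")
```

Here the radius of “mango” is 2 hops, and the five nodes inside that perimeter form the single “globular unit” that can then be composed with other units.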
In principle, every idea, every object, is connected to every other. Martínez used the example of a mango and a truck. A mango is connected to a truck insofar as a truck has the potential to transport mangoes. Computing the information between those two items is what Martínez calls the “information distance.” The smaller the information distance from a mango to a truck, the less contextual information they need to share. The larger the information distance, however, the more contextual information they will each need to share. This can be derived both actively and passively.
“By being able to compute information distances, we are able to determine the amount of information traversal needed to properly contextualize them. This also provides information between two points which may be of great interest to a user,” according to Martínez.
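Under the same hypothetical graph model as above (again this article’s illustration, not ASTN’s implementation), the information distance between two ideas can be sketched as the shortest number of hops between them:

```python
from collections import deque

def information_distance(graph, a, b):
    """Shortest number of hops between two ideas in a knowledge
    network; fewer hops means less shared context is needed."""
    if a == b:
        return 0
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        for neighbor in graph.get(node, ()):
            if neighbor == b:
                return d + 1
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, d + 1))
    return None  # no connecting path found

# Mango and truck are linked indirectly, through the farm
# that ships the produce.
links = {
    "mango": ["farm"],
    "farm": ["mango", "truck"],
    "truck": ["farm"],
}
hops = information_distance(links, "mango", "truck")
```

In this toy network the mango-to-truck distance is 2 hops, quantifying how much intermediate context (the farm) the two items need to share.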
Having the knowledge necessary to perform a task is the key to doing it effectively. Having this kind of accessible and connected knowledge at one’s disposal can enable doing a task in days instead of a month. Normally, acquiring that kind of information would be difficult and time consuming. Now there is Augmentive AI to do this across a wide array of use cases.
Introducing Augmentive AI
“We named it Augmentive AI,” Martínez explained, “because to augment means to enhance, to increase and to support. Valmiz™ is used to enhance an organization’s existing process without changing the workflow.”
ASTN Group™ uses the same type of AI technology as NASA’s Remote Agent on Deep Space 1. That mission, a flyby of an asteroid and a comet 100 million miles from Earth, required NASA engineers to develop AI that enabled remote code updates on the spacecraft in order to make mission corrections.
“Building on NASA’s legacy, we created true distributed AI,” Martínez noted. “We removed the traditional heavy reliance on dedicated servers and took a non-monolithic approach.”
Valmiz™ employs multiple-agent redundancy. Because the AI agents are decoupled, they can act independently, performing specific tasks, or they can be used for tighter integration. Such redundant AI agents can receive and execute instructions and still have the ability to converge, to form a “hive mind.”
This further allows the Valmiz™ program and data flow to be examined and patched while operations are being executed. It enables users to perform preemptive manipulation and task adjustments, literally on the fly.
Each agent in Valmiz™—Vera, Veda, Vega, Vela, Vix—has its own specific role.
Veda is the core unit that fuses knowledge graphs and knowledge bases. It is the component of Valmiz™ responsible for converting raw data into indexable knowledge stores. When Veda ingests data sources, it creates a semantic network of all the available data points from those various sources.
“The true power of Veda,” Martínez said, “comes from creating worlds within worlds.” Users can collect heterogeneous information banks into a single block of information. “This is what I call ‘fusing,’” he continued.
The data can be pictures, logistics data, and so on. A user can mix them together and they will aggregate into a single block of information. Once collected together, the information is backed up.
“Within Veda, you can mix different kinds of bindings to correlate and connect information together. These are highly malleable. Registries are the top-level building blocks of Veda. You can manipulate information within Veda across time. You can have a time series layer traversal and you can set data snapshots—meaning that, at any point in the computation, you can roll back,” Martínez explained.
Every computation performed within Valmiz™ is captured with no loss of information. With traditional systems, once you make a computation, it is lost going forward. You cannot go back to it.
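ASTN has not detailed how fusing works internally; as a rough sketch (the `fuse` function and the sample banks below are hypothetical, invented for illustration), heterogeneous information banks can be aggregated into one block while each record remembers its origin:

```python
def fuse(*banks):
    """Aggregate heterogeneous information banks (here, dicts keyed
    by record id) into a single block, prefixing each key with the
    name of the bank it came from so origins are never lost."""
    block = {}
    for name, bank in banks:
        for key, value in bank.items():
            block[f"{name}/{key}"] = value
    return block

# Two very different kinds of data, fused into one block.
pictures = {"img-001": "mango-crate.jpg"}
logistics = {"route-7": {"truck": "T-42", "eta_hours": 6}}
unit = fuse(("pictures", pictures), ("logistics", logistics))
```

The resulting `unit` holds both the picture record and the logistics record under namespaced keys, so the combined block can itself be fused again with other blocks—“worlds within worlds.”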
Vera is the reflective and reflexive key-value database that allows full references. In Vera, the input data are called “declarations.” When computing a single object, they contain an identifier, a primary value, and an arbitrary amount of metadata. All changes that occur to declarations are tracked linearly. This allows users to execute those rollbacks at any given point in time.
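The described behavior—declarations with an identifier, a primary value, and metadata, tracked linearly so the store can be rolled back—resembles an append-only change log. A minimal sketch of that pattern (this class is the article’s own illustration, not Vera’s actual code):

```python
class DeclarationStore:
    """Sketch of a Vera-like key-value store: every change is a
    'declaration' appended to a linear history, and any earlier
    state can be reconstructed by replaying part of that log."""

    def __init__(self):
        self.history = []  # linear, append-only change log

    def declare(self, identifier, value, **metadata):
        self.history.append((identifier, value, metadata))

    def state(self, upto=None):
        """Replay the log (optionally only the first `upto`
        declarations) to rebuild the store as of that point."""
        entries = self.history if upto is None else self.history[:upto]
        store = {}
        for identifier, value, metadata in entries:
            store[identifier] = (value, metadata)
        return store

db = DeclarationStore()
db.declare("box-9", "4C", sensor="thermistor")
db.declare("box-9", "7C", sensor="thermistor")  # temperature drifted
latest = db.state()             # box-9 now reads 7C
rolled_back = db.state(upto=1)  # rollback: box-9 read 4C
```

Because nothing is ever overwritten, “rollback” is simply replaying fewer entries—which is one common way to get the no-loss-of-information property the article describes.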
Vega is the dynamic storage manager that allows for instantaneous restoration of compound information. With Vega, users can store and restore highly sophisticated kinds of computation with ease. Unlike modern AI, in the event of a full power shutdown, using Valmiz™’s specially designed state-of-the-art algorithms, users can easily restore terabytes of data in a matter of seconds. In precision operations, seconds count.
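Vega’s restore algorithms are proprietary and unspecified; as a simple stand-in for the concept of persisting compound state and restoring it after a shutdown (using Python’s standard `pickle` serialization, with no claim about Vega’s actual performance or format):

```python
import os
import pickle
import tempfile

def snapshot(state, path):
    """Persist a compound in-memory state to disk so it survives
    a shutdown (a toy stand-in for Vega's storage manager)."""
    with open(path, "wb") as f:
        pickle.dump(state, f)

def restore(path):
    """Reload the persisted state exactly as it was saved."""
    with open(path, "rb") as f:
        return pickle.load(f)

# Compound state: nested structures of different kinds.
state = {"fleet": {"drone-1": "airborne"}, "routes": [("A", "B")]}
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "vega.snapshot")
    snapshot(state, path)
    recovered = restore(path)  # identical to the saved state
```

The point of the sketch is only the round trip: compound state goes out, and the identical structure comes back after the process restarts.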
Valmiz™ is also fault-tolerant by design. It has a repair mode that allows operators to perform surgical operations and recover from any anomaly.
Vela is the data collector. It compiles data from local and external sources to facilitate information augmentation. Vela essentially acts as a scout that continually scans data regions to extend the information distance of any stored piece of information.
Vix is the human-to-machine and machine-to-machine interface that receives text and voice commands and input, compounding and processing them as they are being made, in real time. When users make a request to Vix, as the user speaks to it, it is already computing. Computations are done on the fly as they arrive through the computation-communication channels, giving users a stream of query-answer pairs.
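That streaming query-answer behavior can be sketched with a generator that dispatches each command the moment it arrives, instead of waiting for the whole input (the function, handlers, and commands below are invented for illustration and are not Vix’s actual interface):

```python
def command_stream(commands, handlers):
    """Process commands as they arrive, yielding a running stream
    of (query, answer) pairs rather than a single final result."""
    for command in commands:
        verb = command.split()[0]
        handler = handlers.get(verb)
        answer = handler(command) if handler else "unknown command"
        yield command, answer

# Hypothetical handlers for two kinds of requests.
handlers = {
    "status": lambda cmd: "all agents nominal",
    "temp": lambda cmd: "box-9 at 4C",
}
incoming = ["status fleet", "temp box-9"]
replies = list(command_stream(incoming, handlers))
```

Because `command_stream` is a generator, each answer is produced as soon as its command is consumed—a simple model of computing “as the user speaks.”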
Humans, Machines, & The Future
With Valmiz™, humans are also the final arbiters, not machines. When in doubt, better AI systems default to human control. For Martínez, the absence of morals, values and ethics in machines requires that humans be the final decision-makers in AI. He designed Valmiz™ to complement, not supplant, humans in various operations.
Take the drone industry, for example. Valmiz™ can direct certain actions or provide maintenance updates to a single drone or an entire fleet. It can be used to monitor and provide temperature regulation for autonomous and remote medical package deliveries. In the event of a temperature discrepancy, Valmiz™ could direct that dry ice be dispensed inside a transport box automatically. These are just a few of the use cases in a single industry that can benefit from this technology.
But there are markets and use cases out there in the future that we haven’t even considered yet. For that reason, Martínez designed Valmiz™ to be fully integrable into other systems, and future-proof.
It also has full modularity. It can be used as a single compound system or as select parts. The source code of Valmiz™ is platform-independent and designed to work across different hardware architectures. In the event that a brand-new computer architecture comes out, users will still be able to build with it.
Martínez purposefully built the system to be reliable. Data that the customer owns becomes the authoritative source for its pre-validated data. Valmiz™ turns that data into an organization’s own knowledge base.
Martínez says his tech is currently in its “alpha phase.” He anticipates it will reach beta status by the end of the second quarter of 2024, with an initial public release to follow shortly after.
In the meantime, you can watch Martínez on the Full Crew newscast on Tuesday, January 23, 2024 at 9am MT | 11am ET. He will also join an all-star AI panel at P3 Tech Consulting’s third Annual Law-Tech Connect Workshop at AUVSI XPONENTIAL 2024 on April 22nd in San Diego, CA. And the folks in charge of the AI Asia Expo have already invited ASTN Group™ back for a reprise at their AI Asia Expo 2024 event in Thailand, scheduled for next August.