Myrtle.ai Enables Microsecond ML Inference Latencies Running VOLLO on Napatech SmartNICs
Summary by AiThority
Myrtle.ai, a recognized leader in accelerating machine learning inference, today released support for its VOLLO inference accelerator on the NT400D1x series of SmartNICs from Napatech. VOLLO achieves industry-leading ML inference compute latencies, which can be below one microsecond. This release lets applications that need the lowest possible latencies run inference next to the network, on the SmartNIC itself. A wide range of models may be run…
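To make the latency claim concrete, below is a minimal host-CPU sketch of what an inference compute-latency measurement looks like for a tiny dense model. It does not use VOLLO's actual tooling or API, and the layer sizes and run counts are arbitrary assumptions; in the deployment described above, the equivalent computation would run on the SmartNIC itself, next to the network port, removing the PCIe transfer and host round trip from the path.

```python
import time
import numpy as np

# Illustrative only: a tiny two-layer dense model run on the host CPU, used to
# show what "inference compute latency" measures. This is not VOLLO's API; the
# model shape and iteration count are arbitrary assumptions for the sketch.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((32, 64)).astype(np.float32)
W2 = rng.standard_normal((64, 8)).astype(np.float32)
x = rng.standard_normal(32).astype(np.float32)

def infer(x):
    # Two dense layers with a ReLU: the "compute" portion of the latency budget.
    h = np.maximum(x @ W1, 0.0)
    return h @ W2

# Warm up once, then time many runs and report the mean per-inference latency.
infer(x)
n = 10_000
t0 = time.perf_counter()
for _ in range(n):
    infer(x)
t1 = time.perf_counter()
print(f"mean compute latency: {(t1 - t0) / n * 1e6:.2f} µs")
```

On a host, this measurement excludes the time spent moving a network request from the NIC across PCIe into host memory and back; running the model in the SmartNIC is what removes that portion of the end-to-end latency.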