Optimizing AI/ML Model Deployment Across Distributed Systems: Advances in Training Efficiency, Inference Performance, and Fault Tolerance

Authors

  • Venkateswarlu Poka

Keywords

Distributed AI Systems, Model Parallelism, Federated Learning, Inference Optimization, Gradient Compression

Abstract

AI and machine learning have grown so fast that computing systems have had to be completely redesigned. Single computers cannot handle the massive datasets and complex model architectures that today's AI requires. Distributed



Published

2025-11-19

How to Cite

Venkateswarlu Poka. (2025). Optimizing AI/ML Model Deployment Across Distributed Systems: Advances in Training Efficiency, Inference Performance, and Fault Tolerance. Journal of Computational Analysis and Applications (JoCAAA), 34(11), 580–588. Retrieved from https://eudoxuspress.com/index.php/pub/article/view/4178


Section

Articles