TensorFlow-Serving: Flexible, High-Performance ML Serving

December 17, 2017 · Entered Twilight · arXiv.org

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"No code URL or promise found in abstract"
"Code repo scraped from project page (backfill)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, Gemfile, _config.yml, _layouts, _sass, acceptedpapers.md, assets, cfp.md, index.md, jekyll-theme-architect.gemspec, schedule.md, script, talks.md

Authors: Christopher Olston, Noah Fiedel, Kiril Gorovoy, Jeremiah Harmsen, Li Lao, Fangwei Li, Vinu Rajashekhar, Sukriti Ramesh, Jordan Soyke
arXiv ID: 1712.06139
Category: cs.DC (Distributed Computing)
Cross-listed: cs.LG
Citations: 345
Venue: arXiv.org
Repository: https://github.com/LearningSys/nips17 ⭐ 21
Last Checked: 6 days ago
Abstract
We describe TensorFlow-Serving, a system to serve machine learning models inside Google, which is also available in the cloud and via open-source. It is extremely flexible in terms of the types of ML platforms it supports, and the ways it integrates with systems that convey new models and updated versions from training to serving. At the same time, the core code paths around model lookup and inference have been carefully optimized to avoid performance pitfalls observed in naive implementations. Google uses it in many production deployments, including a multi-tenant model hosting service called TFS^2.
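The abstract's point about optimizing model lookup while supporting rolling version updates can be illustrated with a minimal sketch. This is a hypothetical illustration, not TensorFlow-Serving's actual API: the names `ServableRouter`, `load`, and `lookup` are invented here. The idea shown is copy-on-write routing, where writers build a fresh routing table and swap it in atomically so the hot request path never takes a lock.

```python
import threading

class ServableRouter:
    """Sketch of version-aware model lookup (hypothetical names).

    Writers copy the routing table, mutate the copy, and swap the
    reference atomically; readers grab a snapshot without locking,
    mirroring the paper's concern with keeping lookup off any slow,
    contended path.
    """

    def __init__(self):
        self._lock = threading.Lock()   # serializes writers only
        self._table = {}                # {model_name: {version: servable}}

    def load(self, name, version, servable):
        with self._lock:
            # Copy-on-write: never mutate the table readers may hold.
            table = {k: dict(v) for k, v in self._table.items()}
            table.setdefault(name, {})[version] = servable
            self._table = table         # atomic reference swap

    def lookup(self, name, version=None):
        table = self._table             # lock-free snapshot read
        versions = table[name]
        key = version if version is not None else max(versions)
        return versions[key]

# Usage: register two versions; unqualified lookups route to the latest.
router = ServableRouter()
router.load("half_plus_two", 1, lambda x: x / 2 + 2)
router.load("half_plus_two", 2, lambda x: x / 2 + 3)
print(router.lookup("half_plus_two")(4.0))  # latest version -> 5.0
```

The atomic-swap pattern means an in-flight request keeps serving against the version snapshot it resolved, while new requests pick up the freshly loaded version, one of the pitfalls of naive lock-per-request designs the abstract alludes to.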
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Distributed Computing