{"id":8350,"date":"2021-10-13T18:22:23","date_gmt":"2021-10-13T18:22:23","guid":{"rendered":"http:\/\/TheNextWeb=1369823"},"modified":"2021-10-13T18:22:23","modified_gmt":"2021-10-13T18:22:23","slug":"ibm-commits-high-scale-inference-platform-modelmesh-to-open-source","status":"publish","type":"post","link":"https:\/\/www.londonchiropracter.com\/?p=8350","title":{"rendered":"IBM commits high-scale inference platform ModelMesh to open source"},"content":{"rendered":"\n<div><img decoding=\"async\" src=\"https:\/\/img-cdn.tnwcdn.com\/image\/neural?filter_last=1&amp;fit=1280%2C640&amp;url=https%3A%2F%2Fcdn0.tnwcdn.com%2Fwp-content%2Fblogs.dir%2F1%2Ffiles%2F2021%2F10%2Faihose.jpg&amp;signature=5ea52f11b7b455de1abfe19670c0b321\" class=\"ff-og-image-inserted\"><\/div>\n<p>IBM today announced it has committed its <a href=\"https:\/\/github.com\/kserve\/modelmesh\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">ModelMesh<\/a> inference service to open source. This is a big deal for the MLOps and DevOps community, but the implications for the average end-user are also huge.<\/p>\n<p>Artificial intelligence is a backbone technology that nearly all enterprises rely on. The majority of our coverage here on <a href=\"https:\/\/thenextweb.com\/neural\">Neural<\/a> tends to discuss the challenges involved in training and developing AI models.<\/p>\n<p>But when it comes to deploying AI models so that they can do what they\u2019re supposed to do when they\u2019re supposed to do it, the sheer scale of the problem is astronomical.<\/p>\n<p>Think about it: you log in to your banking account and there\u2019s a discrepancy. 
You tap the \u201cHow can we help?\u201d icon at the bottom of your screen and a chat window opens up.<\/p>\n<p>You enter a query such as \u201cWhy isn\u2019t my balance reflecting my most recent transactions?\u201d A chat bot responds with \u201cOne moment, I\u2019ll check your account,\u201d and then, like magic, it says \u201cI\u2019ve found the problem\u201d and gives you a detailed response concerning what\u2019s happened.<\/p>\n<p>What you\u2019ve done is sent an inference request to a machine learning model. That model, using a technique called natural language processing (NLP), parses the text in your query and then sifts through all of its training data to determine how best it should respond.<\/p>\n<p>If it does what it\u2019s supposed to in a timely and accurate manner, you\u2019ll probably walk away from the experience with a positive view on the system.<\/p>\n<p>But what if it stalls or doesn\u2019t load the inferences? You end up wasting your time with a chat bot and still need your problem solved.<\/p>\n<p>ModelMesh can help.<\/p>\n<p>Animesh Singh, IBM CTO for Watson AI &amp; ML Open Tech, told Neural:<\/p>\n<blockquote readability=\"22.735124760077\">\n<p>ModelMesh underpins most of the Watson cloud services, including Watson Assistant, Watson Natural Language Understanding, and Watson Discovery and has been running for several years.<\/p>\n<p>IBM is now contributing the inference platform to the <a href=\"https:\/\/github.com\/kserve\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">KServe<\/a> open source community.<\/p>\n<p>Designed for high-scale, high-density, and frequently-changing model use cases, ModelMesh can help developers scale Kubernetes.<\/p>\n<p>ModelMesh, combined with KServe, will also add Trusted AI metrics like explainability, fairness to models deployed in production.<\/p>\n<\/blockquote>\n<p>Going back to our banking customer analogy, we know that we\u2019re not the only user our bank\u2019s AI needs to serve inferences 
to. There could be millions of users querying a single interface simultaneously. And those millions of queries could require service from thousands of different models.<\/p>\n<p>Figuring out how to load all of these models in real time so that they perform in a manner that suits your customers\u2019 needs is, perhaps, one of the biggest challenges any company\u2019s IT team faces.<\/p>\n<p>ModelMesh manages the loading and unloading of models into and out of memory to optimize responsiveness while minimizing redundant resource consumption.<\/p>\n<p>Per an IBM press release:<\/p>\n<blockquote>\n<p>It is designed for high-scale, high-density, and frequently changing model use cases. ModelMesh intelligently loads and unloads AI models to and from memory to strike an intelligent trade-off between responsiveness to users and their computational footprint.<\/p>\n<\/blockquote>\n<p>You can learn more about ModelMesh on IBM\u2019s <a href=\"https:\/\/developer.ibm.com\/blogs\/kserve-and-watson-modelmesh-extreme-scale-model-inferencing-for-trusted-ai\/\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">website<\/a>.<\/p>\n<p> <a href=\"https:\/\/thenextweb.com\/news\/ibm-commits-modelmesh-open-source\">Source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>IBM today announced it has committed its ModelMesh inference service to open source. 
This is a big deal for the MLOps and DevOps community, but the implications for the average end-user are&#8230;<\/p>\n","protected":false},"author":1,"featured_media":8351,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts\/8350"}],"collection":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=8350"}],"version-history":[{"count":0,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/posts\/8350\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=\/wp\/v2\/media\/8351"}],"wp:attachment":[{"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=8350"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=8350"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.londonchiropracter.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=8350"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}