{"id":608,"date":"2024-04-08T10:00:00","date_gmt":"2024-04-08T10:00:00","guid":{"rendered":"https:\/\/jacar.es\/lm-studio-modelos-escritorio\/"},"modified":"2024-04-08T10:00:00","modified_gmt":"2024-04-08T10:00:00","slug":"lm-studio-modelos-escritorio","status":"publish","type":"post","link":"https:\/\/jacar.es\/en\/lm-studio-modelos-escritorio\/","title":{"rendered":"LM Studio: Exploring AI Models from Your Desktop"},"content":{"rendered":"<p><strong><a href=\"https:\/\/lmstudio.ai\/\">LM Studio<\/a><\/strong> is a desktop app (Mac, Windows, Linux) that downloads and runs local LLMs with a polished UI. No terminal, no complicated setup: open, pick model, chat. For exploratory developers, data analysts, journalists with sensitive data, and anyone wanting to try LLMs without sending queries to the cloud.<\/p>\n<p>This article covers what it offers, when it\u2019s better than Ollama or OpenWebUI, and where it has limits.<\/p>\n<h2 id=\"what-lm-studio-does\">What LM Studio Does<\/h2>\n<p>Main features:<\/p>\n<ul>\n<li><strong>Model download<\/strong> from Hugging Face with one click.<\/li>\n<li><strong>Local execution<\/strong> over llama.cpp (under the hood).<\/li>\n<li><strong>Polished chat UI<\/strong>.<\/li>\n<li><strong>Local OpenAI-compatible API<\/strong> that other apps can consume.<\/li>\n<li><strong>RAG with your documents<\/strong> (PDF, TXT, DOCX) \u2014 chat with your files.<\/li>\n<li><strong>Saved prompt management<\/strong>.<\/li>\n<li><strong>Side-by-side model comparison<\/strong>.<\/li>\n<\/ul>\n<p>All in a desktop binary, no terminal, no YAML config.<\/p>\n<h2 id=\"installation\">Installation<\/h2>\n<p>Download from <a href=\"https:\/\/lmstudio.ai\/\">lmstudio.ai<\/a>. DMG for Mac, MSI for Windows, AppImage for Linux. Open.<\/p>\n<p>First time asks to select a model. 
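<\/p>\n<p>As a rough rule of thumb, a Q4-quantised model takes about half a byte per parameter on disk, plus runtime overhead. A quick back-of-envelope sketch (the 1.2 overhead factor is an assumption for illustration, not an LM Studio figure):<\/p>\n<div class=\"sourceCode\"><pre class=\"sourceCode python\"><code class=\"sourceCode python\">def q4_size_gb(params_billions, overhead=1.2):\n    # ~0.5 bytes per parameter at 4-bit, plus loader\/cache overhead\n    return params_billions * 0.5 * overhead\n\nprint(round(q4_size_gb(8), 1))   # Llama 3 8B: ~4.8 GB\n<\/code><\/pre><\/div>\n<p>The recommendations below follow the same pattern.<\/p>\n<p>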
Recommended to start:<\/p>\n<ul>\n<li><strong>Mac Apple Silicon<\/strong>: Llama 3 8B Q4_K_M (~5GB) or Phi-3 Mini (3GB).<\/li>\n<li><strong>PC with 16GB RAM<\/strong>: Mistral 7B Q4 (~4GB) or Phi-3.<\/li>\n<li><strong>PC with 32GB+ RAM<\/strong>: Mixtral 8x7B Q4 (~25GB) or quantised Llama 3 70B (~40GB).<\/li>\n<\/ul>\n<p>Download and load, ready to chat.<\/p>\n<h2 id=\"usage-experience\">Usage Experience<\/h2>\n<p>For a non-technical user:<\/p>\n<ul>\n<li><strong>UI with model selector<\/strong> at start.<\/li>\n<li><strong>Chat with visual parameters<\/strong> (temperature, top_p, context length).<\/li>\n<li><strong>File upload<\/strong> for local RAG.<\/li>\n<li><strong>Export\/import<\/strong> conversations.<\/li>\n<li><strong>Pre-configured prompt templates<\/strong> for common cases.<\/li>\n<\/ul>\n<p>For a developer:<\/p>\n<ul>\n<li><strong>API server<\/strong> at <code>localhost:1234<\/code> OpenAI-compatible.<\/li>\n<li><strong>Multiple models loaded<\/strong> simultaneously.<\/li>\n<li><strong>Logs<\/strong> of each query and tokens consumed.<\/li>\n<li><strong>GPU offloading<\/strong> configurable (CPU+GPU hybrid).<\/li>\n<\/ul>\n<h2 id=\"openai-compatible-api\">OpenAI-Compatible API<\/h2>\n<p>An underrated feature: LM Studio exposes an OpenAI-compatible API. 
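<\/p>\n<p>A quick smoke test from the terminal with the server running (the <code>\/v1\/models<\/code> endpoint is part of the standard OpenAI-compatible surface):<\/p>\n<div class=\"sourceCode\"><pre class=\"sourceCode bash\"><code class=\"sourceCode bash\"># list the model(s) currently served at localhost:1234\ncurl http:\/\/localhost:1234\/v1\/models\n<\/code><\/pre><\/div>\n<p>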
Your existing code works:<\/p>\n<div class=\"sourceCode\" id=\"cb1\">\n<pre class=\"sourceCode python\"><code class=\"sourceCode python\"><span id=\"cb1-1\"><a href=\"#cb1-1\" aria-hidden=\"true\" tabindex=\"-1\"><\/a><span class=\"im\">from<\/span> openai <span class=\"im\">import<\/span> OpenAI<\/span>\n<span id=\"cb1-2\"><a href=\"#cb1-2\" aria-hidden=\"true\" tabindex=\"-1\"><\/a><\/span>\n<span id=\"cb1-3\"><a href=\"#cb1-3\" aria-hidden=\"true\" tabindex=\"-1\"><\/a>client <span class=\"op\">=<\/span> OpenAI(<\/span>\n<span id=\"cb1-4\"><a href=\"#cb1-4\" aria-hidden=\"true\" tabindex=\"-1\"><\/a>    base_url<span class=\"op\">=<\/span><span class=\"st\">&quot;http:\/\/localhost:1234\/v1&quot;<\/span>,<\/span>\n<span id=\"cb1-5\"><a href=\"#cb1-5\" aria-hidden=\"true\" tabindex=\"-1\"><\/a>    api_key<span class=\"op\">=<\/span><span class=\"st\">&quot;not-needed&quot;<\/span><\/span>\n<span id=\"cb1-6\"><a href=\"#cb1-6\" aria-hidden=\"true\" tabindex=\"-1\"><\/a>)<\/span>\n<span id=\"cb1-7\"><a href=\"#cb1-7\" aria-hidden=\"true\" tabindex=\"-1\"><\/a><\/span>\n<span id=\"cb1-8\"><a href=\"#cb1-8\" aria-hidden=\"true\" tabindex=\"-1\"><\/a>response <span class=\"op\">=<\/span> client.chat.completions.create(<\/span>\n<span id=\"cb1-9\"><a href=\"#cb1-9\" aria-hidden=\"true\" tabindex=\"-1\"><\/a>    model<span class=\"op\">=<\/span><span class=\"st\">&quot;local-model&quot;<\/span>,  <span class=\"co\"># ignored; LM Studio uses the loaded model<\/span><\/span>\n<span id=\"cb1-10\"><a href=\"#cb1-10\" aria-hidden=\"true\" tabindex=\"-1\"><\/a>    messages<span class=\"op\">=<\/span>[{<span class=\"st\">&quot;role&quot;<\/span>: <span class=\"st\">&quot;user&quot;<\/span>, <span class=\"st\">&quot;content&quot;<\/span>: <span class=\"st\">&quot;Hi&quot;<\/span>}]<\/span>\n<span id=\"cb1-11\"><a href=\"#cb1-11\" aria-hidden=\"true\" tabindex=\"-1\"><\/a>)<\/span>\n<span id=\"cb1-12\"><a href=\"#cb1-12\" aria-hidden=\"true\" tabindex=\"-1\"><\/a><span class=\"bu\">print<\/span>(response.choices[<span class=\"dv\">0<\/span>].message.content)<\/span><\/code><\/pre>\n<\/div>\n<p>Useful for offline development, privacy-sensitive apps, or as a fallback if OpenAI 
goes down.<\/p>\n<h2 id=\"local-rag-with-your-documents\">Local RAG with Your Documents<\/h2>\n<p>LM Studio integrates ingestion and RAG:<\/p>\n<ol type=\"1\">\n<li>Drag PDFs\/docs to the chat.<\/li>\n<li>The system extracts text and generates embeddings locally.<\/li>\n<li>Chat uses relevant context from your docs.<\/li>\n<\/ol>\n<p>For lawyers, doctors, journalists with confidential data: zero cloud exposure. The document store stays local.<\/p>\n<h2 id=\"hardware-and-performance\">Hardware and Performance<\/h2>\n<p>On Apple Silicon M2\/M3:<\/p>\n<ul>\n<li><strong>Llama 3 8B Q4<\/strong>: 30-50 tokens\/s on M2 Pro.<\/li>\n<li><strong>Mistral 7B Q4<\/strong>: similar.<\/li>\n<li><strong>Mixtral 8x7B Q4<\/strong>: 15-25 tokens\/s on M3 Max 64GB.<\/li>\n<li><strong>Llama 3 70B Q4<\/strong>: 5-10 tokens\/s if it fits in unified memory.<\/li>\n<\/ul>\n<p>On Windows with NVIDIA GPU:<\/p>\n<ul>\n<li><strong>RTX 4090<\/strong>: Llama 3 70B Q4 at ~15 tokens\/s.<\/li>\n<li><strong>RTX 4070\/4080<\/strong>: 7B-13B models are the sweet spot.<\/li>\n<li><strong>Laptop with 3050\/4050<\/strong>: limited; CPU inference is often the better option.<\/li>\n<\/ul>\n<p>CPU-only is viable for small models (3B) with slower but usable responses.<\/p>\n<h2 id=\"lm-studio-vs-ollama\">LM Studio vs Ollama<\/h2>\n<p>An honest comparison:<\/p>\n<table>\n<thead>\n<tr class=\"header\">\n<th>Aspect<\/th>\n<th>LM Studio<\/th>\n<th>Ollama<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr class=\"odd\">\n<td>UI<\/td>\n<td>Rich desktop<\/td>\n<td>Minimal (CLI + optional web)<\/td>\n<\/tr>\n<tr class=\"even\">\n<td>Installation<\/td>\n<td>DMG\/MSI install<\/td>\n<td>CLI binary<\/td>\n<\/tr>\n<tr class=\"odd\">\n<td>Models<\/td>\n<td>Direct Hugging Face<\/td>\n<td>Own registry + GGUF<\/td>\n<\/tr>\n<tr class=\"even\">\n<td>API<\/td>\n<td>OpenAI-compat<\/td>\n<td>OpenAI-compat<\/td>\n<\/tr>\n<tr class=\"odd\">\n<td>Built-in RAG<\/td>\n<td>Yes<\/td>\n<td>Via OpenWebUI<\/td>\n<\/tr>\n<tr class=\"even\">\n<td>Multi-model 
loading<\/td>\n<td>Yes<\/td>\n<td>Yes<\/td>\n<\/tr>\n<tr class=\"odd\">\n<td>Linux<\/td>\n<td>AppImage (beta)<\/td>\n<td>Mature native<\/td>\n<\/tr>\n<tr class=\"even\">\n<td>Target audience<\/td>\n<td>Non-tech users + devs<\/td>\n<td>Devs<\/td>\n<\/tr>\n<tr class=\"odd\">\n<td>License<\/td>\n<td>Closed (free)<\/td>\n<td>Open MIT<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><strong>LM Studio<\/strong> wins for non-technical-user UX. <strong>Ollama<\/strong> wins for dev\/CLI stack integration and open-source.<\/p>\n<h2 id=\"lm-studio-vs-openwebui\">LM Studio vs OpenWebUI<\/h2>\n<p><strong><a href=\"https:\/\/openwebui.com\/\">OpenWebUI<\/a><\/strong> is a web UI for Ollama\/other LLM backends.<\/p>\n<table>\n<thead>\n<tr class=\"header\">\n<th>Aspect<\/th>\n<th>LM Studio<\/th>\n<th>OpenWebUI + Ollama<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr class=\"odd\">\n<td>Deploy<\/td>\n<td>Local desktop app<\/td>\n<td>Docker container<\/td>\n<\/tr>\n<tr class=\"even\">\n<td>Multi-user<\/td>\n<td>No (single-user)<\/td>\n<td>Yes<\/td>\n<\/tr>\n<tr class=\"odd\">\n<td>UI quality<\/td>\n<td>Excellent<\/td>\n<td>Very good<\/td>\n<\/tr>\n<tr class=\"even\">\n<td>Self-hosted<\/td>\n<td>Per user<\/td>\n<td>For team<\/td>\n<\/tr>\n<tr class=\"odd\">\n<td>Open-source<\/td>\n<td>No<\/td>\n<td>Yes<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>LM Studio is <strong>personal \/ single-user<\/strong>. 
OpenWebUI is <strong>team \/ multi-user self-hosted<\/strong>.<\/p>\n<h2 id=\"real-use-cases\">Real Use Cases<\/h2>\n<p>Where we see LM Studio:<\/p>\n<ul>\n<li><strong>Developers<\/strong> testing models before deployment.<\/li>\n<li><strong>Data scientists<\/strong> iterating with LLMs without the cloud.<\/li>\n<li><strong>Journalists and lawyers<\/strong> with confidential documents.<\/li>\n<li><strong>Students<\/strong> learning about LLMs without spending on APIs.<\/li>\n<li><strong>Small companies<\/strong> with laptop fleets and strict compliance.<\/li>\n<\/ul>\n<p>Where it doesn\u2019t fit:<\/p>\n<ul>\n<li><strong>Production servers<\/strong> (use Ollama\/vLLM).<\/li>\n<li><strong>Simultaneous multi-user access<\/strong> (use OpenWebUI).<\/li>\n<li><strong>Scaling<\/strong> to multiple concurrent sessions.<\/li>\n<li><strong>Non-GUI environments<\/strong> (SSH-only servers).<\/li>\n<\/ul>\n<h2 id=\"limitations\">Limitations<\/h2>\n<p>Honestly:<\/p>\n<ul>\n<li><strong>Closed-source (not OSS)<\/strong>, though free. Potential lock-in.<\/li>\n<li><strong>Update cadence<\/strong> depends on the LM Studio team.<\/li>\n<li><strong>Not easily integrated<\/strong> into CI pipelines.<\/li>\n<li><strong>Single-machine<\/strong>: doesn\u2019t distribute inference.<\/li>\n<li><strong>Optional telemetry<\/strong>: worth verifying in settings.<\/li>\n<\/ul>\n<h2 id=\"performance-tuning\">Performance Tuning<\/h2>\n<p>Three key settings:<\/p>\n<ul>\n<li><strong>GPU layers<\/strong>: how many model layers go to the GPU. More = faster, but needs VRAM.<\/li>\n<li><strong>Context length<\/strong>: max tokens. 
Lower = faster and less memory.<\/li>\n<li><strong>Thread count<\/strong>: for CPU inference, match physical cores (not hyper-threaded logical cores).<\/li>\n<\/ul>\n<p>Experiment with these until you find the right speed\/memory balance for your hardware.<\/p>\n<h2 id=\"recommended-models-to-start\">Recommended Models to Start<\/h2>\n<p>For Apple Silicon M2\/M3:<\/p>\n<ul>\n<li><strong>General chat<\/strong>: Llama 3 8B Instruct Q4_K_M.<\/li>\n<li><strong>Code<\/strong>: DeepSeek Coder 6.7B Q4.<\/li>\n<li><strong>Spanish<\/strong>: Mixtral 8x7B if it fits.<\/li>\n<li><strong>Reasoning<\/strong>: Phi-3 Medium.<\/li>\n<\/ul>\n<p>For modest hardware:<\/p>\n<ul>\n<li><strong>Phi-3 Mini<\/strong> (3.8B): excellent for its size.<\/li>\n<li><strong>Gemma 2B<\/strong>: very light.<\/li>\n<li><strong>TinyLlama 1.1B<\/strong>: experimentation only.<\/li>\n<\/ul>\n<h2 id=\"privacy-and-data\">Privacy and Data<\/h2>\n<p>LM Studio runs everything locally:<\/p>\n<ul>\n<li><strong>Models<\/strong> downloaded and stored on disk.<\/li>\n<li><strong>Chats<\/strong> stored in <code>~\/.cache\/lm-studio\/<\/code>.<\/li>\n<li><strong>RAG documents<\/strong> stay local.<\/li>\n<li><strong>Optional telemetry<\/strong> for analytics (check settings).<\/li>\n<li><strong>No mandatory cloud<\/strong>.<\/li>\n<\/ul>\n<p>For sensitive data, it\u2019s a reasonable guarantee: nothing leaves your machine unless you enable it.<\/p>\n<h2 id=\"conclusion\">Conclusion<\/h2>\n<p>LM Studio is the best option for <strong>individuals<\/strong> wanting to explore local LLMs with a polished UI. For teams, Ollama + OpenWebUI offers more flexibility. For production, neither: use vLLM or TGI. LM Studio occupies a specific but important niche: democratising local LLM access for non-technical users. Free and polished, it\u2019s the obvious choice in its category. 
For people handling private data or wanting to experiment without paying for APIs, it\u2019s worth downloading this afternoon.<\/p>\n<p>Follow us on jacar.es for more on local LLMs, AI tools, and privacy.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>LM Studio turns any modern laptop into a local-LLM lab. Who it&#8217;s for and when it beats Ollama or OpenWebUI.<\/p>\n","protected":false},"author":1,"featured_media":609,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[24,22],"tags":[425,424,248,152,423,151],"class_list":["post-608","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-herramientas","category-inteligencia-artificial","tag-apple-silicon","tag-desktop-ai","tag-gguf","tag-llm-local","tag-lm-studio","tag-ollama"],"translation":{"provider":"WPGlobus","version":"3.0.2","language":"en","enabled_languages":["es","en"],"languages":{"es":{"title":true,"content":true,"excerpt":true},"en":{"title":true,"content":true,"excerpt":true}}},"gutentor_comment":0,"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>LM Studio: Exploring AI Models from Your Desktop - Jacar<\/title>\n<meta name=\"description\" content=\"LM Studio: desktop UI to run local LLMs (Llama, Mistral, Qwen). Comparison with Ollama and when to choose each.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/jacar.es\/lm-studio-modelos-escritorio\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"LM Studio: Exploring AI Models from Your Desktop - Jacar\" \/>\n<meta property=\"og:description\" content=\"LM Studio: desktop UI to run local LLMs (Llama, Mistral, Qwen). 
Comparison with Ollama and when to choose each.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/jacar.es\/lm-studio-modelos-escritorio\/\" \/>\n<meta property=\"og:site_name\" content=\"Jacar\" \/>\n<meta property=\"article:published_time\" content=\"2024-04-08T10:00:00+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/jcs-wp-jacar-es.fsn1.your-objectstorage.com\/wp-content\/uploads\/2020\/09\/favicon.png\" \/>\n\t<meta property=\"og:image:width\" content=\"252\" \/>\n\t<meta property=\"og:image:height\" content=\"229\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"javi\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"javi\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"10 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/jacar.es\\\/lm-studio-modelos-escritorio\\\/#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/jacar.es\\\/lm-studio-modelos-escritorio\\\/\"},\"author\":{\"name\":\"javi\",\"@id\":\"https:\\\/\\\/jacar.es\\\/#\\\/schema\\\/person\\\/54a7f7b4224b38fafc9866eb3e614208\"},\"headline\":\"LM Studio: Exploring AI Models from Your Desktop\",\"datePublished\":\"2024-04-08T10:00:00+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/jacar.es\\\/lm-studio-modelos-escritorio\\\/\"},\"wordCount\":1822,\"publisher\":{\"@id\":\"https:\\\/\\\/jacar.es\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/jacar.es\\\/lm-studio-modelos-escritorio\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/jcs-wp-jacar-es.fsn1.your-objectstorage.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/20034629\\\/jwp-1529221-26627.jpg\",\"keywords\":[\"apple silicon\",\"desktop 
ai\",\"gguf\",\"llm local\",\"lm studio\",\"ollama\"],\"articleSection\":[\"Herramientas\",\"Inteligencia Artificial\"],\"inLanguage\":\"en-US\"},{\"@type\":[\"WebPage\",\"ItemPage\"],\"@id\":\"https:\\\/\\\/jacar.es\\\/lm-studio-modelos-escritorio\\\/\",\"url\":\"https:\\\/\\\/jacar.es\\\/lm-studio-modelos-escritorio\\\/\",\"name\":\"LM Studio: Exploring AI Models from Your Desktop - Jacar\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/jacar.es\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/jacar.es\\\/lm-studio-modelos-escritorio\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/jacar.es\\\/lm-studio-modelos-escritorio\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/jcs-wp-jacar-es.fsn1.your-objectstorage.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/20034629\\\/jwp-1529221-26627.jpg\",\"datePublished\":\"2024-04-08T10:00:00+00:00\",\"description\":\"LM Studio: desktop UI to run local LLMs (Llama, Mistral, Qwen). Comparison with Ollama and when to choose each.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/jacar.es\\\/lm-studio-modelos-escritorio\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/jacar.es\\\/lm-studio-modelos-escritorio\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/jacar.es\\\/lm-studio-modelos-escritorio\\\/#primaryimage\",\"url\":\"https:\\\/\\\/jcs-wp-jacar-es.fsn1.your-objectstorage.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/20034629\\\/jwp-1529221-26627.jpg\",\"contentUrl\":\"https:\\\/\\\/jcs-wp-jacar-es.fsn1.your-objectstorage.com\\\/wp-content\\\/uploads\\\/2024\\\/04\\\/20034629\\\/jwp-1529221-26627.jpg\",\"width\":1200,\"height\":798,\"caption\":\"Pantalla de port\u00e1til con interfaz de aplicaci\u00f3n sobre escritorio 
ordenado\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/jacar.es\\\/lm-studio-modelos-escritorio\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Portada\",\"item\":\"https:\\\/\\\/jacar.es\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"LM Studio: explorar modelos de IA desde el escritorio\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/jacar.es\\\/#website\",\"url\":\"https:\\\/\\\/jacar.es\\\/\",\"name\":\"Jacar\",\"description\":\"Passion for Technology\",\"publisher\":{\"@id\":\"https:\\\/\\\/jacar.es\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/jacar.es\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/jacar.es\\\/#organization\",\"name\":\"Jacar\",\"url\":\"https:\\\/\\\/jacar.es\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/jacar.es\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/jacar.es\\\/wp-content\\\/uploads\\\/2020\\\/09\\\/favicon.png\",\"contentUrl\":\"https:\\\/\\\/jacar.es\\\/wp-content\\\/uploads\\\/2020\\\/09\\\/favicon.png\",\"width\":252,\"height\":229,\"caption\":\"Jacar\"},\"image\":{\"@id\":\"https:\\\/\\\/jacar.es\\\/#\\\/schema\\\/logo\\\/image\\\/\"},\"sameAs\":[\"https:\\\/\\\/www.linkedin.com\\\/in\\\/javiercanetearroyo\\\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/jacar.es\\\/#\\\/schema\\\/person\\\/54a7f7b4224b38fafc9866eb3e614208\",\"name\":\"javi\",\"sameAs\":[\"https:\\\/\\\/jacar.es\"],\"url\":\"https:\\\/\\\/jacar.es\\\/en\\\/author\\\/javi\\\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"LM Studio: Exploring AI Models from Your Desktop - Jacar","description":"LM Studio: desktop UI to run local LLMs (Llama, Mistral, Qwen). Comparison with Ollama and when to choose each.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/jacar.es\/lm-studio-modelos-escritorio\/","og_locale":"en_US","og_type":"article","og_title":"LM Studio: Exploring AI Models from Your Desktop - Jacar","og_description":"LM Studio: desktop UI to run local LLMs (Llama, Mistral, Qwen). Comparison with Ollama and when to choose each.","og_url":"https:\/\/jacar.es\/lm-studio-modelos-escritorio\/","og_site_name":"Jacar","article_published_time":"2024-04-08T10:00:00+00:00","og_image":[{"width":252,"height":229,"url":"https:\/\/jcs-wp-jacar-es.fsn1.your-objectstorage.com\/wp-content\/uploads\/2020\/09\/favicon.png","type":"image\/png"}],"author":"javi","twitter_card":"summary_large_image","twitter_misc":{"Written by":"javi","Est. 
reading time":"10 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/jacar.es\/lm-studio-modelos-escritorio\/#article","isPartOf":{"@id":"https:\/\/jacar.es\/lm-studio-modelos-escritorio\/"},"author":{"name":"javi","@id":"https:\/\/jacar.es\/#\/schema\/person\/54a7f7b4224b38fafc9866eb3e614208"},"headline":"LM Studio: Exploring AI Models from Your Desktop","datePublished":"2024-04-08T10:00:00+00:00","mainEntityOfPage":{"@id":"https:\/\/jacar.es\/lm-studio-modelos-escritorio\/"},"wordCount":1822,"publisher":{"@id":"https:\/\/jacar.es\/#organization"},"image":{"@id":"https:\/\/jacar.es\/lm-studio-modelos-escritorio\/#primaryimage"},"thumbnailUrl":"https:\/\/jcs-wp-jacar-es.fsn1.your-objectstorage.com\/wp-content\/uploads\/2024\/04\/20034629\/jwp-1529221-26627.jpg","keywords":["apple silicon","desktop ai","gguf","llm local","lm studio","ollama"],"articleSection":["Herramientas","Inteligencia Artificial"],"inLanguage":"en-US"},{"@type":["WebPage","ItemPage"],"@id":"https:\/\/jacar.es\/lm-studio-modelos-escritorio\/","url":"https:\/\/jacar.es\/lm-studio-modelos-escritorio\/","name":"LM Studio: Exploring AI Models from Your Desktop - Jacar","isPartOf":{"@id":"https:\/\/jacar.es\/#website"},"primaryImageOfPage":{"@id":"https:\/\/jacar.es\/lm-studio-modelos-escritorio\/#primaryimage"},"image":{"@id":"https:\/\/jacar.es\/lm-studio-modelos-escritorio\/#primaryimage"},"thumbnailUrl":"https:\/\/jcs-wp-jacar-es.fsn1.your-objectstorage.com\/wp-content\/uploads\/2024\/04\/20034629\/jwp-1529221-26627.jpg","datePublished":"2024-04-08T10:00:00+00:00","description":"LM Studio: desktop UI to run local LLMs (Llama, Mistral, Qwen). 
Comparison with Ollama and when to choose each.","breadcrumb":{"@id":"https:\/\/jacar.es\/lm-studio-modelos-escritorio\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/jacar.es\/lm-studio-modelos-escritorio\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/jacar.es\/lm-studio-modelos-escritorio\/#primaryimage","url":"https:\/\/jcs-wp-jacar-es.fsn1.your-objectstorage.com\/wp-content\/uploads\/2024\/04\/20034629\/jwp-1529221-26627.jpg","contentUrl":"https:\/\/jcs-wp-jacar-es.fsn1.your-objectstorage.com\/wp-content\/uploads\/2024\/04\/20034629\/jwp-1529221-26627.jpg","width":1200,"height":798,"caption":"Pantalla de port\u00e1til con interfaz de aplicaci\u00f3n sobre escritorio ordenado"},{"@type":"BreadcrumbList","@id":"https:\/\/jacar.es\/lm-studio-modelos-escritorio\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Portada","item":"https:\/\/jacar.es\/"},{"@type":"ListItem","position":2,"name":"LM Studio: explorar modelos de IA desde el escritorio"}]},{"@type":"WebSite","@id":"https:\/\/jacar.es\/#website","url":"https:\/\/jacar.es\/","name":"Jacar","description":"Passion for 
Technology","publisher":{"@id":"https:\/\/jacar.es\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/jacar.es\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/jacar.es\/#organization","name":"Jacar","url":"https:\/\/jacar.es\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/jacar.es\/#\/schema\/logo\/image\/","url":"https:\/\/jacar.es\/wp-content\/uploads\/2020\/09\/favicon.png","contentUrl":"https:\/\/jacar.es\/wp-content\/uploads\/2020\/09\/favicon.png","width":252,"height":229,"caption":"Jacar"},"image":{"@id":"https:\/\/jacar.es\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.linkedin.com\/in\/javiercanetearroyo\/"]},{"@type":"Person","@id":"https:\/\/jacar.es\/#\/schema\/person\/54a7f7b4224b38fafc9866eb3e614208","name":"javi","sameAs":["https:\/\/jacar.es"],"url":"https:\/\/jacar.es\/en\/author\/javi\/"}]}},"_links":{"self":[{"href":"https:\/\/jacar.es\/en\/wp-json\/wp\/v2\/posts\/608","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/jacar.es\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/jacar.es\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/jacar.es\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/jacar.es\/en\/wp-json\/wp\/v2\/comments?post=608"}],"version-history":[{"count":0,"href":"https:\/\/jacar.es\/en\/wp-json\/wp\/v2\/posts\/608\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/jacar.es\/en\/wp-json\/wp\/v2\/media\/609"}],"wp:attachment":[{"href":"https:\/\/jacar.es\/en\/wp-json\/wp\/v2\/media?parent=608"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/jacar.es\/en\/wp-json\/wp\/v2\/categories?post=608"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/jacar.es
\/en\/wp-json\/wp\/v2\/tags?post=608"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}