[https://www.ithome.com.tw/article/154628 How can humans outperform AI? (人類如何勝過AI?)]


= Prompts =
== Research prompts ==
* [https://medium.com/@KanikaBK/10-high-quality-chatgpt-prompts-that-i-use-to-research-any-topic-370289496adf 10 HIGH QUALITY ChatGPT Prompts that I use to research any Topic]
* [https://medium.com/@Vugar_Ibrahimov/10-chatgpt-and-claude-ai-prompts-for-faster-academic-reading-ece583dd132c 10 ChatGPT and Claude AI Prompts for Faster Academic Reading]


== Learning prompts ==
* [https://www.makeuseof.com/how-to-reduce-ai-hallucination/ How to Reduce AI Hallucination With These 6 Prompting Techniques]
* [https://www.makeuseof.com/prompting-techniques-to-improve-chatgpt-responses/ 7 Prompting Techniques to Improve Your ChatGPT Responses]
* How to Learn Python FAST with ChatGPT?
** Can you create a roadmap to learn python for data analysis
** Can you create a roadmap to learn python for data analysis in 3 months with weekly plan and resources for learning
** Can you create a roadmap to learn python for data analysis in 3 months with weekly plan, including resources and links for each week and youtube video links
** Explain while loop in python to a child
* How to learn to code FAST using ChatGPT
** Give me a study plan to learn python for data science
** Give me a study plan to learn python for data science with resources and a timeline
** Sublime is used
** (After asking a question and getting an answer) Let's take this step by step.
* Ask generative AI to be that colleague. Ask 'As a physicist, describe how cancer cells interact with their environment', or 'As a chemist..', 'As a developmental biologist..', 'As an economist..', 'As an electrician.' ...


* [https://readmedium.com/how-i-won-singapores-gpt-4-prompt-engineering-competition-34c195a93d41 How I Won Singapore’s GPT-4 Prompt Engineering Competition]
== Creating images ==
[https://www.makeuseof.com/can-you-spot-the-ai-generated-images/ 5 of these 10 photos are AI-generated — can you spot them?]
== Interesting prompts ==
* Can you tell me everything you know about me, based on our past conversations?


= Carbon footprint =
Free AI isn’t sustainable — and we’ll be paying for it soon enough.


= ChatGPT =
https://chat.openai.com, https://openai.com/blog/chatgpt-plus/
* [https://github.com/wong2/chat-gpt-google-extension?utm_source=pocket_reader chatGPT google extension] - a browser extension to display ChatGPT responses alongside search-engine results
* [https://www.makeuseof.com/gpt-models-explained-and-compared/ GPT-1 to GPT-4: Each of OpenAI's GPT Models Explained and Compared]
* [https://www.makeuseof.com/online-communities-to-learn-about-ai/ 9 Communities for Beginners to Learn About AI Tools]


== Down ==
* [https://downforeveryoneorjustme.com/chatgpt Down for Everyone or Just Me]
* [https://www.howtogeek.com/883074/is-chatgpt-down-heres-what-to-do/ Is ChatGPT Down? Here’s What to Do]
== Network error ==
[https://help.openai.com/en/articles/9247338-network-recommendations-for-chatgpt-errors-on-web-and-apps Network recommendations for ChatGPT errors on web and apps]
== Differences among platforms ==
[https://www.howtogeek.com/chatgpt-features-you-cant-access-on-all-platforms/ 8 ChatGPT Features You Can't Access on All Platforms]
== Settings ==
* [https://www.howtogeek.com/881659/how-to-create-chatgpt-personas-for-every-occasion/ How to Create ChatGPT Personas for Every Occasion] 2023
* [https://www.looppanel.com/blog/chatgpt-persona How to Create a ChatGPT Persona in 2025]
* [https://www.makeuseof.com/chatgpt-setting-better-answers/ This ChatGPT setting I skipped over ended up making my answers way better]. 2025. Personas


== Plugins ==
How to Enable ChatGPT’s Web Browsing and Plugins

== Use ==
* [https://www.makeuseof.com/use-chatgpt-write-work-emails/ How to Use ChatGPT for Writing Difficult Emails at Work]
* [https://www.makeuseof.com/can-chatgpt-be-used-as-proofreader/ Can ChatGPT Be Used as a Proofreader?]
* Presentation/Powerpoint
** [https://www.makeuseof.com/tools-use-ai-to-make-presentation/ The 7 Best Tools That Use AI to Make Presentations for You]
** https://mindshow.fun/ Quickly present your ideas with auto-generated slides
** [https://freedium.cfd/https://generativeai.pub/how-to-create-a-powerpoint-using-ai-1160311b36c2 How to Create a PowerPoint Using AI]
* Learn a language
** [https://www.makeuseof.com/how-chatgpt-plus-can-help-you-learn-a-language/ How ChatGPT Plus Can Help You Learn a Language]
** [https://ivelasq.rbind.io/blog/macos-rig/index.html Setting up macOS as an R data science rig in 2023]
** [https://twitter.com/scaffeoa/status/1608643439868141570 Describe 20 possible generative AI use cases in detail across society that could create early impact.]
== Live voice ==
[https://www.makeuseof.com/ways-use-chatgpt-live-voice-and-vision/ 7 Interesting Ways You Can Use ChatGPT's Live Voice and Vision]
== Reasoning ==
[https://www.makeuseof.com/chatgpt-search-vs-chatgpt-reasoning/ How I Know When to Use ChatGPT Search vs. ChatGPT Reasoning]
== Deep research ==
* [https://www.howtogeek.com/how-chatgpts-deep-research-feature-is-helping-me-do-better-work/ How ChatGPT’s Deep Research Feature Is Helping Me Do Better Work]
* [https://www.makeuseof.com/chatgpt-deep-research-how-to-use/ 8 Ways I Use ChatGPT’s Deep Research Tool]


== API, Extension tools ==
* [https://chrome.google.com/webstore/detail/merlin-chatgpt-assistant/camppjleccjaphfdbohjdohecfnoikec Merlin ChatGPT Assistant for all Websites]
* [https://chrome.google.com/webstore/detail/chatgpt-writer-write-mail/pdnenlnelpdomajfejgapbdpmjkfpjkp ChatGPT Writer - Write mail, messages with AI]
* Tokens (see the cost sketch after this list)
** [https://www.makeuseof.com/what-is-chatgpt-token-limit-can-you-exceed-it/ What Is the ChatGPT Token Limit and Can You Exceed It?]
** https://openai.com/api/pricing/. 0.04¢ to 2¢ per 1K tokens (language models). Think of tokens as pieces of words: 1,000 tokens is about 750 words. How much does the OpenAI API cost per session? About $0.05 to $0.10 if you send 25 to 50 requests ($18.00 free trial). Cost depends on the number of words in the input and output, at roughly $0.000002 per word.
** [https://puppycoding.com/2023/08/25/openai-api-key-guide/ Unofficial Guide to OpenAI API Keys]
* [https://github.com/cheahjs/free-llm-api-resources Free LLM API resources]
* [https://dev.to/_37bbf0c253c0b3edec531e/how-to-access-the-free-gemini-25-pro-api-via-ai-studio-in-2025-step-guides-216k How to Access the Free Gemini 2.5 Pro API via AI Studio in 2025? Step Guides]
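A rough sketch of the arithmetic above, using the 1,000-tokens-per-750-words rule of thumb; the price is the example figure quoted above, not a current rate:
<syntaxhighlight lang='python'>
# Rough token and cost arithmetic: 1,000 tokens is about 750 words.

def estimate_tokens(n_words: int) -> int:
    """Estimate token count from a word count (1,000 tokens ~ 750 words)."""
    return round(n_words * 1000 / 750)

def estimate_cost(n_words: int, usd_per_1k_tokens: float = 0.02) -> float:
    """Estimate cost in USD at a given per-1K-token price (example rate)."""
    return estimate_tokens(n_words) / 1000 * usd_per_1k_tokens

# A session of 40 requests averaging ~100 words in and ~300 words out:
words = 40 * (100 + 300)
print(estimate_tokens(words))          # ~21333 tokens
print(f"${estimate_cost(words):.2f}")  # ~$0.43 at 2 cents per 1K tokens
</syntaxhighlight>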


== Create your GPT ==
How I Stopped Procrastinating with ChatGPT — 2 Hours Saved Each Day!


== call from R ==
<ul>
<li>[https://www.sumsar.net/blog/call-chatgpt-from-r/ Call ChatGPT (or really any other API) from R]
<li>[https://github.com/samterfa/openai openai] - this R package provides an SDK for the OpenAI API
<li>[https://cran.r-project.org/web/packages/openai/index.html openai] package from CRAN & [https://github.com/irudnyts/openai github]
<li>[https://cran.r-project.org/web/packages/askgpt/index.html askgpt] package
<li>[https://shiny.posit.co/blog/posts/shiny-on-hugging-face/ Shiny on Hugging Face]

<li>[https://mlverse.github.io/chattr/ chattr] package - interact with Large Language Models in 'RStudio'.
* [https://cran.r-project.org/web//packages/chattr/index.html CRAN].
* [https://blogs.rstudio.com/ai/posts/2024-04-04-chat-with-llms-using-chattr/ Chat with AI in RStudio].
* [https://www.business-science.io/code-tools/2024/05/11/chattr-chatgpt-in-r.html How to Get ChatGPT in R with chattr]

<li>[https://cran.r-project.org/web/packages/ollamar/index.html ollamar] package
* [https://blog.stephenturner.us/p/use-r-to-prompt-a-local-llm-with Use R to prompt a local LLM with ollamar]
<li>[https://ellmer.tidyverse.org/ ellmer].
* [https://posit.co/blog/setting-up-local-llms-for-r-and-python/ Setting up local LLMs for R and Python] 2025/8/19
* I tested it on a Manjaro VM with 4 GB RAM and 4 CPUs.
<syntaxhighlight lang='sh'>
# install and enable the SSH server so the VM can be reached remotely
sudo pacman -S openssh
sudo systemctl start sshd
sudo systemctl enable sshd

# linear-algebra libraries and build toolchain required by R and its packages
sudo pacman -S lapack blas
sudo pacman -Sy r
sudo pacman -S base-devel
</syntaxhighlight>
R:
<syntaxhighlight lang='r'>
install.packages("ellmer", repos = "https://cloud.r-project.org")
library(ellmer)
chat <- chat_ollama(model = "llama3.2:1b")
chat$chat("Tell me a joke")
live_console(chat) # seems not working on ollama
</syntaxhighlight>
 
<li>Connect to '''[[AI#LM_Studio|LM Studio]]''' for local hosting
<li>[https://www.r-bloggers.com/2024/12/harnessing-azure-openai-and-r-for-web-content-summarisation-a-practical-guide-with-rvest-and-tidyverse/ Harnessing Azure OpenAI and R for Web Content Summarisation: A Practical Guide with rvest and tidyverse]
<li>[https://www.tidyverse.org/blog/2025/01/experiments-llm/ Three experiments in LLM code assist with RStudio and Positron]
<li>[https://gettinggeneticsdone.blogspot.com/2025/06/r-production-ai.html The Modern R Stack for Production AI]
<li>[https://www.r-bloggers.com/2025/06/chat-with-llms-on-your-r-environment/ Chat with LLMs on your R environment]. Google allows using an API key for free.
</ul>


== call from Python ==
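A minimal sketch of calling a hosted model from Python, assuming the official openai package (v1 or later) and an OPENAI_API_KEY environment variable; the model name is illustrative:
<syntaxhighlight lang='python'>
# Minimal chat-completion call with the official openai package (>= 1.0).
# Assumes OPENAI_API_KEY is set in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Tell me a joke"}],
)
print(response.choices[0].message.content)
</syntaxhighlight>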
=== Jupyter-ai ===
[https://github.com/jupyterlab/jupyter-ai A generative AI extension for JupyterLab]


== GPT-4 ==
* [https://openaimaster.com/how-to-use-gpt-4-for-free/ How to use GPT-4 for free]
** [https://openai.nat.dev/ Nat.dev]
== GPT o1 ==
[https://www.howtogeek.com/what-is-chatgpts-o1-model-and-how-can-you-use-it/ What Is ChatGPT's o1 Model and How Can You Use It?]


== GPT-4o ==

== Alternatives ==

== PDF ==
* [https://www.howtogeek.com/ai-tools-to-analyze-pdfs-for-free/ 5 AI Tools to Analyze PDFs For Free]. ChatGPT, Claude, Perplexity AI, Copilot, HuggingChat.
* [https://blog.stephenturner.us/p/biorecap-r-package-for-summarizing-biorxiv-preprints-local-llm biorecap: an R package for summarizing bioRxiv preprints with a local LLM]
* [https://www.howtogeek.com/chat-with-your-pdfs-in-google-drive/ You Can Now Chat With Your PDFs in Google Drive—Here’s How]


== Word ==
* [https://www.makeuseof.com/automate-document-creation-with-chatgpt-in-word/ How to Automate Your Document Creation With ChatGPT in Microsoft Word]
* [https://www.onlyoffice.com/ai-assistants.aspx OnlyOffice AI assistants]
** [https://www.tecmint.com/gpt4all-ai-editing-in-onlyoffice/ AI Document Editing: Connect GPT4All to ONLYOFFICE on Ubuntu]
** It works. I tested it on an Ubuntu MATE 24.04.1 VM with 4 host CPUs and 8 GB RAM, using the Llama 3.2 3B Instruct model. The AI settings let us select different models for different tasks: 1) '''Ask AI''', 2) '''Summarization''', 3) '''Translation''', 4) '''Text analysis'''. We can also choose to rewrite text differently or make it longer or shorter.
** Note that the original text could be overwritten by AI.
** [https://www.makeuseof.com/onlyoffice-ai-plugin/ Unlocking the Power of AI in ONLYOFFICE]
** [https://www.tecmint.com/integrate-localai-with-onlyoffice-desktop/ ONLYOFFICE + LocalAI: AI Document Editing Setup on Ubuntu]
 
== Government ==
https://go.hhs.gov/chatgpt


== Research ==

== Content writer ==
** Changing a Technical Document Into a Popular Article
** Turning a Short Story Into a Movie Script
* [https://www.howtogeek.com/how-to-write-a-great-essay-with-chatgpt-without-cheating/ How to Write a Great Essay With ChatGPT Without Cheating]
== Meeting notes ==
* [https://otter.ai/ Otter AI]


== Detect AI text ==
* [https://www.makeuseof.com/gptzero-detect-ai-generated-text/ What Is GPTZero? How to Use It to Detect AI-Generated Text]
* [https://readmedium.com/words-and-phrases-that-show-chatgpt-generated-it-ca7e28ae8e8f Words and Phrases That Show ChatGPT Generated It]
* [https://medium.com/the-writers-pub/15-signs-that-ai-wrote-it-9bc37e165973 11 Signs That AI Wrote It]


== Youtube summary ==
Chrome extension YouTube Summary with ChatGPT, from 8 AI-Powered Chrome Extensions to Summarize YouTube Videos

== AutoGPT ==
How to Download and Install Auto-GPT Step-by-Step

== Other chats ==
[https://www.howtogeek.com/4-ai-search-engines-i-use-every-day/ 4 AI Search Engines I Use Every Day]. Perplexity, Exa, You AI, Andi AI.


== Google Gemini ==
* [https://blog.google/technology/ai/code-with-bard/ Bard now helps you code] 4/21/2023
* [https://lifehacker.com/set-up-google-bard-extensions-1850853309 You Can Now Connect Bard to Gmail, Google Docs, YouTube, and More]
* [https://www.makeuseof.com/google-career-dreamer-ai-experiment/ Google's Newest AI Tool Helps You Choose Your Perfect Career]
* [https://lifehacker.com/tech/google-has-dropped-the-paywall-for-these-gemini-features Google Has Dropped the Paywall for These Gemini Features] 3/13/2025
** Gems
** Deep Research
** Gemini 2.0 Flash model
* [https://www.howtogeek.com/google-gemini-can-now-turn-almost-anything-into-a-podcast/ Google Gemini Can Now Turn Almost Anything Into a Podcast]
* [https://www.youtube.com/watch?v=XvsfsAMv_H0 Google goes big! Gemini 2.5 Pro launches with stunning coding ability; does it crush Claude 3.7? Free hands-on test!]
=== Google AI Studio ===
https://aistudio.google.com/ 
* [https://aistudio.google.com/plan_information API Plan Billing Information]
* [https://www.makeuseof.com/google-ai-studio-for-learning/ Google’s AI Studio Wasn’t Built for Learning—but It’s the Best Tutor I’ve Ever Used]
* [https://www.youtube.com/watch?v=3cvczHJSRNs The End of Tutorials? This Free AI Changes How You Learn Software Forever | Google AI Studio] (video)
* [https://freedium-mirror.cfd/https://medium.com/lets-code-future/how-to-run-your-google-ai-studio-project-locally-step-by-step-guide-6be9830fc29c How to Run Your Google AI Studio Project Locally (Step-by-Step Guide)] (Click the "Build" button)
=== Google AI ===
* https://google.com/ai
* [https://freedium.cfd/https://generativeai.pub/google-introduces-a-new-url-for-ai-mode-in-search-40943f0ab3bd Google Introduces A New URL For AI Mode In Search]
=== NotebookLM ===
* NotebookLM is Google's tool for building Retrieval-Augmented Generation (RAG) systems without coding.
* [https://www.makeuseof.com/reading-read-it-later-list-notebooklm-trick/ I'm actually reading my read-it-later list thanks to this brilliant NotebookLM trick]
* New Features in NotebookLM
** Featured Notebooks: Expert-created templates that showcase best practices and help users learn how to build their own notebooks.
** Discover Sources: A new button that suggests curated, high-quality sources (e.g., from universities or news outlets) to enrich your notebook.
** Quizzes: Automatically generated quizzes based on your sources, with instant feedback and customizable difficulty, topic, and language.
** Flashcards: 60 default cards for memorizing key concepts, with options to customize and request explanations.
** Mindmaps: Interactive visual summaries of your sources, showing branching relationships between concepts. Not yet editable, but shareable.
** Audio Overview: Create podcasts in multiple styles and languages, with prompts to guide topic focus and length.
** Video Overview: Generate slide-based videos from your sources, structured into chapters and customizable by topic and language.
* How RAG Works (a minimal end-to-end sketch follows this list)
** Query Encoding: The user's question is converted into a vector (a mathematical representation).
** Document Retrieval: The system searches a database or document set for the most relevant matches.
** Context Injection: Retrieved documents are inserted into the model’s prompt.
** Response Generation: The model uses both its training and the retrieved context to generate a response.
* Why RAG Is Useful
** Up-to-date answers: It can pull in current or domain-specific info not included in the model’s training.
** Custom knowledge bases: You can feed it your own documents (e.g., PDFs, research papers, manuals).
** No retraining needed: It improves accuracy without modifying the model itself.
* Example Use Cases
** Scientific research assistants (like in phosphoproteomics 🧬)
*** Ask the question: What are three recurring ideas throughout these texts/documents
** Customer support bots using internal documentation
** Legal or medical AI tools referencing case files or journals
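A toy end-to-end illustration of the four RAG steps above, with scikit-learn TF-IDF standing in for a real embedding model and the generation step left as a prompt for whatever LLM you use; this sketches the idea, not NotebookLM's actual implementation:
<syntaxhighlight lang='python'>
# Toy retrieval-augmented generation: encode, retrieve, inject, generate.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Phosphoproteomics measures phosphorylation events across the proteome.",
    "Kinase inhibitors are a major class of targeted cancer therapies.",
    "The Lincoln Memorial honors the 16th president of the United States.",
]

# 1. Query encoding: turn the question into a vector.
vectorizer = TfidfVectorizer().fit(docs)
query = "What do phosphoproteomics experiments measure?"
q_vec = vectorizer.transform([query])

# 2. Document retrieval: rank the documents by cosine similarity.
scores = cosine_similarity(q_vec, vectorizer.transform(docs))[0]
best = scores.argmax()

# 3. Context injection: put the retrieved text into the model's prompt.
prompt = f"Answer using only this context:\n{docs[best]}\n\nQuestion: {query}"

# 4. Response generation: hand the prompt to any LLM of your choice.
print(prompt)
</syntaxhighlight>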
== Microsoft Copilot ==
* [https://www.youtube.com/watch?v=sdN7C8xRoH4 Microsoft Copilot Tips and Tricks to Boost Your Productivity]


== perplexity.ai ==
* https://www.perplexity.ai/
* [https://youtu.be/ZC3L94U0_sc Why do tech giants love using Perplexity?]
 
=== Perplexity Assistant ===
[https://www.makeuseof.com/chatgpt-operator-vs-perplexity-assistant/ Can't Afford ChatGPT Operator? Try Perplexity Assistant Instead]
 
== Multiple AI Chatbots ==
* [https://poe.com/ Poe]
* [https://monica.im/ Monica]
* Perplexity
 
== Grok ==
https://grok.com/. Designed by xAI.
 
== Groq ==
* https://groq.com/. https://console.groq.com/home is blocked.
* [https://en.wikipedia.org/wiki/Groq Wikipedia]
 
== Deepseek ==
* [https://www.geeksforgeeks.org/deepseek-r1-vs-deepseek-v3/ DeepSeek R1 vs V3: A Head-to-Head Comparison of Two AI Models]
* [https://github.com/deepseek-ai/DeepSeek-Coder DeepSeek Coder]: Let the Code Write Itself
* [https://levelup.gitconnected.com/building-deepseek-r1-from-scratch-using-python-a967fbfdaac4 Building DeepSeek R1 from Scratch Using Python]
 
== Qwen ==
https://chat.qwen.ai/
 
== 文心一言 ==
https://yiyan.baidu.com/
 
== Duck.ai ==
https://duck.ai
 
== Proton Lumo ==
* https://lumo.proton.me/
* https://proton.me/blog/lumo-ai
 
== Brave AI chatbot: Leo ==
[https://www.makeuseof.com/everything-leo-brave-ai/ Everything You Need to Know About Leo: Brave Browser’s AI Chatbot]


== You.com ==
* [https://www.makeuseof.com/claude-vs-chatgpt-which-llm-is-best/ Claude vs. ChatGPT: Which LLM Is Best for Everyday Tasks?]
* [https://www.makeuseof.com/why-claude-is-better-alternative-to-chatgpt/ I've Ditched ChatGPT for This Superior Alternative: 3 Reasons Why]
* [https://www.makeuseof.com/watch-claude-ai-play-pokemon-on-twitch/ I Can't Stop Watching This AI Chatbot Play Pokémon]


== Mistral/Le Chat ==
* https://chat.mistral.ai/chat
* [https://www.pcworld.com/article/2603692/this-free-european-ai-chatbot-is-13-times-faster-than-chatgpt.html This free European AI chatbot is 13 times faster than ChatGPT]


== Trae AI ==
* https://www.trae.ai/
* [https://www.youtube.com/watch?v=RCSTP36YEuw Trae AI: a free coding power tool! Integrates top LLMs, a must-have for developers; build your own MCP and double your efficiency!]


== Open source chats ==
= Run locally =
* [https://freedium.cfd/https://medium.com/illumination/list-of-different-ways-to-run-llms-locally-55f7268c55a2 List of Different Ways to Run LLMs Locally]
* [https://www.pcworld.com/article/2590365/4-free-ai-chatbots-you-can-run-directly-on-your-pc.html 4 free AI chatbots you can run directly on your PC]
* [https://www.makeuseof.com/best-apps-to-run-llm-locally/ Anyone Can Enjoy the Benefits of a Local LLM With These 5 Apps]
** Ollama
** Msty
** AnythingLLM
** Jan.ai
** LM Studio
== Jan.ai ==
* [https://jan.ai/ Jan], https://github.com/janhq/jan
* [https://www.youtube.com/watch?v=gf8Phs2YXWU The best free ChatGPT alternative! Runs locally and offline, 100% free and open source, compatible with the major AI models]
* [https://www.youtube.com/watch?v=TCHnDqFdkLw Llama 3 officially released! Powerful, supports AI text-to-image, completely free and open source! With a local installation tutorial]
* https://jan.ai/docs/local-api. Local Server Address: by default, Jan is only accessible on the computer it runs on (127.0.0.1). You can change this to 0.0.0.0 to let other devices on your local network access it, though this is less secure than same-computer access.


== LM Studio ==
* [https://lmstudio.ai/ LM Studio].
* [https://www.xda-developers.com/run-local-llms-mac-windows-lm-studio/ Run local LLMs with ease on Mac and Windows thanks to LM Studio]
* [https://www.youtube.com/watch?v=NP0s7T9Mou8 One-click local deployment of Llama 3! No GPU needed, 100% success, easily try Meta's latest 8B and 70B models!]
* R
** [https://martinctc.github.io/blog/summarising-top-100-uk-climbs-running-local-language-models-with-lm-studio-and-r/ Summarising Top 100 UK Climbs: Running Local Language Models with LM Studio and R]
* [https://www.cultofmac.com/how-to/run-deepseek-locally-on-mac How to run DeepSeek and other LLMs locally on your Mac]
* [https://www.xda-developers.com/ways-anyone-use-lm-studio-local-llm/ 6 ways anyone can use LM Studio and a local LLM on their PC]
 
== Anything LLM ==
* https://anythingllm.com/
* https://github.com/Mintplex-Labs/anything-llm
* AnythingLLM supports Qualcomm Hexagon NPU on Qualcomm Snapdragon X systems. [https://www.pcworld.com/article/2965927/the-great-npu-failure-two-years-later-local-ai-is-still-all-about-gpus.html The great NPU failure: Two years later, local AI is still all about GPUs]
* [https://www.youtube.com/watch?v=tWJvSy7dL1w The best way to run DeepSeek-R1 locally! Free and open source, painlessly run advanced AI models and build a private knowledge base in seconds]
* [https://www.pcworld.com/article/2772205/how-to-build-your-own-ai-bot-that-answers-questions-about-your-files.html How to build your own AI bot to answer questions about your documents]
 
== Msty ==
* https://msty.app/
* [https://www.pcworld.com/article/2772205/how-to-build-your-own-ai-bot-that-answers-questions-about-your-files.html How to build your own AI bot to answer questions about your documents]
** Anything LLM took 10 to 15 minutes to embed a PDF file with around 150 pages in the test. Msty, on the other hand, often took three to four times as long.
 
== Ollama ==
<ul>
<li>https://github.com/ollama/ollama
* [https://github.com/ollama/ollama/blob/main/docs/faq.md? FAQ], e.g. how to configure the Ollama server: '''Environment="OLLAMA_HOST=0.0.0.0"'''
* For example, when I tried the Llama 3.2 1B model on a 4 GB (later extended to 8 GB) Manjaro VM with 4 vCPUs, total memory use including the Xfce desktop was 2.27 GB.
 
<li>Issue: did not get a response.
* If it takes too long, use Ctrl+C to stop.
* Even after quitting ollama, an "ollama runner" process may still be running; check with "ps -ef | grep ollama". Use '''ollama stop MODEL_NAME''' to unload a model. See '''How do I keep a model loaded in memory or make it unload immediately?''' in the [https://github.com/ollama/ollama/blob/main/docs/faq.md FAQ].
 
<li>My notes. llama3.1:8b is better than Phi3/Phi4 (14b).
<syntaxhighlight lang='sh'>
$ ollama list
NAME              ID              SIZE      MODIFIED   
qwen2:1.5b        f6daf2b25194    934 MB    6 days ago   
phi3:3.8b          4f2222927938    2.2 GB    6 days ago   
llama3.1:8b        46e0c10c039e    4.9 GB    6 days ago   
llama3.2:latest    a80c4f17acd5    2.0 GB    6 days ago   
llama3.2:1b        baf6a787fdff    1.3 GB    2 weeks ago
 
$ ollama pull llama3.1:8b
 
$ ollama run --verbose qwen2:1.5b
>>> what is lincoln memorial
...
total duration:      1m14.068603383s
load duration:        19.23796ms
prompt eval count:    13 token(s)
prompt eval duration: 2.348s
prompt eval rate:    5.54 tokens/s
eval count:          297 token(s)
eval duration:        1m11.699s
eval rate:            4.14 tokens/s
>>> /bye
 
$ ollama run --verbose phi3:3.8b
>>> what is lincoln memorial
...
total duration:      1m33.270810903s
load duration:        14.566152ms
prompt eval count:    15 token(s)
prompt eval duration: 7.383s
prompt eval rate:    2.03 tokens/s
eval count:          160 token(s)
eval duration:        1m25.872s
eval rate:            1.86 tokens/s
>>> /bye
</syntaxhighlight>
 
<li>[https://www.restack.io/p/ollama-answer-ollama-guidance-cat-ai Ollama Guidance for Effective Use]
 
<li>Vision:
* [https://ollama.com/blog/llama3.2-vision Llama 3.2 Vision]
* [https://medium.com/@tapanbabbar/how-to-run-llama-3-2-vision-on-ollama-a-game-changer-for-edge-ai-80cb0e8d8928 How to Run Llama 3.2-Vision Locally With Ollama: A Game Changer for Edge AI]
 
<li>If you want to [https://dev.to/hamed0406/how-to-change-place-of-saving-models-on-ollama-4ko8 change the default location] where Ollama saves its models, you can set the '''OLLAMA_MODELS''' environment variable to your desired directory. To do this:
* Open a terminal
* Run: '''sudo systemctl edit ollama.service'''
* Add the following line under the [Service] section & Save and exit the editor: '''Environment="OLLAMA_MODELS=/path/to/new/location" '''
* Reload the daemon: '''sudo systemctl daemon-reload'''
* Restart Ollama: '''sudo systemctl restart ollama'''
 
<li>GPU:
* [https://www.containerssimplified.com/container/running-ollama-on-your-local-machine-with-nvidia-gpus/ Running Ollama on Your Local Machine with NVIDIA GPUs]
* [https://jamesravey.medium.com/self-hosting-llama-3-on-a-home-server-00feeeba8174 Self-hosting Llama 3 on a home server]
 
<li>Model file
* https://github.com/ollama/ollama/blob/main/docs/modelfile.md A model file is the blueprint to create and share models with Ollama
 
<li>Raspberry Pi 5:
* [https://fleetstack.io/blog/running-open-llm-models-on-raspberry-pi-5-with-ollama Running Open LLM Models on Raspberry Pi 5 with Ollama]
* [https://itsfoss.com/raspberry-pi-ollama-ai-setup/ Run LLMs Locally on Raspberry Pi Using Ollama AI]
 
<li>[https://ubuntushell.com/install-alpaca-on-linux/ Alpaca]: A Linux GUI App to Manage Multiple AI Models Offline
</ul>
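Besides the CLI, Ollama also serves a REST API on localhost:11434; a minimal Python sketch, assuming the server is running and the model has already been pulled:
<syntaxhighlight lang='python'>
# Query a local Ollama server over its REST API (default port 11434).
# Assumes `ollama serve` is running and llama3.2:1b has been pulled.
import json
import urllib.request

payload = {
    "model": "llama3.2:1b",
    "prompt": "What is the Lincoln Memorial?",
    "stream": False,  # return a single JSON object instead of a stream
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
</syntaxhighlight>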
 
=== VS Code ===
* [https://blog.codegpt.co/create-your-own-and-custom-copilot-in-vscode-with-ollama-and-codegpt-736277a60298 Create your own and custom Copilot in VSCode with Ollama and CodeGPT]
** [https://docs.codegpt.co/docs/category/%EF%B8%8F-quick-start CodeGPT] Quick Start
* [https://medium.com/@dan.avila7/step-by-step-running-deepseek-locally-in-vscode-for-a-powerful-private-ai-copilot-4edc2108b83e Step-by-Step: Running DeepSeek locally in VSCode for a Powerful, Private AI Copilot]
* [https://www.makeuseof.com/local-coding-ai-vs-code-shockingly-good/ I built a local coding AI for VS Code and it’s shockingly good]. LM Studio + continue extension.
 
=== OpenWebUI ===
<ul>
<li>(2025/7/31) Ollama desktop is now available. [https://www.howtogeek.com/ollama-0-10-speeds-up-local-ai-models-introduces-desktop-app/ Ollama 0.10 Speeds up Local AI Models, Introduces Desktop App].
<li>https://github.com/open-webui/open-webui
<li>Mac
<ul>
<li>Install Ollama
* Download Ollama for [https://ollama.com/download/mac Mac]. After unzipping it, drag the file to the Applications folder, then double-click the Ollama app to start the installation.
* Command line way: '''ollama run --verbose llama3.2'''
<li>Install Open WebUI
<syntaxhighlight lang='sh'>
$ brew install python@3.11
$ python3.11 -m venv ollamavenv
$ source ollamavenv/bin/activate
(ollamavenv) $ pip install open-webui
(ollamavenv) $ open-webui serve  # OR open-webui serve --port 8080
(ollamavenv) $ deactivate
</syntaxhighlight>
Create a username, email (eg [email protected]) and password. There is no email verification and it is stored only locally, so the email is just a login identifier. You'll only need to log in again if you clear your browser cache or reset the database; the only thing that matters is that you remember the email and password you entered.
 
Go to http://localhost:8080 to see the Open WebUI.
 
Ollama and llama3.2 were automatically recognized and ready to use.
</ul>
<li>Add a Non-Ollama Backend (a request sketch follows this list)
* Go to Settings > Model Providers
* Click "Add Provider"
* Choose "OpenAI-compatible"
* Enter the base URL (e.g., http://localhost:1234/v1)
* Provide the API key (if needed) — for local setups you can use sk-fake-key
</ul>
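These OpenAI-compatible backends all speak the same /v1/chat/completions protocol, so you can also test one directly; a sketch using the example base URL and placeholder key above (the model name is an assumption; many local servers ignore it or list their own):
<syntaxhighlight lang='python'>
# Send a chat request to any OpenAI-compatible local backend directly.
# Base URL and fake key match the example above; the model name is a placeholder.
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    headers={"Authorization": "Bearer sk-fake-key"},
    json={
        "model": "local-model",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
</syntaxhighlight>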


=== Python ===
[https://freedium.cfd/https://blog.stackademic.com/i-built-a-fully-offline-ai-agent-that-answers-questions-from-pdf-images-and-audio-no-cloud-2e8d71b246d6 I Built a Fully Offline AI Agent That Answers Questions From PDF, Images, and Audio — No Cloud…]


== GPT4ALL ==
* [https://youtu.be/kqz4nDcKctg?si=XCe-XbsoUieODjVT&t=76 Local deployment of the open-source Phi-3 model! Can it rival ChatGPT and Claude 3?]
* [https://www.howtogeek.com/heres-how-to-install-your-own-uncensored-local-gpt-like-chatbot/ Here's How To Install Your Own Uncensored Local GPT-Like Chatbot]
* It can download models from two sources: GPT4ALL and HuggingFace (no guarantee it will work).
* My testing:
** The installer creates a "gpt4all" directory under the home directory. The GUI can be launched from the command line ('''~/gpt4all/bin/chat''') or from a desktop icon.
** Install it from your user account, not with 'sudo'.
** A VM fails to start with a generic vCPU: it shows 'Encountered an error starting up: "Incompatible hardware detected."' because the emulated CPU lacks the AVX intrinsics that a modern large language model requires. The solution is to edit the VM hardware to use the '''host CPU'''.
* When we launch GPT4ALL, it will check if a new version is available. If a new version is available, it will offer to upgrade.
* Comparison of GPT4ALL, LM Studio and Ollama
{| class="wikitable"
! Feature
! GPT4ALL
! LM Studio
! Ollama
|-
| Model Compatibility
| Vicuna, Alpaca, LLaMa
| Wide range including Vicuna, Alpaca, LLaMa, Falcon, Starcoder, GPT-2
| Various models, seamless workflow integration
|-
| User Interface
| User-friendly GUI
| More UI-friendly, in-app chat interface
| Simple command-line interface, various web-based clients available
|-
| Performance
| Good for lower-end systems
| Generally faster inference, more coherent responses
| Optimized for speed, rapid inference times
|-
| Resource Utilization
| Efficient on consumer-grade hardware
| May require more resources for larger models
| Can be resource-intensive for larger models
|-
| Customization
| Basic
| Advanced (e.g., adjustable parameters)
| Flexible, allows creating custom models
|-
| Acceleration Support
| Not specified
| CUDA, openCL, cuBLAS, Metal
| Not specified
|-
| Open Source
| Yes
| No (free to download)
| Yes
|-
| OS Support
| Cross-platform
| macOS, Windows (with AVX2), Linux (beta)
| macOS, Linux, Windows (preview)
|-
| Key Features
| RAG capabilities, wide hardware support
| Built-in chat interfaces, OpenAI-like local servers
| Simplicity, ease of installation, suitable for beginners
|-
| Developer Tools
| Python bindings, API
| Local inference server
| Command-line interface, API
|}


== Remote access ==
* [https://chromewebstore.google.com/detail/sider-chatgpt-sidebar-+-g/difoiogjjojoaoomphldepapgpbgkhkb ChatGPT Sidebar] Chrome extension
* [https://github.com/open-webui/open-webui Open-Webui]
 
== Documents ==
=== PrivateGPT ===
https://github.com/zylon-ai/private-gpt (56.7k stars)
 
=== DocsGPT ===
https://github.com/arc53/DocsGPT (17.2k stars)
 
= Models =
== Meta's LLaMA ==
* [https://ai.meta.com/blog/meta-llama-3/ Introducing Meta Llama 3: The most capable openly available LLM to date] 2024/4/18
* [https://huggingface.co/blog/lyogavin/llama3-airllm Run the strongest open-source LLM model: Llama3 70B with just a single 4GB GPU!] 2024/4/21. AirLLM. ''It’s not designed for real-time interactive scenarios like chatting, more suitable for data processing and other offline asynchronous scenarios.''
* [https://blog.stephenturner.us/p/create-a-free-llama-405b-llm-chatbot-github-repo-huggingface Create a free Llama 3.1 405B-powered chatbot on a GitHub repo in <1 min]


== BERT ==
* [https://www.makeuseof.com/what-is-bert-language-model-how-differ-gpt/ What Is the BERT Natural Language Processing Model and How Does It Differ From GPT?]
* [https://www.makeuseof.com/gpt-vs-bert/ GPT vs. BERT: What Are the Differences Between the Two Most Popular Language Models?]
== Build LLM ==
* [https://towardsdatascience.com/understanding-llms-from-scratch-using-middle-school-math-e602d27ec876 Understanding LLMs from Scratch Using Middle School Math]
* [https://levelup.gitconnected.com/building-a-2-billion-parameter-llm-from-scratch-using-python-1325cb05d6fb Building a 2 Billion Parameter LLM from Scratch Using Python]
* [https://www.youtube.com/watch?v=s1uFVfuT2aw Train an LLM From Scratch On NVIDIA Jetson Nano (Step-by-Step Guide)]
= AI agent =
* [https://github.com/mannaandpoem/OpenManus OpenManus]
** [https://nodeshift.com/blog/how-to-install-run-openmanus-locally-with-ollama-no-api-keys-required How to Install & Run OpenManus Locally with Ollama – No API Keys Required]
= LangChain =
Build context-aware reasoning applications
* https://github.com/langchain-ai/langchain
* https://en.wikipedia.org/wiki/LangChain
* [https://www.geeksforgeeks.org/introduction-to-langchain/ Introduction to LangChain]
* [https://www.infoworld.com/article/2338830/generative-ai-with-langchain-rstudio-and-just-enough-python.html Generative AI with LangChain, RStudio, and just enough Python]
* OpenAI has introduced a file upload capability that allows users to upload various file types, such as PDFs, CSVs, and PowerPoint presentations, directly to their platform for analysis. However, for more advanced or customized applications, developers often turn to '''frameworks''' like LangChain. '''LangChain provides tools to parse and process different file types, integrate with large language models (LLMs), and build sophisticated workflows tailored to specific needs.''' For instance, LangChain offers document loaders to handle various file formats and chains to analyze documents in a structured manner.
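As a small illustration of those loaders and splitters (LangChain's package layout changes between releases; this sketch assumes the langchain-community and langchain-text-splitters packages plus pypdf, and a hypothetical paper.pdf):
<syntaxhighlight lang='python'>
# Load a PDF and split it into chunks, the first steps of a document-analysis chain.
# Package names are assumptions tied to current LangChain releases.
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

docs = PyPDFLoader("paper.pdf").load()  # one Document per page (hypothetical file)
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)

print(len(chunks), "chunks")
print(chunks[0].page_content[:200])  # these chunks would then be passed to an LLM
</syntaxhighlight>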
= AI Browser =
List
* [https://openai.com/index/introducing-chatgpt-atlas/ ChatGPT Atlas]
* [https://www.perplexity.ai/comet Comet] browser from Perplexity
* [https://lifehacker.com/tech/opera-is-the-first-browser-to-support-local-ai-llms Opera Is the First Browser to Support Local AI LLMs]
* https://pinokio.co/ & [https://github.com/pinokiocomputer/pinokio Pinokio]
* [https://www.diabrowser.com/ Dia] from [https://arc.net/ Arc].
** [https://www.makeuseof.com/i-thought-this-browser-was-awful-until-one-update-flipped-everything/ I thought this AI browser was awful until one update flipped everything]
Reviews
* [https://usefulai.com/tools/ai-browsers 8 Best Agentic AI Browsers in 2025]


= AI, ML and DL =
[https://www.opensourceforu.com/2022/08/ai-ml-and-dl-whats-the-difference/ AI, ML and DL: What’s the Difference?]


= Applications =

== General Applications ==
* [https://health.udn.com/health/story/6005/3034944 No one knows when life will end? US-developed AI predicts time of death with 90% accuracy]
* [https://www.ithome.com.tw/news/122511 US FDA approves the first AI medical device, which automatically detects diabetic retinopathy in real time]
* [https://www.worldjournal.com/5518499/article-美國現象/在家養老-科技幫大忙/ Aging at home: technology helps a lot]
* [https://www.ithome.com.tw/news/122507 A new helper for pathology research: Google combines an AR microscope with deep learning to detect cancer cells in real time]
* [https://earther.com/this-new-app-is-like-shazam-for-your-nature-photos-1823952757 This New App Is Like Shazam for Your Nature Photos]. [https://www.inaturalist.org/pages/seek_app Seek App].
* [https://liliputing.com/2018/07/draw-this-camera-prints-crappy-drawings-of-the-things-you-photograph-diy.html Draw This camera prints crappy drawings of the things you photograph (DIY)] with Google's [https://quickdraw.withgoogle.com/ quickdraw].
* [https://www.makeuseof.com/tag/machine-learning-algorithms/ What Are Machine Learning Algorithms? Here’s How They Work]
* [https://jamanetwork.com/journals/jama/article-abstract/2754798 How to Read Articles That Use Machine Learning] Users’ Guides to the Medical Literature
* [https://www.techbang.com/posts/62754-googles-artificial-intelligence-open-source-oracle-is-three-years-old-and-its-being-used-in-a-lot-of-places-you-cant-imagine Google's open-source AI tool turns three, and it's used in many places you wouldn't imagine] Nov 2018
* [https://www.makeuseof.com/what-is-natural-language-processing-and-how-does-it-work/ What is Natural Language Processing and How Does It Work?] NLP works by preprocessing the text and then running it through a machine-learning-trained algorithm (a minimal sketch follows this list).
* [https://arxiv.org/abs/2110.12112 Why Machine Learning Cannot Ignore Maximum Likelihood Estimation] van der Laan & Rose 2021
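A minimal sketch of that preprocess-then-classify idea with scikit-learn; the four training texts are toy data:
<syntaxhighlight lang='python'>
# "Preprocess the text, then run it through a machine-learning-trained algorithm":
# CountVectorizer does the preprocessing (tokenizing and counting words) and
# LogisticRegression is the trained algorithm. Toy data for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great movie", "wonderful film", "terrible plot", "awful acting"]
labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative

clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["what a wonderful movie"]))  # -> [1]
</syntaxhighlight>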
 
== Coding/code ==
* [https://www.scribbledata.io/blog/the-top-llms-for-code-generation-2024-edition/ The Top LLMs For Code Generation: 2024 Edition]
* https://mistral.ai/news/codestral/
** [https://www.maginative.com/article/mistral-unveils-codestral-an-ai-code-assistant-trained-on-80-programming-languages/ Mistral Unveils Codestral, an AI Code Assistant Trained on 80+ Programming Languages] 5/29/2024
* [https://bsky.app/profile/stephenturner.us/post/3lbf4qmvrxk2f Writing a browser extension] using Claude 3.5 Sonnet.
* [https://marketplace.visualstudio.com/items?itemName=Google.geminicodeassist Gemini Code Assist in VS Code]
** [https://developers.google.com/gemini-code-assist/docs/overview Documentation]
* [https://simonwillison.net/2025/Mar/11/using-llms-for-code/ Here’s how I use LLMs to help me write code]


* Antigravity. [https://www.howtogeek.com/i-built-an-e-ink-photo-frame-using-an-arduino-e-paper-display-and-google-antigravity/ I built an E-Ink photo frame using an Arduino, E-Paper display and Google Antigravity]

== Images ==
=== Drawing ===
* [https://www.reviewgeek.com/151292/this-new-ai-tool-can-animate-your-childrens-drawings/ This New AI Tool Can Animate Your Children’s Drawings]. [https://sketch.metademolab.com/ Animated Drawing] by Meta.
* [https://www.makeuseof.com/react-dall-e-image-generator-application-build-api/ How to Build an Image Generator in React Using the DALL-E API]
* [https://www.makeuseof.com/how-to-use-bing-image-creator-ai-art/ How to Use Bing Image Creator to Make AI Art]
* [https://youtu.be/xMrilkJ21yo How To Install Stable Diffusion With Prompting Cheat Sheets] 5/21/2023
* [https://www.makeuseof.com/dall-e-3-best-image-prompts/ 8 DALL-E 3 Prompts for Your Next Image] 2024/3/28
* [https://www.makeuseof.com/best-open-source-ai-image-generators/ The 5 Best Open-Source AI Image Generators] 2024/4/23
* [https://www.makeuseof.com/create-logos-with-ai/ Using AI to Create Logos: The Pros, Cons, and Best Practices]
* [https://www.youtube.com/watch?v=zOI8ePbTUSs Blockbuster! Stable Diffusion 3 is finally open source!] 2024/7
* [https://www.kenkoonwong.com/blog/2024-09-01-stable-diffusion-3-in-r-why-not-thanks-to-reticulate/ Stable Diffusion 3 in R? Why not? Thanks to {reticulate}] 2024/9/1
* Run it locally
** [https://github.com/AUTOMATIC1111/stable-diffusion-webui Stable Diffusion web UI]
* [https://www.howtogeek.com/want-powerful-local-ai-image-generation-on-windows-use-this-tool/ Want Powerful Local AI Image Generation on Windows? Use This Tool] 4/21/2024
* https://github.com/lllyasviel/Fooocus
** [https://allthings.how/how-to-set-up-local-ai-image-generation-on-your-pc-with-fooocus/ How to Set Up Local AI Image Generation on Your PC with Fooocus] 4/29/2024
** [https://itsfoss.com/local-ai-image-tools/ 5 Open-source Local AI Tools for Image Generation I Found Interesting] 2/10/2025
** [https://www.linuxlinks.com/machine-learning-linux-fooocus-image-generating-software/2/ Machine Learning in Linux: Fooocus – image generating software]
* [https://hao.cnyes.com/post/142547 Not just Ghibli! GPT-4o's new tricks go viral; netizens say the AI has come alive] convert this photo to studio ghibli style anime


=== Describe images ===
* [https://www.makeuseof.com/use-chatgpt-vision/ 8 Ways to Use ChatGPT Vision]
* [https://www.makeuseof.com/google-bard-use-image-prompts/ How to Use Image Prompts on Google Bard]


=== GeoSpy ===
https://geospy.web.app/


== Videos ==
* [https://youtu.be/OpYYFGJPr0A Ultra-realistic AI digital humans: a free one-click generation tutorial! You can even clone yourself with these two websites!] 5/21/2023 零度解说
* [https://www.youtube.com/watch?v=5KC4wFTLq3E From zero: a full AI workflow for a children's animation channel earning US$10k a month | solving character consistency and lip-sync | Creating animation channel with AI]
* [https://github.com/comfyanonymous/ComfyUI ComfyUI]
** [https://www.youtube.com/watch?v=v3UPg9sqIj0 The most powerful video face-swap tool! Excellent results, completely free, one click in the official ComfyUI client!]


== Music ==
* [https://www.makeuseof.com/google-musiclm-overview-how-to-use/ Does Google's MusicLM Live Up to the Hype?]
* [https://www.howtogeek.com/i-took-googles-new-ai-music-tool-for-a-spin-heres-how-it-went/ I Took Google's New AI Music Tool for a Spin, Here's How It Went]


== Games ==
* Write a complete Python game using only the standard pygame library (no external dependencies). The game should have a retro synthwave aesthetic with neon-like colors and a grid or night-sky background. The player controls a red square that can move left and right at the bottom of the screen using the arrow keys. Blue obstacles fall from the top of the screen, and the goal is to avoid them. Additionally, include the following features:
** Show a start screen with instructions, including:
*** “Press R to restart”
*** “Use Up/Down arrows to change obstacle speed”
** The red square should be smaller than the original version (e.g., 30x30 pixels).
** The game should be easy to play — falling speeds should start slow and obstacles should be spaced out enough for beginners.
** Players should be able to press the Up arrow to increase obstacle falling speed and the Down arrow to decrease it.
** The game should run smoothly with no errors, and the code should be fully self-contained in a single file.
 
== Text to/from speech ==
* [https://www.freedidi.com/8737.html Text-to-speech and speech-to-text! Methods you should know about]
* [https://www.youtube.com/watch?v=aUcFDNyMuVc ChatTTS, the strongest text-to-speech! One-click local install, 100% success! Sounds like a real person, completely free and open source!]
** [https://www.freedidi.com/12613.html ChatTTS local deployment tutorial! The best text-to-speech tool available right now!]
** My GPU has 4 GB. By default the GPU is not used. I am using the docker compose method. Following the instructions at [https://github.com/jianchang512/ChatTTS-ui/issues/106 CUDA is installed, so why is it still on CPU? #106], I just needed to open '''ChatTTS/core.py''' and, on line 78, change "4096" to "2048". Bingo! Verify with '''nvidia-smi -l 1'''.


* [https://github.com/openai/whisper Whisper] (a minimal transcription sketch follows this list)
** [https://www.tecmint.com/whisper-ai-audio-transcription-on-linux/ Running Whisper AI for Real-Time Speech-to-Text on Linux]
 
* OpenAI [https://github.com/ahmetoner/whisper-asr-webservice Whisper ASR Box]
** [https://www.makeuseof.com/this-local-voice-to-text-app-replaced-every-paid-service-for-me/ This local voice-to-text app replaced every paid service for me]
 
* [https://github.com/index-tts/index-tts IndexTTS] in github
** [https://www.youtube.com/watch?v=dJ2JDzLcqDw IndexTTS Voice Cloning and TTS in 4GB VRAM! (Local Test & Install)]
 
* [https://github.com/resemble-ai/chatterbox Chatterbox TTS] - SoTA open-source TTS
** [https://www.makeuseof.com/ai-voice-clone-chatterbox/ I cloned my voice with a local voice model and the result was unsettlingly good]
 
* [https://github.com/nari-labs/dia dia]
** [https://www.linuxlinks.com/machine-learning-linux-dia-text-speech-model/ Machine Learning in Linux: Dia – 1.6B parameter text to speech model]
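A minimal transcription sketch for the open-source Whisper package above; it assumes pip install openai-whisper, ffmpeg on the PATH, and a hypothetical interview.mp3:
<syntaxhighlight lang='python'>
# Transcribe an audio file locally with OpenAI's open-source Whisper model.
import whisper

model = whisper.load_model("base")          # tiny/base/small/medium/large
result = model.transcribe("interview.mp3")  # hypothetical input file
print(result["text"])
</syntaxhighlight>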
 
== Bioinformatics ==
* [https://academic.oup.com/bib/article/23/6/bbac409/6713511 BioGPT: generative pre-trained transformer for biomedical text generation and mining]
* [https://academic.oup.com/bib/article/23/6/bbac409/6713511 BioGPT: generative pre-trained transformer for biomedical text generation and mining]
* [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9950855/ Applications of transformer-based language models in bioinformatics: a survey] 2023
* [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9950855/ Applications of transformer-based language models in bioinformatics: a survey] 2023
** [https://mistral-7b.com/try-installing-biomistral-on-your-windows-system-for-the-best-medical-llm-experience-this-local-installation-is-user-friendly-and-ensures-optimal-performance/ Try installing BioMistral on your Windows system for the best medical LLM experience. This local installation is user-friendly and ensures optimal performance].


== AI and statistics ==
[https://www.frontiersin.org/articles/10.3389/fdgth.2022.833912/ Artificial Intelligence and Statistics: Just the Old Wine in New Wineskins?] Faes 2022


=== What are the most important statistical ideas of the past 50 years ===
[https://towardsdatascience.com/four-deep-learning-papers-to-read-in-june-2021-5570cc5213bb Four Deep Learning Papers to Read in June 2021]
== FDA Elsa ==
[https://www.ithome.com.tw/news/169338 FDA launches the general-purpose AI tool Elsa, expected to speed up review processes]


= Neural network =
[https://datageeek.com/2021/06/01/simulated-neural-network-with-bootstrapping-time-series-data/ Simulated Neural Network with Bootstrapping Time Series Data]


= Languages for machine learning =
[https://www.techrepublic.com/article/github-the-top-10-programming-languages-for-machine-learning/ GitHub: The top 10 programming languages for machine learning]



Latest revision as of 09:21, 1 December 2025

How can humans outperform AI?

Prompts

Research prompts

Learning prompts

  • How to Learn Python FAST with ChatGPT?
    • Can you create a roadmap to learn python for data analysis
    • Can you create a roadmap to learn python for data analysis in 3 months with weekly plan and resources for learning
    • Can you create a roadmap to learn python for data analysis in 3 months with weekly plan, including resources and links for each week and youtube video links
    • Explain while loop in python to a child
  • How to learn to code FAST using ChatGPT
    • Give me a study plan to learn python for data science
    • Give me a study plan to learn python for data science with resources and a timeline
    • Sublime Text is used as the editor.
    • (After asking a question and getting an answer) Let's take this step by step.
  • Ask generative AI to be that colleague. Ask 'As a physicist, describe how cancer cells interact with their environment', or 'As a chemist..', 'As a developmental biologist..', 'As an economist..' 'As an electrician.' ...

Creating images

5 of these 10 photos are AI-generated — can you spot them?

Interesting prompts

  • Can you tell me everything you know about me, based on our past conversations?

Carbon footprint

Free AI isn’t sustainable — and we’ll be paying for it soon enough.

ChatGPT

https://chat.openai.com, https://openai.com/blog/chatgpt-plus/

Down

Network error

Network recommendations for ChatGPT errors on web and apps

Differences among platforms

8 ChatGPT Features You Can't Access on All Platforms

Settings

Plugins

How to Enable ChatGPT’s Web Browsing and Plugins

Use

Live voice

7 Interesting Ways You Can Use ChatGPT's Live Voice and Vision

Reasoning

How I Know When to Use ChatGPT Search vs. ChatGPT Reasoning

Deep research

API, Extension tools

Create your GPT

How I Stopped Procrastinating with ChatGPT — 2 Hours Saved Each Day!

call from R

call from Python

Jupyter-ai

A generative AI extension for JupyterLab

GPT-4

GPT o1

What Is ChatGPT's o1 Model and How Can You Use It?

GPT-4o

Alternatives

PDF

Word

Government

https://go.hhs.gov/chatgpt

Research

Content writer

Meeting notes

Detect AI text

Youtube summary

Chrome extension "YouTube Summary with ChatGPT", from 8 AI-Powered Chrome Extensions to Summarize YouTube Videos

AutoGPT

How to Download and Install Auto-GPT Step-by-Step

Other chats

4 AI Search Engines I Use Every Day. Perplexity, Exa, You AI, Andi AI.

Google Gemini

Google AI Studio

https://aistudio.google.com/

Google AI

NotebookLM

  • New Features in NotebookLM
    • Featured Notebooks: Expert-created templates that showcase best practices and help users learn how to build their own notebooks.
    • Discover Sources: A new button that suggests curated, high-quality sources (e.g., from universities or news outlets) to enrich your notebook.
    • Quizzes: Automatically generated quizzes based on your sources, with instant feedback and customizable difficulty, topic, and language.
    • Flashcards: 60 default cards for memorizing key concepts, with options to customize and request explanations.
    • Mindmaps: Interactive visual summaries of your sources, showing branching relationships between concepts. Not yet editable, but shareable.
    • Audio Overview: Create podcasts in multiple styles and languages, with prompts to guide topic focus and length.
    • Video Overview: Generate slide-based videos from your sources, structured into chapters and customizable by topic and language.
  • How RAG Works (a runnable sketch follows this list)
    • Query Encoding: The user's question is converted into a vector (a mathematical representation).
    • Document Retrieval: The system searches a database or document set for the most relevant matches.
    • Context Injection: Retrieved documents are inserted into the model’s prompt.
    • Response Generation: The model uses both its training and the retrieved context to generate a response.
  • Why RAG Is Useful
    • Up-to-date answers: It can pull in current or domain-specific info not included in the model’s training.
    • Custom knowledge bases: You can feed it your own documents (e.g., PDFs, research papers, manuals).
    • No retraining needed: It improves accuracy without modifying the model itself.
  • Example Use Cases
    • Scientific research assistants (like in phosphoproteomics 🧬)
      • Ask the question: "What are three recurring ideas throughout these texts/documents?"
    • Customer support bots using internal documentation
    • Legal or medical AI tools referencing case files or journals
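A minimal, self-contained sketch of the four RAG steps above (an assumption for illustration: a toy hashing bag-of-words embedding stands in for a real embedding model, and the final prompt would be handed to any LLM):

    import numpy as np

    def embed(text, dim=256):
        # toy hashing bag-of-words embedding (stand-in for a real model)
        v = np.zeros(dim)
        for tok in text.lower().split():
            v[hash(tok) % dim] += 1.0
        n = np.linalg.norm(v)
        return v / n if n else v

    docs = ["Phosphoproteomics measures protein phosphorylation states.",
            "RAG retrieves relevant documents and injects them into the prompt.",
            "Keras is a high-level deep-learning library."]
    doc_vecs = np.stack([embed(d) for d in docs])

    query = "How does RAG answer questions?"
    q = embed(query)                        # 1. query encoding
    scores = doc_vecs @ q                   # 2. document retrieval (cosine on unit vectors)
    top = [docs[i] for i in np.argsort(scores)[::-1][:2]]
    context = "\n".join(top)                # 3. context injection
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."
    print(prompt)                           # 4. hand the prompt to the model for generation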

Microsoft Copilot

perplexity.ai

Perplexity Assistant

Can't Afford ChatGPT Operator? Try Perplexity Assistant Instead

Multiple AI Chatbots

Grok

https://grok.com/. Designed by xAI.

Groq

Deepseek

Qwen

https://chat.qwen.ai/

文心一言

https://yiyan.baidu.com/

Duck.ai

https://duck.ai

Proton Lumo

Brave AI chatbot: Leo

Everything You Need to Know About Leo: Brave Browser’s AI Chatbot

You.com

You.com’s AI-infused Google rival provides a tantalizing glimpse of the future

Claude

Mistral/Le Chat

Trae AI

Open source chats

Run locally

Jan.ai

LM Studio

Anything LLM

Msty

Ollama

  • https://github.com/ollama/ollama
    • FAQ entries such as "How do I configure the Ollama server?" (e.g., Environment="OLLAMA_HOST=0.0.0.0" to listen on all interfaces). A REST-call sketch follows this list.
    • For example, when I tried the Llama 3.2 1B model on a 4 GB (later extended to 8 GB) Manjaro VM with 4 vCPUs, total memory use including the Xfce desktop was 2.27 GB.
  • Issue: I did not get a response.
    • If generation takes too long, press Ctrl+C to stop it.
    • Even after quitting ollama, an "ollama runner" process may still be running; check with "ps -ef | grep ollama". Use ollama stop MODEL_NAME to unload a model. See "How do I keep a model loaded in memory or make it unload immediately?" in the FAQ.
  • My notes: llama3.1:8b answers better than Phi3/Phi4 (14b).
    $ ollama list
    NAME               ID              SIZE      MODIFIED    
    qwen2:1.5b         f6daf2b25194    934 MB    6 days ago     
    phi3:3.8b          4f2222927938    2.2 GB    6 days ago     
    llama3.1:8b        46e0c10c039e    4.9 GB    6 days ago     
    llama3.2:latest    a80c4f17acd5    2.0 GB    6 days ago     
    llama3.2:1b        baf6a787fdff    1.3 GB    2 weeks ago
    
    $ ollama pull llama3.1:8b
    
    $ ollama run --verbose qwen2:1.5b
    >>> what is lincoln memorial
    ...
    total duration:       1m14.068603383s
    load duration:        19.23796ms
    prompt eval count:    13 token(s)
    prompt eval duration: 2.348s
    prompt eval rate:     5.54 tokens/s
    eval count:           297 token(s)
    eval duration:        1m11.699s
    eval rate:            4.14 tokens/s
    >>> /bye
    
    $ ollama run --verbose phi3:3.8b
    >>> what is lincoln memorial
    ...
    total duration:       1m33.270810903s
    load duration:        14.566152ms
    prompt eval count:    15 token(s)
    prompt eval duration: 7.383s
    prompt eval rate:     2.03 tokens/s
    eval count:           160 token(s)
    eval duration:        1m25.872s
    eval rate:            1.86 tokens/s
    >>> /bye
  • Ollama Guidance for Effective Use
  • Vision:
  • If you want to change the default location where Ollama saves its models, you can set the OLLAMA_MODELS environment variable to your desired directory. To do this:
    • Open a terminal
    • Run: sudo systemctl edit ollama.service
    • Add the following line under the [Service] section, then save and exit the editor: Environment="OLLAMA_MODELS=/path/to/new/location"
    • Reload the daemon: sudo systemctl daemon-reload
    • Restart Ollama: sudo systemctl restart ollama
  • GPU:
  • Model file
  • Raspberry Pi 5:
  • Alpaca: A Linux GUI App to Manage Multiple AI Models Offline
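A minimal sketch of calling the local Ollama REST API from Python (assumptions: ollama serve is running on the default port 11434 and llama3.2 has already been pulled):

    import json
    import urllib.request

    payload = {"model": "llama3.2",
               "prompt": "What is the Lincoln Memorial?",
               "stream": False}
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])  # the model's full reply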

VS Code

OpenWebUI

  • (2025/7/31) Ollama desktop is now available. Ollama 0.10 Speeds up Local AI Models, Introduces Desktop App.
  • https://github.com/open-webui/open-webui
  • Mac
    • Install Ollama
      • Download Ollama for Mac. After unzipping it, drag the app to the Applications folder, then double-click the Ollama app to start the installation.
      • Command line way: ollama run --verbose llama3.2
    • Install Open WebUI
      $ brew install python@3.11
      $ python3.11 -m venv ollamavenv
      $ source ollamavenv/bin/activate
      (ollamavenv) $ pip install open-webui
      (ollamavenv) $ open-webui serve  # OR open-webui serve --port 8080
      (ollamavenv) $ deactivate

      Create a username, email, and password. There is no email verification and the account is stored only locally, so the email is just a login identifier. You will only need to log in again if you clear your browser cache or reset the database; the only thing that matters is that you remember the email and password you entered.

      Go to http://localhost:8080 to see the Open WebUI.

      Ollama and llama3.2 were automatically recognized and ready to use.

  • Add a Non-Ollama Backend (a client-side sketch follows these steps)
    • Go to Settings > Model Providers
    • Click "Add Provider"
    • Choose "OpenAI-compatible"
    • Enter the base URL (e.g., http://localhost:1234/v1)
    • Provide the API key (if needed) — for local setups you can use sk-fake-key
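A minimal client-side sketch for such an OpenAI-compatible backend (assumptions: pip install openai, a local server such as LM Studio listening at the base URL below, and "local-model" as a placeholder for whatever model name the server exposes):

    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:1234/v1",
                    api_key="sk-fake-key")  # local servers usually ignore the key
    resp = client.chat.completions.create(
        model="local-model",
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
    )
    print(resp.choices[0].message.content)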

Python

I Built a Fully Offline AI Agent That Answers Questions From PDF, Images, and Audio — No Cloud…

GPT4ALL

  • Local deployment of the open-source Phi-3 model: can it rival ChatGPT and Claude 3?
  • Here's How To Install Your Own Uncensored Local GPT-Like Chatbot
  • It can download models from two sources: GPT4ALL and HuggingFace (HuggingFace models are not guaranteed to work).
  • My testing:
    • Install it under your user account, not with sudo; the installer creates a "gpt4all" directory under the home directory.
    • The GUI can be launched from the command line with ~/gpt4all/bin/chat or from a desktop icon.
    • The VM did not work with a generic vCPU; it showed: Encountered an error starting up: "Incompatible hardware detected." Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. The solution is to edit the VM hardware settings to use the host CPU.
  • When GPT4ALL launches, it checks whether a new version is available and, if so, offers to upgrade.
  • Comparison of GPT4ALL, LM Studio and Ollama
Feature              | GPT4ALL                                  | LM Studio                                                            | Ollama
Model Compatibility  | Vicuna, Alpaca, LLaMa                    | Wide range including Vicuna, Alpaca, LLaMa, Falcon, Starcoder, GPT-2 | Various models, seamless workflow integration
User Interface       | User-friendly GUI                        | More UI-friendly, in-app chat interface                              | Simple command-line interface, various web-based clients available
Performance          | Good for lower-end systems               | Generally faster inference, more coherent responses                  | Optimized for speed, rapid inference times
Resource Utilization | Efficient on consumer-grade hardware     | May require more resources for larger models                         | Can be resource-intensive for larger models
Customization        | Basic                                    | Advanced (e.g., adjustable parameters)                               | Flexible, allows creating custom models
Acceleration Support | Not specified                            | CUDA, openCL, cuBLAS, Metal                                          | Not specified
Open Source          | Yes                                      | No (free to download)                                                | Yes
OS Support           | Cross-platform                           | macOS, Windows (with AVX2), Linux (beta)                             | macOS, Linux, Windows (preview)
Key Features         | RAG capabilities, wide hardware support  | Built-in chat interfaces, OpenAI-like local servers                  | Simplicity, ease of installation, suitable for beginners
Developer Tools      | Python bindings, API                     | Local inference server                                               | Command-line interface, API
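The "Python bindings" entry in the table above refers to the gpt4all package; a minimal sketch (assumptions: pip install gpt4all; the model file name is only an example and is downloaded on first use):

    from gpt4all import GPT4All

    model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # example model file
    with model.chat_session():                       # keeps conversational context
        print(model.generate("Explain what a neural network is.", max_tokens=120))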

Remote access

Documents

PrivateGPT

https://github.com/zylon-ai/private-gpt (56.7k stars)

DocsGPT

https://github.com/arc53/DocsGPT (17.2k stars)

Models

Meta's LLaMA

BERT

Build LLM

AI agent

LangChain

Build context-aware reasoning applications

AI Browser

List

Reviews

AI, ML and DL

AI, ML and DL: What’s the Difference?

Applications

General Applications

Coding/code

Images

Drawing

Describe images

GeoSpy

https://geospy.web.app/

Videos

Music

Games

  • Write a complete Python game using only the standard pygame library (no external dependencies). The game should have a retro synthwave aesthetic with neon-like colors and a grid or night-sky background. The player controls a red square that can move left and right at the bottom of the screen using the arrow keys. Blue obstacles fall from the top of the screen, and the goal is to avoid them. Additionally, include the following features (a compact sketch follows this list):
    • Show a start screen with instructions, including:
      • “Press R to restart”
      • “Use Up/Down arrows to change obstacle speed”
    • The red square should be smaller than the original version (e.g., 30x30 pixels).
    • The game should be easy to play — falling speeds should start slow and obstacles should be spaced out enough for beginners.
    • Players should be able to press the Up arrow to increase obstacle falling speed and the Down arrow to decrease it.
    • The game should run smoothly with no errors, and the code should be fully self-contained in a single file.
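A compact sketch of the core game loop described above (not a full implementation of every listed feature; window size, colors, and spawn rate are illustrative choices):

    import random
    import pygame

    pygame.init()
    W, H = 480, 600
    screen = pygame.display.set_mode((W, H))
    clock = pygame.time.Clock()
    player = pygame.Rect(W // 2, H - 40, 30, 30)   # small red square
    blocks, speed, alive = [], 2, True

    while True:
        for e in pygame.event.get():
            if e.type == pygame.QUIT:
                raise SystemExit
            if e.type == pygame.KEYDOWN:
                if e.key == pygame.K_UP:           # speed up falling obstacles
                    speed += 1
                elif e.key == pygame.K_DOWN:       # slow them down
                    speed = max(1, speed - 1)
                elif e.key == pygame.K_r:          # restart
                    blocks, alive = [], True
        keys = pygame.key.get_pressed()
        if alive:
            player.x += (keys[pygame.K_RIGHT] - keys[pygame.K_LEFT]) * 5
            player.clamp_ip(screen.get_rect())
            if random.random() < 0.03:             # spawn spaced-out obstacles
                blocks.append(pygame.Rect(random.randrange(W - 40), -40, 40, 40))
            for b in blocks:
                b.y += speed
            blocks = [b for b in blocks if b.y < H]
            alive = not any(player.colliderect(b) for b in blocks)
        screen.fill((15, 0, 40))                   # night-sky backdrop
        for x in range(0, W, 40):                  # neon grid lines
            pygame.draw.line(screen, (70, 0, 110), (x, 0), (x, H))
        for b in blocks:
            pygame.draw.rect(screen, (0, 160, 255), b)
        pygame.draw.rect(screen, (255, 40, 90), player)
        pygame.display.flip()
        clock.tick(60)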

Text to/from speech

Bioinformatics

AI and statistics

Artificial Intelligence and Statistics: Just the Old Wine in New Wineskins? Faes 2022

What are the most important statistical ideas of the past 50 years

Four Deep Learning Papers to Read in June 2021

FDA Elsa

FDA launches the general-purpose AI tool Elsa, expected to speed up the review process

Neural network

Types of artificial neural networks

https://en.wikipedia.org/wiki/Types_of_artificial_neural_networks

neuralnet package

nnet package

sauron package

Explaining predictions of Convolutional Neural Networks with 'sauron' package

OneR package

So, what is AI really?

h2o package

https://cran.r-project.org/web/packages/h2o/index.html

shinyML package

shinyML - Compare Supervised Machine Learning Models Using Shiny App

LSBoost

Explainable 'AI' using Gradient Boosted randomized networks Pt2 (the Lasso)

LightGBM/Light Gradient Boosting Machine

Survival data

Simulated neural network

Simulated Neural Network with Bootstrapping Time Series Data

Languages for machine learning

GitHub: The top 10 programming languages for machine learning

Keras (high level library)

Keras is a model-level library, providing high-level building blocks for developing deep-learning models. It doesn’t handle low-level operations such as tensor manipulation and differentiation. Instead, it relies on a specialized, well-optimized tensor library to do so, serving as the backend engine of Keras.

Currently, the three existing backend implementations are the TensorFlow backend, the Theano backend, and the Microsoft Cognitive Toolkit (CNTK) backend.

On Ubuntu, we can install required packages by

$ sudo apt-get install build-essential cmake git unzip \
                  pkg-config libopenblas-dev liblapack-dev
$ sudo apt-get install python-numpy python-scipy python-matplotlib python-yaml
$ sudo apt-get install libhdf5-serial-dev python-h5py
$ sudo apt-get install graphviz
$ sudo pip install pydot-ng
$ sudo apt-get install python-opencv

$ sudo pip install tensorflow  # CPU only
$ sudo pip install tensorflow-gpu # GPU support

$ sudo pip install theano

$ sudo pip install keras
$ python -c "import keras; print(keras.__version__)"
$ sudo pip install --upgrade keras  # upgrade Keras

To configure the backend of Keras, see Introduction to Python Deep Learning with Keras.

Example 1: DeepDecon.

  • Model Definition: In train_model.py, model = Sequential() defines a neural network model using the Keras Sequential API. It adds several dense (fully connected) layers with dropout for regularization. The activation function is set to ReLU for hidden layers and sigmoid or softmax for the output layer, depending on the number of output classes. (A minimal sketch follows this list.)
  • Model Compilation: self.model.compile(loss=self.loss, optimizer=self.optimizer, metrics=[rmse, 'mse', metrics.mae]) compiles the model, specifying the loss function, optimizer, and evaluation metrics. The custom RMSE function is included as one of the metrics.
  • Model Training: history = self.model.fit(X_tr, y_tr, batch_size=self.batch_size, epochs=self.epochs, validation_data=validation_data, callbacks=callbacks, shuffle=True, verbose=verbose) trains the model on the training data (X_tr, y_tr) with specified batch size and number of epochs. It also uses validation data for early stopping if enabled.
  • Early Stopping: if self.early_stopping sets up early stopping to prevent overfitting by monitoring the validation loss and stopping training if it doesn’t improve for a specified number of epochs.
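A minimal sketch mirroring that train_model.py structure (layer sizes, input dimension, and the dummy data are illustrative assumptions, not DeepDecon's actual values):

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers, backend as K

    def rmse(y_true, y_pred):                      # custom RMSE metric
        return K.sqrt(K.mean(K.square(y_pred - y_true)))

    model = keras.Sequential([
        layers.Dense(256, activation="relu", input_shape=(500,)),
        layers.Dropout(0.2),                       # dropout for regularization
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),     # softmax instead for multiclass
    ])
    model.compile(loss="mse", optimizer="adam", metrics=[rmse, "mse", "mae"])

    X, y = np.random.rand(200, 500), np.random.rand(200, 1)   # dummy data
    early = keras.callbacks.EarlyStopping(monitor="val_loss", patience=10)
    model.fit(X, y, batch_size=32, epochs=5, validation_split=0.2,
              callbacks=[early], shuffle=True, verbose=0)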

In the eval.py code,

  • Loading Models (models = {}): loads pre-trained models from specified paths and stores them in a dictionary. The custom RMSE function is used during model loading.
  • Calculating Differences (def get_difference()): calculates the differences between the true labels and the predicted labels, returning the minimum and maximum differences as well as the difference array.
  • Single Prediction (def get_single_prediction()): performs a single prediction by iteratively refining the prediction interval until it stabilizes.
  • Batch Prediction (def get_prediction()): performs predictions for a batch of input data by calling get_single_prediction for each input sample.
  • Main Function: sets up argument parsing, loads the test data, performs predictions, and saves the results to a specified file.

TensorFlow (backend library)

Basic

Some terms

Machine Learning Glossary from developers.google.com

Tensor

Tensors for Neural Networks, Clearly Explained!!!

Dense layer and dropout layer

In Keras, what is a "dense" and a "dropout" layer?

A fully connected layer (= dense layer); you can choose a "relu", "sigmoid", or "softmax" activation function. A dropout layer randomly zeroes a fraction of its inputs during training, which helps reduce overfitting.

Activation function

  • Artificial neural network -> Neural networks as functions: f(x) = K(∑_i w_i g_i(x)), where K (commonly referred to as the activation function) is some predefined function, such as the hyperbolic tangent, sigmoid, softmax, or rectifier function. (A numpy sketch of the common choices follows this list.)
  • Rectifier/ReLU f(x) = max(0, x).
  • Sigmoid. Binary problem. Logistic function and hyperbolic tangent tanh(x) are two examples of sigmoid functions.
  • Softmax. Multiclass classification.
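A numpy sketch of these common activation functions:

    import numpy as np

    def relu(x):        # rectifier: max(0, x)
        return np.maximum(0, x)

    def sigmoid(x):     # squashes any value into (0, 1); binary problems
        return 1 / (1 + np.exp(-x))

    def softmax(x):     # probabilities over classes, summing to 1
        e = np.exp(x - np.max(x))   # subtract the max for numerical stability
        return e / e.sum()

    z = np.array([-1.0, 0.0, 2.0])
    print(relu(z), sigmoid(z), softmax(z))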

Backpropagation

https://en.wikipedia.org/wiki/Backpropagation

Convolutional network

https://en.wikipedia.org/wiki/Convolutional_neural_network

Deep Learning with Python

Jupyter notebooks for the code samples of the book "Deep Learning with Python"

sudo apt install python3-pip python3-dev

sudo apt install build-essential cmake git unzip \
   pkg-config libopenblas-dev liblapack-dev
sudo apt-get install python3-numpy python3-scipy python3-matplotlib \
   python3-yaml
sudo apt install libhdf5-serial-dev python3-h5py
sudo apt install graphviz
sudo pip3 install pydot-ng

# sudo apt-get install python-opencv
# https://stackoverflow.com/questions/37188623/ubuntu-how-to-install-opencv-for-python3
# https://askubuntu.com/questions/783956/how-to-install-opencv-3-1-for-python-3-5-on-ubuntu-16-04-lts

sudo pip3 install keras

Colorize black-and-white photos

Colorize black-and-white photos

Keras using R

Training process:

  1. Draw a batch of X and Y
  2. Run the network on x (a step called the forward pass) to obtain predictions y_pred.
    • How many layers to use.
    • How many “hidden units” to choose for each layer.
  3. Compute the loss of the network on the batch
    • loss
    • optimizer: determines how learning proceeds (how the network will be updated based on the loss function). It implements a specific variant of stochastic gradient descent (SGD).
    • metrics
  4. Update all weights of the network in a way that slightly reduces the loss on this batch.
    • batch_size
    • epochs (= iterations over all samples, in batches of batch_size samples)

Keras (in order to use Keras, you need to install TensorFlow or CNTK or Theano):

  1. Define your training data: input tensors and target tensors.
  2. Define a network of layers (or model). Two ways to define a model:
    1. using the keras_model_sequential() function (only for linear stacks of layers, which is the most common network architecture by far) or
      model <- keras_model_sequential() %>%
        layer_dense(units = 32, input_shape = c(784)) %>%
        layer_dense(units = 10, activation = "softmax")
    2. the functional API (for directed acyclic graphs of layers, which let you build completely arbitrary architectures)
      input_tensor <- layer_input(shape = c(784))
      
      output_tensor <- input_tensor %>%
        layer_dense(units = 32, activation = "relu") %>%
        layer_dense(units = 10, activation = "softmax")
      
      model <- keras_model(inputs = input_tensor, outputs = output_tensor)
  3. Compile the learning process by choosing a loss function, an optimizer, and some metrics to monitor.
    model %>% compile(
      optimizer = optimizer_rmsprop(lr = 0.0001),
      loss = "mse",
      metrics = c("accuracy")
    )
  4. Iterate on your training data by calling the fit() method of your model.
    model %>% fit(input_tensor, target_tensor, batch_size = 128, epochs = 10)

Custom loss function

Custom Loss functions for Deep Learning: Predicting Home Values with Keras for R
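A minimal sketch of wiring a custom loss into Keras (Python/tf.keras shown; the asymmetric loss itself is an illustrative assumption, not the article's function):

    import numpy as np
    import tensorflow as tf
    from tensorflow import keras

    def asymmetric_mse(y_true, y_pred):
        # penalize over-predictions twice as much as under-predictions
        err = y_pred - y_true
        return tf.reduce_mean(tf.where(err > 0, 2.0 * tf.square(err), tf.square(err)))

    model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer="rmsprop", loss=asymmetric_mse)
    model.fit(np.random.rand(32, 4), np.random.rand(32, 1), epochs=1, verbose=0)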

Metrics

https://machinelearningmastery.com/custom-metrics-deep-learning-keras-python/

Docker RStudio IDE

Assuming we are using the rocker/rstudio IDE image, we first need to install some packages in the OS.

$ docker run -d -p 8787:8787 -e USER=XXX -e PASSWORD=XXX --name rstudio rocker/rstudio

$ docker exec -it rstudio bash
# apt update
# apt install python-pip python-dev
# pip install virtualenv

And then in R,

install.packages("keras")
library(keras)
install_keras(tensorflow = "1.5")

Use your own Dockerfile

Data Science for Startups: Containers (building reproducible setups for machine learning)

Some examples

See Tensorflow for R from RStudio for several examples.

Binary data (Chapter 3.4)

  • The final layer will use a sigmoid activation so as to output a probability (a score between 0 and 1) indicating how likely the sample is to have the target "1".
  • A relu (rectified linear unit) is a function meant to zero-out negative values, while a sigmoid “squashes” arbitrary values into the [0, 1] interval, thus outputting something that can be interpreted as a probability.
library(keras)
imdb <- dataset_imdb(num_words = 10000)
c(c(train_data, train_labels), c(test_data, test_labels)) %<-% imdb

# Preparing the data: one-hot encode the integer word-index sequences into a binary matrix
vectorize_sequences <- function(sequences, dimension = 10000) {
  results <- matrix(0, nrow = length(sequences), ncol = dimension)
  for (i in 1:length(sequences))
    results[i, sequences[[i]]] <- 1   # set the positions of words present to 1
  results
}
x_train <- vectorize_sequences(train_data)
x_test <- vectorize_sequences(test_data)
y_train <- as.numeric(train_labels)
y_test <- as.numeric(test_labels)

# Build the network
## Two intermediate layers with 16 hidden units each
## The final layer will output the scalar prediction
model <- keras_model_sequential() %>% 
  layer_dense(units = 16, activation = "relu", input_shape = c(10000)) %>% 
  layer_dense(units = 16, activation = "relu") %>% 
  layer_dense(units = 1, activation = "sigmoid")
model %>% compile(
  optimizer = "rmsprop",
  loss = "binary_crossentropy",
  metrics = c("accuracy")
)
model %>% fit(x_train, y_train, epochs = 4, batch_size = 512)
## Error in py_call_impl(callable, dots$args, dots$keywords) : MemoryError: 
## 10.3GB memory is necessary on my 16GB machine

# Validation
results <- model %>% evaluate(x_test, y_test)

# Prediction on new data
model %>% predict(x_test[1:10,])

Multi class data (Chapter 3.5)

  • Goal: build a network to classify Reuters newswires into 46 different mutually-exclusive topics.
  • You end the network with a dense layer of size 46. This means for each input sample, the network will output a 46-dimensional vector. Each entry in this vector (each dimension) will encode a different output class.
  • The last layer uses a softmax activation. You saw this pattern in the MNIST example. It means the network will output a probability distribution over the 46 different output classes: that is, for every input sample, the network will produce a 46-dimensional output vector, where output_i is the probability that the sample belongs to class i. The 46 scores will sum to 1.
library(keras)
reuters <- dataset_reuters(num_words = 10000)
c(c(train_data, train_labels), c(test_data, test_labels)) %<-% reuters

# vectorize_sequences() as defined in the binary example above
x_train <- vectorize_sequences(train_data)
x_test <- vectorize_sequences(test_data)
one_hot_train_labels <- to_categorical(train_labels)
one_hot_test_labels <- to_categorical(test_labels)

# set aside a validation set
val_indices <- 1:1000
x_val <- x_train[val_indices,]
partial_x_train <- x_train[-val_indices,]
y_val <- one_hot_train_labels[val_indices,]
partial_y_train <- one_hot_train_labels[-val_indices,]

model <- keras_model_sequential() %>% 
  layer_dense(units = 64, activation = "relu", input_shape = c(10000)) %>% 
  layer_dense(units = 64, activation = "relu") %>% 
  layer_dense(units = 46, activation = "softmax")
model %>% compile(
  optimizer = "rmsprop",
  loss = "categorical_crossentropy",
  metrics = c("accuracy")
)
history <- model %>% fit(
  partial_x_train,
  partial_y_train,
  epochs = 9,
  batch_size = 512,
  validation_data = list(x_val, y_val)
)
results <- model %>% evaluate(x_test, one_hot_test_labels)
# Prediction on new data
predictions <- model %>% predict(x_test)

Regression data (Chapter 3.6)

  • Because so few samples are available, we will be using a very small network with two hidden layers. In general, the less training data you have, the worse overfitting will be, and using a small network is one way to mitigate overfitting.
  • Our network ends with a single unit, and no activation (i.e. it will be linear layer). This is a typical setup for scalar regression (i.e. regression where we are trying to predict a single continuous value). Applying an activation function would constrain the range that the output can take. Here, because the last layer is purely linear, the network is free to learn to predict values in any range.
  • We are also monitoring a new metric during training: mae. This stands for Mean Absolute Error.
library(keras)
dataset <- dataset_boston_housing()
c(c(train_data, train_targets), c(test_data, test_targets)) %<-% dataset

build_model <- function() {
  model <- keras_model_sequential() %>% 
    layer_dense(units = 64, activation = "relu", 
                input_shape = dim(train_data)[[2]]) %>% 
    layer_dense(units = 64, activation = "relu") %>% 
    layer_dense(units = 1) 
    
  model %>% compile(
    optimizer = "rmsprop", 
    loss = "mse", 
    metrics = c("mae")
  )
  model   # return the compiled model
}
# K-fold CV
k <- 4
indices <- sample(1:nrow(train_data))
folds <- cut(1:length(indices), breaks = k, labels = FALSE) 
num_epochs <- 100
all_scores <- c()
for (i in 1:k) {
  cat("processing fold #", i, "\n")
  # Prepare the validation data: data from partition # k
  val_indices <- which(folds == i, arr.ind = TRUE) 
  val_data <- train_data[val_indices,]
  val_targets <- train_targets[val_indices]
  
  # Prepare the training data: data from all other partitions
  partial_train_data <- train_data[-val_indices,]
  partial_train_targets <- train_targets[-val_indices]
  
  # Build the Keras model (already compiled)
  model <- build_model()
  
  # Train the model (in silent mode, verbose=0)
  model %>% fit(partial_train_data, partial_train_targets,
                epochs = num_epochs, batch_size = 1, verbose = 0)
                
  # Evaluate the model on the validation data
  results <- model %>% evaluate(val_data, val_targets, verbose = 0)
  all_scores <- c(all_scores, results$mean_absolute_error)
}

PyTorch

An R Shiny app to recognize flower species

Google Cloud Platform

Amazon

Amazon's Machine Learning University is making its online courses available to the public

Workshops

Notebooks from the Practical AI Workshop 2019

OpenML.org

R interface to OpenML.org

Biology