Tag: AI reasoning

7 articles collected under this tag.

Code Mala Tang
Jun 5, 2025 · Artificial Intelligence

Mastering LLM Prompts: Proven Techniques to Get Precise Answers

By rethinking how we interact with large language models and applying role-play, task decomposition, chain-of-thought, ReAct, and other advanced prompting strategies, readers can turn generic ChatGPT answers into precise, context-aware responses that make full use of the model's pattern recognition and context window.
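
The techniques named above compose naturally in code. Below is a minimal sketch; the persona and template wording are hypothetical, not taken from the article, but it shows how role-play, task decomposition, and a chain-of-thought instruction stack into a single prompt:

```python
# Hypothetical prompt template: the persona and wording are illustrative.
def build_prompt(question: str) -> str:
    return (
        # Role-play: anchor the model in a persona with relevant expertise.
        "You are a senior data engineer reviewing a production incident.\n"
        # Task decomposition: ask for explicit subtasks before the answer.
        "First, break the problem into numbered subtasks.\n"
        # Chain-of-thought: request step-by-step reasoning per subtask.
        "Then reason through each subtask step by step before giving a "
        "final, clearly marked answer.\n\n"
        f"Problem: {question}"
    )

print(build_prompt(
    "Our nightly ETL job doubled its runtime after a schema change. Why?"
))
```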

AI reasoning · Chain-of-Thought · LLM techniques
0 likes · 21 min read
Java Architecture Diary
Jun 5, 2025 · Artificial Intelligence

Unlock AI Reasoning: How Ollama’s New ‘Thinking’ Feature Works

Version 0.9.0 of Ollama introduces a ‘thinking’ control that lets users view and manage the AI model’s reasoning process, with detailed CLI commands, REST API usage, model support list, scripting options, and advanced Modelfile configurations for models like DeepSeek R1 and Qwen 3.
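
As a rough sketch of the REST usage the article covers (assuming the `think` request field and the separate `thinking` response field that Ollama 0.9.0 introduces; treat the exact shape as an assumption, not a spec):

```python
# Minimal sketch of Ollama 0.9.0's thinking control over the local REST API.
# Assumes a running server on the default port and a pulled deepseek-r1 model.
import json
import urllib.request

payload = {
    "model": "deepseek-r1",
    "messages": [{"role": "user", "content": "Which is larger, 9.11 or 9.9?"}],
    "think": True,   # set to False to turn the reasoning trace off
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    message = json.load(resp)["message"]

# With think enabled, the reasoning arrives separately from the final answer.
print("thinking:", message.get("thinking"))
print("answer:", message["content"])
```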

AI reasoning · DeepSeek · Modelfile
0 likes · 6 min read
DataFunTalk
Mar 9, 2025 · Artificial Intelligence

Critique Fine-Tuning (CFT): Boosting Large Language Model Reasoning with Minimal Data

The paper introduces Critique Fine-Tuning (CFT), which replaces simple imitation in supervised fine-tuning with critique-based learning; using only 50K samples, it achieves superior reasoning performance on mathematical benchmarks, outperforming traditional reinforcement-learning approaches that require millions of examples.
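
Schematically (this construction is ours for illustration, not the paper's released code), the training record changes from imitating an answer to critiquing one:

```python
# Schematic sketch of how a CFT record differs from a plain SFT record.
def sft_record(question: str, reference_answer: str) -> dict:
    # Standard supervised fine-tuning: imitate the reference answer.
    return {"input": question, "target": reference_answer}

def cft_record(question: str, candidate: str, critique: str) -> dict:
    # CFT: the model sees a (possibly flawed) candidate response and is
    # trained to produce a critique of it rather than imitate an answer.
    return {
        "input": (
            f"Question: {question}\n"
            f"Candidate response: {candidate}\n"
            "Critique this response:"
        ),
        "target": critique,
    }

print(cft_record(
    "Which is larger, 9.11 or 9.9?",
    "9.11 is larger, because 11 is greater than 9.",
    "Incorrect. Compare place value: the tenths digit of 9.9 is 9, "
    "while that of 9.11 is 1, so 9.9 > 9.11.",
))
```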

AI reasoning · Critique Fine-Tuning · Large Language Models
0 likes · 7 min read
Code Mala Tang
Feb 27, 2025 · Artificial Intelligence

Do New AI Reasoning Models Really Think? Unpacking the Debate

The article examines whether the latest AI models that claim to perform true reasoning (breaking problems into steps and using chain-of-thought) actually reason the way humans do, presents skeptical and supportive expert viewpoints, and offers practical guidance on using such models responsibly.

AI reasoning · AI safety · Chain-of-Thought
0 likes · 14 min read
DataFunTalk
Dec 9, 2024 · Artificial Intelligence

The Future of Mathematics with AI: Insights from Terence Tao, OpenAI Researchers, and James Donovan

In a December 2024 online event titled “o1 Reasoning and the Future of Mathematics,” UCLA professor Terence Tao, OpenAI senior vice president Mark Chen, and policy lead James Donovan discuss how advanced AI reasoning models could transform mathematical research, problem solving, collaboration, and education.

AI reasoning · Artificial Intelligence · Education
0 likes · 41 min read
IT Services Circle
Jul 17, 2024 · Artificial Intelligence

Why Large Language Models Mistake 9.11 > 9.9: Prompting, Tokenizer Effects, and Recent Findings

The article examines why leading large language models such as GPT-4o, Gemini Advanced, and Claude 3.5 incorrectly claim that 9.11 is larger than 9.9, analyzes the tokenization effects and prompt phrasings that trigger the error, and discusses recent research and OpenAI model updates.
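
The tokenizer effect is easy to probe directly; the sketch below uses tiktoken's cl100k_base encoding as a stand-in (the article's own analysis may use different tooling):

```python
# Our illustration, not the article's code. Requires: pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era BPE encoding
for text in ("9.11", "9.9"):
    pieces = [enc.decode([tok]) for tok in enc.encode(text)]
    print(f"{text!r} -> {pieces}")
# "9.11" splits into multiple tokens, so a model can end up comparing the
# fragment "11" against "9" lexically instead of numerically.

print(9.11 > 9.9)  # the decimal comparison itself is trivial: False
```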

AI reasoning · Large Language Models · Numerical Comparison
0 likes · 7 min read
Architect
Feb 18, 2023 · Artificial Intelligence

Paradigm Shifts in Large Language Models: From Pre‑training to AGI and Future Research Directions

The article reviews the evolution of large language models, highlighting two major paradigm shifts after GPT-3 and the roles of scaling laws, knowledge acquisition, prompting techniques, and reasoning abilities, and outlines future research priorities for building more capable and efficient AI systems.

AI reasoning · In-Context Learning · Large Language Models
0 likes · 71 min read