Posts
How Ollama Stores and Runs Models Locally
If you use Ollama to run AI models locally, you might wonder what’s actually happening under the hood. Where are the files stored? What do they contain? And what happens when you actually run a model?
This post breaks it down.
Where Ollama Stores Its Files

On macOS/Linux, Ollama stores its data under ~/.ollama/. Here’s a peek inside:

tree ~/.ollama/
~/.ollama/
├── history
├── id_ed25519
├── id_ed25519.pub
├── logs/
│   ├── app-*.
read more

Posts
Notes on Neural Networks
PS: These are notes from my study/review, mostly taken in AI learning mode.
Core Concepts
Neuron = weights + bias + activation function.

Weights: Measure how much each input matters.
Bias: Shifts the threshold a neuron needs to “fire.”
Activation function: Provides non-linearity, letting networks learn curves, not just straight lines.

Single Neuron Example
Compute the raw sum: $$ z = w_1x_1 + w_2x_2 + … + b $$. Apply activation (e.
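The two steps above (weighted sum, then activation) can be sketched in a few lines of Python; the sigmoid activation and the example weights are illustrative assumptions, not something prescribed by the notes:

```python
import math

def neuron(inputs, weights, bias):
    # Raw weighted sum: z = w1*x1 + w2*x2 + ... + b
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Sigmoid activation squashes z into (0, 1), providing non-linearity
    return 1 / (1 + math.exp(-z))

# Two inputs with hand-picked (illustrative) weights and bias
out = neuron([1.0, 2.0], [0.5, -0.25], bias=0.1)
print(round(out, 3))  # → 0.525
```

Swapping in a different activation (ReLU, tanh) only changes the last line of the function; the weighted-sum step stays the same.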
read more

Posts
AI-assisted software development
AI and LLMs: Useful, But More Code Isn’t Always the Solution
The rise of AI and large language models (LLMs) has made writing code easier and more accessible than ever before. With a few prompts, developers can generate boilerplate, automate repetitive tasks, and even solve complex problems. This is a remarkable leap in productivity and creativity.
However, it’s important to remember that code itself is not the solution—at least, not always.
read more

Posts
What it means to be intelligent
What Does It Mean to Be Intelligent?

LLMs: Word Calculators or Something More?

Large Language Models (LLMs) like GPT-4 are often described as “word calculators”:
They don’t have an inherent understanding of the world.
They can’t reason in the human sense.
They can’t set their own goals or desires.

Yet, from the seemingly simple objective of next-word prediction, we’ve seen a host of emergent properties. LLMs can:

Write code and poetry
Summarize complex documents
Pass professional exams
Hold conversations that feel intelligent

These capabilities make LLMs appear very intelligent, even if their underlying mechanism is just statistical pattern matching.
read more

Posts
Ollama
What is Ollama?

Ollama is a platform that makes it easy to run large language models (LLMs) like Llama, Gemma, Mistral, and many others on your local machine. It provides a simple interface for downloading, running, and interacting with these models directly from the terminal or via API, without relying on cloud services. This is especially useful for developers who want privacy, speed, and full control over their AI workflows.
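The API route can be exercised with nothing but the standard library. The endpoint and port below reflect Ollama’s local HTTP API defaults; the helper function and model name are hypothetical illustrations, and this assumes `ollama serve` is running and the model has already been pulled:

```python
import json
import urllib.request

def build_generate_request(prompt, model="llama3",
                           host="http://localhost:11434"):
    # Build a POST request against Ollama's /api/generate endpoint.
    # stream=False asks for one JSON object instead of a token stream.
    body = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,
    }).encode()
    return urllib.request.Request(
        host + "/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("Why is the sky blue?")
print(req.full_url)  # → http://localhost:11434/api/generate
```

Sending it is just `urllib.request.urlopen(req)`; the generated text comes back in the `response` field of the returned JSON.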
read more

Posts
Let's Talk Tooling
Tooling Matters

After 8 years as a software engineer, across startups and established companies, I’ve seen firsthand how much good tooling can impact productivity, onboarding, and code quality. Tooling is not just about convenience; it’s about enabling teams to move faster, reduce errors, and focus on solving real problems.
Why Tooling Is Critical

Onboarding Speed: How quickly can a new engineer get a working environment and start contributing? The best teams make this nearly instant, with clear setup scripts, containerized environments, and automated checks.
read more

Posts
Check IDE config into git
What’s in your .gitignore?

It’s become almost a convention to git-ignore all the IDE-specific files. The rationale being:
it’s not really part of the codebase
people can have different preferences of what IDE to use
these configuration files can be [[Bloated Git Objects]]

Debunking These

Although IDE configurations can indeed be bulky, we can cherry-pick a select few configurations, especially the ones pertaining to run and debug setups. Those are pretty light and don’t change frequently.
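One way to do that cherry-picking, assuming a JetBrains-style .idea/ directory (the paths are illustrative), is to ignore the directory’s contents by default and re-include only the run/debug configurations:

```gitignore
# Ignore IDE config by default (note the /* — see below)
.idea/*
# ...but keep shared run/debug configurations
!.idea/runConfigurations/
```

The `*` matters: ignoring `.idea/` outright would stop git from descending into the directory at all, so the negation pattern would never get a chance to apply.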
read more

Posts
Cataloguing My Garden
We must cultivate our garden.
Voltaire (Candide)
This is the closing sentence of Voltaire’s satire Candide. The metaphorical sense, of course, is that regardless of what’s happening in the world we should continue to work on ourselves, and shouldn’t worry too much about things which are either inconsequential or out of our control. This doesn’t mean we become ignorant or indifferent towards those, but rather that we put ourselves and our concerns first.
read more

Posts
About Fast Compilation of Go
Is Go Fast?

Go is faster than interpreted languages like Python in terms of sheer execution speed, since it’s compiled to a native executable format.
Go is slower than other compiled languages like C or Rust in terms of execution. This is primarily because, although compiled, Go runs with a ‘runtime’ which does some housekeeping (scheduling and multiplexing goroutines, managing/freeing memory (GC), etc.). This overhead is much smaller than the virtual machine required for languages like Java, but it is an overhead nonetheless.
read more

Posts
Did you read the logs?
Logs

Logs (in a very degenerate sense) are a series of events/statements that act like breadcrumbs for the things that happened. Degenerate, as I am referring mostly to application or software logs. (A purer meaning would be, say, database logs, which are used for the replication process, or a series of events for active state-machine replication.) I want to talk about those logs (application logs, system logs).
For the interested reader, I highly recommend I Heart Logs, which talks about the power of logs in distributed systems: how a series of immutable logs helps us chronologically capture a series of events, and build computation, consensus, and transaction commits on top of it.
read more