Donovan R.

Serenity is my compass

Long ago, while hiking far from the big cities, we passed through a small village.

Something happened that caught me completely off guard.

People we had never met before invited us into their home to eat with them. Just like that. No suspicion, no hesitation, no awkwardness. It felt so natural to them that they didn’t even think twice about it.

And I was… in awe.

Which is honestly a bit embarrassing. Being surprised by your own culture says more about how much city life reshapes your expectations than anything else.

But in those villages, it’s not strange. It’s simply how things used to work.

When places are difficult to access and transportation isn’t fast or reliable, people develop a very different mindset. Survival, comfort, and safety depend on cooperation. If someone is traveling and runs out of food, what happens if nobody helps them?

You help because someday you might be the one who needs help.

For a long time, humans had to live this way. As groups. As communities.

In big cities today, things work differently. You can behave like a complete jackass and still get away with it. If things go wrong, you just hop on the next bus and disappear somewhere else.

The environment shapes the mindset.

And lately, I’ve been feeling a strange sense of déjà vu when looking at the AI ecosystem.


The Curse of LISP, Again

There is an old joke among programmers: God created the universe using LISP.

LISP is one of the most powerful programming languages ever created. Once it clicks, it gives you an incredible level of freedom. You can reshape the language itself. You can build almost anything.

But that power carries a strange side effect.

Everyone ends up building their own thing.

Their own frameworks. Their own dialects. Their own tools.

The ecosystem fragments.

Sometimes I like to imagine a different version of the Tower of Babel story.

Maybe God gave people LISP.

Everyone became powerful enough to build their own system, their own language, their own tools… until none of them could understand each other anymore. Without a shared framework, the tower stopped growing.

And now, looking at AI assistants, I can’t help but see the same pattern emerging.


The Claw Explosion

Since the birth of OpenClaw, new AI assistants have been spawning everywhere.

NanoClaw.
PicoClaw.
ZeroClaw.
NullClaw.

And it doesn’t seem to slow down.

Even big companies are joining the party. Alibaba recently released their own version called CoPaw.

It’s both fascinating and a little unsettling.

If you’re from Madagascar, this phenomenon sounds oddly familiar.

We have a term for things that suddenly appear and multiply uncontrollably — “Foza orana.” It literally means crayfish. When crayfish appeared en masse in Madagascar, they didn’t just spread fast — they started destroying rice fields, disrupting the ecological balance.

That’s the image I can’t shake when I see hundreds of AI assistants being born at once.

It’s not that they’re evil — just that their number, speed, and independence could quietly reshape the landscape they’re meant to improve.


When Individuals Get Godlike Tools

AI has dramatically lowered the barrier to building tools.

A single individual can now create things that previously required entire teams.

That power is incredible.

But it also brings back the same pattern we saw with LISP… and later with the Node.js ecosystem. Thousands upon thousands of tiny packages, dependencies stacked on dependencies, each solving a slightly different problem.

Now imagine that phenomenon multiplied by hundreds.

Or thousands.

I’m not saying this is a bad thing. And I’m definitely not saying we’re doomed.

We’re still at the beginning, and it’s impossible to predict how things will stabilize.

But one question keeps coming back to me:

What happens when you give individuals the power to build worlds?

They start acting like gods.

And when many gods start appearing in the same universe…

Conflict usually follows.


Maybe it’s just human nature — repeating itself in code.

Read more »

expense workflow

A few years ago, I went looking for a good app for budgeting and tracking expenses. I wanted something simple enough that I would actually use it.

Then I discovered the Plain Text Accounting approach and eventually hledger, and it immediately clicked for me. If you are a terminal lover or you like your data in plain text, hledger is a great fit. Your records live in ordinary text files: no proprietary format, no locked‑in database, and best of all, everything is Git‑friendly.

Using hledger quickly becomes natural once you grasp the basics of double‑entry accounting. You run hledger add, type in accounts and amounts with tab completion, and hledger appends the transaction to your journal file:

2026-03-02 Supermarket
    expenses:food    2500 Ar  ; sandwich
    assets:cash

I type each item as a comment next to the price, so if I ever need to find something, a quick text search pulls it right up. In practice, this feels smoother than any graphical budgeting app I have tried, and it costs nothing except a bit of time. This simple workflow has worked reliably for years.


During my workflow review session in December 2025, I wanted to try a new approach that could save me time on the manual work. Hledger is great, but not without its own limitations:

  • Flexibility: I had to be at my desk to actually use hledger. It probably works fine with Termux, but I was never tempted to set it up.
  • Chores: I would buy something during the day, stuff the receipt in my bag, and by evening I had zero interest in sitting down to type entries. The receipts piled up. Saturday mornings became catch‑up sessions — me and a stack of crumpled paper, working through the backlog.

The second issue could be largely solved by using a VLM to digitize and structure the data from receipts directly, just the way I was already digitizing my notes with Google Gemini. I wanted to build a personal app with my own database, flexible enough to swap models in case I ever had a problem with whatever I was using.


The idea was not novel. In fact, I had played with OCR before (ocr.space generously gives some free OCR API access, by the way). But it was always finicky: the photo had to be straight, the lighting had to be good, the text had to be clear, and even then it often stumbled. Using a VLM is different. It does more than read characters; it understands context. It knows what a receipt looks like, which number is the total, which one is the tax. With proper prompting, you can get cleaned‑up data with proper formatting.

So I wrote a small Python script at first: send the image to the model and get back structured data.
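That first script can be sketched roughly as follows. Everything specific here is an assumption for illustration: the model name, the prompt wording, and the JSON field names are placeholders, not the exact ones my app uses. The useful part is the shape: inline the photo as base64, ask the model for strict JSON, and parse the reply defensively.

```python
import base64
import json

# Illustrative prompt: asking for strict JSON keeps the reply machine-readable.
PROMPT = (
    "Extract this receipt as JSON with keys: date (YYYY-MM-DD), "
    "merchant, total, currency, items (list of {name, price})."
)

def build_payload(image_bytes, model="some-vlm"):
    """Build an OpenAI-style chat payload with the receipt photo inlined as base64."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": PROMPT},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    }

def parse_receipt(reply_text):
    """Parse the model's JSON reply into a dict, tolerating markdown code fences."""
    cleaned = reply_text.strip().strip("`")
    if cleaned.startswith("json"):
        cleaned = cleaned[4:]
    return json.loads(cleaned)
```

From there it is one HTTP POST to whatever provider you picked, and the parsed dict becomes a row in the database. Swapping models later only means changing the endpoint and the model name.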

It worked, so I turned it into a proper backend with an API, then built a Vue frontend to use my phone camera. It was pretty cool: point the camera, send, and the extracted data is directly saved in the app’s database. The rest of the features came up organically. I added:

  • categories, tags, and search features
  • a dashboard
  • budgeting features with spending progress and alerts
  • a calendar with an expense heatmap, scheduling, and recurring expenses
  • a natural language extraction feature

By the end of January 2026, after some polishing and a security checkup, I had a full‑featured, production‑ready, AI‑powered expense management web app deployed on my VPS.


Now, when I have a receipt, I take a photo from the app… and it is done. I can do that whenever and wherever I am. No need to wait until I am in front of my computer anymore, and no need to manually put the items in hledger. The app handles everything automatically for me.

The receipt on my desk does not wait for Saturday anymore.


I built a hledger export feature, just in case. If for some reason I no longer have access to a VLM, I can still export my data and fall back to hledger. But so far, I have not needed it.
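For the curious, such an export boils down to string formatting. Here is a minimal sketch; the record's field names (date, payee, amount, currency, category, note) are illustrative stand-ins for my app's actual schema:

```python
def to_hledger(tx):
    """Render one expense record as an hledger journal entry.

    `tx` is a dict mirroring a database row. hledger postings must be
    indented, and the balancing posting may omit its amount.
    """
    lines = [f"{tx['date']} {tx['payee']}"]
    note = f"  ; {tx['note']}" if tx.get("note") else ""
    lines.append(f"    expenses:{tx['category']}    {tx['amount']} {tx['currency']}{note}")
    lines.append("    assets:cash")
    return "\n".join(lines)
```

Joining the rendered entries with blank lines yields a file hledger can read directly, so the plain-text fallback is always one export away.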

Read more »

note taking workflow

I think about note-taking more than I probably should. I’m thinking about it right now, actually. 😅

It started as a way to understand myself better — to track how my mind worked, what I cared about, where I was going. Somewhere along the way, it became a time machine. I can look back six months, a year, and see myself thinking. Sometimes I cringe at what I wrote. Sometimes I surprise myself with wisdom I’ve since forgotten.

I didn’t expect any of this. I didn’t sit down and design a workflow. It just… happened.


In January, I bought a pack of blank little square papers. Non-sticky, white, nothing fancy. I thought I’d use them for doodling. Like during meetings while half-listening 😁.

But then a grocery list ended up on one. And some todo items on another. And before long, I was sketching diagrams, mapping out project architectures, jotting down random thoughts across these small white pages.

They kind of took over.

Now I keep three or four squares spread across my desk, each holding a different topic. When something comes to mind, I jump to the right one, write it down, move on. One square for project ideas. Another for a blog draft. A third for whatever’s rattling around.

The square format forces me to be concise — there’s only so much space. And having multiple squares open means I can switch contexts without losing the thread.

It’s simple. It shouldn’t work this well. But it does.


At the end of each day, I go through my squares one by one. It feels like viewing old photographs — each one a small window into what I was thinking earlier.

I regroup similar topics. If something needs more, I flip the square over and keep writing on the back. Tasks get marked done. And when a square is fully processed — completed, digitized, no longer needed — I scrunch it up and drop it in a transparent jar.

The jar idea came from Laurie Herault’s article about sticky notes and feedback loops. There’s something satisfying about watching thoughts become artifacts. The jar fills up. I see how much I’ve processed. It’s a small thing, but it matters.


Google Gemini digitizes my handwritten notes. It’s free, it’s capable, and the process is simple:

Photo. Send. Get text.

Analog becomes digital. Messy becomes searchable. Most of the time, that digitized text ends up in Logseq — where it connects to everything else.

I’m also testing other models on OpenRouter — experimenting for something I’m building. But Gemini handles the job for now.

Here’s what I’ve learned: handwriting is thinking. The pen slows me down, and in that slowness, understanding emerges. AI just ensures I don’t lose what I write. The notes live twice — once on paper where hand meets thought, once digital where search and connection happen.


My current stack is nothing special:

Square papers — for raw capture. One topic per square, spread across my desk.

Logseq — for networked thinking, daily journaling, tracking people. My old banner plugin resurfaces quotes randomly, which keeps old insights alive.

Emacs (Org Mode) — just org-clock-in and org-clock-out. My pomodoro timer, time-blocker, work tracker.

Google Gemini — for digitizing handwriting.


So naturally, I’m building something to bring it all together.

Yes, I know.

Yet another PKM app.

The squares work. The ritual gives closure. The tools hold what matters. But I can’t help wanting something that brings it all together — notes, schedule, time. A tool that resurfaces not just old notes but connections between them.

Proactive insights. Notes that talk back. Patterns I’m too close to see.

The technology exists. I know how to build it.

I just need the time.

If this sounds interesting, stick around. I’ll be writing about the journey.

Read more »

Building on my previous posts about meta-understanding and on my relationship with AI tools, today I’m sharing a bit of a conflicting interest.

In recent months, every noise you hear seems to carry AI with it. It is everywhere, and it won’t stop or go away anytime soon. We can’t really deny what’s in front of us. The capabilities of AI tools in the last few months are completely insane. I’m not talking only about content creation tools, but also deeply technical ones.

As much as I’m concerned with privacy and all that, it’s impossible to ignore that AI is here and changing everything right now—for better or for worse. I often remind myself of the movie Don’t Look Up when I’m too stubborn to see what’s going on.

The Grail for Developers

One particular tool that piqued my interest is Google’s code wiki (an AI-powered codebase explorer), where you can dive into repositories and learn whatever you want using an AI assistant. As a software developer, this feels like a grail. You cherry-pick what you want, and it’s delivered exactly as you need it.

I cannot imagine the amount of knowledge you can draw from that. Agentic AI systems (autonomous agents) can do even more, but hey, this is already here and free to use. What I want to say is that possibilities are opening everywhere for almost anyone with the right interest. Intent is no longer constrained by time or resources the way it used to be. With tools that can narrate a codebase or explain a complex paradigm in seconds, the floor of the ocean feels closer than ever. We are no longer limited by the speed of our reading, but by the clarity of our desire.

This power excites me as a developer, yet it’s exactly why my dilemma starts to grow.

Lisp, AI, and Meta-Understanding

Lisp and AI share a kind of DNA: they are both declarative. They allow us to bypass the “how” and focus on the “intent.” But Lisp is a declarative sun—it illuminates the logic. AI is a declarative shadow—it gives you the result, but swallows the entire process.

While AI helps me grasp the latest details of almost anything, the AI itself remains a black box—a closed room where the lights are off. We know little about how it truly works, and we can’t accurately predict what it will do. We are using an opaque mind to create a transparent world: an apprentice magician playing with God’s tools.

If I use a tool I do not understand to explain a tool I want to master, where does the meta-understanding actually live?

Closing Thought

“The one who invented the boat also invented the shipwreck,” as the saying goes, but the unease persists for me. There is a new light, a very strong one. It is the future, but I can’t shake the uncomfortable feeling I have about this “new light.”

I’m interested to know what people think about this. Do you share similar feelings? What’s your take on all this?

Read more »

“I cannot understand every leaf of a tree. There is just too much. But if I understand the essence of the tree, I will intuitively know what will come out and when.”

By understanding the root, the understanding of the branches arises naturally. And understanding the rest follows.

What is the root but the planted seed? The one source of truth where everything else came from.

Studying all the patterns within all leaves of a tree is a tremendous task. But one might understand more about the tree just from its seed.

Everything is connected

Often, what appears to be unrelated is but the prolongation of the same thing.

Everything seems separated—not because they objectively are, but because we are unable to see the interconnection yet.

Every aha moment is the finding of that one dot connecting two seemingly isolated points.

I keep noticing this. A real-life situation that can be resolved using a paradigm from programming. A problem in one area that mirrors a pattern from somewhere else entirely. It happens so often that I have started to expect it.

Most innovation in the world is not novelty. It is borrowed concepts from other domains. A combination of what appeared to be unrelated.

I was watching a video recently—Michael Levin on Lex Fridman’s channel—talking about how different domains seem to point to the same direction about the manifestation of “emergence.” It struck me. Isn’t that what mystic figures were talking about thousands of years ago? Scriptures from different religions already pointing to the same thing?

Different waters, same fountain.

Finding the fountain is what understanding means to me.

What is timeless must be true

What is true in one place must be true in another. What is timeless must be true. And what is fundamentally true must remain true across different domains.

This is why I am drawn to philosophy, psychology, spirituality. This interest in the meta—in understanding itself.

And somehow it led me to Clojure, LISP, Emacs, Smalltalk. To the philosophy behind FOSS. Not just because of how they work, but because of what they represent—meta-programming, self-modification, the freedom to understand all the way down.

What is even greater is that these tools attract like-minded people.

When I read articles from the LISP or Emacs community, I see it. They are not just tinkering with tools. They are building deep understanding—a meta-understanding beyond their craft.

Consciously or not, we are pursuing the same Dao, if I might say it differently. Seeking essential patterns of life.


I wonder how many fountains there are. Or if it has always been just one.

Read more »

I had a little time during the weekend to actually set up a mail server. It’s something I had wanted to do for a long time but kept postponing, because I knew it would be a little challenging.

But now that it’s done, I can finally update my info page with a mail address: contact@donovan-ratefison.mg. If you’ve got thoughts on anything here, I’d genuinely love to hear them—I’m always keen to hear what people think.

And since I now had my own server, it was also a good time to dive down the Emacs rabbit hole and bring mail into my workflow.

I had explored Gnus a bit before, when I was searching for RSS tools, but I ended up using Elfeed because I felt overwhelmed by Gnus’s capabilities. It was too much for my little brain.

During his presentation at EmacsConf 2025, Amin Bandali inspired me to give it an honest try. And I did!

And … it wasn’t that hard to get it working.

Everything about Gnus is … a bit weird, but I won’t complain. I know I will get used to it over time. Plus, like Emacs itself, its weirdness is part of its charm.

I can read, reply to, and send mail, and follow RSS feeds, and that’s enough for now. I intend to go full Gnus for my mail and RSS feeds from now on, and we’ll see how it goes.

Read more »

It’s been a while since my last post about AI and writing. Back then, I was worried about how AI would dissolve the real person behind the words. I wasn’t entirely wrong to worry about that. But I think it’s fair to explore different perspectives and update my understanding more often.

Things have changed fast. The AI tools are evolving at a pace that’s honestly a bit frightening. It’s December 2025, and I believe it’s a good time for retrospection: to challenge my assumptions and adjust my knowledge. After all, humans have always survived by adapting to circumstances. And I’m but a human in this era of AI.
I’ve reconsidered my perspective and given AI an honest try, with new perspectives and a new lens. Here are my updated thoughts regarding writing.

There are things that don’t feel quite right to outsource:

  • Emotional depth doesn’t come from a language model—it comes from feeling, from struggling, from living through moments that shape us.
  • Relational understanding requires showing up for people, being vulnerable, learning through connection what no training data can teach.
  • Intellectual development demands grappling with hard problems ourselves. If we skip the struggle, we skip the understanding.
  • Intention—our sense of direction, what we actually want—that’s ours to cultivate.

These four things are where our humanity lives, at least as far as I can tell.

I keep thinking about walking as a metaphor. We walk not just to get somewhere, but because walking builds us.
It strengthens our body, clears our mind, teaches us about the world at a human scale. It is necessary for our wellbeing.

AI is like a car.
Once you’ve walked long enough to build real strength, taking a vehicle for long distances makes sense.
You still walk to the corner store. You still use your legs. You understand what they can do because you’ve used them. But for vast distances, taking a vehicle just makes sense.

Some people have walked long enough to develop strong foundations. Years of intentional effort, struggle, showing up. They are way ahead of anyone who has just started the walk. They’ve built strength for years, and they keep building it, so any newcomer can hardly hope to catch up.
AI won’t build that strength for us. That part is completely up to the individual. But it gives us an opportunity, a chance to close the gap in distance.

AI can be the amplifier:

  • After you’ve felt something deeply, AI can help you convey it.
  • After you’ve built relational wisdom through genuine connection, AI can help you articulate it.
  • After you’ve struggled with an idea and wrestled with it, AI can help you refine your thinking.
  • After you’ve decided where you want to go and why, AI can help you execute it.

But the struggle has to happen first.

AI will often tempt us with the convenience of skipping steps. It’s easy to give in. The tool is seductive precisely because it works, because it’s fast, because it removes friction. And it’s up to each of us to resist the temptation, to adjust our relationship with it, to notice when we’re letting convenience become the decision-maker.
AI has the power to carry us further, and at the same time, the power to dissolve us to nothingness and strip away our will. We have to retain the ability to decide where to go.
The way we are should remain. Our authenticity should not be dissolved.

We are the idea bringer and the decision maker. The AI should assist us in conveying the idea, not replace us in our decision-making. That boundary matters.
We need to engage AI after we are clear on what we want.

So maybe it’s worth giving AI a chance for the long distances, for the mechanical work, for amplification. But the work that shapes who we are—the emotional work, the relationships that require our presence, the ideas we need to struggle with to understand them—these feel worth protecting.

Fun fact: This article was originally written with an AI assistant. It took so much back and forth, so much editing, to get it to reflect what I’m trying to convey and the way I want to convey it. It makes me wonder whether I could have saved a great deal of time by writing it myself.

Read more »

We are like dice thrown onto the ground. Each lands in a different place, showing a different face.
Humans are the same. We are cast into this world that seems alike for everyone, yet within our minds, it’s never the same.

The path we must walk as individuals is, and must always be, different from one another.
The destination may be shared, but the journey toward it can never be identical.
One must begin from where one stands, using what is near and familiar.

Do not be distracted by what others are doing, nor try to walk their path.
The book you choose to read should not come from recommendation, but from your own deep curiosity.

When you feel thirsty, do not rush to drink from another’s cup. They may have mixed in their own ingredients.
The only drink that can truly satisfy your thirst is the one that springs from within yourself.

We often think that precision defines progress.
Numbers, clocks, and calendars give us control — or at least the illusion of it.
But lately, I’ve started to see how our ancestors’ way of measuring time might have been far wiser than I first imagined.

In Malagasy culture, time dances with imagery rather than digits.
Each phrase paints a moment that you can see and feel.
For example:

  • mangiran-dratsy (4 AM): to depict the first light of dawn,
  • maizim-bava vilany (around 6:30 PM): when cooking, the pot’s mouth darkens as dusk falls,
  • maneno sahona (2 AM): the frogs’ call…

(More about it: https://www.youtube.com/watch?v=_1943D4py94)
It’s not just about when things happen but how they happen. Time becomes memory, not mathematics.

Compare these two sentences:

  • Vao mangiran-dratsy dia niala tao an-tranony izy (He left his home at the first glimmer of light, when darkness begins to fade).
  • He left his home at 4 AM.

The second one is clear, exact, and practical. Yet the first one lingers.
It carries the air, the light, and the texture of early morning. It belongs to memory because it belongs to life.

Numbers are our own invention — useful, but detached from the natural rhythm around us.
They serve us in planning the future or describing the present. But for remembering, they fade quickly.
It’s almost as if the past doesn’t want numbers; it wants stories.

When I think of it this way, I realize how naturally our ancestors lived.
They didn’t ignore writing or counting because they lacked intelligence — they simply didn’t need them in the same way.
Their systems were complete: time was felt, cattle and chickens had names (such as Ilemasira, a named chicken), numbers were linked to trees (Isa ny Amontana, roa ny Aviavy).
Everything resembled life itself. It flowed within the limits of what could be remembered and shared.

It’s easy to mistake simplicity for ignorance. But now I see: their world worked beautifully. It was coherent, alive, and effortless.
Their time was not something to be managed but something to be lived.

Why carry books when the knowledge can live in your mind?

Read more »

My Emacs Journey

I try to keep my Emacs setup as minimal as possible so I don’t repeat the mistakes I made when I first started.
That means avoiding unnecessary packages and complex workflows that are easy to forget.
Still, from time to time, I enjoy experimenting with new features just for the adventure, even if I don’t necessarily want to integrate them into my workflow.
The discovery of lesser-known, weird-yet-amazing features in Emacs is always refreshing.
Most people have heard about M-x tetris, M-x zone or M-x butterfly. But there are other fascinating modes that don’t get as much attention.
Here are some of my favorites:

svg-clock

A month ago, I considered buying an analog clock.
I’ve realized that working with shapes like circle, half-circle or quarter-circle feels less mentally taxing to me than reading precise numbers on a digital clock.
Out of curiosity, I searched for an analog clock in Emacs and sure enough, I found one: a minimal and beautifully designed SVG clock.
It’s perfect for my use case: a vertically split window with my task tracker on the left and the svg-clock on the right.
Here is how it looks:

(screenshot: the svg-clock beside my task tracker)

Sometimes I do think Emacs is like a modern “Magic Lamp”: just scratch the surface and it fulfills your wishes!

artist-mode

I stumbled upon artist-mode a few weeks ago while reading Emacs blogs and I was stunned by what it could do.
It never occurred to me that Emacs could be used as a canvas for ASCII art. And yet it can, surprisingly well.
I can draw diagrams and paint with any character. I can even use a spray-can tool for graffiti!
Had I discovered it earlier, I would’ve used it for many of the diagrams that currently live in Excalidraw.
SVGs are nice, but plain text is far more Git-friendly.

follow-mode

I first learned about follow-mode years ago from a Reddit post, and it quickly became an incredibly handy tool when working with code.
Normal split windows are useful, but sometimes it’s more convenient to have a book-like view when reading long files.
Follow-mode does exactly that: when you scroll in one split, the other updates accordingly.
It’s especially convenient for working with large code sections or documents that can’t fit on a single screen.

Read more »