Gitlab SSH post-quantum key exchange
A small PSA for Gitlab users who use SSH to connect to their Git repositories.
If you get the following warning when trying to connect:
** WARNING: connection is not using a post-quantum key exchange algorithm.
** This session may be vulnerable to "store now, decrypt later" attacks.
** The server may need to be upgraded. See https://openssh.com/pq.html
You should upgrade your Gitlab instance to the latest version.
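To see which key exchange a given connection actually negotiates (gitlab.example.com is a placeholder host), you can grep ssh's verbose handshake output; with a recent OpenSSH client, a post-quantum exchange shows up as an algorithm such as sntrup761x25519-sha512@openssh.com:

```shell
# Inspect the negotiated key exchange (gitlab.example.com is a placeholder):
#
#   ssh -v git@gitlab.example.com 2>&1 | grep 'kex: algorithm'
#
# The grep itself, shown on a sample debug line from the handshake log:
echo 'debug1: kex: algorithm: sntrup761x25519-sha512@openssh.com' \
  | grep -o 'kex: algorithm: .*'
```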
If you need to upgrade, you can use Gitlab's upgrade path tool which is very helpful to identify the correct upgrade steps to take.
You may also have to upgrade your operating system.
In my case I was still on Ubuntu 20.04 LTS, which did not support post-quantum key exchange.
Note that if you do a release upgrade, you also should update the apt sources to point to the new release (e.g., going from focal to jammy) and upgrade gitlab using the updated source.
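The apt source update can be sketched as follows. This assumes an Omnibus GitLab CE install; the source file name and the gitlab-ce package are illustrative and may differ on your system:

```shell
# Hypothetical sketch for an Omnibus GitLab CE install on Ubuntu.
# After the OS release upgrade, point the GitLab apt source at the new
# release codename, then upgrade the package from the updated source:
#
#   sudo sed -i 's/focal/jammy/g' /etc/apt/sources.list.d/gitlab_gitlab-ce.list
#   sudo apt-get update
#   sudo apt-get install gitlab-ce
#
# The codename substitution itself, shown on a sample source line:
echo "deb https://packages.gitlab.com/gitlab/gitlab-ce/ubuntu/ focal main" \
  | sed 's/focal/jammy/g'
```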
The meaning of life isn't something you discover - it's something you construct through systematic exploration and iterative refinement.
I think about it like optimizing a machine learning model. You start with some initial parameters (your genetics, environment, early experiences), but the actual trajectory emerges through the training process. The loss function isn't predetermined - you have to define what you're optimizing for, which is itself part of the work.
There's a bootstrapping problem here that's worth acknowledging: how do you choose meaning without already having meaning to guide that choice? The way out is probably recognizing that you're already embedded in a process. You don't start from a blank slate - you have patterns, preferences, curiosities that already exist. The work is surfacing those, examining them, and deciding which ones to amplify.
For me, it clusters around a few things:
Building systems that reduce cognitive overhead. Whether that's infrastructure automation, better tooling, or frameworks that make complex problems tractable. There's something deeply satisfying about creating leverage - doing work once that pays dividends repeatedly.
Understanding how things actually work. Not surface-level explanations, but the real mechanisms. Why does Kubernetes behave this way under load? How do transformers actually learn? What's the evidence base for this claim? Drilling down until you hit bedrock.
Documenting the process. Writing isn't just communication - it's thinking made concrete. When I write about my thinking process on AGI or automation, I'm not just sharing conclusions, I'm making my reasoning debuggable. Both for others and for future me.
The meta-level realization is that meaning comes from engagement with hard problems. Not difficulty for its own sake, but the kind of problems where the solution space isn't obvious and you have to actually think. The satisfaction isn't in having answers - it's in the process of going from "I don't understand this" to "okay, I see how this works now."
There's probably no cosmic meaning. But there's local meaning in building things that matter to you, learning things that genuinely puzzle you, and leaving some kind of documented trail that might be useful to someone else trying to solve similar problems.
The philosophical questions - consciousness, creativity, what happens after death - are interesting, but they don't need to be answered to have a meaningful life. The work is meaningful even if the ultimate questions remain open.
My client-side only AI web application workflow
Over the past month I've started building client-side only AI web applications (e.g., ai-text-editor, a private ai-language-assistant to teach myself Chinese Mandarin).
I've gone with this approach because it lends itself very well to vibe coding with vibe-kanban.
I've configured the Dev Server Script option to open the index.html file in my browser and it works immediately, no need to build assets or start a backend server.
This speeds up the iteration cycle considerably.
This approach is also great because it produces a tool that can then be used directly in the browser from any device, without needing to install anything.
I can run what I built on my phone, on my work computer, or on someone else's computer easily by pointing them to the project's GitHub Pages URL, which serves the application and is "production" ready.
I haven't really picked any frontend libraries or frameworks, just vanilla HTML/CSS/JS.
That is something I need to explore (React, Svelte, Solid, Tailwind, Vue.js, etc.), but for now I want to keep things simple.
For the past few projects I've used Claude Code.
I ask Claude to create the common CLAUDE.md using /init.
Additionally, I ask Claude to maintain a SPEC.md file that describes the features of the application.
In many cases the "database" is just the browser's localStorage API and IndexedDB, which is sufficient for my needs.
I ask Claude to maintain a DATABASE_SPEC.md file to describe the database schema.
Claude typically goes for an index.html, app.js, and styles.css file structure.
While it is ok for the first few iterations of the project, I usually ask Claude to refactor the code to split it into multiple files and modules as the codebase grows.
Doing so speeds up the iteration process, since Claude doesn't end up reading large, irrelevant chunks of a single file when making edits.
It also makes it easier to review changes since I use the files modified as an indication of whether it worked on the right part of the codebase.
The main downside of this approach is that the app.js file generally ends up quite large (e.g., 1,000+ lines of code), since it contains all kinds of global state and logic.
With this approach I've been able to work fairly effectively on small projects (~40 hours of work) and get something functional out the door that would otherwise have taken me weeks, and where I'd probably have given up mid-project due to all kinds of minor problems.
The template is available as a GitHub repository.
In index.html I have the following code to load llm.js and my application code:
<script src="https://cdn.jsdelivr.net/gh/tomzxforks/llm.js@main/dist/index.min.js"></script>
<script type="module" src="app.js"></script>
In app.js I have the following code to configure the LLM options:
const applicationId = "my-ai-application";

function getConfigurationValue(key, defaultValue) {
  // Try to get configuration from applicationId key
  try {
    const appConfig = localStorage.getItem(applicationId);
    if (appConfig) {
      const config = JSON.parse(appConfig);
      if (config && config.hasOwnProperty(key)) {
        return config[key];
      }
    }
  } catch (error) {
    console.warn(`Error parsing ${applicationId} configuration:`, error);
  }

  // Fall back to llm-defaults key
  try {
    const defaultConfig = localStorage.getItem('llm-defaults');
    if (defaultConfig) {
      const config = JSON.parse(defaultConfig);
      if (config && config.hasOwnProperty(key)) {
        return config[key];
      }
    }
  } catch (error) {
    console.warn('Error parsing llm-defaults configuration:', error);
  }

  // Use provided default
  return defaultValue;
}

const llmOptions = {
  service: getConfigurationValue("service", "groq"),
  model: getConfigurationValue("model", "openai/gpt-oss-120b"),
  extended: true,
  apiKey: getConfigurationValue("api_key", "LLM_API_KEY_NOT_SET"),
  max_tokens: parseInt(getConfigurationValue("max_tokens", "8192")),
};
I set a key in localStorage called llm-defaults that contains a JSON object with default values for the LLM service, model, and API key.
For example:
{
  "service": "groq",
  "model": "openai/gpt-oss-120b",
  "api_key": "sk-xxxxxx",
  "max_tokens": "8192"
}
This way I can easily change the LLM configuration for all my client-side AI applications by updating this single localStorage key.
If I need to override the configuration for a specific application, I can set another key in localStorage with the name of the applicationId (e.g., my-ai-application) that contains the specific configuration for that application.
In the event that neither key is set, the code falls back to hardcoded default values.
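The lookup precedence (per-application key, then llm-defaults, then a hardcoded fallback) can be sketched in a self-contained way. Here getValue and the in-memory stand-in for localStorage are illustrative, not part of the template, so the sketch runs outside a browser:

```javascript
// In-memory stand-in for the browser's localStorage (illustrative only).
const store = new Map();
const localStorageLike = {
  getItem: (k) => (store.has(k) ? store.get(k) : null),
  setItem: (k, v) => store.set(k, v),
};

// Resolve a config key: app-specific key first, then llm-defaults, then fallback.
function getValue(appId, key, fallback) {
  for (const source of [appId, 'llm-defaults']) {
    try {
      const raw = localStorageLike.getItem(source);
      if (raw) {
        const config = JSON.parse(raw);
        if (config && Object.prototype.hasOwnProperty.call(config, key)) {
          return config[key];
        }
      }
    } catch (e) { /* malformed JSON: fall through to the next source */ }
  }
  return fallback;
}

// Shared defaults, plus a per-application override of the model.
localStorageLike.setItem('llm-defaults', JSON.stringify({ model: 'openai/gpt-oss-120b' }));
localStorageLike.setItem('my-ai-application', JSON.stringify({ model: 'llama-3.3-70b' }));

console.log(getValue('my-ai-application', 'model', 'none')); // app override wins
console.log(getValue('another-app', 'model', 'none'));       // falls back to llm-defaults
```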
The LLM.js I use is forked from themaximalist/llm.js.
The main changes I've made were to enable IIFE (Immediately Invoked Function Expression) builds that can be loaded directly in the browser via a <script> tag instead of as modules.
I also disabled minification, since jsDelivr automatically minifies the code when the filename contains min.
Marcus closed his laptop at exactly 6 PM, just as he had promised himself he would every day this week. The screen went dark, but the code remained illuminated behind his eyelids-persistent, glowing green text against black. Even as he stood from his desk and stretched, he could still see the function he'd been wrestling with, its logic branching through his mind like creeping ivy.
Not now, he told himself. Work is over.
He made dinner-pasta with store-bought sauce, the same meal he'd eaten three nights running. As the water boiled, his mind wandered back to the memory leak in the application. Maybe if he restructured the garbage collection calls? Or perhaps the issue was in the parent component, not the child. He caught himself drumming his fingers on the counter in the rhythm of typing, each tap a phantom keystroke solving problems that could wait until tomorrow.
His girlfriend called while he ate. Sarah's voice was warm, talking about her day at the veterinary clinic, a difficult surgery on a golden retriever that had gone well. Marcus made appropriate sounds of interest, but part of him was still debugging. Her words became background processes while his main thread analyzed whether implementing a cache would improve the API response time.
"Are you listening?" Sarah asked, not unkindly. She knew the signs.
"Sorry, yes. The dog's owner was crying?" he guessed, poorly.
"That was five minutes ago, Marc."
After the call, he tried to read a novel-something about a detective in Victorian London that his mother had recommended. But the detective's methodology reminded him of debugging: isolating variables, testing hypotheses, following the trail of clues through nested mysteries. Even fiction had become code.
At the gym, counting reps became iterations in a for-loop. One more set translated to one more compile. The rowing machine's display showed metrics that made him think about performance optimization. His heart rate monitor might as well have been displaying server response times.
He met Tom for drinks, his oldest friend who worked in marketing and didn't know a compiler from a cucumber. But when Tom complained about a difficult client presentation, Marcus found himself mentally architecting a solution-a simple web app that could dynamically generate presentations based on client data. He was halfway through explaining the tech stack before he noticed Tom's glazed expression.
"Remember when you used to talk about music?" Tom asked. "You had that whole theory about Radiohead's album structure."
Marcus did remember, vaguely, like recalling a program written in a deprecated language.
That night, he lay in bed, Sarah sleeping beside him. The ceiling was a blank canvas where his mind projected code. He tried counting sheep, but they became objects in an array, each one instantiated with properties: fluffiness, jump_height, sequential_number. He tried meditation, focusing on his breath, but his inhales and exhales became binary: 1, 0, 1, 0.
At 2 AM, he gave up and opened his laptop. The blue light washed over him like baptism, like coming home. The bug that had haunted him all day revealed itself within minutes-a missing await keyword, so simple it was almost insulting. He fixed it, pushed the commit, and felt the sweet release of resolution.
But even as he closed the laptop again, he knew this was just one bug fixed in a system full of them. Tomorrow would bring new problems, and the day after that, and the day after that. The code would follow him home, eat dinner with him, sleep in his bed, wake with him in the morning.
He looked at Sarah, sleeping peacefully, her mind presumably full of dreams that had nothing to do with her work. He envied her ability to close the clinic door and leave the sick animals behind. But then again, maybe she dreamed of surgery, of sutures and symptoms. Maybe everyone carried their own infinite loops.
Marcus finally drifted off around 3 AM, his last conscious thought a promise to himself that tomorrow he would try harder to context-switch, to properly close all his mental tabs. But even as sleep took him, somewhere in his subconscious, a background process continued running, optimizing and refactoring, an endless daemon that would not-could not-terminate.
In his dreams, he was debugging reality itself, and the bug was somewhere in his own source code.
I recently moved to Windows 11.
I had created only a local account during creation but at some point I configured OneDrive to use my wife's account.
This resulted in her account being bound as the "primary" account of the computer.
This is not what I wanted.
I looked online for a solution, but none of the instructions I found matched the screens I was seeing.
I initially tried a regedit fix but it didn't work.
Here is how I fixed it:
- Go to Settings > Accounts > Your info.
- This is the "weird" part: click on "Sign in with a Microsoft account instead".

- Follow the instructions using a different account.
- The previous account will be "detached" from the computer and you will now be using a local account.
- If you go to Settings > Accounts > Email & accounts, you will see that the previous account is still listed under "Accounts used by other apps", but you can now remove it (which you couldn't do previously).
Hope this helps!