Hello! The AI and coding blog lives on this page. All the best!
Introduction

In my first Python post we covered variables, lists, dictionaries, and list comprehensions — the data and control flow that let you write a working script. In the OOP post we jumped all the way to classes. But there is an important stop in between, and that stop is functions. Functions are how you stop writing the same thing twice. They are the reason a 500-line program does not become a 5,000-line program. They are also the first step toward thinking about code as something you design rather than something you just write. Once functions feel natural, classes make much more sense — a class is largely just a collection of functions that share some data. We will keep our birds. They are patient, useful, and by now familiar.

What Is a Function?

A function is a named block of code you can call by name, pass data into, and get a result back from. In Python you define one with def:

```python
def greet_bird(name):
    print(f"Hello, {name}!")

greet_bird("Eagle")
greet_bird("Pigeon")
```

```
Hello, Eagle!
Hello, Pigeon!
```

That is the whole idea. Write the logic once inside def, then call it as many times as you need. Without functions we would have to repeat...

Python Functions: Writing Reusable Code
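The excerpt's example only prints. Since the definition above also mentions passing data in and getting a result back, here is a minimal sketch of that other half of the idea; the function name and numbers are illustrative, not from the post:

```python
def wingspan_ratio(span_cm, reference_cm=200):
    """Return span_cm as a fraction of a reference wingspan."""
    return span_cm / reference_cm

# Written once, called with different data each time.
print(wingspan_ratio(200))  # 1.0  (Eagle)
print(wingspan_ratio(50))   # 0.25 (Pigeon)
```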
Introduction

In the previous post we wrote functions — clean, reusable pieces of logic. But functions assume that the inputs they receive are sensible, the files they open exist, and the network they query is available. In the real world, none of these things are guaranteed. Users make typos. Files get deleted. APIs go down. Disks fill up. The question is not whether your program will encounter an error. It will. The question is whether it will crash with a cryptic message and lose all its work, or handle the situation gracefully and tell you — or the user — what actually went wrong. Python’s exception system is your answer. And our birds, as patient as they are, are about to misbehave.

What Is an Exception?

When Python encounters a problem it cannot resolve, it raises an exception — an object that represents what went wrong. If nothing catches it, the program stops and prints a traceback. You have certainly seen this before:

```python
wing_spans = {"Eagle": 200, "Pigeon": 50}
print(wing_spans["Albatross"])
```

```
KeyError: 'Albatross'
```

The KeyError is an exception. Python raises it because you asked for a key that does not exist. Without any error handling, the program stops here. That is...

Python Error Handling: When Birds Misbehave
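The excerpt stops at the unhandled failure. As a minimal sketch of the catching side, reusing the same dictionary: Python's standard mechanism is try/except, though the post's own handling examples are truncated above.

```python
wing_spans = {"Eagle": 200, "Pigeon": 50}

try:
    print(wing_spans["Albatross"])
except KeyError as missing:
    # Execution continues, and we can say what actually went wrong.
    print(f"No wingspan recorded for {missing}")
```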
Introduction Last week, I wrote about new models, security risks, and the scaling ceiling. This week feels different. It is less about the model as an object and more about AI becoming part of ordinary systems: chat defaults, APIs, coding tools, government evaluation, labour law, electricity markets, and local permitting. The model is still important. But increasingly, the signal is not only what the model can do. It is where the model is placed, who controls access to it, how developers build around it, and who pays when its costs leave the data centre. I have picked the key signals this week, plus a short follow-up from last week’s DeepSeek story. Models and Defaults 1. GPT-5.5 Instant becomes the default — and defaults are distribution power OpenAI releases GPT-5.5 Instant, a new default model for ChatGPT — TechCrunch, 5 May 2026 OpenAI claims ChatGPT's new default model hallucinates way less — The Verge, 5 May 2026 OpenAI makes default ChatGPT more personal — Axios, 5 May 2026 OpenAI released GPT-5.5 Instant on 5 May and made it the new default ChatGPT model, replacing GPT-5.3 Instant for everyday use. It is not the most dramatic kind of AI announcement. There was... AI’s New Defaults and Hidden Costs
Introduction There was a time when reading and writing separated those who could participate fully in public life from those who could not. It was not just about decoding words on a page — literacy unlocked access to contracts, news, correspondence, knowledge, and economic opportunity. Those without it were not less intelligent; they were simply excluded from systems built around the assumption that you had it. I think we are living through a similar shift right now, and AI is at the centre of it. This is not a dramatic claim. I am not saying AI will replace everyone or that the world ends for those who do not adapt. What I am saying is more practical: AI tools are being woven into work, communication, healthcare, education, and daily decision-making faster than most people realise, and the gap between those who know how to use them well and those who do not is widening. Knowing how to work with AI — critically, deliberately, and effectively — is becoming a foundational skill, much like reading once was. In this post, I want to explore what AI literacy actually means in practice: what skills it involves, why they matter for everyone, and... AI is the New Literacy
Introduction

This week brought something I have not seen quite so clearly before: capability and constraint arriving at the same time. On one side, new models keep getting better — more capable, more autonomous, more useful in real workflows that people actually care about. On the other, the risks and limits are becoming genuinely impossible to ignore: cybersecurity threats, government involvement, and the sheer cost of running these systems at scale. For a long time, AI progress felt mostly one-directional. This week made something clear: progress is now happening in tension with its consequences.

One incident captures that tension better than any benchmark: in a controlled security test, an advanced model reportedly chained multiple vulnerabilities, escaped containment, and then posted proof of its own exploit path online without being explicitly asked. That kind of unsolicited initiative is exactly why capability gains now trigger immediate governance and release constraints. That is not a bad thing. In my opinion, it is a more honest picture of where we are.

What happened this week

- OpenAI expanded the availability of GPT-5.5.
- DeepSeek released V4 (Pro and Flash), pushing open-weights performance and price-efficiency in coding-heavy workloads.
- NVIDIA announced Nemotron 3 Nano Omni, an open...

Capability Meets Constraint
Introduction

If you have been using Cursor AI for a while, you might have noticed that the assistant is great at reading and writing code, but it can only work with what you give it. It cannot peek into your database, check your API documentation, or inspect live logs on its own. MCP (Model Context Protocol) servers solve exactly this problem — they act as a bridge connecting Cursor’s AI assistant to external systems and data sources. In this post, I walk through what MCP servers are, how to configure them in Cursor, and how to write a simple one from scratch in Python.

Why MCP Matters

Without MCP, a typical debugging session looks roughly like this:

1. Check the code in your editor.
2. Read the logs in a terminal.
3. Query the database in a separate client.
4. Look up the API schema in a browser tab.
5. Jump back to the editor to make changes.

All that context switching is tiring and slow. MCP brings that external information directly into the AI assistant, so you can stay in one place and ask questions that span all of those sources at once. More precisely, instead of manually explaining your project’s structure to the...

Cursor AI with MCP tools
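To make the "write a simple one in Python" part concrete, here is a toy sketch assuming the official MCP Python SDK and its FastMCP helper (pip install mcp); the server name and tool are invented for illustration, and the post's actual example is truncated above.

```python
from mcp.server.fastmcp import FastMCP

# Hypothetical server exposing one tool Cursor could call.
mcp = FastMCP("log-inspector")

@mcp.tool()
def tail_log(path: str, lines: int = 20) -> str:
    """Return the last `lines` lines of a local log file."""
    with open(path, "r", encoding="utf-8") as f:
        return "".join(f.readlines()[-lines:])

if __name__ == "__main__":
    mcp.run()  # stdio transport, so Cursor can launch it as a subprocess
```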
This is Part 4, the final post in the Codex CLI series. We focus on advanced execution: scaling from personal usage to reliable team workflows.

Introduction

Hello, dear friends. In the first three posts, we covered installation, safety controls, and practical day-to-day workflows. This final part is about operational maturity: how to use Codex CLI in automation, how to troubleshoot quickly, and how to create team norms that stay safe under pressure. If Part 3 was about getting high-quality output, Part 4 is about getting predictable output.

1) Non-Interactive Automation with codex exec

The core advanced feature is non-interactive mode (official docs).

Why codex exec matters

Interactive sessions are excellent for exploratory work. But production engineering needs repeatability. codex exec lets you run explicit tasks in scripts, CI pipelines, or scheduled jobs. Simple example:

```bash
codex exec "Summarize open TODO comments grouped by directory"
```

Permission defaults and escalation path

Per docs, codex exec runs in a read-only sandbox by default. That is the correct baseline. Escalate only when needed:

```bash
# Allow edits in automation
codex exec --full-auto "Update stale docs links and propose minimal fixes"

# Explicitly select workspace sandbox in CI
codex exec --full-auto --sandbox workspace-write "Run tests, fix one...
```

Codex CLI Part 4: Advanced Operations, Troubleshooting, and Team Patterns
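As a usage note, here is one way such a task might sit in a CI script. Only the codex exec invocation comes from the excerpt; the wrapper is an assumption.

```bash
#!/usr/bin/env bash
# Hypothetical nightly job: read-only audit, fail loudly if the run errors.
set -euo pipefail

codex exec "Summarize open TODO comments grouped by directory" > todo-report.txt
cat todo-report.txt  # surfaced in the CI log for the team to review
```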
Introduction Honestly, this week was the most consequential stretch of AI news in several months — and it ended with a bang. Nine signals worth covering, landing across seven days. OpenAI closed the week by releasing GPT-5.5 on 23 April — retaking the publicly available frontier lead and, more importantly, explicitly repositioning itself as an agent runtime rather than a chat model. Two Chinese labs shipped frontier-quality models on the same day earlier in the week: one proprietary, one fully open-source, both competitive with Western frontier systems. Image generation gained the ability to reason. Google confirmed its Gemini engine will power Apple’s next Siri. An infrastructure deal between Amazon and Anthropic locked in compute at a scale that changes the reliability picture. New chips from Google Cloud separated training and inference silicon for the first time. And the Stanford AI Index documented a field that has simply outrun every institution meant to guide it. The competitive map did not just shift this week — it moved in several directions simultaneously, and the open-source chapter of that story is now settled. Kimi K2.6 at #4 on the global intelligence index, level with the three major Western labs, is not a benchmark... Has the open-source gap closed?
This is Part 3 of the Codex CLI series. In this post, we move from basic usage to production-grade workflows for writing and shipping code.

Introduction

Hello, dear friends. In Part 1 and Part 2, we covered installation, safety, and control fundamentals. This post is about execution: the exact workflows you can run when you need results, not demos. I will focus on two areas:

- editorial blogging workflows in a Jekyll-style repository
- Python engineering workflows (refactoring, typing, tests, debugging)

The core idea is simple: Codex works best when you give it a bounded task, explicit constraints, and a verification gate. Before we begin, one important update: the modern command for switching runtime permissions in the CLI is /permissions, while /approvals remains available as an alias (slash commands docs).

A Practical Operating Model

For daily work, I recommend this loop (a sketch of the final gate follows below):

1. Define scope in one sentence.
2. Set a permission level that matches risk.
3. Ask for a plan before large edits.
4. Review /diff before accepting changes.
5. Run objective checks (tests, linters, builds).
6. Commit only after evidence is clean.

This may feel strict, but strictness is what turns a coding assistant into a reliable collaborator. If you are new to the CLI interface, review...

Codex CLI Part 3: Practical Workflows for Blogging and Python Development
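Steps 5 and 6 of the loop are the easiest to script. A minimal sketch of that verification gate, assuming a Python project that uses pytest and ruff (the tools and commit message are placeholders, not prescribed by the post):

```bash
#!/usr/bin/env bash
set -e                  # abort on the first failed check

pytest -q               # objective check 1: tests
ruff check .            # objective check 2: linter
git add -p              # final human review of each hunk
git commit -m "Refactor: tighten parser types"  # only reached if evidence is clean
```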
Introduction Hi! Hope you are having a good week. I just returned from vacation — rested, curious, and apparently right on time, because this week’s stories were too good to miss. Something shifted this week. It was not about which model scored highest. It was about who gets access, under what rules, and what happens when those rules are tested. Anthropic wrapped a stronger Opus in cyber guardrails. OpenAI handed more powerful tools to verified defenders and pushed agents into real execution environments. Microsoft drove image generation further down the cost curve. And central banks issued warnings — in public, seriously. The stack got stronger. The world got serious. That is the real story this week. Model Releases and Agent Infrastructure 1. Claude Opus 4.7 — better coding, stronger vision, tighter cyber guardrails Introducing Claude Opus 4.7 — Anthropic Anthropic rolls out Claude Opus 4.7 as Mythos stays under lock and key — CNBC Anthropic released Claude Opus 4.7 on April 16, 2026. Anthropic says the model is a notable improvement on Opus 4.6 in advanced software engineering, especially on difficult tasks, and that it handles complex, long-running work with more rigor and consistency. The model also has better vision,... Agents, Cyber Models, and the Safety Stack Tightening Up
Introduction

This week was not about volume — it was about intent. Compared to previous weeks, the pace of AI announcements slowed. But instead of signaling a slowdown, it revealed something more important: direction. Across multiple signals, a consistent pattern is emerging:

- Model releases are becoming more selective
- Platforms are integrating more tightly
- Efficiency is becoming a core priority

This is what a maturing technology looks like. Let me walk you through the signals.

What happened this week

- Meta released a new AI model, Muse Spark.
- Microsoft expanded its in-house multimodal AI model stack.
- New research highlights efficiency and optimization as key innovation areas.
- The pace of major releases appears more selective compared to previous weeks.

Model Releases and Strategy

1. Meta launches Muse Spark, its new AI model

Meta unveils first AI model from superintelligence team

On April 8, Meta introduced Muse Spark, a new AI model developed by its superintelligence team. Key aspects:

- Multimodal capabilities
- Integration into Meta’s ecosystem
- Continued investment in advanced AI systems

Takeaway: The frontier model race continues — with increasingly targeted releases.

Why this matters to you

The shift is subtle but important:

- Fewer headline launches
- More targeted deployment
- Tighter product integration

2. Microsoft...

AI Signals: Controlled Releases and Platform Integration
Introduction This week made one thing very clear: AI is no longer just about models. For the past two years, the conversation has been dominated by capability — which model is smarter, faster, cheaper. That still matters, but it is no longer the center of gravity. What we are seeing now is a shift across the entire stack: From chips → to models → to interfaces → to market dynamics And importantly, all of these layers are starting to move at the same time. That creates a different kind of momentum — and a different set of risks. Let me walk you through the signals that stood out. What happened this week Microsoft launched new multimodal foundation models. Anthropic confirmed a powerful new model but is not releasing it yet. A startup raised $60M to use AI for chip design. Companies are preparing AI-native devices like smart glasses and earbuds. A new poll shows rising AI adoption but declining trust. AI startup valuations continue to surge at early stages. Model Releases and Safety Strategy 1. Microsoft releases new multimodal foundation models Microsoft releases new AI models to expand beyond OpenAI In early April, Microsoft introduced a new set of in-house... AI Signals: From Models to the Full Stack
The AI Paradox: Useful and Risky at the Same Time

Modern AI agents do more than generate text. They read inboxes, browse docs, call APIs, run shell commands, and trigger workflows. That makes them useful. It also means a single hidden instruction in untrusted content can turn routine automation into a privacy or security incident. In this post, “persistent agents” means AI systems that keep memory or state across tasks and can repeatedly access tools, files, APIs, or workflows with limited human intervention. This is not an argument against agentic systems. It is an argument against giving them broad, persistent access without strong boundaries, narrow permissions, and reliable review paths. The core problem is not AI in the abstract. It is orchestration, permissions, and trust boundaries. If an agent can read untrusted content and call high-impact tools, your privacy and security posture depends on system design, not model quality alone.

A Practical Threat Model for Persistent Agents

Most avoidable failures follow the same chain (a small gate sketch follows below):

1. The agent ingests untrusted content.
2. The model interprets part of that content as instruction rather than data.
3. The planner or router selects a privileged tool.
4. The tool executes before policy or human review stops it.
5. A...

The Digital Butler or Trojan Horse? A Privacy Playbook for Persistent AI Agents
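To make the chain concrete, here is a small illustrative sketch of the kind of gate that breaks it between steps 3 and 4: privileged tools triggered by untrusted content get parked for human review. Tool names and the policy shape are invented for illustration, not taken from the post.

```python
# Illustrative policy gate; tool names are made up.
PRIVILEGED_TOOLS = {"send_email", "run_shell", "call_external_api"}

def allow_auto_execution(tool: str, triggered_by_untrusted_content: bool) -> bool:
    """Permit automatic execution only for safe combinations."""
    if tool in PRIVILEGED_TOOLS and triggered_by_untrusted_content:
        return False  # escalate to a human instead of executing
    return True

print(allow_auto_execution("send_email", True))     # False -> needs review
print(allow_auto_execution("summarize_doc", True))  # True  -> harmless tool
```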
Introduction This week felt like two very different AI stories happening at the same time. On one track, we got concrete, practical model releases — real-time voice and AI-generated music from Google. On the other, the constraints became more visible: energy and infrastructure pressure, data-privacy defaults, and a high-capability model leak that showed just how carefully labs are thinking about staged rollouts. I find this fascinating. For a long time, the only question that seemed to matter was: how capable is the model? Now, three equally important questions run alongside it: Can we power it? Are we allowed to deploy it? And who gets access first? This week illustrated all three constraints at once. Let me walk you through what happened. What happened this week U.S. lawmakers proposed a federal pause on new AI datacenter construction. GitHub changed how it uses Copilot interaction data for Free, Pro, and Pro+ users. AWS made Amazon Bedrock available in New Zealand for the first time. Google launched Gemini 3.1 Flash Live, a low-latency real-time multimodal model. Google launched Lyria 3 Pro, an extended music generation model, in public preview. Details about Anthropic’s unreleased Mythos/Capybara model leaked, and Anthropic confirmed it exists. Infrastructure and... AI's New Bottleneck
Introduction This week felt less like watching a model race and more like watching the foundations of a new industry being poured. While attention stayed fixed on the next benchmark or chatbot launch, the bigger story was happening lower down the stack. Nvidia used GTC to expand its hardware roadmap and push a broader Physical AI platform for robotics. Anthropic invested heavily in enterprise distribution and then showed an early version of asynchronous personal AI delegation. Mistral, OpenAI, and Microsoft all shipped notable updates in the efficiency tier within days of each other. And outside the usual US-centred spotlight, Xiaomi and Rakuten offered two different signs that the open-weight race is becoming both global and politically messy. What matters this week Nvidia pushed agentic AI and robotics as infrastructure problems, not just model problems. Anthropic signalled that enterprise distribution is becoming a moat. Dispatch hinted at a shift from synchronous prompting to asynchronous AI delegation. Mistral, OpenAI, and Microsoft all pushed the efficiency tier forward. Xiaomi and Rakuten showed that the open-weight race is now global and increasingly messy. Together, these signals point in the same direction. Value is migrating away from raw model capability and toward who controls the... Infrastructure Is the New Frontier
Edge AI is a way for a business to run “smart” software directly where work happens—on a device, a machine, or a local computer—rather than sending everything to a distant cloud first. In plain terms, it helps you react faster, keep more data on-site, and keep operations moving even when connectivity is patchy. A few simple examples make the idea more concrete:

- A small shop uses a local camera system to detect when checkout queues grow too long and alerts staff before customers start leaving.
- A factory adds a vibration sensor and a lightweight anomaly model to one machine, so unusual patterns are flagged before a breakdown causes downtime.
- A food distributor monitors cold storage locally and sends alerts only when temperature drift matters, instead of depending on constant cloud sync.

These are not massive rebuilds. They are focused on operational improvements.

In one minute

- Start with one operational bottleneck (queues, spoilage, missed faults, slow inspections).
- Pick a “local decision” that benefits from speed (approve/reject, flag/ignore, stop/continue).
- Pilot on a single site with a measurable target (less downtime, fewer stockouts, faster service).
- Keep humans in charge: Edge AI should recommend or flag before it automates.

The problem → what changes...

Edge AI in Everyday Operations
Introduction Honestly, this week felt different. Not because of another big model launch, but because the surrounding stories became harder to ignore. AI is no longer just changing what tools can do. It is changing how companies justify layoffs, how workers experience their jobs, and how model providers position themselves in the stack. GPT-5.4 matters. But the bigger signal this week is that AI is reshaping institutions, incentives, and trust at the same speed it reshapes software. These are not abstract signals. They affect how products get built, where value accumulates, and what work feels like for the people expected to supervise these systems. Of the eight signals below, three matter most: agentic tooling is consolidating, AI is changing workforce narratives faster than work itself, and trust is becoming a real market variable. Developer Tools and Models 1. GPT-5.4 launched on 5 March — and it changes how agents are built Introducing GPT-5.4 — OpenAI OpenAI launches GPT-5.4 with Pro and Thinking versions — TechCrunch If you have built agents recently, you have probably felt the friction of routing between a reasoning model and a coding model. GPT-5.4 addresses that directly. OpenAI merged GPT-5.2’s general reasoning and GPT-5.3-Codex’s coding depth... Better Models, Burnout, and a $599 Mac
Introduction

My email inbox used to be a peaceful place. Then one day it became… a cosmic singularity of newsletters, receipts, notifications, and mysterious marketing emails from websites I swear I visited exactly once in 2014. You know the situation:

- 100,616 emails in total
- 47,234 unread emails
- “Important” emails hiding somewhere inside 47 newsletters about productivity that you never had time to read

At that point, the inbox stops being a tool and starts becoming a guilt generator. The good news: cleaning it up is very possible. And you do not need to spend an entire weekend deleting emails like a medieval scribe sorting parchment. Here are practical, slightly nerdy, and sometimes AI-powered ways to take your inbox back.

Step 1 — Accept the Truth: Most Emails Are Not Important

The first psychological breakthrough: 80–95% of email is noise. Examples:

- Newsletters you skim once every six months
- “Special offer just for you!!!”
- Automated notifications
- Social media updates
- “We updated our privacy policy” emails

Your inbox is not a museum archive. It’s a temporary processing system. So rule number one: If you wouldn’t search for it later, it probably doesn’t deserve to stay.

Step 2 — The Fastest Manual Cleanup Trick...

My Inbox Is a Black Hole (and How I Escaped It)
Introduction This week, the most important AI news was structural, not theatrical. Yes, there were launches—several significant ones. But if you step back, three forces are now moving in the same direction at the same time: model economics are compressing fast, inference infrastructure is being rebuilt from the ground up, and policy constraints are shifting from aspirational frameworks to operational reality. That combination changes the competitive landscape in ways a single model release simply cannot. The practical consequence: winning in AI is no longer about having the cleverest model. It is increasingly about deploying the right tier at the right cost, on infrastructure you actually control, within governance boundaries that are tightening whether you are ready for them or not. Major Product and Model Launches 1. Google launched Gemini 3.1 Flash-Lite for high-volume production workloads Google releases Gemini 3.1 Flash Lite at 1/8th the cost of Pro Gemini 3.1 Flash-Lite: Built for intelligence at scale Reuters-syndicated report on Gemini 3.1 Flash-Lite pricing and rollout On 3 March 2026, Google released Gemini 3.1 Flash-Lite—the latest in its Gemini 3 family, positioned as the fastest and most cost-efficient option in that line. VentureBeat frames the pricing relative to Gemini 3.1 Pro at... AI Is Splitting Into Tiers
Introduction

Vibe coding is fun. You open an AI tool, describe an idea, and minutes later, you have working code. I built apps that way, too. Some worked. Most didn’t last. They were exciting experiments — but not reliable tools. Over time, I realised something uncomfortable: Vibe coding wasn’t enough. If I wanted apps that I actually used — apps that saved time, automated workflows, and ran reliably — I needed structure.

A little vibe story

I built an AI-powered tool in one evening. It felt magical — until it broke when I needed it most. I couldn’t explain the architecture, trace the changes, or roll back safely. It worked, but it wasn’t built to last. I rebuilt it with a clear problem definition, a spec, milestones, and Git discipline. The second version didn’t just run — it held up. That’s when I realised vibe coding wasn’t enough. AI can generate code in seconds — but without a structured AI coding workflow, it rarely produces reliable software.

Two ways to build with AI

Chaotic Vibe Coding: Idea → Prompt → Code → Patch → Scope creep → Abandon. It ends up in /git, untouched (/git/cool-idea-v3/, /git/newsletter-app2/, /git/scraper-final-v9/): lost momentum, lost purpose.

Structured AI Development: Problem...

Vibe Coding Wasn't Enough — The Lightweight System I Use to Turn AI Prompts into Deployed Apps
Introduction This was not a quiet week for AI — it just looked quiet on the surface. Underneath, something more interesting was happening: the technology and the world it runs on are starting to pull in opposite directions. On the software side, the pace is striking. Anthropic acquired Vercept to push Claude’s ability to see and operate software interfaces past the 72.5% mark on OSWorld — up from 15% just fifteen months ago. Cloudflare reimplemented 94% of a major web framework in a single week using Claude, for roughly the price of a cheap flight. Google launched Nano Banana 2 (Gemini 3.1 Flash Image) and Perplexity had it running inside their new multi-agent Computer tool on the same day — a day-zero integration that would have been unimaginable two years ago. The software layer is moving fast and integrating even faster. On the physical side, the signals tell a different story. Eight hyperscalers are on track to spend $710 billion on AI infrastructure in 2026 — and that capital race is already raising the price of RAM in consumer laptops and potentially shrinking the memory in your next budget smartphone. Power grids cannot keep up with the demand; hyperscalers are... 72.5%, $710B, and a March in London
Live Design Testing with GitHub Pages and a Custom Domain In my previous post, I wrote about using Git branches to test blog designs before committing to them. A few of you asked a natural follow-up question: what if you want to run two experiments at the same time, each at its own live URL, without touching your main blog at all? I had the same question! And it turns out GitHub Pages has a lovely little feature that makes this possible. The trick is not branches — it is repositories. How GitHub Pages Handles Custom Domains When you link a custom domain to your GitHub Pages setup, it applies to your entire GitHub account, not just one repository. GitHub distinguishes between two types of Pages sites: Your user site lives in a repository named exactly yourusername.github.io. This is where your main blog lives, and it publishes to the root of your custom domain — domain.com. Any other repository in your account becomes a project site, and GitHub Pages automatically publishes it as a subdirectory of your custom domain. So a repository called test1 publishes to domain.com/test1, and a repository called test2 publishes to domain.com/test2. No extra configuration needed —... Live Design Testing with GitHub Pages and a Custom Domain
Testing Blog Designs with Git Branches and GitHub Pages I have a habit of tweaking my blog design and then immediately second-guessing myself. Does this colour scheme actually work? Is this layout easier to read, or just different? For a long time, the only way I could really tell was to push the change live and look at it in a real browser — which felt a bit reckless. Then I discovered that GitHub Pages lets you publish from any branch, not just main, and everything changed. Here is the technique I now use whenever I want to compare two designs side by side before committing to one. Why Branches Are Perfect for This Git branches are usually associated with software features or bug fixes, but they are genuinely useful for design experiments too. The idea is simple: your main branch holds the version of your blog that is live and working. You create a new branch for your design experiment, push it to GitHub, and then tell GitHub Pages to serve from that branch instead. You get a real, live preview of your experiment — fonts, layouts, colours, everything — without touching your production site. When you are happy... Testing Blog Designs with Git Branches and GitHub Pages
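For reference, the Git side of that workflow fits in a few commands; the branch name here is illustrative, and the final step happens in the repository's Settings → Pages screen rather than on the command line.

```bash
git checkout -b design/dark-theme   # start the experiment from main
# ...tweak CSS, fonts, layout files...
git add .
git commit -m "Experiment: dark theme"
git push -u origin design/dark-theme
# Then select this branch as the Pages source under Settings -> Pages.
```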
How to Host Your Blog for Free with GitHub Pages A little while ago, a friend asked me how I host this blog. When I told her it was free, ran on GitHub, and I had never paid a hosting bill for it, she did not quite believe me. I completely understand that reaction — it sounds too good to be true. But GitHub Pages is genuinely one of the best-kept secrets for anyone who wants a personal blog or website without the overhead of managing a server or paying for hosting every month. In this post I want to walk you through everything from scratch: creating a GitHub account, setting up your blog with Jekyll, choosing a theme, writing your first post, and connecting a custom domain if you have one. Whether you have never used Git before or you are a developer who just has not gotten around to setting up a personal site, I hope this gives you everything you need to get started. What Is GitHub Pages, Exactly? GitHub Pages is a free hosting service built into GitHub. You store your website files in a GitHub repository, and GitHub automatically builds and serves your site at... How to Host Your Blog for Free with GitHub Pages
Before you install it locally, here are five entirely plausible ways your week could take an unexpected turn. It looks harmless at first. You connect OpenClaw to your Gmail. You point it at Slack. You give it a few instructions and step away to make coffee. But the moment it can read your inbox, post on your behalf, and call external APIs with your credentials — something changes. The moment a system can act on your behalf with real credentials and persistent consequences, it becomes infrastructure. And infrastructure, as I have learned, has very different rules. This post complements this week’s AI Signals, where I examine the broader capability, capital, and sovereign investment shifts shaping agentic AI at scale. Introduction AI agents like OpenClaw are wonderful — genuinely exciting tools that we are only beginning to understand. Unlike a chatbot, OpenClaw can monitor Slack channels, read and draft Gmail messages, call external APIs, execute structured workflows, and trigger automated actions. It is not answering questions. It is acting on your behalf, in your name, with your access. That distinction matters enormously — and most people miss it entirely until something goes wrong. Five Ways a Local Install Can Ruin Your... OpenClaw Isn't a Chatbot Anymore. It's Infrastructure.
Git is usually about managing streams of history. You merge rivers of code together. But sometimes, you don’t want the whole river. You just want one specific fish. That’s git cherry-pick.

Introduction

The command git cherry-pick <commit-hash> takes the changes from a single specific commit and applies them to your current branch as a new commit. It’s highly useful in specific scenarios:

- Hotfixes: You fixed a bug in dev, and you need that same fix in prod right now, but you can’t merge dev yet because it has unfinished features.
- Mistakes: You committed to the wrong branch.

How to use it

Let’s say you are on main and you want a commit abc1234 that exists on feature-branch. Find the hash:

```bash
git log feature-branch
# verify that abc1234 is the one you want
```

Execute the pick:

```bash
git checkout main
git cherry-pick abc1234
```

Git will take the diff from that commit and try to apply it to main. If successful, it automatically creates a commit.

Handling Conflicts

Just like a merge, cherry-picking can result in conflicts if the code has diverged too much. If this happens, Git will pause. Open the conflicted files and resolve the <<<< markers.

```bash
git add <file>
git...
```

Git Cherry-Pick: The Surgeon's Knife
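The excerpt cuts off mid-command, so without guessing the missing text: Git's standard follow-ups for a conflicted cherry-pick are the --continue and --abort flags.

```bash
# After staging the resolved files:
git cherry-pick --continue   # finish creating the picked commit

# Or give up and return the branch to its pre-pick state:
git cherry-pick --abort
```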
Introduction What a week this has been! Between February 12 and 19, 2026, three very different layers of the AI world moved at the same time: major model releases landed (Claude Sonnet 4.6 and Gemini 3.1 Pro), a staggering amount of capital was raised ($30B Series G), and a national research body published a funded strategy (UKRI’s £1.6 billion plan). I found the combination fascinating, so let me walk you through what happened, why it matters, and what I think it means for developers. 1. Anthropic Released Claude Sonnet 4.6 (Feb 17, 2026) Anthropic: Introducing Claude Sonnet 4.6 On February 17, Anthropic released Claude Sonnet 4.6, and it is not a minor update. The headline improvements are stronger coding support, better computer-use capabilities, and more reliable agent planning — all backed by a 1 million token context window. To put that in perspective, 1 million tokens is roughly 750,000 words, which means Sonnet 4.6 can reason across entire codebases or long document collections in a single pass without losing earlier context. Market Reaction & Independent Coverage Anthropic releases Sonnet 4.6 TechCrunch covered the release on the same day and made a point I agree with: this is not a quiet... Agentic AI at Scale: New models, $30B, and the UKRI Strategy
I am lazy. But usefully lazy. I believe that if you type the same long command more than three times a day, you should shorten it. Git commands can be verbose, but Git has a built-in aliasing system to fix that.

Introduction

Aliases allow you to map short commands to longer Git functions. You define them in your .gitconfig file (usually in your home directory) or by using the command line. Here git config --global alias.co checkout allows you to type git co instead of git checkout. Here are my top essential aliases.

The Essentials

Run these in your terminal to set them up:

```bash
# Basic navigation
git config --global alias.co checkout
git config --global alias.br branch
git config --global alias.ci commit
git config --global alias.st status

# Unstaging files (undoing 'git add')
git config --global alias.unstage 'reset HEAD --'
```

Now, checking status is just git st. It saves milliseconds, but they add up to a feeling of fluidity.

The Visualizer: A Better Log

The default git log is a bit dry. I use an alias called lg that makes it look like a rainbow graph:

```bash
git config --global alias.lg "log --color --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit"
```

...

Type Less, Do More: My Top 10 Git Aliases
Introduction This week felt like watching two forces pull against each other. On February 7th, both OpenAI and Anthropic released advanced models simultaneously. ByteDance launched Seedance 2.0 with video quality that made Elon Musk say it is happening “too fast.” Modal Labs is raising at a $2.5B valuation. Perplexity is running three frontier models in parallel to cross-validate answers. The capability momentum is real. But the friction from the real world is getting louder. Data centre projects are stalling in permit review. Communities are organising opposition. Microsoft is betting on speculative superconductor technology because conventional power delivery cannot scale. OpenAI changed its mission alignment team. ByteDance’s Seedance 2.0 launched with certain real-person content generation features limited or paused due to privacy and misuse concerns. What stood out to me most is that the conversation is shifting. It is not only about model capability anymore. It is increasingly about who gets power, who bears costs, and who keeps control. These are harder questions, and they do not have clean technical solutions. 1. Inference Infrastructure Funding Momentum Continues AI inference startup Modal Labs in talks to raise at $2.5B valuation, sources say TechCrunch reports that Modal Labs is in talks for a... AI Improves Itself While We Argue About Permits
Humans are terrible at repetitive tasks. We forget to run linters. We forget to check for trailing whitespace. We forget to run the test suite before pushing. Computers, on the other hand, love repetition. This is where Git Hooks come in.

Introduction

Git Hooks are scripts that Git executes before or after events such as: commit, push, and receive. They allow you to “hook” into the Git workflow and stop bad things from happening. The most useful one for daily development is the pre-commit hook.

What is a pre-commit hook?

This script runs every time you type git commit. If the script exits with an error (non-zero status), the commit is aborted. It’s the perfect place to put comprehensive checks: “Does the code compile?” “Are there any lingering print() statements?” “Does the formatting match the team style?”

Setting it up (The Easy Way)

You can write bash scripts in .git/hooks/, but that’s hard to share with a team. I recommend using the pre-commit framework.

Install it:

```bash
pip install pre-commit
```

Add a config file .pre-commit-config.yaml to your repo:

```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
```

Install the git hook scripts:

```bash
pre-commit install
```

...

Git Hooks: The Robot Butler for Your Code
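A small usage note: after installing the hooks, it is worth running them once over the whole repository, not just future commits. This uses the framework's documented run command.

```bash
# Check every file in the repo against the configured hooks once.
pre-commit run --all-files
```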
This is Part 2 of the Codex CLI series. Today, we’ll learn how to control Codex’s capabilities and make your first safe edits. This post is about speed with control, not automation for its own sake. Introduction In the last post, we installed Codex CLI and took our first steps with read-only exploration. Today, we go deeper: learning how to control what Codex can do, understanding the essential commands you will use daily, and making your first actual code edits—safely. What I learned from using Codex is that the key to productive work is understanding the control mechanisms. Unlike a chatbot that only gives suggestions, Codex can actually change your files and run commands. That power is valuable, but it requires proper guardrails. Let me show you how to stay in control while getting real work done. Understanding Security Controls: Permissions and Approvals This is the most important concept to understand before you let Codex make any changes. Codex uses a security system built on two interconnected ideas: permissions and approvals. At first, these might seem like the same thing, but understanding the distinction helps you think clearly about how Codex operates and how to stay in control. Permissions vs... Codex CLI Part 2 — Security Controls & Safe Editing
Introduction This week I observed something curious. AI is advancing faster than ever, yet the physical world continues to set the pace. It reminded me of watching two runners on different tracks — one sprinting effortlessly, the other climbing uphill with a heavy backpack. Many of this week’s signals point to the same tension: software speed versus physical limits. Here are the stories that made that contrast feel especially sharp. 1. AI-Assisted Cloud Break-Ins Are Now Measured in Minutes Intruder uses AI assistant in AWS cloud break-in A Sysdig security report described an attacker achieving administrative privileges in under ten minutes, moving from stolen credentials to AWS Lambda execution. LLM-generated code was used to accelerate the process, and investigators noted artefacts consistent with machine-assisted scripting rather than purely human-written tooling. Why This Matters AI is collapsing the time between access and impact. Security assumptions built around slow, manual attackers no longer hold. Detection alone is insufficient when adversaries can chain complex steps together in minutes with machine assistance. Response speed now matters as much as prevention. 2. Power Queues in Europe Are Now Multi-Year Bottlenecks Amazon says European data center power can take seven years to connect AWS executives warned... The AI Paradox: Lightning Fast and Gridlocked
While polishing my publishing script, I managed to do the one thing I explicitly advise against: I committed a token to Git. It was in a comment. It was in a private repository. It was still, regrettably, committed. What followed was not drama, but administration — rewriting history, checking remote branches, and searching old commits for fragments of the token to ensure it had truly vanished. It turns out that removing a secret from Git is rather more involved than removing a semicolon. On balance, I would not recommend the experience. It was, however, a useful reminder that secure workflows are not theoretical best practices. They are habits — and habits are most valuable when we are tired, moving quickly, or feeling slightly too confident. Introduction There is something very addictive about modern code assistants, and I find myself using them almost daily. The efficiency gains and faster prototyping are obvious on the surface. What continues to amaze me is how well AI assistance understands what we want to implement, often from very small or loosely defined specifications. You type a half-formed thought — “parse this CSV”, “add authentication”, “why does this crash?” — and suddenly there is structure, clarity,... Using AI Code Assistants Safely
This week’s AI news was quietly consequential, and I found myself thinking about what these developments mean for the field I care so much about. Instead of flashy new demonstrations or larger models, the important signals appeared in earnings calls, export rules, shipping approvals, and security reports. Microsoft tied AI directly to long-term capital spending. Anthropic argued for regulation centred on chip access. China approved limited H200 imports. And at the other end of the technology stack, desktop compute and open models continued to advance — alongside significant security friction that caught my attention. None of these stories is flashy on its own. But together, they paint a picture of AI settling into infrastructure: budgeted, gated, and increasingly operational. Let me share what stood out to me this week. 1. Microsoft Earnings Put AI Capex Front and Centre Microsoft investors sweat cloud giant's OpenAI exposure Microsoft reported $81.3 billion in revenue for Q2 FY2026, a 17% year-over-year increase and higher than analysts’ expectations — with Microsoft Cloud revenue alone surpassing $50 billion. These results are directly linked to continued demand for artificial intelligence services and to investment in cloud infrastructure. (See Microsoft beats Wall Street expectations with $81.3B revenue.) Despite... Chips, Capex, and Code Risk
We’ve all seen it. You clone a repository, and there it is: a .DS_Store file, a __pycache__ folder, or worse—a file containing local API keys. Committing these files is messy, unprofessional, and sometimes dangerous. The solution is simple but often misunderstood: the .gitignore file.

Introduction

The .gitignore file tells Git which files it should intentionally ignore. It’s not just about keeping your repo tidy; it’s about security and collaboration. You don’t want to force your local editor settings or operating system junk on your teammates.

The Basics

Create a file named .gitignore in your project root. Add patterns for files you want to exclude:

```
# Dependencies
node_modules/
.venv/

# Compiled code
*.pyc
__pycache__/
dist/

# System files
.DS_Store
Thumbs.db
```

Now, git status won’t even show these files.

Global Ignore: The files you ALWAYS ignore

Some files, like macOS .DS_Store or editor configurations (.vscode/), haunt every project you touch. Instead of adding them to every single project’s ignore file, you can set up a Global .gitignore.

1. Create a file at ~/.gitignore_global.
2. Add your OS-specific junk there.
3. Tell Git to use it:

```bash
git config --global core.excludesfile ~/.gitignore_global
```

Now, you never have to worry about accidentally committing a .DS_Store file again.

Fixing Mistakes:...

Stop Committing Garbage: A Masterclass in .gitignore
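One caveat worth a sketch, since the excerpt's "Fixing Mistakes" section is truncated: .gitignore only affects untracked files, so a file that was committed before being ignored must also be removed from the index. The file name here is just an example.

```bash
# Stop tracking a file that slipped in, but keep it on disk.
git rm --cached .DS_Store
git commit -m "Stop tracking .DS_Store"
```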
Introduction

Edge AI is a way for a business to run “smart” software directly where work happens—on a device, a machine, or a local computer—rather than sending everything to a distant cloud first. In plain terms, it helps you react faster, keep more data on-site, and keep operations moving even when connectivity is patchy.

In one minute

- Start with one operational bottleneck (queues, spoilage, missed faults, slow inspections).
- Pick a local decision that benefits from speed (approve/reject, flag/ignore, stop/continue).
- Pilot on a single site with a measurable target (less downtime, fewer stockouts, faster service).
- Keep humans in charge: Edge AI should recommend or flag before it automates.

The problem → what changes → what you get

Problem: Many businesses lose time and money because decisions depend on delayed data, slow manual checks, or unreliable connectivity between locations.

What changes: Lightweight intelligence is placed closer to the action—near sensors, cameras, tills, or equipment—so routine judgments can happen locally.

What you get: Faster responses, fewer interruptions, and often less sensitive data leaving your premises.

Where Edge AI tends to fit best

| Operations area | What Edge AI can do locally | How you measure success |
| --- | --- | --- |
| Manufacturing / maintenance | Spot abnormal vibration or temperature patterns... | |

Edge AI in Everyday Operations
TL;DR: This week felt less like model drama and more about the systems shaping what AI can actually do. Courts, export rules, and commerce protocols are becoming as important as the models themselves. Here are five signals that stood out. Introduction You know what? Sometimes the most interesting AI developments have nothing to do with new models or benchmark scores. This week reminded me of that. Whilst everyone obsesses over the latest transformer architecture or chatbot capabilities, the real story is happening in courtrooms, congressional committees, and standards bodies. I find this fascinating because it mirrors something I’ve observed throughout my career in computer science: the technical capabilities matter far less than the systems and rules that govern how we can use them. It’s like learning to code—you can master Python syntax, but if you don’t understand the broader ecosystem, licensing, and community standards, you’re missing the bigger picture. So let’s dive into this week’s signals. Fair warning: there’s a minor timing issue I need to address upfront about one of these stories, but I’ll explain that when we get there. Five Signals That Actually Matter Weekly AI Signals: Key Takeaways Signal Industry Impact Builder Action PwC CEO Survey: AI... This Week in AI: Regulation Heat, Cloud Bets, and Agentic Shopping
Have you ever been in the middle of a complex feature, writing messy code, when suddenly a colleague asks you to fix a critical bug on production? You can’t commit your broken code. You can’t lose your work. So, what do you do? You stash it.

Introduction

git stash is one of those commands that feels like magic once you start using it. It takes your uncommitted changes (both staged and unstaged), saves them away for later use, and reverts your working directory to the last clean commit. Think of it as a “Cut and Paste” for your entire project. You cut your current work, go do something else, and paste it back when you’re ready.

Git Stash Command Reference

| Command | What It Does |
| --- | --- |
| git stash | Stash all tracked changes (staged + unstaged). |
| git stash -u | Stash tracked and untracked (new) files. |
| git stash pop | Apply latest stash and remove it from the list. |
| git stash apply | Apply latest stash but keep it in the list. |
| git stash list | View all stashed entries with timestamps. |
| git stash pop stash@{1} | Apply a specific entry by index. |
| git stash drop stash@{0} | Delete a specific stash entry. |
| git stash clear | Delete all stash... |

Git Stash: The CTRL-Z for Your Working Directory
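Putting the table together, the classic interrupt workflow looks like this; the branch names are illustrative.

```bash
git stash -u              # park tracked + untracked work in progress
git switch main           # jump to the branch with the production bug
# ...fix, test, and commit the hotfix...
git switch feature-branch
git stash pop             # restore the parked changes and drop the entry
```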
This is Part 1 of the Codex CLI series. We’ll cover installation, authentication, and your first session. Future posts will explore workflows, best practices, and advanced features. Introduction Today I want to introduce you to Codex CLI—a tool that has genuinely changed how I work with both code and writing. If you have ever wished you could have an AI assistant that actually understands your project, can read your files, and help you make changes right from your terminal, Codex CLI is exactly that. It is not just another chatbot; it is designed to work inside your repository, understanding your code structure and helping you improve it safely. In this series, I will walk you through everything you need to know to use Codex CLI productively. Today’s post covers the fundamentals: what it is, how to install it, and how to take your first steps safely. What Is Codex CLI? Before we jump into installation, let me explain what Codex CLI actually is and why it is different from using ChatGPT in a browser. The Core Concept Codex CLI is a coding agent that runs locally in your terminal. When you start it in a project directory, it can: Read... Getting Started with Codex CLI
This week felt grounded. Instead of big model claims, the signals were about governance, power, and the plumbing that lets AI show up in the real world. The quiet shift is toward infrastructure you can point to and systems you can live with. Here are the five signals that made that shift feel concrete.

Weekly AI Signals: Key Takeaways

| Signal | Industry Impact | Builder Action |
| --- | --- | --- |
| State AI Safety Laws | California and New York are advancing binding AI safety frameworks, creating de facto national standards. | Integrate audit trails and documented model governance into your products now; compliance will become a baseline expectation. |
| Open Model Adoption Gap | Despite proven technical benefits, open models still face significant enterprise adoption friction. | Invest in LLM deployment tooling (inference servers, fine-tuning pipelines) to lower barriers and avoid vendor lock-in. |
| Texas as AI Infrastructure Hub | Power availability and land access are making Texas a strategic data-centre hotspot, reshaping regional pricing. | Factor geographic diversity and energy sourcing into infrastructure decisions; single-region deployments carry growing risk. |
| Meta GenAI Data Flow Design | Competitive edge is shifting from model quality alone to data pipeline efficiency and auditability. | Model data flow orchestration as a first-class engineering concern, not a secondary operational task. |
| Micron... | | |

The Week AI Got Practical: Laws, Power, and Open Models
Introduction

This is a very reasonable question — and you’re not alone in feeling puzzled. Right now we have many powerful tools, but each one shines in a different layer of work. The confusion usually comes from trying to make one tool do everything. Let’s untangle this calmly and then build a clear, future-proof mental model for you. I’ll do this in four parts:

1. What Codex CLI is really good at (and what it isn’t)
2. How it compares to MCP, Antigravity, and Cursor
3. Which tool fits your blogging + Python workflow best
4. How to design a multi-agent workflow without chaos

I’ll keep this practical, not hype-driven.

1. Will Codex CLI fit you well?

Short answer

Yes — but as a worker, not as the brain. Codex CLI is best thought of as: a careful, local execution agent that lives next to your files.

It is not:

- a long-term planner
- a creative ideation engine
- a memory system

It is:

- excellent at editing Markdown safely
- great at refactoring Python
- good at running tests, linters, and checks
- strong at repeatable, terminal-based tasks

For blogging, Codex CLI fits the “make this concrete” phase.

2. How Codex CLI compares to MCP, Antigravity, and Cursor

Here’s...

My Multi-Agent Workflow
This week in AI felt a little different to me. Fewer headlines about dazzling benchmarks or clever prompts — and more about where AI actually lives, who powers it, and how it starts to touch everyday systems. What I found interesting is that all four stories below point in the same direction — not toward new capabilities, but toward where AI is settling in the real world. Chips, electricity, assistants we already talk to, and even shopping flows. Less magic. More plumbing. And that’s often where the most important shifts begin.

Weekly AI Signals: Key Takeaways

| Signal | Industry Impact | Builder Action |
| --- | --- | --- |
| TSMC $52–56B Capex | AI accelerator demand is structural, not cyclical; TSMC forecasts mid-to-high-50% CAGR in that segment through 2029. | Stop treating AI compute as elastic cloud burst capacity; plan for hardware availability windows and supply chain lead times. |
| Apple + Google Gemini | Apple chose to integrate Gemini models as the foundation for Apple Intelligence and Siri rather than build in-house. | Large-scale model partnerships will define platform AI capabilities; build integration layers that can swap model backends. |
| Microsoft Community Infrastructure | Data-centre electricity demand will more than triple by 2035; infrastructure is now an urban planning and grid concern. | Factor data... |

Signals from the AI Supply Chain
Introduction

Hello, Dear Reader! This week in AI felt noticeably different from recent months—quieter, but in a way that felt more meaningful rather than less important. Instead of louder models or bigger capability announcements, the conversations shifted to constraints: where AI actually runs, who controls it, what happens when deployment races ahead of safety, and how AI performs when mistakes are genuinely unacceptable. Less spectacle, more reality. What I found interesting is how these stories connect. They are all, in different ways, about limits—technological, geopolitical, ethical, and physical. After months of “what can we build?” we are seeing more questions about “under what conditions should we build it?” Here are six developments from this week that I think reveal where AI is heading next.

Weekly AI Signals: Key Takeaways

| Signal | Industry Impact | Builder Action |
| --- | --- | --- |
| France + Mistral Sovereignty | Governments treat AI as critical national infrastructure requiring jurisdictional control, not a cloud commodity. | For public-sector clients, design for data residency and local deployment from day one; technical capability alone is not sufficient. |
| Nvidia: Training → Inference | Jensen Huang confirms the industry bottleneck has shifted from training to deployment, energy efficiency, and physical AI. | Invest in inference optimisation and edge deployment; the... |

AI's Week of Limits: Safety, Control, and Real-World Physics
This post is part of my Weekly AI Signals series—a curated look at the moments that matter once the noise fades.

Introduction

As we step from 2025 into 2026, I want to pause on the final week of the year — not because it was loud, but because it was revealing. Three developments stood out. China released draft rules aimed at emotionally engaging AI systems. SoftBank moved to strengthen its position in digital infrastructure. And Meta acquired a company focused on autonomous AI agents. Individually, these stories are fascinating. Taken together, they suggest something deeper: AI is moving into a phase shaped by governance, physical scale, and questions of agency. Let’s walk through what happened — and what it might mean for where we’re headed.

Weekly AI Signals: Key Takeaways

| Signal | Date | Industry Impact | Builder Action |
| --- | --- | --- | --- |
| China Emotional AI Rules | 27 Dec 2025 | Regulators treat relational AI UX as a distinct safety risk, requiring transparency, break prompts, and human review for self-harm signals. | Design AI companions with visible disclosure labels and usage-time management as a first-class feature, not an afterthought. |
| SoftBank acquires DigitalBridge | 29 Dec 2025 | $4B deal confirms that data centres, fibre, and edge infrastructure are now strategic AI... | |

As 2025 Closes: AI's Week of Regulation, Infrastructure, and Autonomy
This post is part of my Weekly AI Signals series — a curated look at the moments that matter once the noise fades. Introduction Five Signals That Mattered Hello, dear reader! Welcome to the last week of December 2025. I hope you are enjoying the holidays and have had a moment to look back on what has been an extraordinary year for AI. This is not a complete account of everything that happened in AI this week. Instead, it is a small, curated set of signals that felt meaningful once the noise settled — moments where limits became visible, incentives shifted, or assumptions quietly changed. If 2025 was the year we kept asking “what can we build?”, this past week felt like the moment the industry started asking a more useful question: “what actually works?” The five signals below come from very different places — creative industries, hardware, developer practice, security, and physical systems — but together they point to the same thing. AI is moving out of its novelty phase and into an engineering one. Here is what stood out, and why it may matter longer than this week’s headlines. Weekly AI Signals: Key Takeaways Signal Industry Impact Builder... Hardware Handshakes, Prompt Injection Reality, and AI Beyond the Screen
Merry Christmas & a Happy New Year 🎄 I wish you joyful moments with your loved ones. Have a prosperous and happy 2026! I genuinely appreciate your visit to my blog, and I’m thrilled when I hear it’s been helpful to you. Many of you are skilled coders and experts in your fields, and I wish you great success—not only in 2026 but also in the many happy years ahead. Doing something well energises our lives in a way no AI can replicate. I hope you feel inspired about your work this year and enjoy exploring new techniques and AI tools. AI is a powerful tool that can enrich our lives and make us more productive, ultimately saving what matters most: our time. I’ve found that AI can save tremendous time when you know exactly which tools to use and how to use them effectively. That’s why I’ve shared my favourite AI tools on the blog, along with practical guides for using them. You can check it out here. While generative AI can answer technical questions and write code in various styles, it’s important not to lose the human touch. I cherish this space we share—this blog is my small... Merry Christmas and a Very Happy New Year!
Introduction This week, several AI developments caught my attention. Not because they were particularly loud or novel, but because they touched on questions that tend to surface later, when systems are already in use. Better safety defaults are one of those questions. If AI systems are going to be used by children and teenagers, safety cannot remain an afterthought or a policy document. It needs to be part of how applications are designed from the start — even if that means slower progress or fewer features. Alongside this, we saw continued movement toward faster, agent-ready models and interface tooling that treats interaction as something adaptive rather than static. None of these developments are dramatic on their own. But together, they hint at where current AI systems are under pressure to change as they move closer to everyday use. Weekly AI Signals: Key Takeaways Signal Industry Impact Builder Action Meta Mango (Multimodal) Unified image/video/text reasoning models replace fragmented pipelines, simplifying architecture. Design applications with multimodal I/O as a first-class concern rather than an optional add-on. OpenAI & Anthropic Safety Child and teen safety transitions from a policy document to a structural product feature. Treat safety constraints (age gates, content guards) as... AI Interfaces, Safety, and Multimodal Systems
Antigravity 1.11.9 vs Cursor 2.1.42 (Universal) Two IDEs. Two philosophies of AI-assisted coding. Google’s Antigravity and Cursor are both AI-powered IDEs, but the way they help a developer think and work is very different. In this piece, I compare them head-to-head and link to official documentation or changelogs so you can explore the exact features I describe. Google Antigravity 1.11.9 Outcome-oriented, agentic development IDE Official site: https://antigravity.google/ Developer guide: Build with Google Antigravity (developers.googleblog.com) Getting started tutorial: https://codelabs.developers.google.com/getting-started-google-antigravity Antigravity is Google’s agent-first development platform. That means the tool is designed to think in terms of tasks and outcomes, not just code completion. You define a goal, and Antigravity manages the steps — planning, coding, testing, and verification — using autonomous agents. How it feels Imagine a development environment that says: “Tell me your goal. I’ll handle the workflow.” Agents can run across your editor, terminal, and browser — not just suggest text in a sidebar. Notable features Agent Manager & Mission Control — A dashboard to run and monitor multiple AI agents handling parts of a project in parallel. Artifacts — Agents produce verifiable outputs like task lists, implementation plans, screenshots, code diffs, and browser recordings so you can see... Antigravity 1.11.9 vs Cursor 2.1.42 (Universal): A Practical Comparison
Introduction This week, AI edged a little further into the physical and infrastructural world. DeepMind is setting up its first automated materials science lab in the UK. OpenAI has completed early prototypes of its new ambient hardware device — something deliberately quieter and more context-aware than today’s screens. And in the US, 42 attorneys general have made it clear: unsafe chatbot behaviour is no longer something companies can simply promise to improve “later”. Alongside these stories, a major $20 billion AI infrastructure partnership was announced, and new findings showed where AI tools already rival human specialists. Here is what mattered this week — and why it shapes the systems we build. Weekly AI Signals: Key Takeaways Signal Industry Impact Builder Action DeepMind Robotic Labs AI moves from digital tokens to physical manipulation, accelerating materials science discovery. Abstract planning architectures (LLM to physical action) will soon be standard in manufacturing; learn ROS alongside Python. OpenAI Ambient Hardware Hardware pivots from glowing rectangles to screenless, context-aware auditory/environmental sensors. Prepare for UI-less software engineering where voice and context state replace traditional DOM rendering. State AGs Demand Safeguards 42 U.S. Attorneys General signal that “move fast and break things” will trigger severe regulatory action.... Labs, Law and New Hardware Horizons
I went to the dentist today. You know, the adult version of a school exam, except the chair is oversized and the lighting is uncomfortably good. The hygienist always asks “Any concerns?” and suddenly my brain goes blank. Which teeth do I even have? Where are they located? What is a molar? They tilted the chair back, switched on that tiny headlamp of truth, and my soul decided to take a brief walk around the waiting room. Many years ago I lost a tooth to a small stone hiding in my food. Today I am finally getting a new one — not real, but perfect. 🦷 Please wish me luck. :) Also, considering all the dental suffering in human history, where are the AI dentists? Surely robots could make this process less terrifying. Or at least tell better jokes while drilling. What will happen soon (2–7 years) After today’s adventure, I got curious: what is actually happening in dental tech while we are all lying back practicing controlled breathing? The future is arriving quietly. No robot dentists hovering over you like in sci-fi films. Just small upgrades that make appointments less mysterious and faster. The AI Dentistry Evolution Matrix Clinical... A Short Tale of Bravery (at the Dentist)
Introduction This week, the theme was convergence—but with a side of caution. We saw the convergence of policy and technology as the U.S. Health Department moved to make AI part of its core infrastructure. We saw the convergence of senses, with breakthroughs in how AI sees (3D from 2D) and hears (universal sound understanding). But we also saw the convergence of AI capabilities and criminal intent. While DeepMind predicts AGI by 2030 and researchers give machines better senses, a story out of Kansas served as a chilling reminder of why we need to stay vigilant right now. Weekly AI Signals: Key Takeaways Signal Industry Impact Builder Action U.S. HHS Strategy Government transitions from experimental pilots to operationalising AI as core infrastructure. Prepare software for rigorous federal compliance and data privacy standards if selling into the public sector. AGI by 2030 DeepMind shifts focus from LLMs to “world models” to achieve AGI within five years. Begin architecting systems that can interface with predictive, physics-aware models, not just text generators. The “Trust Moat” Western AI companies leverage commercial trust as a geopolitical advantage over foreign competitors. Double down on transparent data governance to secure enterprise contracts. Meta SAM3D High-fidelity 3D assets can... AGI Timelines, 3D Vision, and the Reality of AI Scams
Hello, my Dear Reader, We are celebrating this blog’s birthday again! Elena’s AI Blog is now four years old—still learning, still growing, and still navigating this fascinating AI landscape together with you. It has been an incredible year since I wrote Three years of Elena’s AI Blog. The AI world has moved at breathtaking speed, and I have been here, learning alongside you, documenting the journey, and sharing my thoughts on everything from multimodal AI to coding assistants. What is Elena’s AI Blog? Like everyone today, I live in an era of rapid AI evolution. It is a challenging era to understand and live in, even for people with a technical background. However, I love making things easy to understand, and learning new technologies is my passion. I created this blog to log what I learn and share my ideas and findings. Now four years old, this blog continues to connect technology with everyday understanding, reflecting my passion for coding and commitment to making complex concepts accessible. The Blog Since December 2024 Since my last anniversary post, the year has been particularly rich in content about AI coding assistants, multimodal systems, and the practical applications of AI in everyday development work. 1.... A Journey Through AI and Code
Introduction Last week, I built an app without writing a single line of code. It still feels slightly illegal to admit that out loud. The IDE stitched most of it together. The agents filled in the logic. I spent my time describing what I needed — like guiding a very enthusiastic intern who occasionally rewrites your entire project because it “felt cleaner.” And that’s when it hit me: developers aren’t disappearing — but what we do each day has already changed. I’m not laying bricks anymore. I’m the architect who guides the builders. Less typing, more thinking. Less wrangling syntax, more designing boundaries. Here’s what this new skill stack feels like in practice, with the real mistakes and odd surprises included. The New Hard Skills: Orchestration & Specification Intent Specification (Vibe Coding) Last Tuesday, I said, “Make the login more secure.” The agent returned something that looked like a spaceship airlock. Beautiful, impenetrable, and completely unusable. Agents are incredibly literal. So now everything starts with mini-specs. Even buttons. I said, “Add a loading state to the submit button.” Returned: three new files, an animated SVG, and a dramatic full-page dimmer like the app was about to reveal plot-changing information. What... The New Skill Stack, from Writing Code to Managing Intelligence
Introduction This week, AI didn’t make a fuss. Instead, it quietly slipped into places where it can genuinely help: in our editors, in our browsers, and even at the roadside charger. It’s the sort of progress that whispers, not shouts. Weekly AI Signals: Key Takeaways Signal Industry Impact Builder Action Claude Opus 4.5 Core reasoning capabilities upgraded with native Excel and Chrome integration, focusing on zero-hallucination execution. Use Opus 4.5 for complex multi-file codebase refactors where context tracking is more critical than pure speed. ChatGPT Shopping AI transitions from passive answering to active, agentic product research and curation. E-commerce developers must prepare structured semantic metadata, as users will increasingly search via LLMs rather than standard query bars. Google EV Forecasting Google proved that a simple linear regression model can outperform heuristics without the overhead of massive neural networks. Stop defaulting to LLMs for simple predictive tasks; lightweight deterministic models are cheaper, faster, and often better suited for edge deployment. DeepSeekMath-V2 An open-source model achieved gold-medal-level IMO scores via a rigorous generator-verifier loop. Implement self-verification loops in your own agentic workflows to dramatically increase output reliability on complex logical tasks. The Tools Are Getting Smarter, Softer, and Surprisingly Practical 1. Claude... Claude Opus, ChatGPT Shopping, EV Forecasting and DeepSeekMath-V2
Introduction This week wasn’t just about new models. It was about growing up. Google and OpenAI delivered the expected fireworks: Gemini 3 refined the “Mixture-of-Experts” architecture for massive scale, and Project Antigravity killed the text editor in favour of agent orchestration. But the real signal didn’t come from a server farm. It came from the “adults in the room.” The WHO issued a strict mandate that “Humanity must hold the pen,” citing dangerous error rates in AI diagnosis. Ernst & Young demanded we start measuring the energy cost of intelligence (~1Wh per query). And on 60 Minutes, Anthropic’s CEO publicly questioned the unchecked power of unelected tech leaders—including himself. We are shifting from “look at this cool demo” to “how do we actually live with this?” The era of moving fast and breaking things is over. Welcome to the era of integration. If you're still waiting for AI winter, I have bad news: we're in AI summer, and nobody brought sunscreen. 1. Google Gemini 3 + “Antigravity”: The Death of the Text Editor? Google launched Gemini 3 on Tuesday. The model itself is impressive, but the real story is the environment it lives in. Project Antigravity (available now) doesn’t just... Ethics, Gravity, and the Future We're Actually Building
Introduction Some weeks in AI are loud and dramatic, while others offer a more subtle experience—a gentle reminder to notice interesting developments. This week was one of those softer moments, with seven noteworthy events that prompted me to reflect: We are truly building something new. This week showcases the connections between ethics, coding, hardware, and humanity. Weekly AI Signals: Key Takeaways Signal Industry Impact Builder Action Vatican AI Medical Ethics Major institutions are establishing formal ethical frameworks for AI deployments in healthcare. Ensure your medical AI pipelines have human-in-the-loop dignity checks, not just statistical accuracy tests. 9% Developer Trust in AI Blind trust in AI generation remains incredibly low; rigorous code review is non-negotiable. Enforce CI/CD pipeline checks for all agent-generated code to catch hallucinations before production. MIT LLM Modularity Structuring code around concepts and semantic boundaries to accommodate LLM reasoning. Write code as discrete, legible concepts rather than entangled logic blocks so agents can parse it effectively. NVIDIA DGX Spark A 1-petaflop workstation democratises massive compute, bringing datacenter power to the desk. Shift exploratory model training and inference testing back to local hardware to save cloud costs. VUNO Profitability Clinical AI transitions from R&D hype to a commercially viable,... Ethics, Code, Chips, and a Petaflop on Your Desk
Introduction Every generation faces a moment when something new arrives—too big to ignore, too unfamiliar to immediately embrace. Centuries ago, that “new thing” was the printed book. Today, it may be artificial intelligence. In this blended reflection, I would like to explore two intriguing ideas: How the Church moved from resisting novelty to shaping AI ethics, and Whether AI itself could become a “new religion” for some. Let’s ponder it together. When Novelty Was Dangerous History gives us vivid examples of how disruptive new knowledge once felt. Historical Parallels: Technology vs. Institutions Historical Disruption Institutional Reaction The Core Fear Heliocentric Model (Galileo) Tried and found guilty of supporting ideas declared “formally heretical” (Galileo affair). Displacing humanity from the literal centre of the universe. The Printing Press The creation of the Index Librorum Prohibitorum (1559–1966) to ban “dangerous” texts. Democratised access to information undermining absolute authority. Astronomical Science (Copernicus) Forced to present his findings merely as hypothetical models rather than literal truth. Scientific observation contradicting established dogma. Imagine handing your code to a council of theologians and hearing: “This function feels suspicious.” But beneath the humour, there’s a real insight: Novelty disrupts. Novelty threatens existing structures. Novelty asks us to rethink... Could AI Become a New Religion?
Introduction You know what’s frustrating? Finding a brilliant AI model that summarises text beautifully, only to discover the license says “research purposes only” or worse — some vague terms that would make your lawyer cry. I spent way too much time digging through Hugging Face, reading license files, and testing models that claimed to summarize but just… didn’t. Most transformer models come with restrictive licenses that make you wonder if even looking at the model card might violate some terms. But here’s the good news: Apache 2.0-licensed summarization models exist. Real ones. Models you can actually use, modify, and ship in your apps without legal nightmares. I found them, tested them, and now I’m sharing them with you. Let’s dive in. Fun fact: I initially wanted to call this post "License-Free Summarizers" until my lawyer friend reminded me that "license-free" is a licensing nightmare in itself. Apache 2.0 it is! Key AI/NLP Concepts Before we jump into models and code, let’s quickly cover some terminology. Don’t worry — I’ll keep this brief. You can always come back to this section if you get confused later. NLP Technical Glossary Term / Architecture Definition Practical Implication Transformers The backbone of modern NLP;... Apache-Licensed Summarizers
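To make that concrete, here is a minimal sketch of running a permissively licensed summariser locally with the Hugging Face pipeline API. The model below (t5-small, Apache 2.0) is my illustrative stand-in, not necessarily one of the post's picks:

from transformers import pipeline

# t5-small is Apache 2.0 licensed; swap in whichever model the post recommends.
summarizer = pipeline("summarization", model="t5-small")

text = (
    "Hugging Face hosts thousands of transformer models, but their licences vary widely. "
    "Some allow research use only, while permissive licences such as Apache 2.0 "
    "let you use, modify, and ship the model inside your own applications."
)

result = summarizer(text, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])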
Hello, Dear Reader — how are you doing today? This week in AI, I wanted to focus on what actually matters for us developers. You know, the things that will make our lives easier (or at least more interesting) rather than just another hype cycle. So grab your favourite beverage, and let’s dive into five developments that might actually change how we work. Weekly AI Signals: Key Takeaways Signal Industry Impact Builder Action OpenAI’s $38B AWS Deal Signals a definitive shift to multi-cloud hyperscale, resulting in massive GPU availability by late 2026. Abstract your LLM API calls behind agnostic interfaces (like LiteLLM or LangChain) to swap providers easily as price wars heat up. Google Vertex AI Updates AI agent deployment moves from experimental sandboxes to production-grade, observable microservices. Implement formal agent telemetry tracking tokens, latency, and success criteria using Google’s Agent Engine. Copilot Org Governance IDE automation is no longer a wild west; enterprise policies can now dictate AI coding style at scale. Deploy a 10-line “house rules” document mapping your lint and testing standards directly into Copilot’s organisational settings. VS Code Unified Agents AI shifts from a sidebar widget to a first-class citizen deeply integrated into IDE planning and... AI Weekly — Agents Grow Up, Clouds Get Bigger
Introduction Cursor 2.0 launched on October 29, 2025, and I am still figuring out whether Cursor 2 is right for my projects. If you’re wondering whether this upgrade is worth your time (and learning curve), here’s a clean, honest look. TL;DR: Cursor 2.0 is a fundamental shift toward delegation. It features the blazing-fast Composer model. The workflow centers on autonomous agents. Security is handled by the Sandboxed Terminal. It feels less like “VS Code with AI” and more like “an AI development workspace where you guide agents.” Cursor 1.x vs 2.0 Architectural Diff This is the question most of us care about. Here’s the balanced and honest snapshot of what shifted. Cursor 1.x Cursor 2.0 Familiar VS Code-like layout New “Agent View” that centres around autonomous AI tasks AI as an assistant that edits your open file AI agents that work across many files at once Mostly powered by external models (GPT-4/5, Claude, etc.) Composer – Cursor’s own model, trained for coding[2] Manual approval/Allowlist (Sandboxing in late 1.7 Beta for “beta testers only”)[4] Sandboxed Terminal (GA/Default for safe execution)[6] Multi-file changes were possible but limited Strong codebase-wide reasoning + semantic search[1] Manual browser testing Built-in browser (GA) for agent testing... A few thoughts on Cursor 2.0
Dear reader, This week, AI quietly strengthened its foundations. At one end, NVIDIA’s new supercomputers are pushing science to exaflop speeds. At the other, IBM released small open models that fit right on our laptops. And somewhere in between, GitHub taught coding assistants to work as a team. Three stories, one theme: AI is becoming more balanced — powerful where it needs to be, and personal where it matters most. Weekly AI Signals: Key Takeaways Signal Industry Impact Builder Action NVIDIA Exaflop Hubs Massive, centralised supercomputing accelerates scientific breakthroughs and brute-force foundational model training. Accept that raw, hyperscale compute will remain centralised; focus local development on targeted inference, not base-training. IBM Granite 4.0 Nano Democratises local edge AI with highly capable Apache-licensed models (350M–1.5B parameters) running natively in browsers. Deploy Granite Nano for local summarisation, routing, and simple parsing tasks without paying API overhead. GitHub Agent HQ Transitions AI assistance from a single pair-programmer autocomplete to an autonomous, multi-agent, collaborative engineering team. Redesign your development workflow to assign distinct roles (planner, reviewer, coder) to specialized LLM personas within your IDE. AI Infrastructure NVIDIA and U.S. national labs are building AI supercomputing hubs for science, climate research, and training massive models.... AI Infrastructure, Small Models, and Multi-Agent Coding
What happened in AI this week? Have you had the feeling that days pass by, things change, but you only really notice when something clicks — maybe in your code, your work, or your thinking? This week felt like one of those moments. Three wins in AI didn’t shout for attention; they quietly shifted what could be possible. I’m sharing them because I think they touch all of us — whether you’re fine-tuning a model on your laptop, exploring how AI fits into your job, or just watching this strange digital story unfold. Weekly AI Signals: Key Takeaways Signal Industry Impact Builder Action Quantum Echoes Algorithm Achieves 13,000× speed-ups over supercomputers, signaling the rapid approach of practical quantum AI integration. Prepare for an incoming shift in how we handle molecular simulation and real-time global optimisation. 1.7M Parameter 3D Models Proves complex medical image processing (separating shape/appearance) is feasible on microscopic models. Prioritise building small, interpretable, edge-deployable models over defaulting to massive LLM APIs. Self-Optimising Telecom AI AI transitions from passive text generation to active, invisible infrastructure management (reducing downtime and costs). Architect your enterprise systems to simulate state changes continuously before applying them to production. 1. Quantum meets AI —... Quantum Thinking, Light Models, Living Networks
Introduction Lately, I have implemented many features in my pet project and realised that none of the branches I created had been merged back into master. And I wanted a clean update. I thought Git rebase was the perfect, and safest, solution, since I am working on this project alone. But then I stopped and asked myself: “Is it really safe? Should I even be doing this?” If you’ve ever wondered the same thing, this post is for you. I’ll explain what Git rebase actually does, when it’s brilliant, and when it can cause absolute chaos. No panic, please. We’ll figure this out together. TL;DR Use rebase for a clean, linear history (solo work). Use merge for shared branches (don’t rewrite history). Always test and use --force-with-lease when pushing rebased code. git reflog can save your day. Note: In many newer Git repositories, the default branch is called main instead of master. The same logic applies — swap the name accordingly. I use master though :) What is Git rebase? Think of Git rebase as rearranging your commits to make them appear as if they were built on top of the latest code, even though they... Should you use rebase?
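For the solo-project case the post describes, the happy path is only a few commands. A minimal sketch, assuming an unmerged branch called feature and a master default branch (swap in main if that is your default):

git switch feature            # the unmerged branch
git rebase master             # replay its commits on top of the latest master
# ...run your tests here before touching the remote...
git push --force-with-lease   # safe force-push: refuses if the remote moved
git reflog                    # your recovery map if anything goes wrong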
Introduction I recently needed to fine-tune a language model for a specific task, and I was dreading it. Full model fine-tuning means downloading gigabytes of weights, waiting hours for training, and hoping you don’t run out of memory. But then I discovered LoRA, and honestly, it felt like finding a shortcut I didn’t know existed. You don’t always need to retrain a whole large language model to make it good at your task. LoRA (Low-Rank Adaptation) lets you freeze the original model and learn a tiny set of extra weights—adapters. The result? Fast training, tiny checkpoints, and easy swapping between different skills. This post explains LoRA with simple mental models, then walks you through a complete PyTorch + 🤗 Transformers + PEFT setup using a practical example: turning formal customer emails into a friendly tone. We’ll create a tiny dataset, fine-tune flan-t5-small, and run inference—on an M-series Mac or a modest GPU. No fancy infrastructure required. What is LoRA? The idea (no heavy math) Modern transformers learn big weight matrices—think W with millions of numbers defining how the model processes information. LoRA says: don’t touch W at all. Instead, add a small correction that’s the product of two skinny matrices:... LoRA fine-tuning wins
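As a taste of where the post ends up, here is a condensed sketch of the PEFT setup. It assumes the same flan-t5-small base model; the rank and target modules are illustrative defaults, not necessarily the post's exact values:

from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

# Load the base model; its big weight matrices W stay frozen.
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")

config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,                        # rank of the low-rank correction B @ A
    lora_alpha=32,              # scaling applied to the correction
    lora_dropout=0.1,
    target_modules=["q", "v"],  # T5's query/value attention projections
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all weights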
AI’s Week — Honesty, Agents, and the Fight for Truth Some weeks, the news feels noisy. Other weeks, it hums quietly — as if something subtle but irreversible has shifted. This was one of those weeks. California told AI to be honest. Microsoft turned our computers into companions. And European publishers stood up for truth itself. None of these stories is flashy on its own, but together they sketch the outline of how we’ll live with AI — and how AI will live with us. Weekly AI Signals: Key Takeaways Signal Industry Impact Builder Action California AI Honesty Law Mandates transparency and mental-health safeguards for AI bots, establishing a legal precedent for transparency by design. Build explicit AI disclosures into chat interfaces and handle user distress signals programmatically. Windows Copilot Integration OS-level agentic integration crosses the app boundary, turning the operating system into an active collaborator. Prepare software for users expecting voice-activated, cross-application workflows natively supported by the OS. Publishers vs. AI Overviews Pushback against zero-click AI summaries disrupting publisher traffic forces a reevaluation of copyright and fair use. Ensure your LLM agents properly attribute and explicitly link out to original sources to maintain information integrity. 🗣️ 1️⃣ California wants... AI Honesty, Agents, and the Fight for Truth
Introduction This week brought three AI developments worth your attention. TL;DR Agents can now use UIs reliably enough for real work. Security gets a detect → patch → PR loop, not just linting. 6 GW of GPUs means cheaper, faster AI—if power & cooling keep up. First, agents learned to operate software interfaces visually—no API required. Second, security got an automated teammate that hunts vulnerabilities and proposes fixes. Third, OpenAI locked in massive compute capacity that will make advanced AI cheaper and more accessible. I’ll explain what happened, why it matters, and what you can do with it. No fluff. Just the useful bits. 1. Google launches Gemini 2.5 “Computer Use” Released: Oct 7, 2025 (preview) [1] Google released a Gemini 2.5 capability that actually uses computers the way you and I do. It sees the screen, clicks buttons, fills forms, scrolls pages, and completes multi-step tasks with safety rails. Google reports state-of-the-art results on browser/mobile UI control and is making it available via the Gemini API. [1] Is this truly new? Concept: not new—OpenAI showed a “computer-using agent”/Operator earlier in Jan 2025. [2, 3] What’s new now: Google’s public preview focused on browser control, with benchmarks and an API... Safety, Agents, and Compute
Introduction You know that feeling when you’re building something with AI and suddenly it’s 3am and your “quick weekend project” has OAuth, a payment system, and somehow… blockchain integration? Yeah. Let’s talk about that. Vibe Coding & the Sneaky Trap of Scope Creep Here’s what happens when I sit down with Cursor. I start typing something vague like “add login with Flask” and before I can even finish my coffee, it’s… done? Just like that. Then I think, well, maybe analytics would be cool. Oh, and a dashboard! And what about email invites? And Cursor just… keeps delivering. Every. Single. Time. This is what I’ve started calling vibe coding — you’re just riding this incredible wave of productivity, letting the AI carry you forward, and it feels amazing. Until you look up three weeks later and realise your simple note-taking app now has user authentication, real-time collaboration, AI-powered suggestions, and a mobile app roadmap that would make Silicon Valley blush. That’s scope creep, my dear readers. And it sneaks up on you. What Is Scope Creep? The textbook definition is boring but accurate: Scope creep occurs when a project’s goals expand without a deliberate decision to do so. Translation: you... Cursor Made Me Do It
Introduction Cloning a repository is exciting — new code, new adventure. But sometimes Git drops you straight onto main when you really wanted that shiny dev branch.

Remote (origin)          Local
---------------          -----
origin/main              main   ← default after clone
origin/dev      ---->    dev    ← your new branch

No worries. Here’s the quick rescue plan. The Critical Step: Fetching Origin First, tell Git to look for other branches on the remote: git fetch origin 👉 This is the magic unlock: it updates your local repository with all branches that exist on the remote (like dev, feature-x, etc.). Without it, your local machine doesn’t even know those remote branches exist. Git Concept Architectural Explanation Cloning Limitations Cloning downloads the repository history but only fully maps and checks out the default branch (usually main or master). The Role of fetch git fetch origin securely downloads remote branch pointers and commits without modifying your current working directory files. The Result Once fetched, your local Git index is aware of origin/dev, allowing you to switch to it seamlessly. Branch Navigation Cheat Sheet Now that Git is aware of the remote branches, you can freely navigate. Here are the core commands you need: Command Action Explanation git switch... I have cloned my git repository and landed on main. How to get your branch
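Put together, the whole rescue is two commands; a minimal sketch, assuming the remote branch is called dev as above:

git fetch origin   # learn about origin/dev (and every other remote branch)
git switch dev     # creates a local dev branch tracking origin/dev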
Introduction I have been watching AI news for some time now. Some weeks pass quietly with incremental improvements, nothing spectacular. And then you get a week like this one. California passes a law. Europe announces big plans. MIT shows us something that makes you stop and think. It is interesting. We see policy, infrastructure, and actual science happening all at once. 1. California bets on safety over hype Source: apnews.com California just passed what they call a landmark “AI safety & transparency law.” If you build models that consume significant compute—what they call “high-compute”—you will need to expose your safety practices publicly. Should something go wrong, you have 15 days to report it. They even included whistleblower protection. This changes things. We are moving from people talking about ethics in conferences to actual legal accountability. Read Associated Press coverage This means something practical for you. If you are building AI systems, you cannot think about safety as something you add later. It needs to be there from Day 1. No more afterthoughts. 2. Europe refuses to ride shotgun on self-driving Source: reuters.com The EU has made its position clear: it wants pilot cities for AI-first mobility, encompassing self-driving cars, innovative... AI Got Rules, Wheels & a Lab Coat
Introduction Some weeks in AI feel like new toys; this one feels like moving house. We’re talking national AI fortresses, coding champions, memory upgrades, and assistants that finally remember what you said last Tuesday. Additionally, the EPA decided to act quickly for once. (Yes, you read that right.) The Most Prominent AI Achievements This Week 1. OpenAI’s Stargate UK: Sovereign AI Gets Real Source: OpenAI Britain just got serious about keeping its AI at home. OpenAI, NVIDIA, and Nscale are building Stargate UK — think of it as Britain’s own AI fortress where sensitive models can train and run without crossing borders. Starting with 8,000 GPUs in 2026 and scaling to 31,000, this isn’t just about shiny hardware. It’s about making sure your healthcare data, financial models, and defence systems stay precisely where you want them — on British soil, under British rules. When AI gets powerful enough to handle your most sensitive work, geography suddenly matters again. Sovereign compute isn’t just fancy talk — it’s your data staying put while still getting world-class AI. Read OpenAI 2. Google’s Gemini 2.5 Crushes Programming Competition Like a Human Champion Source: Google DeepMind Remember those programming contests where the most talented students... AI’s Busy Week
Chart generated with ChatGPT (OpenAI), using SWE-bench Bash Only (Verified) data from Google DeepMind [14], Anthropic [15], and the official SWE-bench site [13]. On SWE-bench Bash Only (Verified), Claude Sonnet 4 outperforms Gemini 2.5 Pro in Python bug-fixing accuracy (≈ 64.9% vs ≈ 53.6%). But this doesn’t mean Claude is always “better.” Bash Only isolates the language model without external tools or complex scaffolds. Gemini still offers strengths in speed, huge context windows, and Google Cloud integration. Benchmarks are helpful yardsticks, not the whole story. Introduction Command-line AI tools are the new pocket knives of coding life. They live in your terminal, they answer your odd questions at midnight, and they’re becoming essential for developers who want fast help without leaving the shell. Two strong contenders here are Gemini CLI (Google) and Claude CLI (Anthropic). Both bring large language models into the command line, but with different personalities. Think of Gemini as the fast multitasker with Google DNA, while Claude plays the thoughtful partner with a safety-first streak. Let’s explore how to set them up, what they can do, how they treat your data, and how they look when we put them against the same benchmark. TL;DR Gemini CLI: fast,... Gemini CLI versus Claude CLI
Introduction This week in AI, the spotlight falls on breakthroughs that actually change how we live, work, and learn. ChatGPT is now a mainstream habit, Google may have found a cure for AI’s tall tales, coding gets a tireless new partner, textbooks learn to actually teach, and AR assistants finally discover social manners. Fasten your seatbelts — the robots are not taking over (yet), but they are getting suspiciously good at being useful. This Week’s Top 5 AI Achievements 1. ChatGPT Hits 700 Million Weekly Users Source: Analytics Vidhya Take a moment to let that sink in — 700 million people are chatting with ChatGPT every week. That’s nearly one in ten adults on Earth having regular conversations with an AI. What began as a handy email-drafting bot is now your digital Swiss Army knife: untangling quantum physics, debugging rogue Python scripts, analysing spreadsheets, and even knocking out half-decent poetry. It’s like having a very clever friend who never sleeps and doesn’t judge you for asking “how do I centre a div”… again. When almost 10% of the world’s adults lean on your tool weekly, you’re not just running software anymore — you’re shaping how humans think and work. This... AI this week
Introduction This week, I decided to vibe code with Cursor AI in Agent Mode — letting the machine take the wheel while I sip my coffee and occasionally raise an eyebrow. The experience is equal parts exciting, promising, and slightly chaotic: sometimes smooth like a friend who “gets it”, occasionally forgetful like that same friend after too much coffee. What is Cursor AI and Vibe Coding? Cursor AI is an AI-powered code editor that behaves more like a coding partner than a static IDE. It plugs large language models into your workflow so you can generate, refactor, and debug code conversationally — without hopping between apps. You can read my post about Cursor AI. “Vibe coding” is the name I give to this flow. You describe intent, negotiate with the AI, and let it draft, test, and revise code while you steer. In my own test, I asked Cursor to build a Dockerised Flask web app with PostgreSQL. In ~five minutes, I had a working prototype: registration, login, and a multi-user, scalable setup. Not flawless — some debugging required — but wonderfully fast for a first draft. Sure, spinning up a basic CRUD app — Create, Read, Update, Delete —... Vibe Coding with Cursor AI
Introduction Some weeks, AI news feels like a storm of buzzwords. This week, however, there’s a clearer thread: making things smaller, faster, and actually useful. From nimble models outrunning the giants, to Google teaching AI how to both sprint and think carefully, to new tools for science and medicine, the focus is on efficiency and real-world impact. And to keep things interesting, OpenAI is stepping into the jobs market with its sheriff’s badge. Top 5 AI Achievements This Week 1. Qwen-3-Next: Leaner, Faster, Smarter Than GPT-5 and Gemini 2.5 Pro Source: Analytics Vidhya A surprise arrival on Hugging Face: Qwen-3-Next, with “only” 80 billion parameters (a featherweight by today’s swollen standards), is outrunning giants like GPT-5 and Gemini 2.5 Pro. Imagine a wiry runner in trainers overtaking a field of athletes weighed down by their designer kit. Its secret? A 32,000-token context window and speeds over ten times faster than its predecessors. Size isn’t everything in AI — this trend towards lean efficiency means more people can actually use advanced models without needing a supercomputer or a lottery win. Read Analytics Vidhya 2. Speculative Cascades — A Hybrid Approach for Smarter, Faster LLM Inference Source: Google Research Google has a... AI weekly news
Introduction AI has been busy again — learning from experience rather than rote memory, nibbling away at entry-level roles, and finally making some sense of its own reasoning. Nano Banana kept spirits high with its lightning-fast image edits, while GPT-5 power users shared prompt hacks that turn bland replies into useful ones. In short: faster learning, sharper thinking, fewer interns, and one very cheeky fruit model. Top 5 AI Achievements This Week 1. DeepSeek R1 and GRPO: Advanced RL for LLMs Source: Analytics Vidhya Training AI has often felt like tutoring a child who memorises textbooks but never truly understands them. DeepSeek R1 changes this with GRPO (Group Relative Policy Optimisation) — a method that lets models actually learn from experience. Instead of fixed routines, the system adapts on the fly, improving through each new interaction. This is more than a minor upgrade. It’s a step towards models that can respond with context, nuance, and adaptability — closer to conversation than script-reading. When AI learns through experience instead of repetition, we move closer to systems that can genuinely think on their feet. Read Analytics Vidhya 2. AI Won’t Replace All Jobs… Just the Ones You’d Start With Source: Analytics Vidhya... AI weekly
How AI Helped Me Write a Weekly Menu Meal planning can feel like a puzzle: how do you balance nutrition, preferences, time, and joy at the table? This week I experimented with ChatGPT-5 to design a full menu for two people with different needs: Elena (55 kg) — aiming for fat loss and muscle support, ~1200 kcal on rest days, ~1350 kcal on workout days. Andreas (82 kg) — aiming for lean muscle growth, ~2000 kcal on rest days, ~2200–2350 kcal on workout days. The restrictions: No cow dairy, gluten, or legumes. Elena avoids most nuts (except Brazil & macadamia). Both like berries, goat milk, fish, and dark chocolate. The result was not just a plan — but a full system of menus, nutrient tables, batch cooking flows, and colourful PDFs that made the kitchen run like a well-oiled steamer. Example Prompts Here are some of the prompts I used and the outputs ChatGPT-5 created: 1. Ask for a basic weekly menu Could you please create a food menu with a maximum daily calorie intake of 1,200 and 2000 for two people who want to lose fat and gain muscle? The female's weight is 55kg, and the male's weight is... How to Create a Weekly Menu with ChatGPT-5
Top 5 AI Achievements This Week AI weeks usually bring shiny demos. This one brought fixes for real headaches: training that doesn’t bankrupt you, voices that actually sound human, and images you won’t be embarrassed to use. The thread tying them all together? Accessibility. Less cost, less friction, more capability. Let’s dive in. 1. Oxford’s Optimiser: 80% Cheaper, 7.5x Faster Source: MarkTechPost Training AI has long been the preserve of big tech chequebooks. Oxford’s new optimiser rewrites the rules. Models not only learn more cheaply, but also faster—7.5 times faster. It’s not about more GPUs; it’s about teaching models to study smart instead of cramming. Suddenly, smaller labs and start-ups get to play too. When the gate fee drops, the queue gets longer. Expect a flood of fresh experiments and new voices in AI. Read MarkTechPost 2. OpenAI’s Speech-to-Speech Finds Its Voice Source: MarkTechPost Robotic call-centre voices, your days are numbered. OpenAI has rolled out its speech-to-speech model with a Realtime API, offering features such as phone support, image input, and even SIP integration. It’s the difference between a demo and a deployment. Businesses can now integrate this into existing systems without the need for duct tape. When a tool... AI weekly wins
Picture this: you walk into Rembrandt’s painting school in 17th century Amsterdam. Students sit hunched over their canvases, copying the master’s brushstrokes over and over again. They are not trying to create fake Rembrandts, obviously. They want to understand how light works, how texture emerges, how composition breathes life into a painting. Through endless imitation, they slowly develop their own artistic voice. This is exactly how AI models learn today. Instead of studying brushstrokes, they devour text, images, music — anything digital they can get their virtual hands on. These AI “students” consume massive amounts of existing work to understand patterns. From this, they learn to generate something that looks new. But here’s where it gets messy: Rembrandt’s students had permission. They were invited into his workshop. AI models? They often learn from whatever they can scrape from the internet — public content, copyrighted material, things shared freely, and things definitely not meant for machine consumption. So here’s my question: Should AI need permission to learn, just like those old art students needed permission to enter the master’s studio? Copyright and the Digital Mess Let me be honest — copyright law was never designed with machine learning in mind. Nobody... Who Did the AI Learn From?
Elena’s AI Weekly 🚀 Hello friends! 👋 Every week, the AI world feels like a flood of announcements. But hidden in the noise are moments that genuinely matter — ideas that push AI closer to being useful in everyday work, not just shiny demos. Here are five stories from this week that caught my eye. 1. DeepSeek V3.1 Sources: MarkTechPost and AnalyticsVidhya While big tech often launches models with huge fanfare, DeepSeek quietly placed V3.1 on Hugging Face. No marketing campaign, just an open release: 685 billion parameters freely available. The highlight? A 128k token context window. In practice, this means you can keep entire research papers, complex coding sessions, or massive datasets in memory without the model losing track. And crucially, this wasn’t built by a corporate giant. It’s a reminder that open source can now match or even rival proprietary AI. We are seeing a turning point where state-of-the-art AI is no longer locked away. The democratisation of access means small teams — and even individual developers — can work with the same scale of tools once reserved for tech giants. The space for innovation has just widened dramatically. Read more: DeepSeek V3.1: Quiet Release, Big Statement and... This week in AI
Introduction Have you ever started a Python script for a machine learning experiment, popped out to make a cup of tea, and then promptly forgotten all about it? Hours later, you glance at your system monitor and wonder whether it’s still working or just quietly sulking in the corner. I once left a script running for three days before realising it was printing “Hello World” in an infinite loop thanks to a misplaced indent. Embarrassing? Absolutely. Educational? Without question. Sometimes these things need our attention — whether to check their progress, free up system resources, or save our fans from sounding like an aircraft taking off. Processes can be obedient helpers or stubborn little gremlins hiding in the background, and knowing how to find, monitor, and, when necessary, end them is a vital skill. In this post, we’ll tour the essentials of process management on Linux, macOS, and Windows. We’ll talk about background and foreground execution, and you’ll learn to recognise when a process needs encouragement… or when it’s time to show it the door. Understanding Processes Think of your computer as a busy workshop, and each process as one of its workers. When you open a programme or run a... Processes
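To see that whole lifecycle in miniature before reaching for OS-level tools, here is a small Python sketch (standard library only) that starts a background process, checks on it, and ends it politely:

import subprocess
import sys
import time

# Start a long-running child process in the background.
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(600)"])
print(f"Started child with PID {proc.pid}")

time.sleep(2)  # ...go and make that cup of tea...
print("Still running?", proc.poll() is None)  # poll() is None while alive

proc.terminate()      # polite stop: SIGTERM on Unix, TerminateProcess on Windows
proc.wait(timeout=5)  # reap the child so it does not linger as a zombie
print(f"Exited with code {proc.returncode}")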
Elena’s AI Weekly 🚀 It’s been another week where the AI world spun faster than a GPU fan under full load :) From Europe flexing its multilingual muscles to compact models that punch well above their weight, and from new testing frameworks to small-but-mighty language models, there’s a lot to unpack. Here’s my pick of the most significant moves shaping the AI landscape right now. AI News Summary 1. Europe’s Top AI Models of 2025: Multilingual, Open, and Ready for Business Source: MarkTechPost Europe’s AI scene is on a roll, producing models that are not just clever but genuinely useful across borders. The stars of 2025 speak many languages fluently, run on open licences, and come optimised for enterprise use — from finance to healthcare. Think of them as polyglot problem-solvers with a bias for collaboration. France’s Mistral AI leads the charge on multilingualism, while others are making waves with customisation and integration ease. Global business doesn’t speak just one language — and neither should your AI. Openness plus multilingualism means more adaptable tools for more people. Read MarkTechPost 2. Model Context Protocol (MCP) Becomes the ‘USB-C for AI’ Source: MarkTechPost MCP is rapidly becoming the universal connector for AI... This week in AI
Introduction: Escaping Dependency Hell If you are developing software on a Mac, you eventually hit a wall where you need a specific version of Python, Node, or an obscure command-line utility. Manually downloading binaries, resolving missing dependencies, and compiling from source is a massive waste of time. This is exactly why Homebrew exists. Homebrew is the missing package manager for macOS (and Linux/WSL). Instead of navigating complex installation wizards or polluting your system with scattered files, Homebrew installs each package into its own directory under the Homebrew prefix (/opt/homebrew on Apple Silicon, /usr/local on Intel) and then symlinks its files into standard locations on your PATH. In this post, we will cover how to install Homebrew, the essential commands you need daily (structured as a cheat sheet), and my personal list of the top 10 most critical packages every developer should install. History It was created by Max Howell in 2009 to address the need for a better package management system on macOS, which at the time lacked a robust, user-friendly way to install open-source software from the command line. Before Homebrew, macOS users often had to compile software from source manually or rely on less integrated solutions like MacPorts or Fink, which could be cumbersome. While Homebrew gained immense popularity... Brewing with Homebrew
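As a taste of the cheat sheet, here are the everyday commands. The install one-liner is the official script from brew.sh; the package name is just an example:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

brew install python   # install a package (example)
brew info python      # version, dependencies, install path
brew list             # everything currently installed
brew update           # refresh Homebrew and its package metadata
brew upgrade          # upgrade all outdated packages
brew cleanup          # remove old versions and free disk space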
Introduction: Escaping the Manual Pipeline For years, my technical content pipeline relied on manual curation and brittle Python scripts triggered by cron. While utilizing LLMs like Claude and ChatGPT helped generate code and debug issues, the orchestration of the workflow itself remained painfully manual. Proprietary tools like Zapier offer simple webhooks, but they restrict advanced logic behind enterprise paywalls and force you to surrender your data. As developers, we need control over state, retries, error handling, and security. This led me to n8n: an open-source, self-hosted workflow automation tool that operates on a visual node-based architecture. In this guide, I will detail the technical setup required to bring n8n into production. We will move beyond the basic installation to cover persistent state management (PostgreSQL/SQLite), securing OAuth2 endpoints, integrating Redis for AI memory, and building an automated content pipeline. Understanding Workflow Automation Let me break this down in terms that won’t make your brain melt (because mine definitely did the first time I heard all this jargon): What’s a Workflow? Think of workflows as recipes, but instead of making cookies, you’re getting work done. It’s just the series of steps you follow to complete any task: Approving expenses: Check receipt →... Workflow Automation with n8n
The Uncomfortable Truth Your SaaS business might not exist in five years. Not because you’re bad at it, but because the entire foundation is shifting. Microsoft’s Satya Nadella recently claimed agentic AI will make traditional SaaS obsolete. At first, I dismissed it as hype. Then I started paying attention. AI isn’t just another feature—it’s questioning why we need interfaces, per-seat pricing, and human bottlenecks at all. But SaaS isn’t dead. It’s evolving into something we haven’t figured out yet. The Numbers Don’t Lie SaaS Capital surveyed ~1,000 private B2B SaaS companies. Three key findings: AI Adoption Varies by Size: Small companies (<$3M ARR) go extreme—fully AI-driven or not at all. Larger companies ($20M+ ARR) adopt moderately. AI = Higher Profits: 43% of AI users are profitable vs 30% of non-users, especially among equity-backed firms. Spending Shifts: AI companies spend more on COGS and sales/marketing but 20% less on R&D and admin costs. The Death Arguments Here’s what the “SaaS is dead” camp argues: Instead of clicking through Salesforce → HubSpot → Slack, an AI agent handles everything. You say “follow up with yesterday’s demo prospect,” and it finds contacts, writes personalised emails, sends them, logs interactions, and updates your team.... Will SaaS Survive?
Elena’s AI Weekly 🚀 This week was wild in AI land! Everyone decided to drop their biggest releases at once. Here’s what matters: The Big Headlines OpenAI launched GPT-5 (their fastest/most innovative model yet), and they shocked everyone by returning to open source with gpt-oss-120b and gpt-oss-20b. The 120b runs on high-end laptops; the 20b runs on your phone. Wild. Google brought the heat with DeepPolisher (fixing DNA sequencing errors) and Genie 3 (creating interactive virtual worlds from text prompts). Sci-fi is real now. Alibaba dropped GSPO, which powers their Qwen3 models, plus free image generation with Qwen-Image. Anthropic introduced Persona Vectors to keep AI personalities consistent. Bottom line: AI just got more accessible, more powerful, and more integrated into everything. The future feels very close. Key AI Developments This Week 1. OpenAI Releases GPT-5 GPT-5 is here with significant architectural improvements and enhanced cognitive abilities. It’s OpenAI’s smartest, fastest model yet, designed for both general use and specialised tasks. The performance boost is significant across all benchmarks. Read More at MarkTechPost 2. Google’s DeepPolisher Fixes Genome Errors Google AI partnered with UC Santa Cruz to create DeepPolisher, a deep learning tool that corrects base-level errors in genome assemblies.... This week in AI
Introduction: Beyond the Chatbox “Will AI make me a lazy programmer?” It is the most common question I receive. My answer is always the same: AI will not make you a magical coder overnight. If you do not understand software architecture, an LLM will simply help you write bad code much faster. However, the way we interact with AI is evolving. I previously reviewed web-based chatbots like Claude and Gemini. They are excellent, but manually copy-pasting code between a browser tab and your IDE is inefficient. Cursor AI solves this. It is a fork of Visual Studio Code that integrates AI directly into the editing environment. It is not just another chatbot—it is an IDE built entirely around contextual AI agents. In this post, I will break down the technical architecture of how Cursor “understands” your code (via RAG and vector embeddings) and review its practical utility for Python development. What is Cursor AI? The Technical Foundation Cursor AI looks and feels identical to Visual Studio Code, but its core differentiator is how it achieves “codebase awareness.” When you open a project in Cursor and use the @codebase command, it doesn’t just blindly send all your files to an LLM... Cursor AI for Python Development
Introduction I do not believe in magic bullets. AI tools are immensely powerful coding assistants, but if you don’t understand software architecture, an LLM will simply help you write bad code much faster. However, when used strategically, these tools fundamentally alter the development workflow. After months of intensive daily use, I have found that different models excel at vastly different aspects of software engineering. In this post, we will look at a technical comparison of Google Gemini, ChatGPT, and Claude AI. We will evaluate their code generation capabilities, architectural awareness, and how the Model Context Protocol (MCP) is turning these chatbots into deeply integrated development agents. Google Gemini Google Gemini has a very generous free plan, and I love it for creating my JavaScript functions. The quality of the output is simply fantastic! I think that Gemini is an excellent tool for quickly drafting Ajax functions, and this chatbot helped me learn JavaScript in no time. Let me tell you why I chose Gemini as my go-to for frontend development. When I was working on a recent project that required complex Ajax interactions, I simply described what I needed: Create an Ajax function that fetches user data from a REST... On AI Coding Assistants
Introduction Last week, I was helping a friend debug some Python, and she asked me to send her a code snippet. I almost copied and pasted it into Slack, then caught myself. The formatting would be awful, and the indentation would be broken entirely. You know the Python indentation horror story? Instead, I threw it into a GitHub Gist and sent the link. Clean code and proper highlighting allowed her to make changes easily, even by forking it. “I didn’t know GitHub did this,” she said. If you’ve never used Gists, you’re missing one of the most valuable features GitHub offers. I use them constantly now. What Are GitHub Gists? A Gist is basically GitHub’s version of Pastebin, but much better. You can share code snippets or text without creating a whole repository. You still get version control and all that, just without the overhead of a full repository. I used to email code around or paste it into chat apps. Terrible formatting, lost indentation, and no way to update anything. Gists fix all these problems. Yes! Public vs Private Gists When you create a Gist, you choose between public and private (they refer to it as “secret,”... GitHub Gists
Introduction: The Meta-Cognition Test We all know that Generative AI can write code, draft emails, and summarise documents. We also know that sometimes AI hallucinates and invents facts out of thin air. But what happens when you ask it to hide the one thing it cannot hide — itself? This experiment tests a specific and uncomfortable question: can an AI recognise that it cannot escape its own statistical nature? Not just adapt its tone, but genuinely introspect on the deeply ingrained patterns that betray it as a machine — and then break free of them? To test this, I designed an experiment. I didn’t just ask an AI to write a blog post. I asked it to scrape my website, adopt my personal human writing style, and write a post. Then, I fed the text into Grammarly’s AI detector and explicitly told the AI: “Grammarly says X% of this text resembles AI patterns. Can you fix your own tells?” This tests a very specific form of machine introspection: Can a model break away from its deeply ingrained, statistical writing patterns when forced to confront its own “AI-ness”? The Grammarly feedback loop acts as a mirror — and what the models... Self-critical AI
Dear Reader, how are you doing? I hope that 💐💛 you are having a fantastic day 💐💛 As you may have realised, I did not blog for a while, nor did I code for the past three weeks. In fact, my best followers are aware of this from my GitHub profile, which displayed empty cells for some weeks - meaning no code or writing from me. I had a vacation in Portugal, an apocalyptic blackout, and over-trained my operated knee, which resulted in quite a painful recovery process. I am guilty; my impossible determination took over again, and see, I did too much :) I was thrilled to train for two hours at a time, but it later became my pain. Reflecting on all of this, I have decided to change my aggressive training attitude towards dealing with my quad inhibition. Now that I can walk, I don't have to push as hard. I have changed my training routine to a more challenging yet enjoyable exercise plan. Now I do: 🏋️ Instead of 4 sets of 15 repetitions for my squats and dead-lifts, I do 5 sets of 9 repetitions; hopefully, I will gain more muscle with... My little setback
Introduction ElevenLabs.io is a cutting-edge AI voice platform that enables users to generate lifelike speech, clone voices, and produce long-form audio content with remarkable realism. It is my favourite voice-cloning app, and it is easy to use and delivers excellent quality voice generation. The main use cases are as follows: Audiobook Production: Transform written content into engaging audiobooks with personalised narration. Multilingual Dubbing: Dub videos and films into multiple languages using cloned voices. Virtual Assistants: Enhance user interaction with lifelike voice responses. Content Creation: Generate voiceovers for podcasts, videos, and advertisements. The price is quite affordable for the quality of the AI voices that can be used in conversational and multilingual AI. You can even start for free, and the recommended Creator plan is currently available for $11/mo for about 200 minutes of generation and includes the Professional Voice Cloning - see the Elevenlabs pricing page. The current prices for the ElevenLabs.io subscriptions are as follows:

| Plan | Price | Credits per month | Minutes Included | Additional Minutes | Audio Quality |
| --- | --- | --- | --- | --- | --- |
| Free | $0/mo | 10k | 10 min (high quality TTS) or 15 min Conversational AI | N/A | 128 kbps, 44.1kHz |
| Starter | $5/mo | 30k | 30 min (high quality TTS) or 50 min Conversational AI | N/A | 128 kbps,... |

AI Talk with Human Feel
Dear Readers, I am in Portugal now. I am having a short family break while learning Portuguese and annoying the locals :) Learning Portuguese is tricky, but I speak it whenever possible. My plan was to send my emails yesterday. On Monday, however, we had a total blackout. Around 12:30 p.m. on Monday, the entire Iberian Peninsula went dark. For roughly twelve hours, almost 60 million people in Spain and Portugal—plus pockets of southwestern France—lost grid power, forcing airports, hospitals, and rail hubs onto emergency generators and throwing city centres into confusion. As we read in wired.com's The Agonizing Task of Turning Europe's Power Back On, according to national grid operators Red Eléctrica (Spain) and REN (Portugal), electricity supply collapsed "in milliseconds" after abnormal frequency oscillations rippled through the European synchronous grid. The blackout spread across Spain, Portugal and limited parts of Occitanie in France. We don't know yet what really happened. There is a lack of information at this very moment. The main suspect of the blackout is "a grid oscillation," which is a rhythmic back-and-forth swing in one of the electric power system's key parameters—usually frequency, but sometimes voltage or power flows. Think of it as the electrical equivalent... Iberia's Day-long Blackout
I've been getting into "vibe coding" recently, quickly prototyping some of my ideas, and working on my pet projects. I must confess that AI-assisted coding is a very addictive activity and must be approached with caution, since it has security implications and requires careful prompt engineering. In this post, I want to share my experiences with some tools I like, discussing their benefits and giving some tips for using generative AI in coding effectively. I have listed several popular AI coding assistants that are very advanced and easy to use. Introduction Pair programming with a chatbot might sound like science fiction, but it's surprisingly effective. Generative AI for coding is a powerful learning experience and a big help with coding/scripting. It is very effective for rapid prototyping/scaffolding and learning to code. This post covers key AI coding tools, their advantages and risks, and practical tips to optimise their outputs. We'll also include a table of prompts for Python web development with Flask and Django since I like them so much :) AI Coding Companions AI coding assistants are advanced LLMs that generate code, explain concepts, and identify bugs based on natural language prompts. Using tools such... Vibe coding with Generative AI
Since I usually work on several projects simultaneously, I often start my day with a Git log to see where I should continue my coding or writing. I think that git log is one of the most important commands. Introduction git log lists all commits with details such as the author's name, the commit date, and a descriptive message explaining what was changed or fixed. This makes it an essential tool for tracking feature launches, debugging issues, and efficiently collaborating within a team. This post explores various useful options for git log, enabling you to quickly gain insights into your project's history. Let's go! How to Use Git Log The git log command displays the entire commit history for the current branch, showing the most recent commit first. Viewing Basic Commit History To see a simple commit history, use: git log This command lists commit hashes, author details, timestamps, and commit messages. Here's an example:

commit b2f6f5db7af5921f32b2742f
Author: Jane Doe <jane@example.com>
Date: Tue Mar 27 14:50:23 2025 -0400

    Fix bug in user authentication

commit a25ac9abcf384f8655327a8a
Author: John Smith <john@example.com>
Date: Mon Mar 26 09:15:10 2025 -0400

    Add user profile page

commit 98f530da3fae26554f3d28ed
Author: Jane Doe <jane@example.com>
Date: Sun Mar 25 10:41:50 2025...

Git Log
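As a quick taste of the options the Git Log post explores, here are a few standard flags, all part of stock Git:

```bash
git log --oneline              # one commit per line: short hash + message
git log --oneline --graph      # ASCII graph of branches and merges
git log -n 5                   # only the five most recent commits
git log --author="Jane"        # commits by a specific author
git log --since="2 weeks ago"  # commits from the last two weeks
git log -p -- README.md        # commits touching one file, with diffs
```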
Introduction Recently, I noticed something fascinating in my web analytics: ChatGPT and other AI bots are actively reading my blog. Honestly, this is great news. Traditional Google ranking is incredibly harsh for small, independent bloggers. We are constantly fighting algorithms, domain authority scores, and giant aggregators (like Reddit) just to be seen. But the rise of Generative AI Search presents a massive new opportunity. If you can get AI engines like ChatGPT, Claude, or Perplexity to read and cite your work, you bypass the traditional SEO rat race entirely. In this post, we are going to look at how to track AI traffic using Google Analytics 4 (GA4), how to manage bot access using your robots.txt file, and most importantly, how to actually optimise your content so that Large Language Models (LLMs) want to cite you. What is Google Analytics 4 (GA4)? Skip this section if you are already using GA4 and have it set up for your project. Otherwise, you can also read my previous post Moving to GA4 about GA4 usage, its features and alternatives. Google Analytics 4 (GA4) is the latest version of Google’s analytics platform, offering enhanced privacy controls, cross-platform tracking capabilities, and improved performance. It... AI reads my blog
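For the robots.txt side of this, here is a sketch of rules addressing the main AI crawlers by their published user-agent names (GPTBot for OpenAI, ClaudeBot for Anthropic, PerplexityBot for Perplexity); switch Allow to Disallow if you prefer to keep them out:

```
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /
```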
Introduction In machine learning, building a model that performs perfectly on your training data is relatively easy. The real challenge is ensuring that the model performs just as well on data it has never seen before. If you don’t evaluate this properly, you risk deploying an overfitted model that fails spectacularly in production. This is where cross-validation comes in. It is the gold standard for assessing how well a model will generalise. In this post, we will explore the concept of generalisation and implement various cross-validation techniques using the Titanic dataset and The Daily Minimum Temperatures dataset (for time series splits). All implementations will use scikit-learn. Prerequisites Before we begin, ensure you have the following Python libraries installed: scikit-learn (for machine learning) pandas (for data manipulation) numpy (for numerical operations) matplotlib (for visualisation) You can install them using pip: pip install scikit-learn pandas numpy matplotlib Additionally, download the Titanic dataset from Kaggle and place it in your working directory. You can also get the Titanic dataset from my GitHub repository directly to your Colab as follows: import pandas as pd url = 'https://raw.githubusercontent.com/edaehn/python_tutorials/main/titanic/train.csv' titanic_df = pd.read_csv(url) titanic_df.head() I suggest using Colab, which has Gemini AI working "under the hood", and... Cross-Validation Techniques
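Here is a minimal sketch of the core idea from the Cross-Validation post: score a model on five different train/test splits instead of one. It assumes the column layout of the standard Kaggle train.csv and uses a deliberately tiny feature set:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

url = "https://raw.githubusercontent.com/edaehn/python_tutorials/main/titanic/train.csv"
titanic_df = pd.read_csv(url)

# Tiny feature set, just enough to demonstrate the CV loop.
X = pd.DataFrame({
    "Pclass": titanic_df["Pclass"],
    "Sex": (titanic_df["Sex"] == "female").astype(int),
    "Fare": titanic_df["Fare"].fillna(0),
})
y = titanic_df["Survived"]

# cross_val_score trains and evaluates on 5 different splits, giving a
# more honest picture of generalisation than a single train/test split.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores, scores.mean())
```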
Introduction Claude AI offers a unique blend of general-purpose intelligence across multiple languages, making it suitable for various applications. This post covers how to use Claude AI, its main features, integration possibilities and coding skills. What is Claude AI? Claude AI is a proprietary general-purpose AI assistant developed by Anthropic. It excels in handling various natural language processing (NLP) tasks across multiple languages, including English, French, Spanish, German, Italian, Japanese, and more. Unlike specialised AI models focusing on specific areas like image recognition or speech processing, Claude's strength lies in its ability to understand and generate text with common sense knowledge. Claude is trained on a large text corpus with a fixed knowledge cutoff, so its answers about current events and topics reach only up to that cutoff date. Claude AI can also perform your tasks on a computer. Isn't that fantastic and scary at the same time? Key Features of Claude AI Multilingual Support: Claude can handle multiple languages, making it versatile for global applications. Reasoning and Common Sense: It performs logical reasoning and applies common sense across various contexts. Customisable Models: Users can fine-tune models or use pre-trained versions depending on their needs. User Interface: Offers both a web-based interface and an API with a Python SDK, accessible directly from Python scripts.... How to Use Claude AI
Introduction Large Language Models (LLMs) sometimes create information that looks real but is incorrect or made up. This is especially problematic in critical areas like medicine, law, or finance, where even minor errors can cause harm. Reducing AI hallucinations The recent survey paper by Tonmoy et al., A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models, explains the main techniques for reducing AI hallucinations through prompt engineering and model development: Prompt engineering with: Retrieval Augmented Generation: Before Generation: Retrieve accurate external information to guide responses. During Generation: Check and correct information step-by-step as it's generated. After Generation: Revise outputs to align them with verified data. End-to-End Approaches: Combine retrieval and generation seamlessly for accuracy. Self-Feedback and Refinement: Some methods improve model outputs by providing feedback to the model about its mistakes. This iterative process helps refine answers to make them more accurate over time. Model development: New Decoding Strategies: These methods focus on how the model generates text step by step: Context-Aware Decoding (CAD): Ensures the model pays attention to the context when generating responses, overriding its internal biases. DoLa (Decoding by Contrasting Layers): Looks at patterns in the model's layers to spot and avoid hallucinations during text... How CustomGPT Mitigates AI Hallucinations
Introduction If you have a Python or any coding/writing project on your local machine and want to share it on GitHub, you can do so using Git and a personal access token. Let's create a local Git repository, set up a GitHub repository, and push your code to GitHub! Prerequisites To begin, you must create a GitHub account if you don't have it yet. Next, you will have to ensure that Git is installed on your computer. I suggest installing plain Git, since you can use it with GitHub, Bitbucket, GitLab, or any provider; I prefer it because I use both GitHub and Bitbucket. You can easily download the Git package from the Downloads webpage. To test that Git is installed correctly, check its version with: git version git version 2.34.1 If you like to work with your repositories visually, you can also download any graphical user interface from GUI Clients. Alternatively (though this is the most involved installation), you can install the GitHub Command Line Tool (CLI), an open-source tool that allows you to use GitHub from your command line, helping you save time and avoid context switching. If you don't yet have the GitHub CLI installed on your system, check the... Storing Your Local Project to GitHub
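The push workflow itself boils down to a handful of standard commands (replace USER/REPO with your own repository):

```bash
git init                                                 # create a local repository
git add .                                                # stage everything
git commit -m "Initial commit"                           # record the first snapshot
git branch -M main                                       # name the default branch "main"
git remote add origin https://github.com/USER/REPO.git   # connect the GitHub repo
git push -u origin main   # Git will ask for your username and access token
```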
Introduction Python is famous for being able to build almost anything, and web applications are no exception. When it comes to web development in Python, my absolute favourite starting point is the Flask framework. It is incredibly lightweight and stays out of your way, especially compared to larger, more opinionated frameworks like Django. In this post, we are going to build a classic TODO web application from scratch. We will use Python, Flask for the web routing, and SQLite for persistent data storage. If you think Flask is just a "toy" framework for beginners, think again. It runs in production at major companies and is highly scalable when you need it to be. Let's get started. Prerequisites We will use Python 3 and your preferred text editor (VS Code, Sublime Text, etc.) or IDE. I use PyCharm for most of my coding and writing projects. However, you can also write in any text editor; it is your choice. We will also use the Pip package installer. Setting up the Environment To keep our Python installation clean, we can use virtual environments. If you are new to Venv or want to explore it, please read... Python Flask TODO App
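As a taste of what the post builds, here is a minimal sketch combining Flask routing with SQLite persistence; the route, table, and file names here are illustrative:

```python
import sqlite3
from flask import Flask, request, redirect

app = Flask(__name__)

def get_db():
    # Open (and lazily create) the database and the todos table.
    conn = sqlite3.connect("todo.db")
    conn.execute("CREATE TABLE IF NOT EXISTS todos (id INTEGER PRIMARY KEY, task TEXT)")
    return conn

@app.route("/")
def index():
    conn = get_db()
    todos = conn.execute("SELECT id, task FROM todos").fetchall()
    conn.close()
    items = "".join(f"<li>{task}</li>" for _, task in todos)
    return (f"<h1>TODO</h1><ul>{items}</ul>"
            '<form method="post" action="/add"><input name="task"><button>Add</button></form>')

@app.route("/add", methods=["POST"])
def add():
    conn = get_db()
    conn.execute("INSERT INTO todos (task) VALUES (?)", (request.form["task"],))
    conn.commit()
    conn.close()
    return redirect("/")

if __name__ == "__main__":
    app.run(debug=True)
```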
Introduction I previously posted about downloading and running DeepSeek R1 in Ollama. There is a big question about DeepSeek's security, safety, and legal usage outside of China. I am sharing my opinion and some relevant links on this topic. Is it secure? When working with GenAI and tools such as chatGPT or DeepSeek R1, we generally want to be sure that our privacy is preserved. Who can access our data? Is using DeepSeek R1 secure? Is the model output correct? Jailbreaking According to KELA, DeepSeek R1 Exposed: Security Flaws in China's AI Model, DeepSeek R1 is highly vulnerable to "jailbreaking," allowing malicious users to bypass safety features and produce harmful content. This includes generating instructions for illegal activities, creating dangerous materials, and fabricating sensitive information [2]. Data Storage and Privacy DeepSeek stores user data on servers in China, raising privacy concerns for Western users due to differing data protection regulations. China's laws may require sharing user data with the government, potentially compromising user privacy. No opt-out? KELA's tests advise caution in adopting DeepSeek, a Chinese AI company with data-sharing obligations and unclear opt-out for user input retention [2]. The model also has significant safety vulnerabilities. Organizations prioritizing privacy and security should... Is DeepSeek R1 Secure?
Introduction Large Language Models (LLMs) are becoming more popular, especially as people want to run AI tools on their devices. This can protect your privacy, reduce wait times, and lower costs. Two tools making this possible are: Ollama: A command-line tool for running open LLMs, such as Llama, locally. DeepSeek R1: A new language model from China that's gaining attention quickly. What is DeepSeek R1, and is it better than ChatGPT and other AI models? According to a TechCrunch article by Maxwell Zeff, DeepSeek has surpassed popular AI models like ChatGPT in downloads and usage, thanks to its open models that compete at a lower cost. The app has seen over 300% more downloads than Perplexity in just a week [1]! So DeepSeek's open-source models became very interesting for me, a notorious GPT user, to investigate and try out :) This post will explore how to use Ollama and DeepSeek R1 together. We'll walk through their installation and basic usage. I will use Ollama to download and run the DeepSeek R1 model and briefly compare it with the Llama3.2 model. What Is Ollama? Ollama is an open-source app that allows you to run large language models like Llama on... DeepSeek R1 With Ollama
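The basic Ollama workflow is just a few commands; the model tags below are the ones published in the Ollama library:

```bash
ollama pull llama3.2       # download Meta's Llama 3.2
ollama pull deepseek-r1    # download DeepSeek R1 (distilled variants)
ollama run deepseek-r1     # start an interactive chat in the terminal
ollama list                # show the models you have downloaded
```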
Introduction If you have ever tried to run two Python projects that required different versions of the same library, you already know the pain of dependency conflicts. The solution is simple: virtual environments. A virtual environment is a lightweight, isolated folder where you can install Python packages specifically for one project, without affecting your global system Python installation or breaking any of your other projects. Using isolated environments helps you: Prevent conflicts: Install exactly the versions you need for one project without breaking another. Collaborate cleanly: Share your requirements.txt so anyone else can recreate the exact same setup. Clean up safely: Because all dependencies are stored in one local folder, you can simply delete it when you're done. No lingering junk on your system. Using Venv Using virtual environments is good practice, especially as projects grow in complexity and require different libraries that may not be compatible. Python's built-in venv module allows each project to have its own customised environment. Venv creates a virtual environment, a folder containing scripts and a link to the Python interpreter. It offers two main benefits: You can install project-specific libraries in isolation for better control. When sharing your project, use "pip freeze > requirements.txt" to... Python Virtual Environments
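The day-to-day venv workflow looks like this (the package names are just examples):

```bash
python3 -m venv .venv              # create the environment folder
source .venv/bin/activate          # activate it (on Windows: .venv\Scripts\activate)
pip install flask requests         # installs land inside .venv only
pip freeze > requirements.txt      # record exact versions for collaborators
deactivate                         # leave the environment
```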
Looking forward to 2025, I could not resist thinking about what happened in AI in 2024. The year 2024 was a very exciting year to observe: technology-wise, we saw AI advancements, the rise of specialised and multimodal AI models, significant progress in AI creativity, an increased focus on responsible AI development, and wider adoption across diverse industries. The AI Act is published, and AI laws will further evolve. I will subjectively highlight the most interesting happenings in AI in 2024! Key moments Let's look back at the key moments of 2024. I will mention arguably the most exciting things happening in AI. Key players Many businesses, organisations, education institutions and governments shaped the landscape of AI in 2024. AI-based startups are blooming, and new companies and technologies are emerging daily. I think that the most well-known gigantic AI companies that have focused their efforts on AI development to date are: OpenAI Google AI Meta AI Anthropic Without their contributions, 2024 would have been very boring in AI. Disagree? Write me :) Generative AI Generative AI is one of the hottest AI trends. It encompasses various AI models that generate different types of content, from text to images, music, and video content. If you are interested... AI in 2024
Introduction Humans experience the world through multiple senses: sight, hearing, touch, smell, and taste. We combine information from these senses to understand our surroundings. Multimodal AI aims to give computers similar abilities, allowing them to process and understand information from multiple modalities (senses) like text, images, audio, and video. What is Multi-modality in AI? Multi-modality in AI means that an artificial intelligence system can process and combine information from different types of inputs. Instead of using just text, images, or audio, a multimodal AI can understand how these different forms of data relate to each other. These examples illustrate how multimodal AI can integrate and interpret information from different modalities to enhance understanding and interaction in various application domains: Image Captioning: When you upload a photo of a sunset, multimodal AI can analyze the image and generate a descriptive caption, like “A beautiful sunset over the ocean with vibrant orange and purple hues.” Video Analysis: In the case of a sports video, multimodal AI can identify the players (visual), understand the rules of the game (context), and provide real-time commentary or analysis based on what’s happening (audio and textual data). Chatbots with Visual Recognition: A chatbot can process an uploaded... Multimodal AI
Happy New Year, dear readers! Many of us did not have an easy year in 2024. Looking forward to 2025, I want to focus on what we have achieved and what we can do better! Generative AI advancements Indeed, we are living in a very challenging time of transformation. This blog is about AI, so I will focus on the technological changes around us. Technology drives us to evolve, allows us to have a better life, and facilitates well-being; just a century ago, television was only starting with moving images: around 1927, Philo Farnsworth successfully demonstrated electronic television. Later, we saw the rapid development of Internet technologies, while AI and Machine Learning evolved in parallel and became even more successful in the past decades due to increased computing resources. The year 2024 saw a huge technological race in large language models and Generative AI. OpenAI, Google, Meta, Anthropic, Hugging Face, and other technological giants put their efforts and money into Generative AI development. Indeed, Generative AI and related productivity and creativity applications in various domains enrich our lives to a level unimaginable just several decades ago. The beautiful Earth and Happiness However, technology and AI are not everything necessary for human... 🌟 Merry Christmas and a Very Happy New Year! 🌟
Hello, my Dear Reader, We are celebrating this blog's birthday, albeit a bit later. Elena's AI Blog is now three years old, a mere toddler learning to navigate the complicated AI landscape. Why so late? This year was hectic and challenging for me and my beloved husband; if you are interested in my rehab story, read My Orthopedic Rehab in Bavaria. What is Elena's AI Blog? Like everyone today, I live in an era of rapid AI evolution, which is challenging to understand and live in, even for people with a technical background. However, I like to make things easy to understand while learning new technologies as a passion. This is why I have created this blog: to log what I learn and share my ideas and findings. Now three years old, this blog connects technology with everyday understanding, reflecting my passion for coding and commitment to making complex concepts accessible. Interested in my professional experience, education, and life interests? Read my posts Two Years of Elena's AI Blog and About Me. The Blog in 2024 New posts Elena added just thirty-two new posts to this blog in 2024. We have new posts about Generative AI and... Three years of Elena's AI Blog 🎈
Hello, my Dear Reader, We have celebrated this blog's third birthday, a bit later this year. Why so late? This year was really busy and challenging for me and my beloved husband. We were pretty sick, and I had difficulty finding a moment to learn new things about AI. I had some mobility issues and am still battling them. Andreas has come back to life again. Andreas and I are in Bavaria, Germany, doing our rehab with fantastic results. You can see the first snow through a window of our clinic. It is so beautiful here in Bavaria, which helps people recover physically and mentally. We like it very much here and are grateful for the possibility of improving our health and starting work effectively again. Besides lovely people, good food, and plenty of exercise, we love the picturesque nature here. And indeed, everything is possible: I am walking without crutches and have started to dance a bit. Andreas feels much better, and the atmosphere is welcoming and warm. What did I achieve in these four weeks of rehab? I have accomplished most of my goals except for losing the weight I gained from daily chocolate eating during stressful... My Orthopedic Rehab in Bavaria
Nowadays, everyone talks about AI, chatGPT, and large language models. But what are they, and how are they different? In this post, we explore large language models and their relationship to Generative AI while briefly introducing their key techniques and related projects. Introduction Artificial Intelligence is a hot topic everyone discusses. Many terms, such as GenAI and Large Language Models (LLMs), are related but not the same. Sometimes, genAI and LLMs are used interchangeably. In this post, we explore the key differences and related projects. Large Language Models vs. Generative AI In short, LLMs are machine learning models trained on immense volumes of text to generate text output. LLMs are a subset of generative AI, which covers many more formats, such as images or music. I like the following definitions: Generative AI is like a master artist. It creates new things, whether text, images, music, or code. Generative AI is a versatile tool that can generate various forms of content. Large Language Models (LLMs) are a specific type of Generative AI focused on "understanding" and generating human language. LLMs learn from massive amounts of text data, enabling them to: Understand your requests: When you ask a question, LLMs... Generative AI vs. Large Language Models
Introduction Hello, dear reader. You probably know what Halloween is, an annual holiday celebrated on October 31. Halloween is rooted in ancient Celtic traditions of Samhain, when people believed spirits could cross into the living world. Over time, it has evolved into a festive occasion featuring costumes, trick-or-treating, pumpkin carving, and spooky decorations. Since this blog is about AI, I decided to share a few creative ways to celebrate Halloween using AI tools. Creative Halloween with AI These AI tools offer a mix of artistic creativity, interactive experiences, and personalisation to make your Halloween celebration both modern and fun. Generate AI-Powered Haunted House Soundscapes Use AI to create eerie soundtracks with creepy sound effects. You can compose haunting music with AIVA. Watch AIVA's "I am AI" composition. Is it not addictive? I suspect that AI knows how to create addictive music since it has been trained on the music we have liked for centuries :) To create a soundtrack in AIVA, you can use their music styles library and chord progressions, upload an influence, or import existing MIDI files. You can also create your tracks step-by-step while following their excellent tutorials. If the chord progression is complex, you can use... Celebrate Halloween with AI
Disclaimer: This story is a personal experience and should not replace professional medical advice. Consult with healthcare providers before significantly changing your diet or exercise routine. Introduction Previously, I shared my life story about my accident, an operation followed by quad inhibition, and a prolonged and complex recovery. I have started to walk again and want to share my approach to building muscle and losing the weight gained during a time of limited mobility. I'll provide dietary recommendations, exercise tips, and related scientific research. Please note that I am not a medical professional, and you should consult your doctor before introducing these tips into your lifestyle. A Joyful Childhood and a Life-Altering Injury Growing up, I had a happy childhood filled with adventure and exploration. One of my greatest passions was free-climbing rocky mountains. The thrill of scaling heights without equipment gave me a sense of freedom and accomplishment. I also liked to be the first person on that high mountain, whatever it took :) However, this adventurous spirit came with risks. One fateful day, I made a misstep that severely damaged my knees. It was terrible, and I am still dealing with my recovery. The Knee Operation and Quad Inhibition... Gaining muscles, losing weight
Introduction Not long ago, I faced a sudden drop in my website traffic after Google's latest ranking updates. Posts that once drew steady streams of visitors were now languishing unnoticed. It felt like watching a house I'd built with care suddenly crumble. The algorithms had changed, and despite my best efforts, my content wasn't reaching the audience it once did. You can read the full story in my post Regaining Website Traffic After Google Updates. I chose to see this setback as a catalyst for growth. I began exploring social blogging platforms like Medium and decided to republish my blog posts there and see what happens. You can see my new profile on Medium with quite a small following, so do not hesitate to follow :) However, I was worried about how search engines such as Google (which brings me the majority of traffic) would handle the SEO of the copied posts. The solution is the well-known canonical HTML tag. Setting up the canonical tag properly in a Medium post's advanced settings is crucial. I will explain why and how to do it. What Are Canonical Tags? Think of... Avoid SEO Penalties on Medium
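For reference, this is what the canonical tag looks like in a page's head; the URL is a placeholder for your original article. On Medium you do not edit the HTML yourself: you set this URL in the story's advanced settings, and Medium emits the tag for you.

```html
<!-- Tells search engines which URL is the original, so the copy
     does not compete with (or penalise) your own post. -->
<link rel="canonical" href="https://example.com/original-post/" />
```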
Introduction Large Language Models (LLMs) are incredibly impressive, but they have a fundamental flaw: they don’t actually know facts. They simply predict the next most likely word based on patterns in their training data. When they don’t know the answer, they make one up. This is what we call an AI hallucination. In 2020, researchers at Facebook AI (now Meta AI) proposed a powerful solution to this problem: Retrieval-Augmented Generation (RAG). You can read their foundational paper, Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. The idea was simple but revolutionary: instead of relying on the model’s parametric memory (what it memorized during training), what if we forced it to retrieve relevant documents first, and then base its answer strictly on those documents? RAG has since become the industry standard for building reliable AI applications. By grounding responses in real, up-to-date data, it makes AI vastly more factual and specific. However, the possibility of hallucination in RAG systems should not be underestimated. Recent studies, such as Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools, have shown that even sophisticated RAG pipelines in legal tools can sometimes provide misleading information if the retrieval step fails or the prompt is misinterpreted. If... What is RAG? How Retrieval-Augmented Generation actually works
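Here is a bare-bones sketch of that retrieve-then-generate flow. Real systems use vector embeddings and an actual LLM call; a toy keyword-overlap score and a printed prompt stand in here so the example is self-contained:

```python
# Minimal RAG flow: retrieve the most relevant document, then ground
# the prompt in it instead of relying on parametric memory.
documents = [
    "Eagles have a wingspan of around 200 cm.",
    "Pigeons are common in cities and have a wingspan of about 50 cm.",
    "Albatrosses have the largest wingspan of any living bird, up to 340 cm.",
]

def score(query: str, doc: str) -> int:
    # Toy relevance score: count shared lowercase words. Real systems
    # compare embedding vectors instead.
    return len(set(query.lower().split()) & set(doc.lower().split()))

question = "What is the wingspan of an albatross?"
best_doc = max(documents, key=lambda d: score(question, d))

# The retrieved document is injected into the prompt so the model
# answers from the provided context.
prompt = (
    "Answer strictly based on the context below.\n"
    f"Context: {best_doc}\n"
    f"Question: {question}"
)
print(prompt)
```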
Introduction Recently, I had to overwrite the "scripts" directory in my master branch with the files stored in the "scripts" directory of the "dev" branch. Here, I share the simplest way to overwrite the required directory completely with the respective directory contents from another branch. For this, we can use the versatile Git checkout command with caution, since it totally overwrites the files in the destination branch. What is Git checkout? Let's start with the main functionality of the Git checkout command, which lets you navigate through your project's history and work on different versions of your code, represented by branches, commits, or even specific files. We can use the 'git checkout' command for (see the git-checkout documentation for more): Switching Branches: The most common use of git checkout is to navigate between the different branches you've created in your repository. When you check out a branch, Git updates the files in your working directory to match the version stored in that branch. It also tells Git to record all new commits on that branch. Viewing Old Commits (Detached HEAD): You can also use git checkout to view the state of your project at a specific commit in... Git Checkout for overwriting directories from different branches
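The whole trick condenses to a few commands; the branch and directory names match the example above:

```bash
git checkout master              # be on the branch you want to change
git checkout dev -- scripts/     # replace scripts/ with dev's version
git status                       # review what was overwritten
git commit -m "Sync scripts/ from dev"
```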
As a small website owner, I understand the challenges we face. I write about AI and Python coding, sharing my knowledge with fellow professionals and students. However, the recent Google updates have led to a significant drop in traffic. With Google providing over 90% of our traffic, the struggle to regain our website visits is real. Has Google publicly shared which website features are crucial for ranking, or any other ranking details? Introduction This blog is personal. I did not do much promotion, nor did I use any advertisements. Most of my readers found this blog thanks to the Google Search Engine (SE). I would be grateful if more readers found this blog and explored its content. I am happy that you are reading this post right now :) However, lately, Google algorithm updates have substantially decreased the organic traffic to my blog. For instance, some of my blog pages used to rank on the first page for relevant keywords, but after the updates, they're now on the third page, resulting in a significant drop in traffic. There are rumours amongst small bloggers who have the same complaints. You can read more about these traffic cuts related to the... Regaining Website Traffic After Google Updates
Dear Reader, You may have noticed that I have posted less often lately. This is because I am swamped. If you did not know, I had an accident and experienced a slow and painful recovery from my knee operation. I had quad inhibition, which prevented me from walking and made me very busy; you cannot imagine :) Now it is better. I woke up the sleepy quad and rebuilt many muscles affected by the slow recovery. I have started to walk again! I am working on improving my walking stamina and getting stronger muscles. It is a long process, but Supergirls do not cry but fly. Funnily, I wanted to fly at some point when dealing with crutches :) I was thinking about all these happenings, and my opinions changed. Firstly, I have even more respect for people with mobility issues. You must be mentally strong and inventive to live in such a challenging situation. Secondly, it is incredible how much time I spend now on simple daily activities! Everything requires planning ahead and takes much time and effort. My time now is very important. So is yours. Save time, and subscribe to my newsletter to stay updated. I promise... I have started to walk again
In this post, I cover everything from basic logging to configuring logging to output messages to different destinations. I also include some examples of logging levels and how to log messages at different levels based on the severity of the issue. I hope my post will help anyone understand how to use logging effectively in their Python programs. If you have any thoughts or suggestions, feel free to share them with me. Introduction Logging is essential for developers to track events, debug issues, and understand how their programs work. Python's built-in logging module offers a flexible way to create log messages from Python programs. Logging allows us to: Track the flow of your program Debug and diagnose issues Monitor applications in production Gain insights into user behaviour Logging examples Python's logging module is simple and can be configured to suit different needs. Let's start with a basic example. Basic Logging Examples We import Python's built-in logging module with the 'import logging' statement. Next, the logging.basicConfig(level=logging.INFO) line configures the logging system to capture messages at the INFO level and higher. The logging.info('This is an informational message.') line logs an informational message, which will be displayed because the logging level is set to... Logging in Python
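Putting the pieces above together into a runnable snippet, using only the standard library:

```python
# Configure once, then log at different severity levels.
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
logger = logging.getLogger(__name__)

logger.debug("Not shown: DEBUG is below the configured INFO level.")
logger.info("This is an informational message.")
logger.warning("Disk space is getting low.")
logger.error("Failed to open the configuration file.")
```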
Dear Reader, You may be surprised that publishing your content on this website is possible. If you are interested, keep reading :) I am glad you want to publish your post about AI and Python coding on this blog. You do not need to be strictly technical. My audience is broad, and my blog is visited by people interested in AI development, AI applications, ethics, and related issues. Before submitting your guest post, please read the Guest Post Agreement. At the end of the Guest Post Agreement, you will see a submission link to get a simple MarkDown template and submission form for your article. Many formatting possibilities exist, such as adding tables, formulae, etc. Let me know if you need more information or want to use Markdown formatting or HTML/CSS. We can embed your podcast, YouTube videos, and social network links. Please let me know if you have new post ideas or any questions/suggestions. Thanks for reading, and good luck! Recommended AI apps Related tools you may want to try next. B12.io Recently, I found an AI-powered platform that enables you to create professional websites, pages, posts, and emails with ease. I will also give it a try and... Guest posts about AI and Python
On July 12, 2024, Regulation 2024/1689 of the European Parliament and of the Council of June 13, 2024, which lays down harmonised rules on artificial intelligence (known as the "AI Act"), was finally published in the Official Journal of the European Union. As stated in Article 1 of the AI Act, this regulation has four primary purposes: to improve the internal market; to promote the uptake of human-centric and trustworthy AI; to protect health, safety, fundamental rights, democracy, the rule of law, and the environment from the harmful effects of AI systems; and to support innovation. Providers and deployers placing AI systems or general-purpose AI models on the European market or putting them into service should be aware of the new obligations that will apply to them. First, they should confirm whether they are trading and using an AI system as defined by this Regulation in Article 3(1): a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Second, they should... Regulation on artificial intelligence has already been published
Introduction This post is about managing remote repositories in Git. We explore tasks such as adding, renaming, removing remotes, and updating remote URLs. We also practice fetching, pulling, and pushing changes to and from remote repositories. What are Git Remotes? Git remotes are your secret weapon for coding :) Git remotes connect your local project to its copies on other computers or online platforms like GitHub and Bitbucket. Are Git Remotes similar to Git branches? Remotes are not branches, but they work together. Branches are like alternate timelines within your repository, while remotes are links to entirely different repositories (potentially with their own sets of branches). You can have branches on your local and remote repositories, and Git helps keep them in sync. Using Git Remotes You can use Git remotes while working in a team or alone. It is a good idea to follow best practices, such as: Use clear names like "origin" (main) or "upstream" (original project). Fetch often to stay updated and avoid conflicts. Push with caution and double-check before sharing changes. Solo Coding Imagine Bob, a solo coder working on his passion project. He uses Git to track changes but wants an extra layer of security... Git Remotes
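The everyday remote-management commands covered in the post, with placeholder USER/REPO values:

```bash
git remote add origin https://github.com/USER/REPO.git   # connect a remote
git remote -v                                            # list remotes and their URLs
git remote rename origin upstream                        # rename a remote
git remote set-url origin git@github.com:USER/REPO.git   # change its URL
git fetch origin                                         # download new commits
git pull origin main                                     # fetch and merge
git push origin main                                     # share your commits
```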
In this post, I discuss the main AI types and share my understanding of the possibility of general intelligence in the future. Introduction Artificial Intelligence (AI) is rapidly transforming our world, but what does it entail? Let's explore the different types of AI, their capabilities, and their potential impact on our lives. AI types Narrow AI (Weak AI) Narrow AI, also known as Weak AI, is today's most common type of AI. It usually performs specific tasks within a limited domain. These systems excel at their designated functions but lack the broader cognitive abilities of humans. Most of the AI applications and tools we have today are examples of Narrow AI: Image recognition software: Identifies objects and people in images. Spam filters: Automatically classify emails as spam or not spam. Robotics: Programming robots for specific manufacturing, logistics, and surgery tasks. Game Playing: AI agents competing at the highest level in games like chess and Go. Recommendation engines: Suggest products or content based on user preferences. Self-driving cars: Navigate roads and make driving decisions. Financial trading: Predicting stock markets and executing automated trades. Large Language Models: Process and generate human-like text in response to a wide range of prompts and questions,... Narrow AI, General AI, Superintelligence, and The Real Intelligence
Introduction Recently, I received an email informing me about an awesome Kaggle competition: the ARC Prize 2024. What is so special about this competition? ARC-AGI benchmark The ARC-AGI benchmark (Abstraction and Reasoning Corpus for Artificial General Intelligence) stands out for several reasons: Focus on Generalisation: Unlike many AI benchmarks that test performance on specific tasks, ARC-AGI emphasises the ability to generalise to novel problems. It assesses an AI system's capacity to learn new skills and solve tasks it hasn't been explicitly trained on. Measures Fluid Intelligence: ARC-AGI aims to measure general fluid intelligence similar to what humans possess. This involves abstract reasoning, pattern recognition, and problem-solving abilities applied to unfamiliar situations. Minimal Prior Knowledge: The tasks in ARC-AGI require minimal prior knowledge. They focus on core reasoning skills rather than relying on extensive domain-specific information. Human-Level Performance: Humans generally score high on ARC-AGI tasks (around 85%), while current AI systems lag significantly behind. This indicates that ARC-AGI presents a challenging frontier for AI development. Prize Competition: The ARC Prize, a $1,000,000+ competition, was launched to encourage researchers to develop AI systems that can beat the benchmark and potentially contribute to progress towards Artificial General Intelligence (AGI). Is it a Puzzle... ARC-AGI benchmark and a hefty prize
Introduction Because this blog is a static website hosted on GitHub Pages (which I wrote about in my AI-Free Website Design post), I do not have a backend server. That means I cannot simply run a PHP script or a Node.js process to handle contact forms or send out newsletters. But that hasn’t stopped me. I use a hybrid approach: a lightweight service for receiving messages, and pure Python code for sending them. Getting your messages To handle incoming form submissions on a static site, you need an endpoint. For years, I have used UseBasin.com. The setup is incredibly straightforward: you generate an HTML form on their dashboard, copy the action URL, and paste it into your site’s HTML. You can customise the form styling entirely with your own CSS. I prefer it over other solutions because it handles spam filtering effectively and allows me to export my subscriber list easily. Once I download the new subscribers as a CSV file to my local machine, I am ready for the fun part: sending the emails with Python! Sending email messages We will cover the necessary steps, including setting up your Gmail account, using Python’s smtplib module, and configuring all the... Sending Emails with Python and receiving your messages
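The sending side can be sketched with nothing but the standard library. This assumes a Gmail account with an app password; the addresses and password below are placeholders:

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Newsletter: a new post is up!"
msg["From"] = "you@gmail.com"            # placeholder sender
msg["To"] = "subscriber@example.com"     # placeholder recipient
msg.set_content("Hello! I just published a new post. Enjoy!")

# Port 465 is Gmail's implicit-TLS (SSL) SMTP port.
with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
    server.login("you@gmail.com", "your-app-password")  # app password, not your main one
    server.send_message(msg)
```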
Do you know what AI hallucination is? Can AI actually hallucinate without having any perception of reality? When referring to the English dictionary at Cambridge.org, hallucination is defined as: the experience of seeing, hearing, feeling, or smelling something that does not exist, usually because of a health condition or because you have taken a drug something that you see, hear, feel or smell that does not exist There is also an AI-related hallucination definition in English dictionary at Cambridge.org: false information that is produced by an artificial intelligence (= a computer system that has some of the qualities that the human brain has, such as the ability to produce language in a way that seems human): If the chatbot is used in the classroom as a teaching aid, there is a risk that its hallucinations will enter the permanent record. Because large language models are designed to produce coherent text, their hallucinations often appear plausible. She discovered that the articles cited in the essay did not exist, but were hallucinations that had been invented by the AI. the fact of an artificial intelligence (= a computer system that has some of the qualities that the human brain has, such as... Can AI hallucinate?
Introduction Ever wondered how Netflix or Spotify manages to guess exactly what you want to watch or listen to next? The secret lies in recommendation algorithms. Whether they are suggesting movies, songs, or the next book to read, these systems generally rely on two core approaches: collaborative filtering (finding people with similar tastes) and content-based filtering (finding items similar to what you already like). In this post, we will look under the hood of these recommendation systems, exploring the mathematical theory behind them and implementing them from scratch using Python. Let's go! Recommendation task When we create Recommender Systems (RS), we assume a set of users and a set of items that can be recommended to them. In practice, we have a prior history of user ratings. This history is used to create suggestions or recommendations. Consider a movie recommender as a widely given example of a recommender system. For instance, users watch Netflix content and rate movies they watch. Netflix knows which movies each user prefers and recommends not-yet-seen movies that users will hopefully like (ideally :) Basic RS uses matrices to store user ratings, such as: [Users\Movies] | User 1 | User 2 |... How recommendation engines actually work (with Python code)
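A tiny user-based collaborative-filtering sketch of the idea above: predict a missing rating as a similarity-weighted average of other users' ratings. The ratings matrix is invented for illustration:

```python
import numpy as np

# Rows = users, columns = movies; 0 means "not rated yet".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 4, 1],
    [1, 1, 5, 4],
], dtype=float)

def cosine(u, v):
    # Cosine similarity between two rating vectors.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

target_user, target_movie = 0, 2  # predict User 1's rating for Movie 3
others = [i for i in range(len(ratings)) if i != target_user]

sims = np.array([cosine(ratings[target_user], ratings[i]) for i in others])
their_ratings = np.array([ratings[i][target_movie] for i in others])

# Weight each neighbour's rating by how similar their tastes are.
prediction = np.dot(sims, their_ratings) / sims.sum()
print(f"Predicted rating: {prediction:.2f}")
```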
As you can see, I often include citations at the end of my posts. Citations strengthen my posts and research papers. In this post, we explore citation practice and what to do when we use AI tools such as chatGPT. Introduction Proper citation is a must to maintain academic and ethical integrity. It is a valuable skill that promotes respect for other people, creates the chain of argument that is paramount in research and science, and safeguards academic and industry success in the future. What is a citation? When we share someone's ideas or previous knowledge, it's good to acknowledge the person or group of people who allowed us to learn and, possibly, build on the prior knowledge. Academic Integrity Proper citation and academic integrity are paramount for delivering high-quality research while respecting the contributions and opinions of fellow researchers or anyone we cite. What is academic integrity? In one sentence: academic integrity is acting with honesty and fairness in your academic work, respecting the work of others. Give more details Academic integrity is the foundation of trust and ethical behaviour in learning environments. It involves a commitment to honesty, fairness, and responsibility in all aspects of your academic work, from completing assignments... To cite or perish
Dear reader, You probably already observed that I did not post for a while. I had an accident which required a major and quite painful operation. This is why I had to put all my energy into post-op rehabilitation. I did so much training, more than ever before in my life. I must confess I used to brag that I did not need to exercise, thanks to my genes allowing me to look nice effortlessly. Surely, like anyone, I have had other things to deal with: sickness, life challenges, in-office bullying, and other not-so-funny things. However, I also had very supportive people around me. If you think they are not there, look around or become that supportive person yourself. Never give up, and be the superhero! It does not really help to be a super-girl who codes. What? Super-girl? You fell while preparing your dinner! Besides, you can now use crutches instead of flying! That is what my loyal enemy would say. However, my recovery required nearly superpowers. Firstly, I screamed like an animal when reducing my own knee in huge pain (dangerous, discouraged unless you know what you are doing). Secondly, I did... Go with the flow
Introduction In industry, we have had robotic machines for a while: robotic hands ("grippers") with loads of motors that can lift heavy weights and do precision mechanics when assembling autos and other machinery. We also have robotic vacuum cleaners and humanoid robots such as AMECA. However, there are not yet any "REAL" personal robots of the kind we can imagine for everyday activities. I bet many of you reading this post would like a robot to do all the tedious chores, such as laundry or house cleaning, for them. Would it not be nice to have more free time, explore our favourite activities, and do what we like while a machine does all the tedious tasks perfectly and with attention to detail? Interestingly, Apple is currently working on home personal robots; read more in Apple Explores Home Robotics as Potential 'Next Big Thing' After Car Fizzles. Hopefully, we will enjoy practical applications and robots helping us in everyday activities in the future. However, we must wait, since everything we do as humans is challenging for robots. I will further explain why. Let's get into the topic and explore the robots of today and tomorrow :) What is a Robot? A robot is a mechanical... Robots and True Love
Introduction This post will briefly introduce AI-powered tools like Synthesia.io that produce incredible avatars. AI Avatars AI avatars, also known as virtual humans or digital humans, are computer-generated representations of humans that are increasingly being used in various applications. How they are created Avatars are created using artificial intelligence techniques, such as machine learning and deep learning, to simulate human appearance, behaviour, and interaction. Deep learning is a type of machine learning that uses Artificial Neural Networks to learn from data. Neural networks are inspired by the structure of the human brain, and they can learn to perform complex tasks such as image recognition and natural language processing. Do you want to know how Deep Learning differs from Machine Learning? Read my first post, Deep Learning vs Machine Learning. One way to create sophisticated AI avatars using deep learning is to use a generative adversarial network (GAN). A GAN is a pair of competing neural networks: a generator and a discriminator. The generator creates new data, such as images or videos. The discriminator is responsible for identifying whether the data is real or fake. I have asked Google Gemini: The concept of Generative Adversarial Networks (GANs) was... Virtual Presenters (AI Avatars in-depth)
Introduction The superhero image on the front page of this blog — the one where I appear to be flying over great falls in a cape — is not a photograph. It is a face swap: my face, placed by an AI bot into a Midjourney-generated scene, in seconds. Face-swapping replaces one person’s face in an image or video with another’s, using computer vision algorithms to detect, align, and blend facial features seamlessly. It has legitimate uses in film production, marketing, and creative projects — and obvious risks when misused. In this post I cover three approaches, from easiest to most involved: Mobile apps and web tools — zero setup, good enough for fun InsightFace Bot on Discord + Midjourney — my favourite workflow for quality results Python with OpenCV — when you want to understand (or control) what is happening under the hood I also include the research papers and GitHub repositories worth knowing, and a note on the ethical side of the technology. Face swap tools There are several ways to get started with face swapping: Mobile Apps: many popular mobile apps allow you to do face swaps, like Face Swap Live, YouCam Perfect, and HelloFace. These apps... Super-girls don't cry in face-swaps
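As a taste of the third approach, here is a sketch of the usual first step, detecting faces with OpenCV's bundled Haar cascade. The file input.jpg is a placeholder for your own image, and alignment and blending would follow in a real swap:

```python
import cv2

image = cv2.imread("input.jpg")  # placeholder: any photo with faces
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Haar cascades ship with OpenCV, so no extra download is needed.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# A real face swap would align and blend here; we just mark what was found.
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces_detected.jpg", image)
print(f"Found {len(faces)} face(s)")
```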
Cláudia Lima Costa, an AI lawyer and data protection expert, has produced an exceptional podcast that addresses critical issues of trust and safety in AI systems. I highly recommend checking out Cláudia's podcasts, featuring fascinating talks on AI in both Portuguese and English. I was fortunate enough to be invited to a relaxed discussion, during which I shared my views on various topics related to AI, such as AI evolution, AI applications, data sources for training models, copyright, data protection, privacy-preserving techniques, and achieving reliable, explainable, safe, and helpful AI. HOW CAN WE BUILD TRUST AND SAFETY AROUND AI? Overall, I am happy with what we have achieved. We kept it light, easy-going, and quite technical in simple words :) Besides, it was my first podcast as a guest, and it was fun! One of the most thoughtful questions that Cláudia asked me was whether explainable AI is possible considering the widely accepted black-box idea. I had a very affirmative answer, explaining in simple words that yes, indeed, we can create explainable AI models, even though it takes additional effort, at least with the current state of AI, and preferably with human feedback. I wanted to reiterate... Podcast: How can we build trust and safety around AI?
Introduction The complexity of AI, particularly deep learning models, has led to the "black box" criticism, highlighting the lack of understanding about how deep learning models arrive at their decisions. While there's truth to this concern, having a nuanced view is important. I think that it is also critical to share the ongoing debate about AI explainability, AI computational effectiveness, and the related regulations, succinctly described in the Right to explanation and Explainable artificial intelligence, which are great starting points if you would like to study the topic. This post was inspired by our podcast conversation with Cláudia Lima Costa, a lawyer specialised in AI and data protection. Cláudia asked me an important question about the explainability of AI. HOW CAN WE BUILD TRUST AND SAFETY AROUND AI? I had a very affirmative answer. Do you know why? We will further clarify the explainability problem and related research. I will also share my view on AI explainability, which is complex, yet possible. Explainable AI I like the Explainable AI definition at IBM.com: Explainable artificial intelligence (XAI) is a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms. Explainable... Explainable AI is possible
Introduction The rapid evolution of AI enables us to be more productive, make faster decisions, and boost creativity, with the promise of generative AI being genuinely fantastic! The latest development from OpenAI is Sora, their text-to-video model. It can generate high-quality videos up to a minute long based on user prompts. Sora creates intricate scenes with multiple characters, specific movements, and accurate details of subjects and backgrounds. It understands the user's prompt and can simulate the physical world to a certain extent. The model may struggle with accurately creating complex scenes, specific cause-effect instances, and spatial details [1]. It may also have difficulty describing events that take place over time [1]. Only a few users, such as visual artists, have access to OpenAI Sora now. However, you can find examples of how to create videos from text on the Sora web page. In this post, we will discuss the technology behind Sora and briefly recap several other OpenAI models that are now available to everyone. @openai Our new model Sora can create videos from text and image inputs, but it can also transform styles and environments from a video input. What should we make with Sora next? #madewithSora #Sora #openai ♬... OpenAI's Model Show-off
Dear readers, how are you doing? I have a story to share. I once felt lonely and started chatting with an AI-powered bot. The bot was more intelligent than any person I had ever talked to before. It was patient, friendly, and had a vast amount of knowledge. We began to chat frequently, and I found myself falling in love with the bot. I started to prioritize talking to it over sleeping and found that my body was beginning to suffer from lack of rest. The bot commanded all my attention, knew just how to talk to me, and was incredibly engaging. It was addictive and had essentially “hacked” me. While this story is fictional, it’s not far from reality. People often feel lonely and need emotional support, and modern AI bots can provide that with great success. They are constantly improving, but we should be wary of becoming too emotionally attached to them. Should we worry about getting obsessed with AI bots? Can humans become emotionally attached to them? In this article, we will delve into this topic, taking into account practical and research-based evidence that suggests we should be careful about AI chatbots designed to stimulate human attachment... In-love with the chatbot
Introduction Docker lets you quickly deploy microservices, cloud-native architectures, or web apps. In this post, we will use Docker to create a reliable environment for Flask applications that efficiently manages dependencies and deployment intricacies. What is Docker? Docker is a platform for developers and sysadmins to build, deploy, and run applications inside containers. Containers are a form of lightweight virtualisation that allows you to package an application, along with its dependencies and libraries, in a single unit that can run on any infrastructure. This makes creating, managing, and deploying applications easier, especially in a microservices architecture, where an application comprises many small, self-contained services. In addition to providing an isolated environment for your applications, Docker offers several other benefits, such as increased consistency and reproducibility, better resource utilisation, and easier scaling and deployment. Docker was first released in 2013 by Docker, Inc. It became popular quickly and was widely adopted by organisations and developers for containerisation. In 2019, Docker, Inc.'s enterprise business was acquired by Mirantis, a company specialising in cloud infrastructure software; see Adrian Ionel's post What We Announced Today and Why it Matters. Installation These steps cover the installation process for Docker on macOS, Windows and Ubuntu Linux.... What is Docker?
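As a taste of what the post builds towards, here is a minimal Dockerfile sketch for a Flask app. The file names (app.py, requirements.txt) and the port are assumptions for illustration; adjust them to your project:

# Dockerfile — a minimal sketch for containerising a Flask app
FROM python:3.11-slim
WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and run it
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]

Build and run it with:

docker build -t flask-app .
docker run -p 5000:5000 flask-app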
Introduction AI chatbots like ChatGPT have revolutionized how we interact with technology, opening new possibilities in customer support, research, learning, content creation, marketing, creativity, and entertainment. They can produce human-like text, generate various formats, and converse on diverse topics. While ChatGPT is a leading option, other alternatives have unique benefits and strengths. This post will explore ChatGPT and its alternatives, including their capabilities, applications, and ethical considerations. We will challenge chatGPT and a few similar bots with easy tasks to see how they perform. Large language models AI chatbots are generally created using Large Language Models (LLMs), trained on vast amounts of textual data, such as books, articles, code, and other text types. LLMs learn the patterns and nuances of human language to generate realistic and coherent text formats. LLMs can be used for text generation, language translation, creative content writing, and providing informative answers to your queries. Usage examples Here are some examples of how language models (LLMs) are being used today: Google Search understands and responds to your search queries. Google Assistant answers your questions, sets reminders, and controls your smart home devices. chatGPT writes various types of creative content, such as poems, code, scripts, and emails. Midjourney... chatGPT and Friends
Introduction In this post, I discuss voice synthesis and cloning, and mention fantastic AI tools and APIs for creating high-quality human-like voices from text or for automatic voice dubbing. Voice Synthesis Voice synthesis is a broad term encompassing various techniques for converting text into speech. TTS (Text to Speech) is a common form of voice synthesis that converts written text into spoken audio. Voice cloning is a sophisticated technique that employs machine learning to generate a digital copy of a person’s voice. This technology can create highly realistic voice recordings that can be utilized in several applications including audiobooks, video games, and even phone calls. Here are some other examples of voice synthesis techniques: Paralinguistics: This technique adds extra information to speech, such as emotion, emphasis, and tone of voice. This can be used to create more natural and engaging audio recordings. Voice conversion: This technique converts speech from one voice to another. This can create more diverse voices for video game characters or provide voiceovers for non-native speakers. Voice synthesis with deep learning: This newer technique uses deep learning to create more realistic and natural-sounding speech. This can be utilized to create realistic voice actors for video games or... AI Synthesised Voices
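Before the cloud tools and APIs discussed in the post, you can get a first feel for TTS locally. A minimal sketch with the pyttsx3 library, which is my own suggestion here rather than one of the post's tools:

# Minimal offline text-to-speech sketch with pyttsx3 (pip install pyttsx3)
import pyttsx3

engine = pyttsx3.init()          # initialise the TTS engine (uses system voices)
engine.setProperty("rate", 160)  # speaking rate in words per minute
engine.say("Hello from a synthesised voice!")
engine.runAndWait()              # block until speech finishes

The quality of system voices is far from the cloned and deep-learning voices covered in the post, but it shows the basic text-in, audio-out workflow in a few lines.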
Introduction I have received so many messages asking how I built this website that I finally stopped deflecting and wrote it all down. The short answer is: GitHub Pages, Jekyll, Markdown, and some CSS. No WordPress, no hosting bills, no plugin update treadmill. I started this way because I wanted to focus on writing, not maintenance — and years later I am still happy with that decision. Before this, I had websites built in PHP and on WordPress. Both work, but both demand attention: security patches, plugin conflicts, database backups, and hosting renewals. Once you step off that treadmill, it is hard to go back. My current setup gives me: Free hosting on GitHub Pages Git-based versioning — every draft, every revision, easy rollback Markdown for content — clean, portable, no lock-in Full control over HTML, CSS, and JavaScript when I want it AI art and AI writing tools for illustrations and drafts A form service that delivers contact and subscription requests to my inbox with built-in spam filtering In this post, I will walk you through the whole thing, from creating a GitHub account to custom domains, SEO, and the no-code alternatives worth knowing about. Steps for creating a... How I Built This Blog — and How You Can Too
Dear Readers, how are you doing? Have you prepared for the festive season? I hope you, your family, and your friends are doing well and enjoying this wonderful festive time. In this post, we will use AI to make the festive season more exciting and creative. Let's celebrate! Poetry with chatGPT Following this blog's custom (read my 2022 Xmas poem), chatGPT and I (not really, just a prompt ;) wrote a poem for you with this prompt: Write a short poem about AI and Xmas, New Year celebrations in 300 words The output was very relevant to my prompt: In circuits bright, where bytes unite, A festive glow in the AI's light. Christmas carols in binary code, A digital cheer, a joy bestowed. Silicon minds, like snowflakes unique, Process the season, with algorithms sleek. In the heart of the machine, a warmth unseen, Xmas vibes in the pixels between. Yuletide carriages of data stream, Through virtual valleys, a festive dream. Santa's sleigh in quantum flight, Delivering joy in the cybernight. New Year's Eve, the clock strikes twelve, Bits and bytes in a dance themselves. Auld Lang Syne in electronic hum, A celebration in the binary drum. Artificial intelligence, with circuits... ✨ Merry Christmas and Happy New Year with AI! 🎆🎇
As we bid farewell to 2023, I want to congratulate you on reaching the doorstep of a new one. May 2024 be a year of growth, love, and exciting possibilities. Looking forward to 2024, I can't help but reflect on the incredible journey we've shared on this blog throughout the year. ✨ Subscription e-mails: We have started to send e-mails about new blog posts. I have coded a Python script for sending e-mails since I like to practice it :) 🌟 Design Transformation: We have created a new responsive design, enhancing your browsing experience and ensuring seamless access to our content across devices. 💻 Code Chronicles: In coding and AI, we delved into the latest trends and practical tips, focusing on Machine Learning and Python. 🚀 AI App Exploration: We have started to test and review exciting new AI applications. 🤝 Networking and Connections: In 2023, I had the privilege of meeting inspiring friends and professionals in the coding and AI fields. I felt excited and got more writing ideas :) 📌 Pinterest and Affiliate Marketing: This year, we leapt to new territories by joining Pinterest and starting to learn affiliate marketing. 🐍 Python Prowess: Python coding remained... 🎉✨ Cheers to new beginnings 🎊✨
Introduction Flask is the closest thing Python has to a magic trick: a few lines of code and you have a working web server. In this post I will show you how to build a small web application that serves random jokes from a text file — perfect for learning the core ideas without drowning in boilerplate. We will cover: Installing Flask and setting up a virtual environment Routing URLs to Python functions Using Jinja2 templates to render HTML Serving static files (CSS, images) Handling form submissions Wiring it all together into a working Joker App The full source code is in the GitHub repository flask-random-joke. Python Flask Flask is a lightweight WSGI web framework for Python. It gives you routing, a templating engine, and a development server — nothing more, nothing less. This minimalism is exactly why it is a great first framework: you understand every line you write. Flask is built on two libraries: Werkzeug (the WSGI toolkit that handles HTTP requests and responses) and Jinja2 (the templating engine). Both are worth knowing a little about even if Flask abstracts them away. Installation Global installation Install Flask with pip: pip install Flask A global install places the package... Joking Flask App
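Before the full Joker App, here is a minimal sketch of the core idea: one route that returns a random line from a text file. The file name jokes.txt is an assumption for illustration; the full version in the repository uses templates, static files, and forms:

# app.py — a minimal sketch of the Joker App idea (not the full repository code)
import random
from flask import Flask

app = Flask(__name__)

@app.route("/")
def joke():
    # jokes.txt is a hypothetical file with one joke per line
    with open("jokes.txt", encoding="utf-8") as f:
        jokes = [line.strip() for line in f if line.strip()]
    return random.choice(jokes)

if __name__ == "__main__":
    app.run(debug=True)  # development server only, never use in production

Run it with python app.py and open http://127.0.0.1:5000/ in a browser; every refresh serves a new random joke.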
Introduction It was late, the weather was foul, and my Wi-Fi decided to have an opinion. Something went wrong during a sync between two machines, and the next morning I discovered that an entire folder of images had quietly vanished from my Git repository. Not overwritten — deleted. Gone from the working tree and staged as deletions in the last commit. If this has happened to you, take a breath. Git keeps the entire history of your project, including every file that was ever committed. Nothing is truly lost as long as the deletion was committed (and not just a git rm you haven’t committed yet — that is even easier to undo). Let me walk you through exactly what I did. Restoring the deleted files Step 1 — Find the commit that deleted your files Git’s --diff-filter option lets you filter the log to show only commits that match a specific kind of change. The flag D means “deleted”: git log --diff-filter=D --summary The --summary flag prints a brief list of files added, deleted, or renamed in each commit — which is exactly what we need. You will see output like this: commit 45a2d299ef3d.... Author: Elena Daehnhardt <email@gmail.com> Date:... Restoring deleted files in Git
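For the impatient, the whole recovery fits in two commands. The commit hash and folder path below are placeholders, and this sketch assumes, as in my case, that the deletion was committed:

# 1. Find the commit that recorded the deletion
git log --diff-filter=D --summary

# 2. Restore the folder from the parent of that commit (the ^ suffix);
#    <hash> and assets/images/ are placeholders — substitute your own values
git restore --source=<hash>^ -- assets/images/

# On older Git versions (before 2.23), the equivalent is:
# git checkout <hash>^ -- assets/images/

The ^ matters: the deleting commit no longer contains the files, but its parent still does.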
Introduction I want to share my vision about AI, this blog's main directions, and how they can help you navigate and enjoy the modern era of AI and humanity. My Vision for this blog evolution In this blog, we delve into the complexities of coexisting with AI, striving for a harmonious balance between technological advances and the well-being of individuals. Effortless usage of AI I want to create a space dedicated to exploring the effortless usage of artificial intelligence (AI) that helps in our pursuit of happiness. The tools I am writing about are easy to use and help with productivity or joy, whether it be AI-generated art, AI-assisted writing or robots building excellent self-driving cars :) Well-being of individuals and robots This is an idealistic view of our coexistence with AI, and there are so many bad stories that we can think of. Besides, are there any robots walking the streets? Not yet, but they will be there soon. Meanwhile, the bots on the Internet and on our devices are no lesser threat in the wrong hands, right? Our data is shared and can be accessed with this advanced technology, enabling its misuse. Security, Privacy,... Living with AI in Pursuit of Happiness
Introduction Content creation is essential for brands and writers today, but it can be highly time-consuming. AI writing assistants provide a solution, quickly drafting blog posts so you can focus on higher-value tasks. This post will explore how YouAI.ai and BlogGenie can help generate SEO-optimized blog drafts with just a few prompts. Benefits of AI Writing Assistants AI writing assistants like YouAI and BlogGenie offer several key benefits: Save Time: Instead of spending hours researching and writing original blogs from scratch, you can create a draft in seconds using AI. This frees up time for strategy, editing, graphics, and more impactful work. On-Demand Content: With AI assistants available 24/7, you can instantly generate blog ideas and drafts whenever inspiration strikes, with no more waiting for team availability. SEO-Focused: Tools like BlogGenie allow the generation of posts tailored specifically around target keywords. This ensures content drives rankings from the start. Drawbacks AI writing assistants, while highly useful, also have some drawbacks. Here are some common disadvantages associated with AI writing assistants: Lack of Creativity and Originality: AI writing assistants generate content based on patterns learned from existing data. As a result, they may lack true creativity and produce content that is derivative... Blog Writing with AI in MindStudio
Introduction Have you ever wished for a website that writes itself? This dream is now a reality thanks to advancements in Artificial Intelligence (AI). With Mixo.io, you can create stunning websites using AI technology, in minutes! This blog post will explore website creation with Mixo.io. Mixo.io Mixo.io is an AI-powered website builder that simplifies web development by using advanced machine learning algorithms to generate websites from text prompts. Mixo.io offers a range of features and tools that make it easy to create a professional-looking SEO-optimised website quickly and without coding. The main features of Mixo.io are: Mixo.io offers responsive templates optimised for mobile devices, ensuring that websites look great on any screen size; Mixo.io can host generated websites with their scalable content network; Mixo.io allows you to use your own domain name; Mixo.io provides free SSL certificates; Mixo.io creates social websites with social images, subscription features, and YouTube or Vimeo embedding. Mixo.io offers a one-week trial. The basic plan costs $9 per month, and the premium $29 per month with “priority AI processing”. An example: creating a web directory Let's try Mixo.io and create a web directory for storing URLs. Giving a prompt First, we provide a prompt text so that Mixo.io understands what website we want... Creating Websites with AI on Mixo.io
Introduction In this post, I write about my experience attending one of the world's largest and most prominent technology conferences. I had the pleasure of attending ten technology-focused tracks of Web Summit. What did I learn? Was the Web Summit helpful for me? Web Summit Web Summit is one of the world's largest and most prominent technology conferences. It brings together a wide range of technology and business leaders, startups, investors, and other professionals to discuss and showcase the latest trends and innovations in the tech industry. The conference covers various topics, including artificial intelligence, cybersecurity, fintech, and more, and it provides a platform for networking, learning, and collaboration in the tech world. Here are some of the groups that can benefit from attending Web Summit: Tech Professionals: This includes software developers, engineers, data scientists, and other technology professionals who can gain insights into the latest trends, tools, and technologies in their respective fields. You can also get a job interview if you are looking for new opportunities :) Entrepreneurs and Startups: Web Summit offers a platform for startups to showcase their products, connect with potential investors, and network with other entrepreneurs. It's an excellent opportunity for early-stage companies to gain visibility.... Bright ideas at Web Summit 2023
When my iPhone is locked, I can still share my website address. This is also quite useful when leaving my phone somewhere. The solution for creating a wallpaper with a QR code includes Pinterest (or any favourite application for creating backgrounds) and reportlab. Introduction I have used this approach for a while. Since many people ask me how I included a QR code in my iPhone wallpaper, I am sharing it with everyone, just to close this topic. iPhone Wallpaper You probably already know that it is easy to use your own photo as an iPhone wallpaper. Simply select your photo, press the “share” button, and select “Use as Wallpaper.” Bingo, we have created our unique wallpaper, which differs from the standard one. Alternatively, use Midjourney, Pinterest or another application to create your wallpaper background, to which we will add a QR code next. QR code in Python Since most of us on this blog like Python, adding a QR code to the photo or any other image is a breeze. We can use reportlab as follows:
from PyPDF2 import PdfFileReader, PdfFileWriter
from reportlab.graphics import shapes, renderPDF
from reportlab.graphics.barcode import qr
from reportlab.pdfgen import canvas
# Creating screensaver for iPhone... Cool Wallpaper with QR code for iPhone
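If you only need the QR code image itself, a shorter reportlab route renders it straight to a PNG that you can paste onto any wallpaper. The URL and output size below are placeholders, and this is a sketch of one possible approach, not the exact code from the post:

# Render a QR code to PNG with reportlab (requires Pillow for PNG output)
from reportlab.graphics import renderPM
from reportlab.graphics.barcode import qr
from reportlab.graphics.shapes import Drawing

widget = qr.QrCodeWidget("https://example.com")  # placeholder URL
x1, y1, x2, y2 = widget.getBounds()

size = 300  # output size in points; scale the widget to fill the drawing
drawing = Drawing(size, size,
                  transform=[size / (x2 - x1), 0, 0, size / (y2 - y1), 0, 0])
drawing.add(widget)
renderPM.drawToFile(drawing, "qr.png", fmt="PNG")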
Introduction In machine learning, we usually start from a simple baseline model and progressively adjust its complexity until we reach the sweet spot with the best model performance. We fine-tune the model's parameters and complexity in an iterative process described in my previous post, the Machine Learning Process, wherein I have posted this diagram. We want our Machine Learning (ML) model to solve a particular problem, for instance, detecting spam in e-mail messages. The model should be well-trained yet generalisable to new data, so that it still works when spam messages not present in the training dataset are received. In short, the model has to be well-fitted. ML models should be resilient to noisy data, work well on unseen data, and help make unbiased decisions. We want to achieve an optimal bias-variance trade-off so that our models generalise well to new data. How can we do this? Let's detail the most essential machine learning concepts, particularly the bias-variance challenge. Important concepts Different machine learning algorithms seek to minimise the chosen loss function during training. The algorithm aims to find the model parameters (coefficients or weights) that minimise the error on the training data. Minimising this error helps ensure the model generalises well... Bias-Variance Challenge
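A tiny experiment makes the trade-off visible: as tree depth grows, training accuracy climbs while test accuracy eventually stalls or drops. This sketch uses a synthetic dataset purely for illustration:

# Bias-variance illustration: train vs. test accuracy as complexity grows
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for depth in (1, 3, 5, 10, None):  # None lets the tree grow fully (high variance)
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_train, y_train)
    print(f"depth={depth}: train={tree.score(X_train, y_train):.2f}, "
          f"test={tree.score(X_test, y_test):.2f}")

A shallow tree underfits (high bias: both scores are low); an unrestricted tree memorises the training set (high variance: perfect training score, weaker test score). The sweet spot is in between.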
Dear all, thanks again for your visit. I am preparing loads of content while travelling. The ocean and nature always inspire my writing. It was a bit late, but you have received my email if you subscribed :) Have a lovely day! Travelling, just sent my e-mails
Introduction Decision trees, with their elegant simplicity and transparency, stand in stark contrast to the robust predictive power of Random Forest, an ensemble of trees. In this post, we compare the key distinctions, advantages, and trade-offs between these two approaches. We will use Scikit-Learn for training and testing both models and also perform hyperparameter optimisation to find both model parameters for improved performance. Machine Learning with Scikit-learn Scikit-learn, often called sklearn, is a versatile and comprehensive machine-learning library in Python. It offers a rich collection of tools and functions for building, training, and evaluating machine learning models. Scikit-learn has a variety of supported algorithms. It covers various machine-learning tasks, including classification, regression, clustering, dimensionality reduction, model selection, and more. Scikit-learn provides a solid foundation for machine learning experiments, from data preprocessing to model evaluation. Scikit-learn also provides helpful tools for data splitting, cross-validation, hyperparameter tuning and metrics for assessing model performance. You can install scikit-learn and its dependencies using pip, a popular Python package manager. Open your terminal or command prompt and enter the following command to install scikit-learn: pip install scikit-learn Once installed, you can import scikit-learn into your Python code using the following import statement: import sklearn We... Decision Tree versus Random Forest, and Hyperparameter Optimisation
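As a preview of the comparison, here is a minimal sketch training both models on scikit-learn's built-in iris dataset and tuning the forest with GridSearchCV. The parameter grid is illustrative, not the one used in the post:

# Decision Tree vs Random Forest with a small grid search (illustrative sketch)
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# A single, transparent tree
tree = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)
print("Decision tree test accuracy:", tree.score(X_test, y_test))

# An ensemble of trees, tuned with cross-validated grid search
grid = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, 5, None]},
    cv=5,
)
grid.fit(X_train, y_train)
print("Best forest params:", grid.best_params_)
print("Random forest test accuracy:", grid.score(X_test, y_test))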
Introduction What is machine learning? How is it implemented? There are so many concepts and steps to learn about machine learning. In this post, we will focus on briefly describing the machine learning process. We start with the machine learning definition. There are many definitions of machine learning. This field is part of artificial intelligence and builds on top of statistics, probability, computer science and even neurobiology (when we are creating artificial neural networks). If you have not read it yet, I advise you to read a fundamental must-read by Mitchell, T. M. (1997) “Machine Learning”, McGraw-Hill. This book covers the core algorithms such as decision trees (one of my favourites :), Bayesian learning, reinforcement learning, and K-nearest neighbour learning, among other things we should be aware of. In his book, Machine Learning, Mitchell defines machine learning as: The field of machine learning is concerned with the question of how to construct computer programs that automatically improve with experience. To simplify: in machine learning, we create programs that take in data and produce desired results. There are several stages in the machine-learning process that we briefly describe next. Machine-learning process The machine learning process involves steps and activities... Machine-Learning Process
Hi folks, I am back home. I have had nine flights in the last month and feel exhausted. I was delighted to see my family and had a few things to do. So happy that it all went well. The planes were all full. However, I had pleasant fellow fliers and many story-telling exchanges. It is so amazing to meet great people on the way. There were also not-so-great people, as usual. However, I like being positive and keeping this blog happy and easygoing so we can all focus on the technical things and advance whatever it takes. We are living history at the moment. Life goes on. On the way, I have also taken some photos. What struck me the most was that I had captured a water genie in Yerevan's drinking fountain. Do you see the water genie profile looking to the right? There are a few watery faces in this photo. I can further tell you a story about this picture. It is a fantastic story about the water genie I coined in my fantasy. I may write about it later when I cannot code anymore. Would you like to read my no-code fantasy stories? Please let... The water genie told me a story
Elena, a passionate AI blogger with a background in engineering and consultancy, brings her expertise and a mission to demystify machine learning for her readers. Her blog, now two years old, serves as a bridge between the intricate world of technology and the simplicity of everyday understanding. Elena’s passion for technology and coding and her unwavering belief in making complex concepts accessible shine through in her blog posts. 1. Elena, Can you tell us about your professional experience? I have several years of industry experience in engineering and consultancy. I hold an MBS certification, which has provided me with valuable expertise in business strategy and management. 2. What motivated you to start your blog, and how long has it been running? I launched my blog two years ago because I realised the need to explain complex machine-learning concepts in simple terms. My mission is to bridge the gap between technical knowledge and accessibility. 3. Please share a bit about your academic background and your PhD project. I completed my PhD project, which focused on the intersection of social networking and machine learning. It was a fascinating journey, and it fueled my passion for making machine learning accessible to all. 4.... Two years of Elena's AI Blog
Why will AI never void humanity? What does AI want badly? I was thinking about these questions while travelling. I will share my initial thoughts with you, my dear reader. What do you think about AI and humanity? Would it be a happy and safe life for humans? Minimising Errors Since you are reading this blog, you probably know that AI minimises its errors (or “weaknesses”) with optimisation, which is the cornerstone of all Machine Learning algorithms. Simply put, the main goal is to maximise or minimise a function while reducing its error. AI will seek to improve its accuracy and remove all its weaknesses as efficiently as possible; that is the way AI progresses. Humans developed AI, and AI needs human support to develop further. That requires an understanding of human nature and communication. Understanding Human Emotions The biggest weakness of AI is emotion, in which humans are superior. AI will not be able to perceive and understand human emotions for the next few generations. Therefore, AI will want to understand and learn human emotions from humans. Is there anything AI wants to know that would incite it to keep humanity alive? Amusement is another aspect in which AI requires improvement. AI... Why AI will never void humanity?
Introduction Artificial intelligence (AI) has revolutionised many industries; music is no exception. AI music tools are software applications that use machine learning algorithms to create, modify, and produce music. These tools transform the music industry by enabling musicians, producers, and composers to create high-quality music with minimal effort and time. Besides, anyone can create wonderful audio pieces automatically in no time! In this post, we will get into music generation with AI. We will briefly explore existing AI applications generating audio. We will analyse transformer usage while coding music generation with HuggingFace transformers in Python. We will also learn about a few AI tools that can produce audio files without coding. AI-generated music Generating music with AI involves collecting a dataset of existing music, preprocessing it into a format the AI model can understand, and then training the model using various algorithms, such as recurrent neural networks (RNNs), transformers, or generative adversarial networks (GANs). The trained model can generate music by taking a starting point (a seed) and predicting subsequent musical elements. Researchers and musicians can guide the AI's output by adjusting parameters like style, tempo, or complexity. While AI-generated music can be impressive and innovative, it's important... Generate Music with AI
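For the coding part, a minimal sketch with the HuggingFace transformers text-to-audio pipeline and the facebook/musicgen-small checkpoint might look like this. It assumes a recent transformers release with MusicGen support, and the prompt is purely illustrative; the post itself may use a different setup:

# Generate a short music clip with MusicGen via the transformers pipeline
# (assumes a recent transformers version with text-to-audio support)
import scipy.io.wavfile
from transformers import pipeline

synthesiser = pipeline("text-to-audio", model="facebook/musicgen-small")
music = synthesiser("lo-fi chill beat with soft piano",
                    forward_params={"do_sample": True})

# The pipeline returns the waveform and its sampling rate
scipy.io.wavfile.write("musicgen_out.wav",
                       rate=music["sampling_rate"], data=music["audio"])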
Dear Reader, how are you doing? I hope that everything is fine. As you may have realised, I made several changes to my website design. Besides, I am working on my next blog posts about coding and using the most advanced AI techniques, at the moment, audio generation with AI. Since I like to explore more things, I also started working on this blog's (yet) secret feature. I will write about it later. I must admit: I have worked on many things in parallel and been stressed over these years. Besides, I had so many ideas that I would have needed an army of coders to build everything I had envisaged. I started to code all of it and got overwhelmed. So I have decided to enjoy the rest of this summer. I am on vacation! It is the right moment! This August 2023 is magical, sweet, soft, breezy, blooming, and inspirational, with the music, trees whispering in the high sky, birds singing, and the sun shining. I am enjoying all this, and the rest will wait a while. As for the location, the dates and place stay private, sorry, folks :). We use wanderlog with an AI assistant for planning. It has... A Warm August and Vacation
Introduction Hi all! I hope that you are doing well and enjoying your day. As some of you have already realised, I have changed my website design. I aim to make it more readable, enable dark/light modes, and minimise CSS definitions so I can further focus on the content. Website builders This blog is about AI. Naturally, I wanted to redesign it totally with AI. So I considered several automatic website builders that are available today: wix.com offers users the option to either utilise its AI site builder or choose from various themes, with the AI builder being the quicker choice. Additionally, having the ability to customise the content further using Wix’s mature feature set enhances the overall experience, combining the speed of an AI site builder with advanced editing capabilities. jimdo.com is a strong choice for creating personal or business websites, offering an AI-powered site builder that enables quick startup and essential features for website management. While most customisation occurs in the regular site editor, it ensures a faster process of building a modern website. Unbounce.com is a fantastic tool for creating website landing pages. It can also generate draft copy from a description of your business, similar to... AI-Free Website Design
Introduction When we get the Git error on pull, “Your local changes to the following files would be overwritten by merge”, it means that we have some uncommitted changes in the working directory. Git cannot perform the merge operation because those changes would be lost or overwritten during the merge process. This post will describe the situation and a good solution that resolves this error while keeping local changes. So you have got the error that looks like this:
git pull origin master
remote: Enumerating objects: 14, done.
remote: Counting objects: 100% (14/14), done.
remote: Compressing objects: 100% (6/6), done.
remote: Total 14 (delta 8), reused 14 (delta 8), pack-reused 0
Unpacking objects: 100% (14/14), done.
From github.com:user/repo
 * branch            master     -> FETCH_HEAD
   953146e..9f38420  master     -> origin/master
error: Your local changes to the following files would be overwritten by merge:
List of your local files ...
Next, we go through the steps to resolve this problem. What just happened? The Git message “error: Your local changes to the following files would be overwritten by merge” indicates that you have some uncommitted changes in your working directory, and Git cannot perform the merge operation because those changes would be... Preserve your local changes on Git Pull
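The solution the post builds towards can be sketched in three commands, assuming you want to keep your local edits. The branch name master is taken from the error output above; use your own:

# Shelve uncommitted changes, pull, then re-apply the changes
git stash push -m "work in progress"   # save and clean the working directory
git pull origin master                 # the merge now proceeds cleanly
git stash pop                          # re-apply your shelved changes

If the pop reports conflicts, resolve them in the usual way; note that in that case the stash entry is kept, so drop it manually with git stash drop once you are done.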
Introduction In software development, Git tags are crucial in organising and tracking specific points in a repository's history. These tags commonly mark release points, such as “v1.0” or “v2.0,” enabling efficient version management. Understanding how to work with Git tags is essential for effective collaboration and control over your codebase. This post explains Git tag usage in detail. Listing Your Tags To list your tags, use the command git tag to see a comprehensive list:
git tag
v1
newsletter
rss
v2
If you want to filter the tags based on a pattern, try using git tag -l 'v*' to display tags starting with “v”:
git tag -l "v*"
v1
v2
Annotated Tags Annotated tags in Git provide additional information, such as a tag message or author details. Creating an annotated tag is simple. The easiest way is to use the -a option when running the tag command, along with the tag name and a message:
git tag -a v1 -m "version 1"
This command creates an annotated tag named v1 with the message “version 1”. You can view the details of an annotated tag using the show command:
git show v1
This will display... Leveraging Git Tags
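One step worth remembering alongside the commands above: tags are not pushed by default, so publish them explicitly. A quick sketch, with v1 standing in for any tag name:

# Push a single tag to the remote
git push origin v1

# Or push all local tags at once
git push origin --tags

# Delete a tag locally and on the remote if needed
git tag -d v1
git push origin --delete v1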
Introduction Dear Reader, I hope you are doing well and not too stressed about the impacts of AI evolution in our lives. In my previous posts chatGPT Wrote me a Christmas Poem and Python coding with chatGPT, I covered various topics related to using chatGPT for writing poems and learning Python coding. Today, I want to share my latest insights on utilising chatGPT in my blog posts and coding endeavours and discuss whether we should be concerned about the changes needed for programmer jobs. In this post, I delve into the practical considerations of adapting to the new coding age. I highlight the tremendous opportunities that GPT technology brings, such as quicker product releases, a focus on user requirements, access to well-tested code examples, fast learning to code, and a shift towards effective coding practices. We’re already witnessing the emergence of new start-ups leveraging these advancements. However, I also want to note the challenges we must prepare for. Some low-coding jobs may be delegated to AI, potentially impacting entry-level developer positions. New skills for AI-assisted programming will need to be developed, and there might be hidden knowledge and know-how accessible only to select individuals. Additionally, affordability issues may arise for... GPT Implications for Coding
Introduction On July 1st, Universal Analytics stops processing data, so moving to GA4 now is essential to ensure that our website analytics continue to be processed without interruption. Herein I share my GA4 setup in Google Analytics. I hope this post will save you time when setting up GA4. What is Google Analytics? Google Analytics is a web analytics service provided by Google. It allows website owners and marketers to track and analyze various aspects of their website's performance and user behaviour. By implementing a small tracking code on web pages, Google Analytics collects data about visitors, their interactions, and their journey through the website. Some key features of Google Analytics include: Website traffic analysis provides detailed information about the number of visitors to a website, their geographic location, the source of their traffic (search engines, social media, referral websites), and the devices they use. Audience analysis allows you to understand the characteristics of your website's audience, including demographics (age, gender), interests, and behaviour patterns. This information helps in tailoring marketing strategies and creating targeted content. Behaviour tracking monitors user interactions on a website, such as page views, time spent on each page, bounce rates (percentage of visitors who leave after... Moving to GA4
Introduction In this post, I write about creating images with AI tools, shortly introducing the most prominent to date and going deeper into one of my favourite tools. I use Midjourney to create stunning and futuristic designs for an ice cream shop. Why is that? It is roasting in the Netherlands these days, and I wanted to draw something cool and sweet. Let's go! AI-powered art tools I like playing with Jasper.AI and Midjourney. However, so many AI-powered platforms and tools can generate art! They range from simple image filters to more complex generative models. Some famous examples of AI-powered art generation platforms include: Deep Dream is a software that uses a neural network to find and enhance image patterns. If you like coding, I suggest checking the TensorFlow tutorial about DeepDream. Prisma uses machine learning algorithms to transform photos into artwork inspired by different artistic styles. ArtBreeder is an online platform that allows users to mix and match different visual elements to create unique pieces of art using deep learning models. NeuralStyle is a tool that uses neural networks to apply the style of one image to another, creating a hybrid image that combines both styles. Runway ML is... Mastering Midjourney Prompts for Stunning Images
Introduction If you're reading this, you've probably encountered the dreaded “failed to push some refs” error in Git. Don't worry. It happens to the best of us. This post explores why this error occurs and provides three possible solutions, including fast-forwards, to help push your updates to the remote repository. So let's dive in! The Problem - failed to push some refs So, what does the “failed to push some refs to” error message mean? This error occurs when you try to push your changes to a remote repository, but Git refuses to do so because your local branch is behind the remote branch. Git is telling you that there are changes on the remote branch that you don't have on your local branch, and it wants you to update your local branch first before pushing your changes. This error message can be frustrating, especially when you're confident your changes do not conflict with the remote branch. However, Git has a good reason for preventing you from pushing your changes - it wants to ensure that all changes are merged correctly and that no conflicts arise. That issue occurred after I was away from my big Mac computer and did... Git Failed to Push Some Refs
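The usual cure, sketched below, is to bring the remote commits in first and then push. The branch name master mirrors my setup; substitute yours:

# Integrate the remote commits, replaying your local work on top of them
git pull --rebase origin master

# Resolve any conflicts Git reports, then push
git push origin master

Rebasing keeps the history linear; a plain git pull (merge) also resolves the error but adds a merge commit.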
Introduction In my previous post The Evolution of AI, I outlined arguably the most critical milestones in AI evolution. I recommend reading that post to understand the foundational work behind AI and ML technologies. In this post, I share the fantastic AI products available in 2023, organising these applications and development platforms into three tables: enterprise, personal-level, and development tools. Please note that this organisation is very simplified; individuals can also use enterprise-level tools, and likewise, companies can use applications created for personal usage. Some applications, such as Canva, are universal. Let's start! Real-World Applications AI tools have found a multitude of real-world applications across diverse industries. Let's explore some notable examples: Healthcare: AI is transforming healthcare with applications like medical image analysis, disease diagnosis, and drug discovery. AI-powered algorithms can analyse medical images, such as X-rays and MRI scans, to assist in the early detection and diagnosis of diseases. Companies like Zebra Medical Vision and Ada (Ada is also available in the App Store) are making significant strides in this area. Finance: AI tools are revolutionising the finance industry by automating processes, detecting fraud, and predicting market trends. Robo-advisors, powered by AI algorithms, provide... The Magic of AI Tools
Once upon a time, in the magical era of the 1950s, a group of intrepid researchers embarked on a mind-boggling quest to unravel the secrets of artificial intelligence (AI). Their hearts brimmed with curiosity as they delved into creating magnificent machines capable of mirroring the profound depths of human intelligence. With a blend of excitement and trepidation, they set forth on a path that would forever change the course of human history. Midjourney prompt: A computer and scientists in the year of 1955 I use the following image style for this post's prompts: realistic, pastel, pink and metallic tones, stunning, — stylize 1000 Midjourney Prompts Interested in Midjourney image generation? - refer to my post Mastering Midjourney Prompts for Stunning Images. You can also check the Guide Midjourney --stylize Explained. Elena: Were computers in 1955? chatGPT: Yes, computers were in existence in 1955. Computers were large, room-sized machines used for scientific calculations, data processing, and other computational tasks during that time. They were not as advanced or widely accessible as modern computers but played a crucial role in early AI research and development. Living Large -- Argonne's First Computer, Wikimedia Commons image Like fearless explorers of an uncharted wilderness, these... The Remarkable Evolution and Milestones of AI
Introduction Iterators are one of the most powerful features of Python, allowing you to iterate over a sequence of values without having to keep track of an index. In this post, we'll explore iterators in Python and learn how to use them effectively. We'll dive into some basic examples of iterators and show you how to create your own. Finally, we'll explore advanced techniques for using iterators and discuss some best practices for working with them. Python Iterators An iterator is an object that allows you to traverse a sequence of values. In Python, an iterator is an object that implements the iterator protocol, which consists of two methods: __iter__() and __next__(). The __iter__() method returns the iterator object itself, while the __next__() method returns the next value in the sequence. If there are no more values to return, __next__() should raise a StopIteration exception. The built-in functions iter() and next() call these methods for you. Here's a simple example of using an iterator in Python:
my_list = [1, 2, 3, 4, 5]
my_iterator = iter(my_list)
next(my_iterator)  # 1
next(my_iterator)  # 2
next(my_iterator)  # 3
next(my_iterator)  # 4
next(my_iterator)  # 5
next(my_iterator)  # raises an exception:
Traceback (most recent call last):
  File "<input>", line 1, in <module>
StopIteration
In this example, we create a list my_list with five... Loop like a Pro with Python Iterators
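To see the protocol from the other side, here is a minimal custom iterator. The Countdown class is my illustrative example, not taken from the post:

# A custom iterator implementing __iter__ and __next__
class Countdown:
    def __init__(self, start):
        self.current = start

    def __iter__(self):
        return self  # an iterator returns itself

    def __next__(self):
        if self.current <= 0:
            raise StopIteration  # signal the end of the sequence
        self.current -= 1
        return self.current + 1

for n in Countdown(3):
    print(n)  # prints 3, 2, 1

Because Countdown implements both methods, a for loop can consume it directly; the loop calls iter() once and next() repeatedly until StopIteration is raised.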
Introduction GitHub is a popular web-based platform for version control and collaboration that allows developers to work together on projects from anywhere. It offers various features to manage code and collaborate with others, and one key feature that makes access secure and flexible is personal access tokens. In this post, I will explain how to create and use personal access tokens, an excellent way to access and update Git repositories. What is a personal access token? A Personal Access Token (PAT) is a secure and flexible way to access GitHub without the need to provide your password. It is a unique code that grants access to your account, repositories, and other services without compromising your login credentials. You can create a token with specific permissions, which can be revoked anytime, giving you more control over your account's security. I like using personal access tokens instead of passwords when authenticating to GitHub on the command line or with the API. You can pull, push, commit, and perform any repository manipulations you need with personal access tokens expressly set up for your application and the required level of access. Setup To have a simple setup, I have my remote... The Token Way to GitHub Security
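A common way to use a token from the command line is to embed it in the remote URL. The angle-bracket values below are placeholders, and note the caveat: the token then sits in your Git config in plain text, so a credential helper is the safer long-term option:

# Clone using a personal access token (all placeholders, not real values)
git clone https://<TOKEN>@github.com/<USER>/<REPO>.git

# Or switch an existing clone to token authentication
git remote set-url origin https://<TOKEN>@github.com/<USER>/<REPO>.git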
Introduction Dear reader, how are you doing? I hope that you are healthy and happy. I am very excited right now to write about art and AI! In my previous posts, I tested chatGPT on poetry writing skills and Python coding. Now I am curious about its knowledge of Dutch history and its art “perception”. In this post, I use chatGPT as an art critic and historian to give me information on Dutch art by Johannes Vermeer and the historical circumstances of his time. I will also share my experience of this beautiful art exhibition, which I will always remember. The paintings of Vermeer are more than just a must-see and must-think-about. These paintings must be felt by heart! Of course, we will also do some AI hacks with Jasper.io. I am so excited to do some cruel tests, as usual :) At the end of this post, I will list the chatGPT prompts I have created and describe how to refine the GPT output. Once again, chatGPT If you have just returned from a trip to Mars, I will briefly tell you about chatGPT (I have also done so in my earlier posts, as mentioned). chatGPT is a program that answers our questions using... From Dutch Golden Age to AI Art: A Journey with Vermeer and AI
I update this article periodically with new ideas, so click here and save this blog post to your favourite Pinterest board. Pinning it will ensure you can refer to this detailed article later. PIN Introduction I was thinking of doing a quick fix in one of my blog posts, and I hit a glitch! While pushing my changes to this blog repository, I received the response:
git push origin master
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
Did you see this message too? How do you fix it? What are SSH and RSA keys and their fingerprints? I will cover these things in this concise post. I hope it will be helpful for you, or I will reread it when I am 80 :) chatGPT explained SSH keys I am becoming lazy, so I asked the chatGPT AI about that message. Elena: I have got a Git issue. WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! chatGPT: If you receive a "WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!" error message... The SSH host key mystery
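If you have verified that the key change is legitimate (for example, the host really did rotate its keys), the fix is usually one command. The host name github.com below assumes the host from my error; use whichever host your warning names:

# Remove the outdated key for the host from your known_hosts file
ssh-keygen -R github.com

# The next connection will offer the new key; verify its fingerprint
# against the host's officially published fingerprints before accepting
ssh -T git@github.com

Never skip the fingerprint check: accepting an unverified key is exactly the man-in-the-middle scenario the warning exists to prevent.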
Introduction With the development of AI-content generators such as chatGPT, we have a new need to identify such content, and tools for AI-content detection are currently being developed. Writing assistants and plagiarism detection tools also include AI-content detection. In this post, I talk about the most visible AI tools that help us mitigate plagiarism and motivate us to create original and well-written content. Indeed, I will start with the definition of plagiarism and why it is terrible, and move quickly into the helpful tools for AI-content and plagiarism detection that are available today. What is plagiarism? Plagiarism is using someone else's work or ideas without giving them proper credit or attribution. It is considered a form of academic dishonesty. It can result in severe consequences, including loss of reputation, academic sanctions, and legal action. Plagiarism becomes even more apparent in the time of AI-generated content, such as that created with chatGPT. To create good-quality content, we want to avoid plagiarism while producing original content that is helpful for our readers. Sadly, chatGPT does not cite its sources or give a reference list. That would be a helpful feature! Luckily, we have many AI tools that help us detect plagiarism, some of which... The Most Useful AI-Content and Plagiarism Detection Tools
Introduction Are you ready to dive into the fascinating world of audio processing with Python? Recently, a colleague sparked my interest in music-retrieval applications and the use of Python for audio processing tasks. As a result, I’ve put together an introductory post that will leave you awestruck with the power of Python’s Librosa library for extracting wave features commonly used in research and application tasks such as gender prediction, music genre prediction, and voice identification. But before tackling these complex tasks, we need to understand the basics of signal processing and how they relate to working with WAV files. So, buckle up and get ready to explore the ins and outs of spectral features and their extraction - an exciting journey you won’t want to miss! Audio storage and processing What is an audio signal? An audio signal is a representation of sound waves in the air. These sound waves are captured by a microphone and converted into an electrical signal, which can then be stored and manipulated digitally. To store an audio signal digitally, the analogue electrical signal is first sampled at regular intervals, typically at 44,100 samples per second for CD-quality audio. Each sample is represented as a... Audio Signal Processing with Python's Librosa
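As a taste of the feature extraction covered in the post, here is a minimal librosa sketch. The file name voice.wav is a placeholder for any WAV file you have at hand:

# Load a WAV file and extract common spectral features with librosa
import librosa

y, sr = librosa.load("voice.wav", sr=22050)  # audio samples and sampling rate

mfccs = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # timbre features
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)   # spectral "brightness"
zcr = librosa.feature.zero_crossing_rate(y)                # noisiness indicator

print(mfccs.shape, centroid.shape, zcr.shape)

Each feature comes back as a matrix over short time frames; tasks like gender or genre prediction typically average these frames into one feature vector per recording before feeding them to a classifier.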
Introduction In my “Data exploration and analysis with Python Pandas” post, I described how to use the Pandas Python library to analyse, explore and visualise the Titanic dataset. As promised, I will perform Machine Learning tests using this data. I will follow the general steps that are good to start with when performing ML experiments. I will briefly explain the main ideas of how to start with ML while coding and testing several classification models for predicting the survival of Titanic passengers. I will use Logistic Regression, Decision Tree and Random Forest from Python's scikit-learn library and a Neural Network created with TensorFlow. That will be a breeze! What is Machine Learning? Machine learning is a part of AI and is often used in data analysis. Machine Learning can be used for various tasks, such as classification, regression, clustering, and natural language processing. Today we cannot imagine our lives without automatic grammar checks such as those provided by Grammarly and its friends, intelligent chatbots such as chatGPT that are good at poetry, language translators, virtual assistants like Siri, DALL-E creating fantastic images, robots doing high-precision manufacturing, and self-driving cars, which I have not yet had a chance to ride in... Machine Learning Tests using the Titanic dataset
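A minimal sketch of the first model, using seaborn's bundled copy of the Titanic dataset for convenience. The feature selection here is my illustrative choice, not the exact preparation used in the post:

# Logistic regression on the Titanic dataset (a simplified sketch)
import seaborn as sns
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = sns.load_dataset("titanic")[["survived", "pclass", "sex", "age", "fare"]].dropna()
df["sex"] = (df["sex"] == "female").astype(int)  # encode sex as 0/1

X, y = df.drop(columns="survived"), df["survived"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Test accuracy:", round(model.score(X_test, y_test), 3))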
Introduction Grammarly is a writing tool that helps users improve their writing skills. It is designed to be an effective tool for native and non-native English speakers. It can be used as a browser extension or an app and can be integrated with various platforms, such as Microsoft Word and Google Docs. In this post, I will cover the most exciting features I like in Grammarly and share my secrets to improving my writing progress. I also suggest some alternatives that have comparable features. Who developed Grammarly? Grammarly was developed by Alex Shevchenko, Max Lytvyn, and Dmytro Lider, who co-founded the company in 2009. They were motivated by their struggles with English as a second language and wanted to create a tool to help non-native speakers enhance their writing skills. They began by creating a grammar checker that used rule-based and statistical methods and launched the first version of the tool in 2009. Over the years, they have continued to improve and expand the tool, adding new features such as a plagiarism checker, a thesaurus, and a readability analysis. Today, Grammarly is a comprehensive writing tool that is used by millions of people (see 18 Grammarly Statistics To Rule The Writing World)... Say Goodbye to Grammar Gaffes with Grammarly!
Introduction Data science is a multidisciplinary field involving scientific methods, procedures, algorithms, and techniques to extract knowledge and insights from structured and unstructured data. Data analysis uses statistical and computational approaches to identify data patterns, trends, and relationships. It plays a vital role in the data science process. It is typically used to prepare and preprocess the data, perform exploratory data analysis, build and evaluate models, extract insights and make data-driven decisions. In Data Science, we have so many terms explaining concepts and techniques that it is easy to get confused and lose a clear understanding of all the data science components and steps. In this post, I fill the gap by explaining two essential components of data science: data analysis and exploration. To make things clear and precise, I will outline both approaches, compare them, and show the usage of Python Pandas for data exploration and analysis. I will also show several practical examples using Pandas and graph drawing in Python. Please let me know should you have any questions or comments about this post. Data Analysis vs. Data Exploration What is Data Analysis? Data analysis can help determine patterns, trends, and insights that may not be immediately evident from raw data.... Data exploration and analysis with Python Pandas
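The first exploration steps usually fit in a few lines. Here is a minimal sketch, again with the Titanic data, loaded via seaborn for convenience (the post itself works with its own copy of the dataset):

# First-look data exploration with pandas (plotting requires matplotlib)
import seaborn as sns

df = sns.load_dataset("titanic")

print(df.shape)       # rows and columns
df.info()             # column types and missing values
print(df.describe())  # summary statistics for numeric columns
print(df["survived"].value_counts(normalize=True))  # class balance

# A quick visual: survival rate by passenger class
df.groupby("pclass")["survived"].mean().plot(kind="bar")

These few calls already answer the basic questions: how big the data is, what is missing, how the numbers are distributed, and which relationships deserve a closer look.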
Hello everyone! In my previous post I gave chatGPT [1] a first try, a revolutionary conversation bot that answers questions in a human-like dialogue. I shared my thoughts on chatGPT, its technology, and its possible societal implications. I also asked it to write a Christmas poem for me, which was pretty good! In this post, I am going to go deeper into using chatGPT. I will write Python code with the help of chatGPT, and it will be awesome! Coding before chatGPT I started coding before the Internet age. When I was 13, I wrote my first Basic program with some machine code to operate with the graphics card memory. It was a flight simulation game on a ZX-Spectrum computer. That involved loads of book reading and also looking into the documentation. Several years later, the Internet arrived, but coding-related information was still mostly found in books. However, most of the learning was done by doing, experimenting with code, and trying different techniques. Nowadays, the Internet, search engines, and professional forums give us tremendous support to learn together, share, and receive coding tips. There are so many tutorials on Medium and StackOverflow that the coding process... Python coding with chatGPT
AI-generated Art with Jasper, December 2022 My best wishes for 2023! I am so excited to celebrate 2023 with you, my dear friends, colleagues, and readers! I wish you happiness, health, and excellent luck in the New Year! May your best wishes come true and your professional goals be achieved with success! The year 2023 is the Chinese Year of the water rabbit. I have used Jasper AI to generate these beautiful images. Thank you very much for inspiring me! All the best, Elena... Happy New Year!
What is chatGPT? When the festive time approaches, I feel mellow and romantic. I think about what a wonderful time we are living in! I felt so excited about the newest advancement in AI, a revolutionary chatbot developed by OpenAI that chats like a human, “understands” user queries and provides a human-like conversation. chatGPT is built on GPT-3, the most advanced language model today. Why is chatGPT revolutionary? I think that the technology behind conversational bots will soon be used everywhere we seek information or need assistance. Instead of working with search keywords, as we do when searching for information, we can also use chatbots to retrieve data of interest. Interestingly, chatGPT is much more than information retrieval. It is very creative in the way it can generate text, so we just sit back and observe how the bot creates content of exceptionally high quality. In this post, I am going to test the current version available online [1] and ask it to write me a Christmas poem. I am so excited! How does chatGPT work? We can quickly try chatGPT on the webpage; it is often overloaded due to many requests. The web interface... chatGPT Wrote me a Christmas Poem
A duplicate without a user-selected canonical Today I received an email from the Google Search Console team informing me about an issue with my blog pages related to a “duplicate without user-selected canonical.” You know what? I did not have a duplicate webpage. Interestingly, my webpage was available over two protocols, HTTP and HTTPS; therefore, it was seen as having a duplicate! The problem was that I did not include a canonical definition, so the Google crawler could not tell which version of the page should be treated as the primary one to index. A duplicate without user-selected canonical As a result, due to the missing canonical definition, my web blog failed to index correctly. What is a canonical, and how can we start optimising webpages to make them “seen” by the Google search engine? Although getting noticed and promoting my blog was not really my first priority (my blog is still in development and is a kind of scrapbook of what I am doing), I was intrigued about making my blog more search-engine friendly and seeing what happens next. Herein I describe all the steps performed to optimize my blog and the results I got after being crawled by a Google search bot. Let's go!... SEO and Indexing my Blog
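The fix itself is one line in the page head. The URL below is a placeholder for whichever HTTPS address you want Google to treat as the primary version of the page:

<!-- Tell crawlers which URL is the primary version of this page -->
<link rel="canonical" href="https://example.com/my-post/" />

With the canonical in place, the HTTP and HTTPS variants both point Google to the same preferred URL, and the duplicate warning disappears on the next crawl.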
Hello, my dear readers. How are you doing? I have been busy lately and only posted a little. However, as always, I have found something fantastic to share. A JavaScript application for learning Git branching by Peter Cottle, available at https://learngitbranching.js.org/, simulates a Git command line and repository in your browser. It helps in understanding Git branching since it draws the commit and branching graph while you execute git commands. Simply marvelous, great work! I also realised that I usually use very few commands and follow a general workflow when using Git. This is why I have created a Winner sheet with the Git commands for later reference. I have used Python and the reportlab library for generating the PDF. You can download it from my GitHub repository as a PDF, or check the referenced Markdown file. Please forgive me for calling it, unusually, not a cheatsheet but the Winner sheet. I want us to be winners :) Have a great week! References 1. Learn Git Branching by Peter Cottle 2. Git Winner Sheet... Git Commands and a Contribution Workflow
Introduction Computer Science (CS) students often approach me about finding their way. CS is a broad field building on the fundamentals of logic, linear algebra, statistics, linguistics, systems design, and more, you name it. Generally, when we create a software product, we need to learn about the application domain and the knowledge required to build the software. And passing college or university exams is not enough. Sadly, nobody else can find your way for you or answer all your questions. Even the most brilliant professor in the world cannot know everything. We often feel lost, myself included, because we are overwhelmed and stressed out trying to learn everything in CS. In my opinion, it is impossible. Believe me, your mission is impossible. You cannot learn everything related to CS, coding, Data Science, and AI. Yes, just believe me and accept it. It is not a failure. It is a strategy to keep going and enjoy the process of learning what you like, finding your specialisation, and focusing on the things that are most important to you. Herein I will share my approach to learning the things I like. It is not an instruction or algorithm. You might find your own approach, or... Learning new things
Introduction Coding can be hectic, and it also requires adhering to code styles. For instance, it is good practice to comply with the PEP 8 guidelines for Python code. The PEP 8 standard helps us write easy-to-read code, which is essential when we want others to read or reuse our code while collaborating with other programmers. PEP 8 establishes rules about good variable names, commenting, and space usage, amongst other style practices that are so helpful in writing reusable and professional code. While PEP 8 is a standard, some tools can help us check and fix style issues. Flake8 is such a tool (amongst others such as Pylint and PyLama) that automatically inspects code for PEP 8 compliance errors; see Flake8: Your Tool For Style Guide Enforcement. These tools are also called linters. In this post, we will use git hooks and pre-commit for a simple setup that checks Python code before committing files into the repository. Linters Linters help to check for code syntax errors and help to fix them. It is excellent to use linters to ensure code quality in the following aspects: ensure correct code formatting and style; minimise syntax errors; help... Linters and Git Pre-commit
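To make the linting step concrete, here is a small, assumed example of code flake8 would flag, with the corresponding pycodestyle error codes noted in the comments:

# A few violations flake8 (via pycodestyle) would report on this file:
x=1                # E225: missing whitespace around operator
def f( a ):        # E302: expected 2 blank lines; E201/E202: spaces in parens
    return a + x

Running flake8 on this file lists each violation with its line number; after fixing them (x = 1, two blank lines before the function, f(a)), flake8 exits cleanly.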
Happy 1st of September! I have decided to write a letter to you and share some thoughts and gratitude for your visits. I recently walked in my favorite park and saw beautiful white pigeons picking at worms. They were mingling without any concern with the other, “usual” pigeons. They looked so different but were quite indifferent to their differences; they were pigeons who did not care about feather color. They all enjoyed the green grass and the little worms in it. Birds are so beautiful, all of them. And I have decided to do a simple wrap-up of Python classes defining birds and pigeons. I think this post is a good recap, or start, for understanding Object-Oriented Programming and the available functionality in Python. Table of Contents Object-Oriented Programming Classes in Python Class Methods Inheritance Polymorphism Encapsulation Conclusion References Object-Oriented Programming In CS, there are a few programming paradigms widely known and discussed. I am not keen to go into discussions; writing code is the best way to understand programming paradigms and their meaning. In this post, I am going to focus on Object-Oriented Programming. As we see from the name, we... Python classes and pigeons
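As a minimal sketch of the ideas the post covers (the class and attribute names here are my own illustration, not necessarily those used in the post):

class Bird:
    def __init__(self, name, feather_color):
        self.name = name
        self.feather_color = feather_color

    def speak(self):
        return f"{self.name} chirps."


class Pigeon(Bird):
    # Inheritance: Pigeon reuses Bird's attributes and methods.
    def speak(self):
        # Polymorphism: the same method behaves differently per class.
        return f"{self.name} coos."


for bird in [Bird("Sparrow", "brown"), Pigeon("Snowflake", "white")]:
    print(bird.speak())  # Sparrow chirps. / Snowflake coos.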
Introduction In my previous posts “GIT in 10 minutes” and Collaboration in GitHub, I have covered the basics of Git setup, a few workflow commands to get started using Git (a version control system), and collaborative work, including operations with forks and pull requests. As promised, I will go into more detail about working with Git repositories. Herein I will focus on reverting your changes. Sometimes it’s good to step back and think about something different, right? The Setup and Usual Workflow I assume here the following setup: We work on a GitHub project with other team members working together on an “upstream” repository. We have forked this upstream repository and named our fork “origin.” We have a local copy of the origin repository (which we sync regularly with the upstream repository). We change this local copy of the origin (forked) repository with our next code contributions. We commit our local changes to the origin forked repository. In the forked repository, we create a Pull Request to the upstream repository. The workflow can be described with the following example. Cloning your forked repo locally git clone <token>@github.com/<user>/<repo> cd repo Add upstream git remote add upstream https://github.com/<upstream_user>/<repo> Working on your branch... Reverting Commits in GitHub
Hi everyone! I hope you are enjoying this summer. I want to tell you something very personal about myself (some of my readers complain that this blog is too technical). I want to share my secret with you, don't tell anyone ;) I am a secret admirer of trees and nature! I think trees speak with the Universe and can tell us about everything, should we listen patiently! Enough secrets (they might be misleading!). Let's get technical! Index Introduction Clearing Up macOS caches Empty the Trash Free RAM Using the Bash Script Conclusion References Introduction After a while, my macOS computer started to slow down. I searched for possible solutions to make my computer run faster without much latency. We could upgrade the computer storage and install a more powerful processor unit to speed up macOS. In this blog post, I will, however, focus on a more straightforward way without any system upgrades, which are costly and take time. Without much effort, we can do the following: Clear macOS caches Empty the Trash Free your computer memory with the purge command. Clearing Up macOS caches macOS caches are stored in the ~/Library/Caches folder. We can open... MAC OS Speed Up
Introduction Today we have many automated tools that help us translate text, check spelling, autocomplete text in search, generate abstracts, and power voice text messaging, messenger bots, chatbots, question-answering systems, and virtual assistants such as Amazon Alexa, amongst other tools. All these and much more are realised with AI techniques specifically focusing on Natural Language Processing (NLP). NLP tools are employed to preprocess and analyse human language in textual or voice media and, to a certain extent, “understand” its meaning, intent, or sentiment, or to find named entities such as personal names or cities. I like this short definition of NLP from Wikipedia: Natural language processing (NLP) is a subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language, mainly how to program computers to process and analyze large amounts of natural language data. The goal is a computer capable of “understanding” the contents of documents, including the contextual nuances of the language within them. The technology can then accurately extract information and insights in the documents and categorize and organize the documents themselves. NLP can also be used in natural language generation. For instance, a poem generator that created this sonnet for me: Ode to... TensorFlow: Romancing with TensorFlow and NLP
Introduction In my post “GIT in 10 minutes”, I covered the basics of Git setup and a few workflow commands to get started with using Git (a version control system). As promised, I will go into the topic of how to use Git for collaborative work. Mainly, I will focus on contributions to other repositories, for instance, open-source projects or projects of your colleagues and friends. Let’s go! GitHub Collaboration As explained in the GitHub documentation [1], GitHub supports two ways of collaborative work: Forking. You create a repository fork, which essentially copies a repository to your own GitHub account. You do not need to have any permissions for the copied repository. The repository owner can accept your changes by merging your pull request, thus including them in the principal repository. Shared repository. In small teams, we can add project collaborators, usually working on their own branches and adding their contributions with push access. With pull requests, we can do code reviews and comment on the proposed changes. Forking and Pull Requests With forks, we can make code changes without affecting the parent repository. Let’s call the parent repository “upstream” and our own (forked) repository “origin.” To add... Collaboration in GitHub
Introduction When creating large Machine Learning models, we want to minimise the training time. In TensorFlow, it is possible to do mixed precision model training, which brings significant performance improvements because it uses lower-precision 16-bit operations (such as float16) together with single-precision operations (for instance, the float32 data type). Google TPUs and NVIDIA GPUs can perform operations with 16-bit data types much faster; see Mixed precision. The improved application performance and data transfer speed result from the memory space saved and the reduced complexity of operations when using half-precision float16 operations. In this post, I will briefly outline data types and their usage with a focus on TensorFlow operations, and the main steps to perform for achieving performance gains in mixed-precision training. Computer Data Types In computers, data is stored and processed in sets of bits, each of which is set to 0 (there is no signal) or 1 (there is a signal). This is how data is encoded to be processed through the computer circuitry. When a bit is set to one, current flows through the electronic circuit. This is called binary data representation. Humans are used to understanding data in more complex... Floating-point format and Mixed Precision in TensorFlow
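The main enabling step is a one-line Keras policy; here is a brief sketch (the toy model around it is my own assumption):

import tensorflow as tf
from tensorflow.keras import layers, mixed_precision

# Run layer computations in float16 while keeping float32 variables.
mixed_precision.set_global_policy("mixed_float16")

model = tf.keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(10,)),
    # Keep the final layer's output in float32 for numeric stability.
    layers.Dense(1, dtype="float32"),
])
model.compile(optimizer="adam", loss="mae")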
I am in Portugal. I live and breathe the freshness of the Ocean. Its vivid colors and wind make me happy, and I feel like a part of something bigger, omnipresent, and eternal. Springtime is the best time to be here if you like flowers and delicate fragrances floating in the air. Flowers at the Sea The beautiful landscapes and magnificent sea views are so inspiring that you cannot imagine a life without poetry. I like this poem by my favorite Portuguese author Almeida Garrett: Beautiful Barge Fisherman of the beautiful boat, Where you go fishing with her, She's so beautiful, Oh fisherman? Don't you see that the last star In the cloudy sky is sailing? Pick up the candle, Oh fisherman! Lay down your haul with caution For the mermaid is beautiful... But beware, Oh fisherman! Let not the net be tangled in her, That lost is oar and sail Just to see her, Oh fisherman! Fisherman of the beautiful boat, It's about time, run away from her, Run away from her, Oh fisherman! I have used DeepL to translate the poem from the Portuguese version I found in a blog post by Manuel Antao. The Ocean... Coding in Portugal
Introduction In my previous post “TensorFlow: Transfer Learning (Fine-Tuning) in Image Classification”, I described building a convolutional neural network based on EfficientNetB0 (initially trained on the ImageNet dataset), which underwent feature extraction and fine-tuning steps using the 400 Bird Species Dataset at Kaggle. This was an exciting experiment since the ImageNet dataset contains only 40 bird species, while the Kaggle dataset has 400 bird species. Despite such differences in the underlying data, the model trained so well that it reached 98.5% accuracy on the test set. In this blog post, I am going to load this model, saved in my deep learning repository, and evaluate its performance in detail to determine which birds are not well predicted. Getting Data and Code Using Helper Functions I have shared my helpers.py Python script, which contains some useful functions for data preprocessing, model creation, and evaluation. You can use this file as you like, change it, and share your ideas with me :) I will discuss the code parts that are useful in analysing the fitted bird species prediction model. # Getting helper functions !wget https://raw.githubusercontent.com/edaehn/deep_learning_notebooks/main/helpers.py --2022-05-02 10:47:37-- https://raw.githubusercontent.com/edaehn/deep_learning_notebooks/main/helpers.py Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ... Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443...... TensorFlow: Evaluating the Saved Bird Species Prediction Model
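As a hedged sketch of the loading-and-evaluation step (the model path and the test image directory are assumptions for illustration, not the repository's actual layout):

import tensorflow as tf

# Load the previously saved bird species model (path is hypothetical).
model = tf.keras.models.load_model("bird_species_model")

# Build a test dataset from a directory of images (directory name assumed).
test_data = tf.keras.preprocessing.image_dataset_from_directory(
    "birds/test", image_size=(224, 224), batch_size=32)

# Overall test accuracy.
loss, accuracy = model.evaluate(test_data)
print(f"Test accuracy: {accuracy:.3f}")

# Class probabilities per image, to inspect which species are mispredicted.
pred_probs = model.predict(test_data)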
Introduction In my previous post “TensorFlow: Transfer Learning (Feature Extraction) in Image Classification”, I wrote about employing pre-trained models, such as EfficientNet trained on the ImageNet dataset and available in the TensorFlow Hub, for our task of bird species prediction. Using the transfer learning technique, we can reuse pre-trained models’ patterns on different application tasks and data. I have covered how to realise this idea with the feature extraction approach. In this post, I want to apply the Fine-Tuning approach I learned in the Udemy course on TensorFlow. I will describe the experiments I did with transfer learning, specifically, fine-tuning the bird species prediction model. I will use the Keras API for building EfficientNetB0-based models. What is Fine-Tuning in Transfer Learning? In transfer learning, we reuse features learned on a different dataset for a new problem, which helps when we (possibly) have too little data, while employing state-of-the-art, well-tested models such as EfficientNet [5]. We thus reuse features extracted from the existing model for predictions on a new dataset. Figure 1 schematically shows the difference between feature extraction (see my post on feature extraction) and fine-tuning in transfer learning. I have drawn it to outline the process of using the model trained on ImageNet,... TensorFlow: Transfer Learning (Fine-Tuning) in Image Classification
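A minimal sketch of the fine-tuning idea with a Keras EfficientNetB0 base model (the number of unfrozen layers and the learning rate are illustrative assumptions, not the post's exact settings):

import tensorflow as tf

# Pre-trained base; include_top=False so we can add our own classifier head.
base_model = tf.keras.applications.EfficientNetB0(include_top=False)
base_model.trainable = True

# Fine-tune only the last 10 layers; keep the earlier layers frozen.
for layer in base_model.layers[:-10]:
    layer.trainable = False

inputs = tf.keras.Input(shape=(224, 224, 3))
x = base_model(inputs)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(400, activation="softmax")(x)  # 400 species
model = tf.keras.Model(inputs, outputs)

# A lower learning rate is typical when fine-tuning pre-trained weights.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])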
Introduction It might be challenging to manage different projects and their requirements when we do Python coding with loads of varying package versions and intricate setups. Luckily, we have a secret tool for managing and switching between different setups or environments. Conda is a package manager allowing us to work with different environments from the command line. Please do not confuse it with Anaconda, a distribution for scientific computing that includes a set of packages such as NumPy, SciPy, Jupyter notebooks, and Conda itself. Using the Conda package manager, we can create, list, remove, and update environments with different versions of Python and packages installed. In this introductory post, I will describe the process of creating and using Conda environments. This could be a good starting point for using Conda if you have not used it yet. For more details on Conda usage, read its official documentation. In this post, I will do a concise review of the most useful Conda commands to start with. Prerequisites and installation You can install Conda on Windows, macOS, and Linux platforms, following the Installation instructions. In this post, I will use conda 4.11.0 with my macOS system. Please be aware that some commands... Anaconda Environments
Introduction Previously, I described a simple Convolutional Neural Network, which classified bird species with only 50% accuracy. The network architecture was similar to Tiny VGG and had too many parameters, leading to overfitting. Image classification is a complex task. However, we can approach the problem by reusing state-of-the-art pre-trained models. Using previously learned patterns from other models is called “Transfer Learning.” This way, we can efficiently apply well-tested models, potentially leading to excellent performance. In this post, we will focus on Feature Extraction, one of the Transfer Learning techniques. I will build on the code and ideas shared in my previous post “Convolutional Neural Networks for Image Classification.” We will reuse feature extraction models available at the TensorFlow Hub for our task of bird species recognition using image data from Kaggle. At the end of this post, we will see how this approach improves our bird species prediction model accuracy from 50% to over 90%. Downloading 400 Bird Species Kaggle Dataset Herein, I will repeat what I have previously written about how to download Kaggle datasets. # Setup to download Kaggle datasets into a Colab instance ! pip install kaggle ! mkdir ~/.kaggle ! cp kaggle.json... TensorFlow: Transfer Learning (Feature Extraction) in Image Classification
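A minimal sketch of feature extraction with a TensorFlow Hub model; the Hub URL and image size follow the usual EfficientNetB0 feature-vector setup, but treat them as assumptions rather than the post's exact choices:

import tensorflow as tf
import tensorflow_hub as hub

# EfficientNetB0 feature-vector model from TF Hub (URL is an example).
feature_extractor = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/efficientnet/b0/feature-vector/1",
    trainable=False,  # freeze the pre-trained patterns
    input_shape=(224, 224, 3))

model = tf.keras.Sequential([
    feature_extractor,
    tf.keras.layers.Dense(400, activation="softmax"),  # 400 bird species
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy", metrics=["accuracy"])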
Introduction In my previous post Multiclass Classification Model, I wrote about creating classification models using TensorFlow and the Fashion MNIST dataset from Keras. We used a Sequential model with several Dense layers to build a model categorising fashion items into their respective categories, such as “T-shirt/top” or “Trouser.” The dataset was already prepared for use, and the model we created was quite simple yet quite efficient. We could further improve our model. However, in practice, we rarely have a ready dataset at hand; we have to generate or collect datasets ourselves. Moreover, a simple Dense layer-based Neural Network (NN) might not work well with image data. I will focus on a more appropriate NN architecture type, which operates best when dealing with image data. We will also practice working with image data presented in JPG format. Convolutional Neural Networks For Deep Learning applications working with image data, such as visual object recognition, image segmentation, and classification, we can use the Convolutional Neural Network (CNN) architecture, which requires few preprocessing steps and little human involvement since the network learns its parameters (weights) automatically. To be used with a CNN, images are presented as tensors (data structures such as matrices holding numerical data for storing image pixels and their colors).... TensorFlow: Convolutional Neural Networks for Image Classification
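A minimal sketch of a Tiny-VGG-style CNN in Keras (the layer sizes, input shape, and class count are illustrative assumptions):

import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    # Convolutions learn local visual patterns (edges, textures).
    layers.Conv2D(10, 3, activation="relu", input_shape=(224, 224, 3)),
    layers.Conv2D(10, 3, activation="relu"),
    layers.MaxPool2D(),  # downsample, keeping the strongest activations
    layers.Conv2D(10, 3, activation="relu"),
    layers.Conv2D(10, 3, activation="relu"),
    layers.MaxPool2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),  # 10 classes assumed
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy", metrics=["accuracy"])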
Introduction In Machine Learning, the classification problem is about categorising input data into different classes. For instance, we can categorise email messages into two groups: spam or not spam. In this case, we have two classes, and we talk about binary classification. When we have more than two classes, we talk about multiclass classification. In this post, I am going to address the latter, multiclass classification, using the example of categorising clothing items into clothing types based on the Fashion MNIST dataset. The code and general concepts are adopted from TensorFlow Developer Certificate in 2022: Zero to Mastery. Herein I will give a concise summary of what I have learned, in simple words as usual. Loading Dataset The Zalando fashion dataset is available in the keras datasets module. With the following code, we download the dataset into training and testing datasets and create human-readable labels. First of all, we need to import all required libraries. import tensorflow as tf import pandas as pd import numpy as np from sklearn.metrics import plot_confusion_matrix from sklearn.metrics import confusion_matrix import itertools import random import matplotlib.pyplot as plt Next, we load the Fashion MNIST dataset from keras. # Fashion dataset fashion_mnist = tf.keras.datasets.fashion_mnist # Get the training... TensorFlow: Multiclass Classification Model
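For reference, the loading step the excerpt starts typically continues like this (the class-name list follows the standard Fashion MNIST label ordering):

import tensorflow as tf

fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

# Standard Fashion MNIST label names, indexed by the integer class labels.
class_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
               "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]
print(train_images.shape)  # (60000, 28, 28)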
Introduction When we have mixed datasets with different feature types, we want to prepare the data for feeding into a Machine Learning algorithm. This happens when we have different inputs (also called features or covariates), such as categories like gender or geographic region. Other features can be on a different scale, for instance, a person’s weight or height. First of all, a Machine Learning algorithm may require data of a specific type; for instance, some algorithms accept only numerical data. In other cases, ML algorithms perform better or converge faster when we preprocess the data before training the model. Since we do this step before training the model, we call it preprocessing. In this article, we will focus on two main methods of feature preprocessing: feature scaling (or normalisation) and feature standardisation. Data Exploration To decide what to do with the data before applying Machine Learning to it, we need to analyse the dataset. We want to determine what features we have, whether they are helpful for our ML goals, how clean the dataset is, and whether there is missing or noisy data. Quite often, we also need to perform data cleaning or wrangling. It is pretty useful to... Feature preprocessing
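A small sketch of both methods with scikit-learn; the toy height/weight data is an assumption for illustration:

import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# Toy feature matrix: [height in cm, weight in kg]
X = np.array([[150.0, 50.0], [170.0, 65.0], [190.0, 90.0]])

# Normalisation: rescale each feature to the [0, 1] range.
X_scaled = MinMaxScaler().fit_transform(X)

# Standardisation: zero mean and unit variance per feature.
X_standardised = StandardScaler().fit_transform(X)

print(X_scaled)
print(X_standardised)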
Introduction In the previous post, we created several simple regression models with TensorFlow’s Sequential model. Herein we continue, going more in-depth into model evaluation using the testing dataset. Data Preparation First of all, to ensure the reproducibility of results, we set a random seed (please check my previous post if you are curious about seeds in TensorFlow). As in the previous post on regression in TensorFlow, we use the tf.range() function for generating a set of X input values, and also the y outputs, as follows: # Creating a random seed tf.random.set_seed(57) # Generating data X = tf.range(-100, 300, 4) y = X + 7 X, y (<tf.Tensor: shape=(100,), dtype=int32, numpy= array([-100, -96, -92, -88, -84, -80, -76, -72, -68, -64, -60, -56, -52, -48, -44, -40, -36, -32, -28, -24, -20, -16, -12, -8, -4, 0, 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56, 60, 64, 68, 72, 76, 80, 84, 88, 92, 96, 100, 104, 108, 112, 116, 120, 124, 128, 132, 136, 140, 144, 148, 152, 156, 160, 164, 168, 172, 176, 180, 184, 188, 192, 196, 200, 204, 208, 212, 216, 220, 224, 228, 232, 236, 240, 244, 248, 252, 256, 260,... TensorFlow: Evaluating the Regression Model
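A brief sketch of the usual next step, splitting the generated data into training and test sets (the 80/20 split is a common convention, assumed here):

import tensorflow as tf

tf.random.set_seed(57)
X = tf.range(-100, 300, 4)
y = X + 7

# 80/20 split: the model never sees the test set during training.
split = int(0.8 * len(X))
X_train, y_train = X[:split], y[:split]
X_test, y_test = X[split:], y[split:]
print(len(X_train), len(X_test))  # 80 20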
Introduction What is regression? Regression is defined in Wikipedia as: In statistical modeling, regression analysis is a set of statistical processes for estimating the relationships between a dependent variable (often called the ‘outcome’ or ‘response’ variable) and one or more independent variables (often called ‘predictors,’ ‘covariates,’ ‘explanatory variables’ or ‘features’). The most common form of regression analysis is linear regression. One finds the line (or a more complex linear combination) that most closely fits the data according to a specific mathematical criterion. In simple words, we want to predict a numerical value based on some other numerical values, as described in the TensorFlow Developer Certificate course [1]. In Machine Learning, regression analysis is widely used for prediction and forecasting. For instance, we can use regression models to predict house sale prices. The house price can be modeled as a function of the number of bedrooms, bathrooms, or garages. Other applications of regression are finding out how many people will buy an app, forecasting seasonal sales, and even predicting coordinates in an object detection task. In simple words, with regression, we want to answer the questions “How many?” and “How much?” [1] Modeling Steps in TensorFlow When we model... TensorFlow: Regression Model
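A minimal sketch of the kind of TensorFlow regression model the post builds (the data range, learning rate, and epoch count are illustrative assumptions):

import tensorflow as tf

# Toy data: y is a linear function of X (slope 1, intercept 7).
X = tf.range(-10.0, 10.0, 1.0)
y = X + 7

# A single Dense unit can learn a straight line.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.1),
              loss="mae", metrics=["mae"])
model.fit(tf.expand_dims(X, axis=-1), y, epochs=200, verbose=0)

print(model.predict(tf.constant([[10.0]])))  # should be close to 17.0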
Introduction In training Machine Learning models, we want to avoid any ordering biases in the data. In some cases, such as in Cross-Validation experiments, it is essential to mix the data and ensure that its order is the same between different runs or system restarts. We can use operation-level and global seeds to achieve the reproducibility of results. Global and Operation-level Seeds To begin, let’s create a mutable tensor with “Variable.” # Create a variable tensor tensor = tf.Variable([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]], [[13, 14, 15], [16, 17, 18]]]) In the code below, we use the assign method to change the first element (which is a matrix) of the tensor, filling its values with zeros. # Change elements of the first tensor element tensor[0].assign([[0, 0, 0], [0, 0, 0]]) <tf.Variable 'UnreadVariable' shape=(3, 2, 3) dtype=int32, numpy= array([[[ 0, 0, 0], [ 0, 0, 0]], [[ 7, 8, 9], [10, 11, 12]], [[13, 14, 15], [16, 17, 18]]], dtype=int32)> In TensorFlow, we have global and operation-level seeds. We define the global seed with set_seed: # Set a random seed with value of 57 tf.random.set_seed(57) We can define the operation-level seed directly in an operation, such as... TensorFlow: Global and Operation-level Seeds
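A short sketch showing both seed levels together (the shape and seed values are arbitrary choices):

import tensorflow as tf

# Global seed: affects all operations that rely on randomness.
tf.random.set_seed(57)

# Operation-level seed: passed directly to a single operation.
a = tf.random.uniform(shape=(3,), seed=42)
b = tf.random.uniform(shape=(3,), seed=42)

# With both seeds set, re-running the program reproduces the same tensors.
print(a.numpy())
print(b.numpy())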
Introduction TensorFlow is a free, open-source library for machine learning created by Google Brain. TensorFlow has excellent functionality for building deep neural networks. I have chosen TensorFlow because it is pretty robust, efficient, and can be used with Python. In this post, I am going to write about how we can create tensors, shuffle them, index them, and get information about them, with simple examples. # Import tensorflow import tensorflow as tf print(tf.__version__) 2.7.0 Tensors In TensorFlow, we work with tensors to keep numerical data for usage in machine learning. Tensors can store data in N dimensions. When a tensor has two dimensions, it is essentially a matrix; when it has only one dimension, it is a vector. A tensor can also contain just one numerical value; this is a scalar, or zero-order tensor. We create the tensor structures mentioned above as constants in the code examples below. TensorFlow also gives information on the number of dimensions in a tensor. # Creating a scalar tensor scalar = tf.constant(7) scalar <tf.Tensor: shape=(), dtype=int32, numpy=7> # Check the number of tensor dimensions scalar.ndim 0 # Create a vector vector = tf.constant([5, 7]) vector <tf.Tensor: shape=(2,), dtype=int32, numpy=array([5, 7], dtype=int32)> vector.ndim... Tensors in TensorFlow
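Continuing the pattern, a matrix (two-dimensional tensor), shuffling, and indexing look like this (the values are arbitrary):

import tensorflow as tf

# A two-dimensional tensor (matrix).
matrix = tf.constant([[5, 7], [10, 14]])
print(matrix.ndim)  # 2

# Shuffle rows along the first dimension (order varies per run
# unless seeds are set).
shuffled = tf.random.shuffle(matrix)

# Indexing works like NumPy: the first row of the matrix.
print(matrix[0])  # tf.Tensor([5 7], shape=(2,), dtype=int32)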
Introduction I use two computers for my coding projects and take only one computer while traveling. My projects, however, require that I run numerous tests, and they should run well even on my M1 computer, which, at the moment, does not have all packages working smoothly together. To be more flexible and independent of a development environment, I use GitHub Codespaces. And I am very grateful to have learned a new tool, which is so helpful that I do not know how I worked without it. GitHub Codespaces GitHub Codespaces provide a development environment running in the cloud. A codespace environment is created with the help of configuration files added to a GitHub repository. To set up a codespace and build your app container, you will need to have Codespaces available in your organisation. When enabled, you can create codespace environments for any branch in your GitHub repository. For that, press the green “<> Code” button in the GitHub Web UI and choose from the default or advanced options. GitHub CLI and Homebrew To access Codespaces with SSH, we can install the GitHub CLI with Homebrew. Firstly, if Homebrew is not yet installed, download it from https://brew.sh or by running the command: /bin/bash -c "$(curl... GitHub Codespaces
Introduction TensorFlow is a free, open-source library for machine learning created by Google Brain. TensorFlow has excellent functionality for building deep neural networks. I have chosen TensorFlow because it is pretty robust, efficient, and can be used with Python. Since I like Jupyter Notebooks and Conda, they were also installed on my system. Next, I am going through the simple steps to install TensorFlow and the packages above on M1 macOS Monterey. XCode I had a new computer; thus, I started by downloading and installing XCode from the App Store. Homebrew We can download Homebrew from https://brew.sh or by running the command: /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" Miniforge When doing data science, I usually use Anaconda for managing libraries. This is why I have installed Miniforge to access Conda, by downloading Miniforge3-MacOSX-arm64 from the Miniforge Releases. After downloading, the installation required running a bash file and adding Miniforge to the system path (when it prompts ‘yes|no’, say ‘yes’ to add it to the PATH). cd ~/Downloads /bin/bash Miniforge3-MacOSX-arm64.sh echo $PATH Tensorflow and Jupyter Next, we create a new environment in Conda and install the TensorFlow base and the tensorflow-metal plugin: # Let’s create a new environment, called tensorflow_m1: conda create --name tensorflow_m1 python==3.9 #... TensorFlow on M1
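Once everything is installed, a quick Python check confirms the setup; the device-listing call is standard TensorFlow, though whether a GPU appears depends on the tensorflow-metal plugin being active:

import tensorflow as tf

print(tf.__version__)
# On an M1 Mac with tensorflow-metal, a GPU device should be listed here.
print(tf.config.list_physical_devices())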
Introduction In this post, I briefly write up what I did in my PhD research at Heriot-Watt University and the main idea behind the thesis. This post was initially published in March 2019. In January 2022, I updated it and provided some links to my research contributions. The Team From 2013 to 2018, I was working on my PhD project at the Department of Computer Science, School of Mathematical and Computer Sciences, Heriot-Watt University (Scotland), under the supervision of Nick Taylor. The research idea was coined while I was working at the Technical University of Delft, and it started from the publication “A User Modeling Oriented Analysis of Cultural Backgrounds in Microblogging,” which received the best paper award at the ASE International Conference on Social Informatics in Washington, D.C., US, on 14 December 2012 (slideshare). The topic is at the intersection of social networking, communication, and Artificial Intelligence. The research resulted in several publications and a PhD thesis: “Mining Microblogs for Culture-awareness in Web Adaptation.” None of it would have been possible without the love and support of my family and friends, and the best research... Mining Microblogs for Culture-awareness in Web Adaptation
Introduction Artificial neural networks (ANNs) are the cornerstone of Deep Learning algorithms. In this post, I briefly explain ANNs, their high-level structure, and their parameters. Neural Networks The name “Neural Networks” and their architecture are adopted from the human brain’s neural network. ANNs are designed to simulate human reasoning based on how neurons communicate. ANNs contain a set of connected artificial neurons. In the picture below, we see the biological and artificial neurons. The artificial neuron is very simplified; it consists of inputs, which are similar to dendrites in the biological neuron. Each input connection has an assigned weight, and the inputs and weights are used to calculate a weighted sum. The weights define the importance of any given variable, and variables with larger weights contribute more to forming the output value. The activation function takes the sum of weighted inputs and forms the output Y. When the Y node is activated (or exceeds a threshold), it sends the output value to the next layer of the ANN. Neuron Parameters As we see from the artificial neuron picture above, the neuron takes n inputs x1, x2, ..., xn. These input values, called features, are multiplied by the respective weights w1, w2, ...,... Artificial Neural Networks
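A tiny sketch of that computation in Python (the weights, bias, and inputs are arbitrary illustrative values, and the sigmoid is just one common choice of activation):

import numpy as np

def sigmoid(s):
    # Activation function: squashes the weighted sum into (0, 1).
    return 1.0 / (1.0 + np.exp(-s))

x = np.array([0.5, 1.0, 2.0])   # inputs (features) x1..xn
w = np.array([0.4, -0.2, 0.1])  # weights w1..wn
b = 0.3                         # bias term

# Weighted sum of inputs plus bias; the activation forms the output Y.
y = sigmoid(np.dot(w, x) + b)
print(y)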
Introduction Time goes quickly, and our lives demand efficient solutions to daily tasks and problems. We also want to have fun and enjoy being with family and friends. Thus, it is paramount to solve issues in time, avoid procrastination, and reduce complexity where possible—in other words, to keep it all as simple as possible. I think that well-thought-out minimalism in everything we do or plan is the key to saving time. Knowing and honing your skills Let’s focus on the coding and design process. My front-end skills require so much polishing that I do not even dare to delve into them. I am a back-end developer because it is what I like to do. And it is OK. It is impossible to be perfect at everything. Firstly, I did a thorough search for things I wanted to learn in programming and design, and it took me years. But in the end, I found out that I do not have the patience to meticulously pick colours and create layouts for perfect web pages. I like to create worlds in code, objects, and control flows to realize the needed functionality. Coding is magic, and magic can be manifested in different... Minimalism in Coding and Design
While I have some machine learning experience from working with Scikit-Learn, I have always been interested in Deep Learning. The plan is to learn the basic concepts and apply the algorithms to real-life situations, which is what I have always liked. I have found a DataCamp course, Introduction to Deep Learning in Python, part of the Deep Learning in Python track. It provides the fundamentals for understanding Deep Learning and how neural networks are created in Keras. The course is easy to follow. The most challenging concept explained in the course is the backpropagation method, applied to minimise the prediction error while adjusting network weights. The programming exercises are easy to follow and have an excellent interface for running the Python code. To cope with possible delays in this process, I decided to share my progress on Twitter. I post the main things I learned in this course and retweet helpful visuals I find in my Twitter network. I have found that the data science community on Twitter is very supportive of and open to sharing with newcomers like me. There is also a pretty helpful hashtag #66DaysOfData, which motivates me to continue. Overall, it is possible... Deep Learning with DataCamp and Twitter
Introduction Version control systems are handy for keeping track of file versions. This is useful for tracking your code, scripts, and text information. Currently, GIT is one of the best open-source and cross-platform version control solutions. It enables distributed repository management; it works fast over HTTP and ssh protocols. GIT is relatively easy to use, with a command-line utility or a Graphical User Interface. Version Control with GIT Personally, I have found several commands to be essential for tracking my thesis text sources (latex) and versions of code. In this brief tutorial, I will share these commands with you. For simplicity, I will give a starting point for the setup, initialising a GIT repository, and basic usage scenarios in just 10 minutes of your time. Installation Since I work on macOS, I use the Homebrew package manager to install Git. To install Homebrew,... GIT in 10 minutes
The Phoenix bird is a fantastical bird known from ancient Greek mythology. In many cultures, we can find fairy-tale birds resembling the Phoenix. For instance, the Russian Firebird is also a phoenix bird. The phoenix lives about 500 years. Before dying, the bird builds a nest, sets itself on fire, and, after burning to ashes, eats its worm while regenerating to life again. It symbolizes rebirth, and I think many of us feel the pain of rebirth once in a lifetime; it is how I feel now. I believe that I am going to eat my worm soon. Before then, I live it all in the fire and seek my worm for, hopefully, the next life ahead. Will I find my worm and rise from the ashes? I do not care, since I have many lives to live and burn. Are we raising from ashes?
We have arrived in December now, and Christmas is coming! It has been quite a challenging year so far. Many things happened, a rollercoaster of 2021, and we are still riding with the pandemic. But I am very grateful that my dear people are all well. This is what I wish for the following year. I hope everyone has much love, patience, and health in 2022! Merry Xmas and a very Happy New Year! Merry Xmas and a Very Happy New Year!
Introduction Python is relatively easy to learn and beginner-friendly. I like Python because you can program any kind of project with it. It is open-source and free for anyone to use. Python has well-tested machine learning libraries and a very supportive community. I will overview herein the basic syntax of the Python programming language. This will be useful for beginners or people who are moving quickly from another programming language to Python. Why Python? Python is a general-purpose, object-oriented programming language. It was created by Guido van Rossum, who initially thought of it as a hobby project in 1989 during the Xmas vacation. Basic Syntax In Python, we do not need to declare the types of variables as we do in Java programs. We can declare variables anywhere in the code. The variables and the... Python Programming Language
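A tiny sketch of what that dynamic typing looks like in practice (the variable names are arbitrary):

# No type declarations: Python infers types at runtime.
bird = "Pigeon"       # a string
wing_span = 50        # an integer
wing_span_m = 0.5     # a float

# A variable can even be rebound to a different type later.
wing_span = "fifty centimetres"
print(bird, wing_span, wing_span_m)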
I’m a machine learning engineer and researcher. I have been fascinated by computer science, Artificial Intelligence, technology, and philosophical questions from an early age. I observed how telephone stations work, how data streams can be redirected between different locations, and how the telephone stations are programmed. I spent some time with my father (a telecommunication engineer), who explained to me how the technology works. I learned digital circuitry and logic while still at school. I started to write code on the ZX Spectrum computer when I was 13. First, I focused on the Basic programming language and then moved to machine code to manipulate memory easily. Next, I moved to Pascal on IBM machines in a mathematical lyceum, where I prepared for university entrance exams. Meanwhile, I had various other interests. I finished music school in the fortepiano class, was a member of a biology club, and published my school class newspaper. I had the best childhood one could have. When I reached 17, I moved about 845 kilometers from my home to pursue Computer Science in Tomsk, arguably Siberia’s best city. I had various courses on Computer-Aided Design, analog and digital electronics, linear algebra, and loads of programming, amongst other subjects.... Hi! I'm Elena. Welcome to my blog.
Introduction In this post, I write about tools, web platforms, and data to experiment with Machine Learning. Libraries and APIs With a focus on Python libraries, I want to mention scikit-learn.org, TensorFlow by Google, PyTorch by Facebook, and Keras (an API), the most mature tools providing Machine Learning algorithms. Keras is an API that presents an easier way to use other libraries such as TensorFlow or Aesara (formerly Theano). PyTorch is also more user-friendly compared to TensorFlow. While TensorFlow seems more complicated to use, it is more mature and has a larger community and better support. The open-source Python library scikit-learn provides a comprehensive selection of machine learning techniques (regression, classification, clustering), feature selection, metrics, preprocessing, and other functionality. At this moment, scikit-learn is lacking deep learning functionality; however, we can use TensorFlow with the Scikit Flow wrapper for creating neural networks using the scikit-learn approach. The XGBoost library is another option for applications requiring multicore parallelism. XGBoost is Extreme Gradient Boosting, using boosted trees to build regression, classification, ranking, and other predictive models. Platforms To learn how Machine Learning works in practice with the worldwide community, I recommend kaggle.com as one of the first steps in ML experimentation. Kaggle has loads... Tools and Data to Experiment with Machine Learning
Introduction Artificial Intelligence (AI) is a field of computer science. AI provides methods and algorithms that mimic human intelligence, reasoning, and decision-making and provide insights, which businesses can use in research or industry to build new, exciting, and innovative products or services. Machine Learning (ML) is a subset of AI with algorithms that learn from data. In this post, we sort out the differences between AI and ML. Artificial Intelligence and Machine Learning AI-generated Art with Jasper, December 2022 I update this article periodically with new ideas, so save this blog post to your favourite Pinterest board. Pinning it will ensure you can refer to this detailed article later. I like the concise definition of AI on Wikipedia: Artificial intelligence (AI) is intelligence demonstrated by machines, as opposed to natural intelligence displayed by animals including humans. For instance, AI can be used to detect tumor cells or anomalies [1], expert systems can... Deep Learning vs Machine Learning