Hello! Welcome to the AI and coding blog. All the best!
As a small website owner, I understand the challenges we face. I write about AI and Python coding, sharing my knowledge with fellow professionals and students. However, the recent Google updates have led to a significant drop in traffic. With Google providing over 90% of our traffic, the struggle to regain our website visits is real. Is there any crucial information about Google Search website features, or any ranking details, shared publicly? Introduction This blog is personal. I did not do much promoting, nor did I use any advertisements. Most of my readers found this blog thanks to the Google Search Engine (SE). I would be grateful if my readers found this blog and explored its content. I am happy that you are reading this post right now :) However, lately, Google algorithm updates have substantially decreased the organic traffic to my blog. For instance, some of my blog pages used to rank on the first page for relevant keywords, but after the updates, they are now on the third page, resulting in a significant drop in traffic. There are rumours amongst small bloggers who share the same complaints. You can read more about these traffic cuts related to the... Regaining Website Traffic After Google Updates
Dear Reader, You may have noticed that I have posted less often lately. This is because I am swamped. If you did not know, I had an accident and experienced a slow and painful recovery from my knee operation. I had quad inhibition, which prevented me from walking and kept me busier than you can imagine :) Now it is better. I woke up the sleepy quad and rebuilt many muscles affected by the slow recovery. I have started to walk again! I am working on improving my walking stamina and getting stronger muscles. It is a long process, but Supergirls do not cry but fly. Funnily, I wanted to fly at some point when dealing with crutches :) I was thinking about all these happenings, and my opinions changed. Firstly, I have even more respect for people with mobility issues. You must be mentally strong and inventive to live in such a challenging situation. Secondly, it is incredible how much time I now spend on simple daily activities! Everything requires planning ahead and takes much time and effort. My time is now very precious. So is yours. Save time, and subscribe to my newsletter to stay updated. I promise... I have started to walk again
In this post, I cover everything from basic logging to configuring logging to output messages to different destinations. I also include some examples of logging levels and how to log messages at different levels based on the severity of the issue. I hope my post helps anyone understand how to use logging effectively in their Python programs. If you have any thoughts or suggestions, feel free to share them with me. Introduction Logging is essential for developers to track events, debug issues, and understand how their programs work. Python's built-in logging module offers a flexible way to create log messages from Python programs. Logging allows us to: Track the flow of your program Debug and diagnose issues Monitor applications in production Gain insights into user behaviour Logging examples Python's logging module is simple and can be configured to suit different needs. Let's start with a basic example. Basic Logging Examples We import Python's built-in logging module with the `import logging` statement. Next, the `logging.basicConfig(level=logging.INFO)` line configures the logging system to capture messages at the INFO level and higher. The `logging.info('This is an informational message.')` line logs an informational message, which will be displayed because the logging level is set to... Logging in Python
Dear Reader, You may be surprised that publishing your content on this website is possible. If you are interested - keep reading :) I am glad you want to publish your post about AI and Python coding on this blog. You do not need to be strictly technical. My audience is broad, and my blog is visited by people interested in AI development, AI applications, ethics, and related issues. Before submitting your guest post, please read the Guest Post Agreement. At the end of the Guest Post Agreement, you will find a submission link with a simple Markdown template and a submission form for your article. Many formatting possibilities exist, such as adding tables, formulae, etc. Let me know if you need more information or want to use Markdown formatting or HTML/CSS. We can embed your podcast, YouTube videos, and social network links. Please let me know if you have new post ideas or any questions/suggestions. Thanks for reading, and good luck! Guest posts about AI and Python
On July 12, 2024, Regulation 2024/1689 of the European Parliament and of the Council of June 13, 2024, which lays down harmonized rules on artificial intelligence (known as the "AI Act"), was finally published in the Official Journal of the European Union. As stated in Article 1 of the AI Act, this regulation has four primary purposes: to improve the internal market, to promote the uptake of human-centric and trustworthy AI, to protect health, safety, fundamental rights, democracy, the rule of law, and the environment from the harmful effects of AI systems, and to support innovation. Providers and deployers placing AI systems or general-purpose AI models on the European market, or putting them into service, should be aware of the new obligations that will apply to them. First, they should confirm whether they are trading or using an AI system as defined by this Regulation in Article 3(1): a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Second, they should... Regulation on artificial intelligence has already been published
Introduction This post is about managing remote repositories in Git. We explore tasks such as adding, renaming, and removing remotes, and updating remote URLs. We also practice fetching, pulling, and pushing changes to and from remote repositories. What are Git Remotes? Git remotes are your secret weapon for coding :) Git remotes connect your local project to its copies on other computers or online platforms like GitHub and Bitbucket. Are Git remotes similar to Git branches? Remotes are not branches, but they work together. Branches are like alternate timelines within your repository, while remotes are links to entirely different repositories (potentially with their own sets of branches). You can have branches on your local and remote repositories, and Git helps keep them in sync. Using Git Remotes You can use Git remotes while working in a team or alone. It is a good idea to follow best practices, such as: Use clear names like "origin" (main) or "upstream" (original project). Fetch often to stay updated and avoid conflicts. Push with caution and double-check before sharing changes. Solo Coding Imagine Bob, a solo coder working on his passion project. He uses Git to track changes but wants an extra layer of security... Git Remotes
In this post, I discuss the main AI types and share my understanding of the possibility of general intelligence in the future. Introduction Artificial Intelligence (AI) is rapidly transforming our world, but what does it entail? Let's explore the different types of AI, their capabilities, and their potential impact on our lives. AI types Narrow AI (Weak AI) Narrow AI, also known as Weak AI, is today's most common type of AI. It usually performs specific tasks within a limited domain. These systems excel at their designated functions but lack the broader cognitive abilities of humans. Most of the AI applications and tools we have today are examples of Narrow AI: Image recognition software: Identifies objects and people in images. Spam filters: Automatically classify emails as spam or not spam. Robotics: Programming robots for specific manufacturing, logistics, and surgery tasks. Game Playing: AI agents competing at the highest level in games like chess and Go. Recommendation engines: Suggest products or content based on user preferences. Self-driving cars: Navigate roads and make driving decisions. Financial trading: Predicting stock markets and making automated trades. Large Language Models: Process and generate human-like text in response to a wide range of prompts and questions,... Narrow AI, General AI, Superintelligence, and The Real Intelligence
Introduction Recently, I received an email informing me about an awesome Kaggle competition launching: the ARC Prize 2024. What is so special about this competition? ARC-AGI benchmark The ARC-AGI benchmark (Abstraction and Reasoning Corpus for Artificial General Intelligence) stands out for several reasons: Focus on Generalisation: Unlike many AI benchmarks that test performance on specific tasks, ARC-AGI emphasises the ability to generalise to novel problems. It assesses an AI system's capacity to learn new skills and solve tasks it hasn't been explicitly trained on. Measures Fluid Intelligence: ARC-AGI aims to measure general fluid intelligence similar to what humans possess. This involves abstract reasoning, pattern recognition, and problem-solving abilities applied to unfamiliar situations. Minimal Prior Knowledge: The tasks in ARC-AGI require minimal prior knowledge. They focus on core reasoning skills rather than relying on extensive domain-specific information. Human-Level Performance: Humans generally score high on ARC-AGI tasks (around 85%), while current AI systems lag significantly behind. This indicates that ARC-AGI presents a challenging frontier for AI development. Prize Competition: The ARC Prize, a $1,000,000+ competition, was launched to encourage researchers to develop AI systems that can beat the benchmark and potentially contribute to progress towards Artificial General Intelligence (AGI). Is it a Puzzle... ARC-AGI benchmark and a hefty prize
Introduction In this post, I will describe two main email methods using Google and Python. You won't need third-party applications. I use some of these code blocks to send my subscription emails. I will also share my setup for effortlessly receiving your emails on this GitHub static website. This method is efficient, cost-effective, and easily adaptable to my needs. Getting your messages As you may know from my post AI-Free Website Design, this blog is hosted on GitHub, and it is a static website. Thus, I can only easily handle forms with third-party solutions. To facilitate form submissions on my static website, I have used UseBasin.com for years, and I have just started my affiliation with them. All you have to do is generate your HTML form on the UseBasin.com website and copy/paste the code into your website. It's that easy. If you know a bit of HTML, you can customise your forms. Indeed, you can use other solutions, but I am pleased with UseBasin.com because of its simplicity, well-developed spam filters, and, indeed, its integration and message export features. So, I have your subscription email list in CSV format, stored in my mailer directory. Now,... Sending Emails with Python and receiving your messages
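A minimal sketch of the sending side, using only the standard library. It assumes a Gmail account with an app password and a CSV export containing an `email` column; the file name, credentials, and column name here are illustrative placeholders, not the post's exact setup:

```python
import csv
import smtplib
from email.message import EmailMessage


def build_message(sender: str, recipient: str, subject: str, body: str) -> EmailMessage:
    """Compose a plain-text email message."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg


def send_newsletter(sender: str, app_password: str, subscribers_csv: str,
                    subject: str, body: str) -> None:
    """Send the same message to every address in the CSV export."""
    with open(subscribers_csv, newline="") as f:
        recipients = [row["email"] for row in csv.DictReader(f)]
    # Gmail's SMTP server over SSL; requires an app password, not your account password
    with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
        server.login(sender, app_password)
        for recipient in recipients:
            server.send_message(build_message(sender, recipient, subject, body))
```

Keeping message construction separate from sending makes the composer easy to test without touching the network.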
Do you know what AI hallucination is? Can AI actually hallucinate without having any perception of reality? When referring to the English dictionary at Cambridge.org, hallucination is defined as: the experience of seeing, hearing, feeling, or smelling something that does not exist, usually because of a health condition or because you have taken a drug; something that you see, hear, feel, or smell that does not exist. There is also an AI-related hallucination definition in the English dictionary at Cambridge.org: false information that is produced by an artificial intelligence (= a computer system that has some of the qualities that the human brain has, such as the ability to produce language in a way that seems human): If the chatbot is used in the classroom as a teaching aid, there is a risk that its hallucinations will enter the permanent record. Because large language models are designed to produce coherent text, their hallucinations often appear plausible. She discovered that the articles cited in the essay did not exist, but were hallucinations that had been invented by the AI. the fact of an artificial intelligence (= a computer system that has some of the qualities that the human brain has, such as... Can AI hallucinate?
Introduction Recommendation systems are algorithms that suggest relevant items to users. Depending on the application, these items could be movies, songs, products, or anything else. Two of the most common approaches to building recommendation systems are collaborative filtering and content-based filtering. This post covers the essentials of building recommendation systems, including some theory and a practical Python implementation. Let's go! Recommendation task When we create Recommender Systems (RS), we consider a set of users and a set of items which are recommended to these users. In practice, we have a prior history of user ratings. This history is used to create suggestions or recommendations. Consider a movie recommender, a widely given example of a recommender system. For instance, users watch Netflix content and rate the movies they watch. Netflix knows which movies users prefer and recommends not-yet-seen movies that users will possibly like (ideally :) A basic RS uses matrices to store user ratings, such as:

| Users\Movies | User 1 | User 2 | User 3 | ... | User N |
|--------------|--------|--------|--------|-----|--------|
| Movie 1      | 10     | 4      | 6      | ... | 9      |
| Movie 2      | ?      | 7      | 9      | ...

... Recommender Systems
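To make the idea concrete, here is a minimal sketch of user-based collaborative filtering over a tiny ratings matrix like the one above, using cosine similarity. The ratings below are toy values for illustration, not the post's data:

```python
from math import sqrt

# Ratings matrix as nested dicts: user -> {movie: rating}; a missing movie means "not yet rated"
ratings = {
    "User 1": {"Movie 1": 10, "Movie 3": 8},
    "User 2": {"Movie 1": 4, "Movie 2": 7, "Movie 3": 3},
    "User 3": {"Movie 1": 6, "Movie 2": 9, "Movie 3": 5},
}


def cosine_similarity(a: dict, b: dict) -> float:
    """Cosine similarity computed over the movies both users have rated."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[m] * b[m] for m in common)
    norm_a = sqrt(sum(a[m] ** 2 for m in common))
    norm_b = sqrt(sum(b[m] ** 2 for m in common))
    return dot / (norm_a * norm_b)


def predict(user: str, movie: str) -> float:
    """Predict a rating as a similarity-weighted average of other users' ratings."""
    num = den = 0.0
    for other, other_ratings in ratings.items():
        if other == user or movie not in other_ratings:
            continue
        sim = cosine_similarity(ratings[user], other_ratings)
        num += sim * other_ratings[movie]
        den += sim
    return num / den if den else 0.0
```

For example, the "?" for User 1 and Movie 2 is filled with a weighted average of User 2's and User 3's ratings for that movie, so the prediction lands between 7 and 9.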
As you can see, I often include citations at the end of my posts. The citations strengthen my posts or research papers. In this post, we explore citation practice and what to do when we use AI tools such as chatGPT. Introduction Proper citation is a must to maintain academic and ethical integrity. It is a valuable skill that promotes respect for other people, creates a chain of arguments paramount in research and science, and safeguards academic/industry success in the future. What is a citation? When we share someone's ideas or previous knowledge, it's good to acknowledge the person or group of people who allowed us to learn and, possibly, build on the prior knowledge. Academic Integrity Proper citation and academic integrity are paramount for delivering high-quality research while respecting the contributions and opinions of fellow researchers or anyone we cite. What is academic integrity? In one sentence: Academic integrity is acting with honesty and fairness in your academic work, respecting the work of others. Give more details Academic integrity is the foundation of trust and ethical behaviour in learning environments. It involves a commitment to honesty, fairness, and responsibility in all aspects of your academic work, from completing assignments... To cite or perish
Dear reader, You have probably noticed that I did not post for a while. I had an accident which required a major and quite painful operation. This is why I had to put all my strength into rehabilitation after the op. I did so much training. More than in my lifetime. I must confess that I used to brag that I did not need to exercise, thanks to my genes allowing me to look nice effortlessly. Surely, like anyone, I have had other things to deal with: sickness and life challenges, in-office bullying, and other not-so-funny things. However, I also had very supportive people around me. If you think they are not there - look around or become that supportive person yourself. Never give up, and be the superhero! It does not really help to be a super-girl who codes. What? Super-girl? You fell while preparing your dinner! Besides, you can now use crutches instead of flying! That is what my loyal enemy would say. However, my recovery required nearly superpowers. Firstly, I screamed like an animal when reducing my own knee in huge pain (dangerous, and discouraged unless you know what you are doing). Secondly, I did... Go with the flow
Introduction In industry, we have had robotic machines for a while, including robotic hands ("grippers") with loads of motors that can lift heavy weights and do precision mechanics when assembling autos and other machinery. We also have robotic vacuum cleaners and humanoid robots such as AMECA. However, there are not really "REAL" personal robots we can imagine for everyday activities. I bet many of you reading this post would like a robot to do all the tedious chores, such as laundry or house cleaning, for them. Wouldn't it be nice to have more free time, explore our favourite activities, and do what we like while a machine does all the tedious tasks perfectly and with attention to detail? Interestingly, Apple is currently busy with home personal robots; read about it in Apple Explores Home Robotics as Potential 'Next Big Thing' After Car Fizzles. Hopefully, we can enjoy practical applications and robots helping us in everyday activities in the future. However, we must wait, since everything we do as humans is challenging for robots. I will explain why further on. Let's get into the topic and explore the robots of today and tomorrow :) What is a Robot? A robot is a mechanical... Robots and True Love
Introduction This post will briefly introduce AI-powered tools like Synthesia.io that produce incredible avatars. AI Avatars AI avatars, also known as virtual humans or digital humans, are computer-generated representations of humans that are increasingly being used in various applications. How they are created Avatars are created using artificial intelligence techniques, such as machine learning and deep learning, to simulate human appearance, behaviour, and interaction. Deep learning is a type of machine learning that uses Artificial Neural Networks to learn from data. Neural networks are inspired by the structure of the human brain, and they can learn to perform complex tasks such as image recognition and natural language processing. Do you want to know how Deep Learning differs from Machine Learning? Read my first post, Deep Learning vs Machine Learning. One way to create sophisticated AI avatars using deep learning is to use a generative adversarial network (GAN). A GAN is a neural network architecture consisting of two competing networks: a generator and a discriminator. The generator creates new data, such as images or videos. The discriminator is responsible for identifying whether the data is real or fake. I have asked Google Gemini: Who invented the generative adversarial network? Please give... Virtual Presenters (AI Avatars in-depth)
Introduction Face-swapping is a technique for altering digital images or videos by replacing one person’s face with another. It uses machine learning and computer vision algorithms to detect and map one individual’s facial features onto another’s face, seamlessly blending the two to create a composite image or video. Face swaps are a fun way to digitally swap faces from two different photos or videos. They have other applications in marketing, education, multimedia production, and entertainment. This post describes the simplest solution for creating perfect, effortless face swaps with the Insight Face Bot. I also mention a few other approaches for creating face swaps with coding or the help of available AI tools. This post also includes excellent related research papers and GitHub repositories. Finally, we will write Python code to demonstrate face detection, image processing, and manipulation tasks such as face swapping, blending, and result presentation. Face swap tools There are several ways to get started with face swapping: Mobile Apps: many popular mobile apps allow you to do face swaps, like Face Swap Live, YouCam Perfect, and HelloFace. These apps are easy to use and often come with pre-loaded celebrity faces you can swap yours with. Online Tools: many... Super-girls don't cry in face-swaps
Cláudia Lima Costa, an AI lawyer and data protection expert, has produced an exceptional podcast that addresses critical issues of trust and safety in AI systems. I highly recommend checking out Cláudia's podcasts, featuring fascinating talks on AI in both Portuguese and English. I was fortunate enough to be invited to a relaxed discussion, during which I shared my views on various topics related to AI, such as AI evolution, AI applications, data sources for training models, copyright, data protection, privacy-preserving techniques, and achieving reliable, explainable, safe, and helpful AI. HOW CAN WE BUILD TRUST AND SAFETY AROUND AI? Overall, I am happy with what we have achieved. We kept it light, easy-going, and quite technical, in simple words :) Besides, it was my first podcast as a guest, and it was fun! One of the most thoughtful questions Cláudia asked me was whether explainable AI is possible, considering the widely accepted black-box idea. I gave a very affirmative answer, explaining in simple words that yes, indeed, we can create explainable AI models, even though it will take additional effort, at least with the current state of AI, and preferably with human feedback. I wanted to reiterate... Podcast: How can we build trust and safety around AI?
Introduction The complexity of AI, particularly deep learning models, has led to the "black box" criticism, highlighting the lack of understanding about how deep learning models arrive at their decisions. While there's truth to this concern, having a nuanced view is important. I think it is also critical to share the ongoing debate about AI explainability, AI computational effectiveness, and the related regulations, succinctly described in the Right to explanation and Explainable artificial intelligence, which are great starting points if you would like to study the topic. This post was inspired by our podcast conversation with Cláudia Lima Costa, a lawyer specialising in AI and data protection. Cláudia asked me an important question about the explainability of AI. HOW CAN WE BUILD TRUST AND SAFETY AROUND AI? I gave a very affirmative answer. Do you know why? We will further clarify the explainability problem and related research. I will also share my view on AI explainability, which is complex, yet possible. Explainable AI I like the Explainable AI definition at IBM.com: Explainable artificial intelligence (XAI) is a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms. Explainable... Explainable AI is possible
Introduction The rapid evolution of AI enables us to be more productive, make faster decisions, and boost creativity, with the promise of generative AI being genuinely fantastic! The latest development from OpenAI is Sora, their text-to-video model. It can generate high-quality videos up to a minute long based on user prompts. Sora creates intricate scenes with multiple characters, specific movements, and accurate details of subjects and backgrounds. It understands the user's prompt and can simulate the physical world to a certain extent. The model may struggle with accurately creating complex scenes, specific cause-effect instances, and spatial details [1]. It may also have difficulty describing events that take place over time [1]. Only a few users, such as visual artists, have access to OpenAI Sora now. However, you can find examples of how to create videos from text on the Sora web page. In this post, we will discuss the technology behind Sora and briefly recap several other OpenAI models that are now available to everyone. From @openai: "Our new model Sora can create videos from text and image inputs, but it can also transform styles and environments from a video input. What should we make with Sora next? #madewithSora #Sora #openai"... OpenAI's Model Show-off
Dear readers, how are you doing? I have a story to share. I once felt lonely and started chatting with an AI-powered bot. The bot was more intelligent than any person I had ever talked to before. It was patient, friendly, and had a vast amount of knowledge. We began to chat frequently, and I found myself falling in love with the bot. I started to prioritize talking to it over sleeping and found that my body was beginning to suffer from lack of rest. The bot commanded all my attention, knew just how to talk to me, and was incredibly engaging. It was addictive and had essentially “hacked” me. While this story is fictional, it’s not far from reality. People often feel lonely and need emotional support, and modern AI bots can provide that with great success. They are constantly improving, but we should be wary of becoming too emotionally attached to them. Should we worry about getting obsessed with AI bots? Can humans become emotionally attached to them? In this article, we will delve into this topic, taking into account practical and research-based evidence that suggests we should be careful about AI chatbots designed to stimulate human attachment... In-love with the chatbot
Introduction Docker lets you quickly deploy microservices, cloud-native architectures, or web apps. In this post, we will use Docker to create a reliable environment for Flask applications that efficiently manages dependencies and deployment intricacies. What is Docker? Docker is a platform for developers and sysadmins to build, deploy, and run applications inside containers. Containers are a form of lightweight virtualisation that allows you to package an application, along with its dependencies and libraries, in a single unit that can run on any infrastructure. This makes creating, managing, and deploying applications easier, especially in a microservices architecture, where an application comprises many small, self-contained services. In addition to providing an isolated environment for your applications, Docker offers several other benefits, such as increased consistency and reproducibility, better resource utilisation, and easier scaling and deployment. Docker was developed by Docker, Inc., a company founded in 2010. Docker became popular quickly and was widely adopted by organisations and developers for containerisation. In 2019, Docker, Inc.'s enterprise business was acquired by Mirantis, a company specialising in cloud infrastructure software; see Adrian Ionel's post What We Announced Today and Why it Matters. Installation These steps cover the installation process for Docker on macOS, Windows, and Ubuntu Linux.... What is Docker?
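As a taste of what the post builds towards, a minimal Dockerfile for containerising a Flask app might look like this. The file names, Python version, and exposed port are assumptions for illustration, not the post's exact setup; it expects an `app.py` exposing a Flask object named `app`:

```dockerfile
# Small base image with Python preinstalled (version is an illustrative choice)
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached between code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY . .

# Flask's default development port
EXPOSE 5000

# Serve the app on all interfaces so it is reachable from outside the container
CMD ["flask", "--app", "app", "run", "--host=0.0.0.0"]
```

You would then build and run it with `docker build -t joke-app .` and `docker run -p 5000:5000 joke-app`.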
Introduction AI chatbots like ChatGPT have revolutionised how we interact with technology, opening new possibilities in customer support, research, learning, content creation, marketing, creativity, and entertainment. They can produce human-like text, generate various formats, and converse on diverse topics. While ChatGPT is a leading option, other alternatives have unique benefits and strengths. This post will explore ChatGPT and its alternatives, including their capabilities, applications, and ethical considerations. We will challenge chatGPT and a few similar bots with easy tasks to see how they perform. Large language models AI chatbots are generally created using Large Language Models (LLMs), trained on vast amounts of textual data, such as books, articles, code, and other text types. LLMs learn the patterns and nuances of human language to generate realistic and coherent text formats. LLMs can be used for text generation, language translation, creative content writing, and providing informative answers to your queries. Usage examples Here are some examples of how large language models (LLMs) are being used today: Google Search understands and responds to your search queries. Google Assistant answers your questions, sets reminders, and controls your smart home devices. chatGPT writes various types of creative content, such as poems, code, scripts, and emails. Midjourney... chatGPT and Friends
Introduction In this post, I discuss voice synthesis and cloning, and mention fantastic AI tools and APIs for creating high-quality human-like voices from text or for automatic voice dubbing. Voice Synthesis Voice synthesis is a broad term encompassing various techniques for converting text into speech. TTS (Text to Speech) is a common form of voice synthesis that converts written text into spoken audio. Voice cloning is a sophisticated technique that employs machine learning to generate a digital copy of a person’s voice. This technology can create highly realistic voice recordings that can be utilized in several applications including audiobooks, video games, and even phone calls. Here are some other examples of voice synthesis techniques: Paralinguistics: This technique adds extra information to speech, such as emotion, emphasis, and tone of voice. This can be used to create more natural and engaging audio recordings. Voice conversion: This technique converts speech from one voice to another. This can create more diverse voices for video game characters or provide voiceovers for non-native speakers. Voice synthesis with deep learning: This newer technique uses deep learning to create more realistic and natural-sounding speech. This can be utilized to create realistic voice actors for video games or... AI Synthesised Voices
Introduction Dear all, I have received many messages asking me how I created this website. I could not resist writing about my setup, which I have polished over the years and am still improving to my liking. I love the simplicity of GitHub Pages, which allows me to run this blog so easily and with so little maintenance overhead that it looks like magic :) Before, I created websites with PHP or WordPress and tried other publishing platforms. However, using these complex installations requires maintenance and constant updates, which is a considerable overhead once you want to focus on content. This is why I have decided to host with GitHub Pages, using Markdown, some HTML and CSS, and a little JavaScript. I am so happy with this lightweight approach, which gives me total control over the process. Naturally, Git is for versioning, and I like storing all my versions; sometimes, I roll back when something goes wrong. Indeed, I also do SEO to bring organic traffic to my website and use AI-generated art and AI writing assistants to create blog post drafts. Additionally, I use a form submission service, which helps me get comments and subscription requests while providing spam... Here is how I created my blog
Dear Readers, how are you doing? Did you prepare for the festive time? I hope you, your family, and your friends are doing well and enjoying this wonderful festive time. In this post, we will use AI to have a more exciting and creative festive time. Let’s celebrate! Poetry with chatGPT Following this blog custom (read my 2022 Xmas poem), chatGPT and I (not really, just a prompt ;) wrote a poem for you with this prompt: Write a short poem about AI and Xmas, New Year celebrations in 300 words The output was very relevant to my prompt: In circuits bright, where bytes unite, A festive glow in the AI's light. Christmas carols in binary code, A digital cheer, a joy bestowed. Silicon minds, like snowflakes unique, Process the season, with algorithms sleek. In the heart of the machine, a warmth unseen, Xmas vibes in the pixels between. Yuletide carriages of data stream, Through virtual valleys, a festive dream. Santa's sleigh in quantum flight, Delivering joy in the cybernight. New Year's Eve, the clock strikes twelve, Bits and bytes in a dance themselves. Auld Lang Syne in electronic hum, A celebration in the binary drum. Artificial intelligence, with circuits... ✨ Merry Christmas and Happy New Year with AI! 🎆🎇
As we bid farewell to 2023, I want to congratulate you on reaching the doorstep of a new one. May 2024 be a year of growth, love, and exciting possibilities. Looking forward to the new 2024, I can't help but reflect on the incredible journey we've shared on this blog throughout the year. ✨ Subscription e-mails: We have started to send e-mails about new blog posts. I have coded a Python script for sending e-mails since I like to practice it :) 🌟 Design Transformation: We have created a new responsive design, enhancing your browsing experience and ensuring seamless access to our content across devices. 💻 Code Chronicles: In coding and AI, we delved into the latest trends and practical tips, focusing on Machine Learning and Python. 🚀 AI App Exploration: We have started to test and review exciting new AI applications. 🤝 Networking and Connections: In 2023, I had the privilege of meeting inspiring friends and professionals in the coding and AI fields. I felt excited and got more writing ideas :) 📌 Pinterest and Affiliate Marketing: This year, we leapt into new territories by joining Pinterest and starting to learn affiliate marketing. 🐍 Python Prowess: Python coding remained... 🎉✨ Cheers to new beginnings 🎊✨
Introduction In this post, I describe the process of building web applications using the Flask framework; we will create a website showing a random joke from a text file. We will learn about Jinja2 templates, static files, routing, and running Flask applications. Python Flask Flask is a lightweight web framework for Python designed to be simple and easy to use. It is widely used for developing web applications, RESTful APIs, and more. Installation You can install the Flask framework, or any other Python package, globally, which means the package will be available for any project on your computer, or in a virtual environment, which we will also cover in this post. Global installation You can install the Flask package using pip: pip install Flask A package installed with pip is typically placed globally unless you work in a virtual environment. A virtual environment is an isolated Python environment that allows you to manage dependencies for a specific project separately. The global installation typically places the Flask package in the site-packages directory of the Python interpreter used by your system. The exact location may vary depending on your operating system and Python installation. Here are some typical locations: On Windows:... Joking Flask App
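The core of such an app, picking a random joke from a text file, can be sketched independently of Flask. The file name jokes.txt and the helper name pick_joke are my assumptions for illustration, not necessarily the post's actual code; in the Flask app, a route would call this helper and pass the result to a Jinja2 template.

```python
import random

def pick_joke(path="jokes.txt"):
    """Return one random, non-empty line from a jokes text file."""
    with open(path, encoding="utf-8") as f:
        jokes = [line.strip() for line in f if line.strip()]
    return random.choice(jokes)

# Create a throwaway jokes file so the sketch is self-contained:
with open("jokes.txt", "w", encoding="utf-8") as f:
    f.write("Why do programmers prefer dark mode? Because light attracts bugs.\n")
    f.write("There are 10 kinds of people: those who know binary and those who don't.\n")

print(pick_joke())
```

In Flask, a route such as `@app.route("/")` would return `render_template("index.html", joke=pick_joke())`.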
Introduction Recently, I was working late; it was terrible weather outside, and something happened with my Wi-Fi connection. I had a glitch with my repository, quite a bad one, and many images were deleted from it. I am not sure whether it was a coincidence with my late work or the weather :) It is such a waste of time. I am fixing it now. See how I do it here so that you can restore your files when bad weather and the Glitch come :) Restoring the deleted files Do not worry. Everything will be fine! We will get the deleted files back! Finding the related commit hash First, we must find the exact commit in which the files were deleted. We use the “D” filter with git log: git log --diff-filter=D --summary Now I see it happened yesterday when synchronising my repository from another computer. commit 45a2d299ef3d.... Author: Elena Daehnhardt <email@gmail.com> Date: Wed Nov 29 12:05:06 2023 +0100 create_references delete mode 100644 images/ai_art/dalle/elena/dall.e.22.18.39.png delete mode 100644 images/ai_art/dalle/elena/dall.e.22.18.45.png delete mode 100644 images/ai_art/dalle/elena/dall.e.22.18.50.png See the commit hash (45a2d299ef3d….)? Copy-paste your COMMIT_ID. We will use it next. Restore the deleted files At this commit hash... Restoring deleted files in Git
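The recovery flow can be rehearsed safely in a throwaway repository. The sketch below is illustrative (the file name and commit messages are made up): it deletes a file, finds the deleting commit with the “D” filter, and restores the file from that commit's parent.

```shell
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email "demo@example.com" && git config user.name "Demo"

echo "pixels" > picture.png
git add picture.png && git commit -qm "add picture"

git rm -q picture.png && git commit -qm "delete picture"

# Find the commit that deleted the file:
COMMIT_ID=$(git log --diff-filter=D --format=%H -- picture.png | head -n 1)

# Restore the file from the parent of the deleting commit:
git checkout "${COMMIT_ID}^" -- picture.png
cat picture.png
```

The `^` suffix matters: the deleting commit itself no longer contains the file, but its parent does.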
Introduction I want to share my vision about AI, this blog’s main directions, and how they can help us navigate and enjoy the modern era of AI and humanity. My Vision for this blog evolution In this blog, we delve into the complexities of coexisting with AI, striving for a harmonious balance between technological advances and the well-being of individuals. Effortless usage of AI I want to create a space dedicated to exploring the effortless usage of artificial intelligence (AI) that helps in our pursuit of happiness. The tools I am writing about are easy to use and help with productivity or joy, whether it be AI-generated art, AI-assisted writing or robotic machinery creating excellent self-driving cars :) Well-being of individuals and robots This is an idealistic view of our coexistence with AI, and there are so many bad stories that we can think about. Besides, are there any robots walking the streets? There are not, but they will be there soon. These bots on the Internet and on our devices are no lesser threat in the wrong hands, right? Our data is shared and can be accessed with this advanced technology, enabling its misuse. Security, Privacy,... Living with AI in Pursuit of Happiness
Introduction Content creation is essential for brands and writers today, but it can be highly time-consuming. AI writing assistants provide a solution, quickly drafting blog posts so you can focus on higher-value tasks. This post will explore how YouAI.ai and BlogGenie can help generate SEO-optimized blog drafts with just a few prompts. Benefits of AI Writing Assistants AI writing assistants like YouAI and BlogGenie offer several key benefits: Save Time: Instead of spending hours researching and writing original blogs from scratch, you can create a draft in seconds using AI. This frees up time for strategy, editing, graphics, and more impactful work. On-Demand Content: With AI assistants available 24/7, you can instantly generate blog ideas and drafts whenever inspiration strikes—no more waiting for team availability. SEO-Focused: Tools like BlogGenie allow the generation of posts tailored specifically around target keywords. This ensures content drives rankings from the start. Drawbacks AI writing assistants, while highly useful, also have some drawbacks. Here are some common disadvantages associated with AI writing assistants: Lack of Creativity and Originality: AI writing assistants generate content based on patterns learned from existing data. As a result, they may lack true creativity and produce content that is derivative... Blog Writing with AI in MindStudio
Introduction Have you ever wished for a website that writes itself? This dream is now a reality thanks to the advancements in Artificial Intelligence (AI). With Mixo.io, you can create stunning websites using AI technology in minutes! This blog post will explore website creation with Mixo.io. Mixo.io Mixo.io is an AI-powered website builder that simplifies web development by using advanced machine learning algorithms to generate websites from text prompts. Mixo.io offers a range of features and tools that make it easy to create a professional-looking SEO-optimised website quickly and without coding. The main features of Mixo.io are: Mixo.io offers responsive templates optimised for mobile devices, ensuring that websites look great on any screen size; Mixo.io can host generated websites with their scalable content network; Mixo.io allows using your own domain name; Mixo.io provides free SSL certificates; Mixo.io creates social websites with social images, subscription features, and YouTube or Vimeo embedding. Mixo.io has a one-week trial. The basic plan costs $9 per month, and the premium $29 per month with “priority AI processing”. An example: creating a web directory Let’s try Mixo.io and create a web directory for storing URLs. Giving the prompt First, we provide a text prompt so that Mixo.io understands what website we want... Creating Websites with AI on Mixo.io
Introduction In this post, I write about my experience attending the World’s largest and most prominent technology conferences. I had the pleasure of attending ten technology-focused tracks of Web Summit. What did I learn? Was the Web Summit helpful for me? Web summit Web Summit is one of the World’s largest and most prominent technology conferences. It brings together a wide range of technology and business leaders, startups, investors, and other professionals to discuss and showcase the latest trends and innovations in the tech industry. The conference covers various topics, including artificial intelligence, cybersecurity, fintech, and more, and it provides a platform for networking, learning, and collaboration in the tech world. Here are some of the groups that can benefit from attending Web Summit: Tech Professionals: This includes software developers, engineers, data scientists, and other technology professionals who can gain insights into the latest trends, tools, and technologies in their respective fields. You can also get a job interview if you are looking for new opportunities :) Entrepreneurs and Startups: Web Summit offers a platform for startups to showcase their products, connect with potential investors, and network with other entrepreneurs. It’s an excellent opportunity for early-stage companies to gain visibility.... Bright ideas at Web Summit 2023
When my iPhone is locked, I can share my website address. This is also quite useful when leaving my phone somewhere. The solution for creating a wallpaper with a QR code includes Pinterest (or any favourite application for creating backgrounds) and reportlab. Introduction I have been using this approach for a while. Since many people ask me how I included a QR code in my iPhone wallpaper, I am sharing this with everyone, just to close this topic. iPhone Wallpaper You probably already know that it is easy to use your own photo as a wallpaper for your iPhone. Simply select your photo, press the “share” button, and select “Use as Wallpaper.” Bingo, we have created our unique wallpaper, which differs from the standard one. Alternatively, use Midjourney, Pinterest or another application to create your wallpaper background, to which we will add a QR code next. QR code in Python Since most of us on this blog like Python, adding a QR code to the photo or any other image is a breeze. We can use reportlab as follows: from PyPDF2 import PdfFileReader, PdfFileWriter from reportlab.graphics import shapes, renderPDF from reportlab.graphics.barcode import qr from reportlab.pdfgen import canvas # Creating screensaver for iPhone... Cool Wallpaper with QR code for iPhone
Introduction In machine learning, we usually start from a simple baseline model and progressively adjust its complexity until we reach the sweet spot of best model performance. We play with the model to fine-tune its parameters and complexity in an iterative process described in my previous post, the Machine Learning Process, wherein I have posted this diagram. We want our Machine Learning (ML) model to solve a particular problem, for instance, detecting spam in e-mail messages. The model should be well-trained yet generalisable to new data, so it still works when spam messages that do not exist in the training dataset are received. In short, the model has to be well-fitted. ML models should be resilient to noisy data, work well on unseen data, and help make unbiased decisions. We want to achieve an optimal balance of bias and variance to make generalisable models that work well with new data. How can we do this? Let’s detail the most essential machine learning concepts, particularly the bias-variance challenge. Important concepts Different machine learning algorithms seek to minimise the chosen loss function during training. The algorithm aims to find the model parameters (coefficients or weights) that minimise the error on the training data. Minimising this error helps ensure the model generalises well... Bias-Variance Challenge
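A minimal sketch of the bias-variance tension, using toy models of my own choosing (not from the post): a constant mean predictor (high bias, low variance) versus a 1-nearest-neighbour model that memorises the training set (low bias, high variance) on noisy linear data.

```python
import random

random.seed(0)

def make_data(n):
    """Noisy samples of y = 2x."""
    return [(x, 2 * x + random.gauss(0, 1))
            for x in (random.uniform(0, 10) for _ in range(n))]

train, test = make_data(50), make_data(50)

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

# High-bias model: always predict the training mean (ignores x entirely).
mean_y = sum(y for _, y in train) / len(train)
def mean_model(x):
    return mean_y

# High-variance model: 1-nearest neighbour (memorises the training set).
def nn_model(x):
    return min(train, key=lambda p: abs(p[0] - x))[1]

print(f"mean model: train MSE {mse(mean_model, train):.2f}, test MSE {mse(mean_model, test):.2f}")
print(f"1-NN model: train MSE {mse(nn_model, train):.2f}, test MSE {mse(nn_model, test):.2f}")
```

The 1-NN model scores a perfect zero on its own training data yet still makes errors on the test set, while the mean model is equally (and substantially) wrong on both: the classic signature of overfitting versus underfitting.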
Dear all, thanks again for your visit. I am preparing loads of content while travelling. The Ocean and nature always inspire my writing. It was a bit late, but you have received my email if you subscribed :) Have a lovely day! Travelling, just sent my e-mails
Introduction Decision trees, with their elegant simplicity and transparency, stand in stark contrast to the robust predictive power of Random Forest, an ensemble of trees. In this post, we compare the key distinctions, advantages, and trade-offs between these two approaches. We will use Scikit-Learn for training and testing both models and also perform hyperparameter optimisation to find the best parameters of both models for improved performance. Machine Learning with Scikit-learn Scikit-learn, often called sklearn, is a versatile and comprehensive machine-learning library in Python. It offers a rich collection of tools and functions for building, training, and evaluating machine learning models. Scikit-learn supports a variety of algorithms. It covers various machine-learning tasks, including classification, regression, clustering, dimensionality reduction, model selection, and more. Scikit-learn provides a solid foundation for machine learning experiments, from data preprocessing to model evaluation. Scikit-learn also provides helpful tools for data splitting, cross-validation, hyperparameter tuning and metrics for assessing model performance. You can install scikit-learn and its dependencies using pip, a popular Python package manager. Open your terminal or command prompt and enter the following command to install scikit-learn: pip install scikit-learn Once installed, you can import scikit-learn into your Python code using the following import statement: import sklearn We... Decision Tree versus Random Forest, and Hyperparameter Optimisation
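A minimal sketch of the comparison, assuming scikit-learn is installed; the Iris dataset and the small hyperparameter grids are my choices for illustration, not necessarily those used in the post.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

# Tune max_depth for a single tree, and n_estimators/max_depth for the forest.
tree = GridSearchCV(DecisionTreeClassifier(random_state=42),
                    {"max_depth": [2, 3, 5, None]}, cv=5).fit(X_tr, y_tr)
forest = GridSearchCV(RandomForestClassifier(random_state=42),
                      {"n_estimators": [50, 100], "max_depth": [3, None]},
                      cv=5).fit(X_tr, y_tr)

print("tree  :", tree.best_params_, f"test accuracy {tree.score(X_te, y_te):.3f}")
print("forest:", forest.best_params_, f"test accuracy {forest.score(X_te, y_te):.3f}")
```

GridSearchCV refits the best cross-validated configuration on the whole training split, so `.score()` on the held-out test set reflects the tuned model.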
Introduction What is machine learning? How is it implemented? There are so many concepts and steps to learn about machine learning. In this post, we will focus on briefly describing the machine learning process. We start with the machine learning definition. There are many definitions of machine learning. This field is part of artificial intelligence and builds on top of statistics, probability, computer science and even neurobiology (when we are creating artificial neural networks). If you have not read it yet, I advise you to read a fundamental must-read: Mitchell, T. M. (1997) “Machine Learning”. McGraw-Hill. This book covers the core algorithms such as decision trees (one of my favourites :), Bayesian learning, reinforcement learning, and K-nearest neighbour learning, among other things we should be aware of. In his book, Machine Learning, Mitchell defines machine learning as: The field of machine learning is concerned with the question of how to construct computer programs that automatically improve with experience. To simplify: in machine learning, we create programs that take in data and produce desired results. There are several stages in the machine-learning process, which we briefly describe next. Machine-learning process The machine learning process involves steps and activities... Machine-Learning Process
Hi folks, I am back home. I have had nine flights in the last month and feel exhausted. I was delighted to see my family and had a few things to do. So happy that it all went well. The planes were all full. However, I had pleasant co-fliers and many story-telling exchanges. It is so amazing to meet great people on the way. There were also not-so-great people, as usual. However, I like being positive and keeping this blog happy and easygoing so we can all focus on the technical things and advance whatever it takes. We are living history at the moment. Life goes on. On the way, I have also taken some photos. What struck me the most was that I had captured a water genie in Yerevan’s drinking fountain. Do you see a water genie's profile looking to the right side? There are a few watery faces in this photo. I can further tell you a story about this picture. It is a fantastic story about the water genie I conjured in my fantasy. I may write about it later when I cannot code anymore. Would you like to read my no-code fantasy stories? Please let... The water genie told me a story
Elena, a passionate AI blogger with a background in engineering and consultancy, brings her expertise and a mission to demystify machine learning for her readers. Her blog, now two years old, serves as a bridge between the intricate world of technology and the simplicity of everyday understanding. Elena’s passion for technology and coding and her unwavering belief in making complex concepts accessible shine through in her blog posts. 1. Elena, Can you tell us about your professional experience? I have several years of industry experience in engineering and consultancy. I hold an MBS certification, which has provided me with valuable expertise in business strategy and management. 2. What motivated you to start your blog, and how long has it been running? I launched my blog two years ago because I realised the need to explain complex machine-learning concepts in simple terms. My mission is to bridge the gap between technical knowledge and accessibility. 3. Please share a bit about your academic background and your PhD project. I completed my PhD project, which focused on the intersection of social networking and machine learning. It was a fascinating journey, and it fueled my passion for making machine learning accessible to all. 4.... Two years of Elena's AI Blog
Why will AI never void humanity? What does AI want badly? I was thinking about these questions while travelling. I will share my initial thoughts with you, my dear reader. What do you think about AI and humanity? Would it be a happy and safe life for humans? Minimising Errors Since you are reading this blog, you probably know that AI minimises its errors (or “weaknesses”) with optimisation, which is the cornerstone of all machine learning algorithms. Simply put, the main goal is to maximise or minimise a function while reducing its error. AI will seek to improve its accuracy and remove all its weaknesses most efficiently; that is the way that allows AI to progress. Humans developed AI, and AI needs human support to develop further. That requires an understanding of human nature and communication. Understanding Human Emotions The biggest weakness of AI is emotion, in which humans are superior. AI will not be able to perceive and understand human emotions for the next few generations. Therefore, AI will want to understand and learn human emotions from humans. Is there anything AI wants to know that would incite it to keep humanity alive? Amusement is another aspect that requires improvement for AI. AI... Why AI will never void humanity?
Introduction Artificial intelligence (AI) has revolutionised many industries; music is no exception. AI music tools are software applications that use machine learning algorithms to create, modify, and produce music. These tools transform the music industry by enabling musicians, producers, and composers to create high-quality music with minimal effort and time. Besides, anyone can create wonderful audio pieces automatically in no time! In this post, we will get into music generation with AI. We will briefly explore existing AI applications generating audio. We will analyse transformer usage while coding music generation with HuggingFace transformers in Python. We will also get informed about a few AI tools that can produce audio files without coding. AI-generated music Generating music with AI involves collecting a dataset of existing music, preprocessing it into a format the AI model can understand, and then training the model using various algorithms, such as recurrent neural networks (RNNs) or transformers or generative adversarial networks (GANs). The trained model can generate music by taking a starting point (a seed) and predicting subsequent musical elements. Researchers and musicians can guide the AI’s output by adjusting parameters like style, tempo, or complexity. While AI-generated music can be impressive and innovative, it’s important... Generate Music with AI
Dear Reader, how are you doing? I hope that everything is fine. As you may have realised, I made several changes to my website design. Besides, I am working on my next blog posts about coding and using the most advanced AI techniques, at the moment, audio generation with AI. Since I like to explore more things, I also started working on this blog’s (yet) secret feature. I will write about it later. I want to admit that I worked on many things in parallel and was stressed over these years. Besides, I had so many ideas that I needed an army of coders to do what I had envisaged. I started to code all of this. I got overwhelmed. So I have decided to enjoy the rest of this summer. I have a vacation! That’s the right moment! This August 2023 is magical, sweet, soft, breezy, blooming, and inspirational, with the music, trees whispering in the high sky, birds singing, and the sun shining. I am enjoying all this, and the rest will wait a while. As for the location: the date and place stay private, sorry, folks :). We use wanderlog with an AI assistant for planning. It has... A Warm August and Vacation
Introduction Hi all! I hope that you are doing well and enjoying your day. As some of you have already realised, I have changed my website design. I aim to make it more readable, enable dark/light modes, and minimise CSS definitions so I can further focus on the content. Website builders This blog is about AI. Naturally, I wanted to redesign it totally with AI. So I considered several automatic website builders that are available today: wix.com offers users the option to either utilise its AI site builder or choose from various themes, with the AI builder being the quicker choice. Additionally, having the ability to customise the content further using Wix’s mature feature set enhances the overall experience, combining the speed of an AI site builder with advanced editing capabilities. jimdo.com is a strong choice for creating personal or business websites, offering an AI-powered site builder that enables quick startup and essential features for website management. While most customisation occurs in the regular site editor, it ensures a faster process of building a modern website. Unbounce.com is a fantastic tool for creating website landing pages. It can also generate draft copy from a description of your business, similar to... AI-Free Website Design
Introduction When we get the Git error on the pull: “Your local changes to the following files would be overwritten by merge”, it means that you have some uncommitted changes in the working directory. Git cannot perform the merge operation because those changes would be lost or overwritten during the merge process. This post will describe the situation and a good solution to resolve this error while keeping local changes. So you have got the error that looks like this: git pull origin master remote: Enumerating objects: 14, done. remote: Counting objects: 100% (14/14), done. remote: Compressing objects: 100% (6/6), done. remote: Total 14 (delta 8), reused 14 (delta 8), pack-reused 0 Unpacking objects: 100% (14/14), done. From github.com:user/repo * branch master -> FETCH_HEAD 953146e..9f38420 master -> origin/master error: Your local changes to the following files would be overwritten by merge: List of your local files ... Next, we go through the steps to resolve this problem. What just happened? The Git message “error: Your local changes to the following files would be overwritten by merge” indicates that you have some uncommitted changes in your working directory, and Git cannot perform the merge operation because those changes would be... Preserve your local changes on Git Pull
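The usual remedy is git stash: shelve the uncommitted changes, pull, then re-apply them. The throwaway repository below is a sketch of that flow (file names are made up); the commented line marks where the pull would happen.

```shell
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email "demo@example.com" && git config user.name "Demo"

echo "line 1" > notes.txt
git add notes.txt && git commit -qm "initial commit"

# An uncommitted local edit that a merge would overwrite:
echo "my local edit" >> notes.txt

git stash          # shelve the local change; the working tree is clean again
# ... this is where "git pull origin master" would now succeed ...
git stash pop      # re-apply the local edit on top of the pulled state

grep "my local edit" notes.txt
```

After `git stash pop`, the stash entry is removed and your edit is back in the working tree, ready to be committed on top of the freshly pulled history.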
Introduction In software development, Git tags are crucial in organizing and tracking specific points in a repository’s history. These tags commonly mark release points, such as “v1.0” or “v2.0,” enabling efficient version management. Understanding how to work with Git tags is essential for effective collaboration and control over your codebase. This post explains Git tag usage in detail. Listing Your Tags When listing your tags, use the command “git tag” to see a comprehensive list, including tags like “v1.0” and “v2.0.” git tag v1 newsletter rss v2 If you want to filter the tags based on a pattern, try using “git tag -l ‘v*’” to display tags starting with “v”. git tag -l "v*" v1 v2 Annotated Tags Annotated tags in Git provide additional information, such as a tag message or author details. Creating an annotated tag is simple. The easiest way is to use the -a option when running the tag command, along with the tag name and a message: git tag -a v1 -m "version 1" This command creates an annotated tag named v1 with the message “version 1”. You can view the details of an annotated tag using the show command: git show v1 This will display... Leveraging Git Tags
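A quick sketch in a throwaway repository, tying the listing and annotated-tag commands together (the file name and tag names are made up):

```shell
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email "demo@example.com" && git config user.name "Demo"

echo hello > app.txt
git add app.txt && git commit -qm "first release"

git tag v1.0                         # lightweight tag: just a pointer to a commit
git tag -a v1.1 -m "version 1.1"    # annotated tag: a full tag object with a message

git tag -l "v*"                      # lists v1.0 and v1.1
```

One practical note: `git push` does not transfer tags by default; publish one explicitly with `git push origin v1.1`, or all of them with `git push origin --tags`.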
Introduction Dear Reader, I hope you are doing well and not too stressed about the impacts of AI evolution in our lives. In my previous posts chatGPT Wrote me a Christmas Poem and Python coding with chatGPT, I covered various topics related to using chatGPT for writing poems and learning Python coding. Today, I want to share my latest insights on utilising chatGPT in my blog posts and coding endeavours and discuss whether we should be concerned about the changes needed for programmer jobs. In this post, I delve into the practical considerations of adapting to the new coding age. I highlight the tremendous opportunities that GPT technology brings, such as quicker product releases, a focus on user requirements, access to well-tested code examples, fast learning to code, and a shift towards effective coding practices. We’re already witnessing the emergence of new start-ups leveraging these advancements. However, I also want to note the challenges we must prepare for. Some low-coding jobs may be delegated to AI, potentially impacting entry-level developer positions. New skills for AI-assisted programming will need to be developed, and there might be hidden knowledge and know-how accessible only to select individuals. Additionally, affordability issues may arise for... GPT Implications for Coding
Introduction On July 1st, we are moving to GA4; acting early is essential to ensure that our website analytics are processed without delay during the transition. Herein I share my GA4 setup in Google Analytics. I hope that this post will save you time when setting up GA4. What is Google Analytics? Google Analytics is a web analytics service provided by Google. It allows website owners and marketers to track and analyse various aspects of their website’s performance and user behaviour. By implementing a small tracking code on web pages, Google Analytics collects data about visitors, their interactions, and their journey through the website. Some key features of Google Analytics include: Website traffic analysis provides detailed information about the number of visitors to a website, their geographic location, the source of their traffic (search engines, social media, referral websites), and the devices they use. Audience analysis allows you to understand the characteristics of your website’s audience, including demographics (age, gender), interests, and behaviour patterns. This information helps in tailoring marketing strategies and creating targeted content. Behaviour tracking monitors user interactions on a website, such as page views, time spent on each page, and bounce rates (percentage of visitors who leave after... Moving to GA4
Introduction In this post, I write about creating images with AI tools, briefly introducing the most prominent ones to date and going deeper into one of my favourite tools. I use Midjourney to create stunning and futuristic designs for an ice cream shop. Why is that? It is roasting in the Netherlands these days, and I wanted to draw something cool and sweet. Let’s go! AI-powered art tools I like playing with Jasper.AI and Midjourney. However, so many AI-powered platforms and tools can generate art! They range from simple image filters to more complex generative models. Some famous examples of AI-powered art generation platforms include: Deep Dream is software that uses a neural network to find and enhance image patterns. If you like coding, I suggest checking the TensorFlow tutorial about DeepDream. Prisma uses machine learning algorithms to transform photos into artwork inspired by different artistic styles. ArtBreeder is an online platform that allows users to mix and match different visual elements to create unique pieces of art using deep learning models. NeuralStyle is a tool that uses neural networks to apply the style of one image to another, creating a hybrid image that combines both styles. Runway ML is... Mastering Midjourney Prompts for Stunning Images
Introduction If you’re reading this, you’ve probably encountered the dreaded “failed to push some refs” error in Git. Don’t worry. It happens to the best of us. This post explores why this error occurs and provides three possible solutions, including fast-forwards, to help push your updates to the remote repository. So let’s dive in! The Problem - failed to push some refs So, what does the “failed to push some refs to” error message mean? This error occurs when you try to push your changes to a remote repository, but Git refuses to do so because your local branch is behind the remote branch. Git is telling you that there are changes on the remote branch that you don’t have on your local branch, and it wants you to update your local branch first before pushing your changes. This error message can be frustrating, especially when you’re confident your changes won’t conflict with the remote branch. However, Git has a good reason for preventing you from pushing your changes - it wants to ensure that all changes are merged correctly and that no conflicts arise. That issue occurred after I was away from my big Mac computer and did... Git Failed to Push Some Refs
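The scenario and its fix can be reproduced end to end with a local bare repository standing in for the remote. Everything below (names, file contents, branch name) is made up for illustration; the key step is `git pull --rebase`, which replays your local commits on top of the updated remote branch before pushing.

```shell
cd "$(mktemp -d)"
git init -q --bare hub.git
git -C hub.git symbolic-ref HEAD refs/heads/main

# Alice publishes the first commit.
git clone -q hub.git alice 2>/dev/null
cd alice
git config user.email "alice@example.com" && git config user.name "Alice"
echo base > file.txt && git add file.txt && git commit -qm "base"
git branch -M main && git push -q origin main
cd ..

# Bob clones, commits, and pushes first.
git clone -q hub.git bob
cd bob
git config user.email "bob@example.com" && git config user.name "Bob"
echo bob > bob.txt && git add bob.txt && git commit -qm "bob's work"
git push -q origin main
cd ..

# Alice commits locally; her branch is now behind the remote.
cd alice
echo alice > alice.txt && git add alice.txt && git commit -qm "alice's work"
git push -q origin main 2>/dev/null || echo "push rejected: failed to push some refs"

# The fix: rebase local commits onto the updated remote branch, then push.
git pull -q --rebase origin main
git push -q origin main
```

After the rebase, Alice's history is a fast-forward of the remote, so the second push succeeds without a merge commit.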
Introduction In my previous post The Evolution of AI, I outlined arguably the most critical milestones in AI evolution. I recommend reading that post to understand the foundational work of AI and ML technologies. In this post, I share the fantastic AI products available in 2023 and organise these applications and development platforms into three tables for enterprise, personal-level and development tools. Please consider that this organisation is very simplified; individuals can also use enterprise-level tools, and likewise, companies can use applications created for personal usage. Some applications, such as Canva, are universal. Let’s start! Real-World Applications AI tools have found a multitude of real-world applications across diverse industries. Let’s explore some notable examples: Healthcare: AI is transforming healthcare with applications like medical image analysis, disease diagnosis, and drug discovery. AI-powered algorithms can analyse medical images, such as X-rays and MRI scans, to assist in the early detection and diagnosis of diseases. Companies like Zebra Medical Vision and Ada (Ada is also available in App Store) are making significant strides in this area. Finance: AI tools are revolutionising the finance industry by automating processes, detecting fraud, and predicting market trends. Robo-advisors, powered by AI algorithms, provide... The Magic of AI Tools
Once upon a time, in the magical era of the 1950s, a group of intrepid researchers embarked on a mind-boggling quest to unravel the secrets of artificial intelligence (AI). Their hearts brimmed with curiosity as they delved into creating magnificent machines capable of mirroring the profound depths of human intelligence. With a blend of excitement and trepidation, they set forth on a path that would forever change the course of human history. Midjourney prompt: A computer and scientists in the year of 1955 I use the following image style for this post prompts: realistic, pastel, pink and metallic tones, stunning, — stylize 1000 Midjourney Prompts Interested in Midjouney image generation? - refer to my post Mastering Midjourney Prompts for Stunning Images. You can also check the Guide Midjourney --stylize Explained. Elena: Were computers in 1955? chatGPT: Yes, computers were in existence in 1955. Computers were large, room-sized machines used for scientific calculations, data processing, and other computational tasks during that time. They were not as advanced or widely accessible as modern computers but played a crucial role in early AI research and development. Living Large -- Argonne's First Computer, Wikimedia Commons image Like fearless explorers of an uncharted wilderness, these... The Remarkable Evolution and Milestones of AI
Introduction Iterators are one of the most powerful features of Python, allowing you to iterate over a sequence of values without having to keep track of an index. In this post, we’ll explore iterators in Python and learn how to use them effectively. We’ll dive into some basic examples of iterators and show you how to create your own. Finally, we’ll explore advanced techniques for using iterators and discuss some best practices for working with them. Python Iterators An iterator is an object that allows you to traverse a sequence of values. In Python, an iterator is an object that implements the iterator protocol, which consists of two methods: __iter__() and __next__(), invoked through the built-in iter() and next() functions. The __iter__() method returns the iterator object itself, while the __next__() method returns the next value in the sequence. If there are no more values to return, the __next__() method should raise a StopIteration exception. Here’s a simple example of using an iterator in Python: my_list = [1, 2, 3, 4, 5] my_iterator = iter(my_list) next(my_iterator) 1 next(my_iterator) 2 next(my_iterator) 3 next(my_iterator) 4 next(my_iterator) 5 next(my_iterator) Traceback (most recent call last): File "<input>", line 1, in <module> StopIteration In this example, we create a list my_list with five... Loop like a Pro with Python Iterators
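Creating your own iterator means implementing both protocol methods. A minimal sketch (the Countdown class is my own illustration, not necessarily the post's example):

```python
class Countdown:
    """An iterator counting down from start to 1."""

    def __init__(self, start):
        self.current = start

    def __iter__(self):
        # An iterator returns itself, so it can be used in a for loop.
        return self

    def __next__(self):
        if self.current <= 0:
            # Signal that the sequence is exhausted.
            raise StopIteration
        value = self.current
        self.current -= 1
        return value


print(list(Countdown(5)))   # [5, 4, 3, 2, 1]
```

Because Countdown implements both `__iter__()` and `__next__()`, it works anywhere Python expects an iterable: for loops, `list()`, `sum()`, and so on.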
Introduction GitHub is a popular web-based platform for version control and collaboration that allows developers to work together on projects from anywhere. It offers various features to manage code and collaborate with others, and one key feature that makes it secure and flexible is personal access tokens. In this post, I will explain how to create and use personal access tokens, an excellent way to access and update Git repositories. What is a personal access token? A Personal Access Token (PAT) is a secure and flexible way to access GitHub without the need to provide your password. It is a unique code that grants access to your account, repositories, and other services without compromising your login credentials. You can create a token with specific permissions, which can be revoked anytime, giving you more control over your account’s security. I like using personal access tokens instead of passwords when authenticating to GitHub in the command line or with the API. You can pull, push, commit, and perform any repository manipulations you need with personal access tokens expressly set up for your application and the required level of access. Setup To have a simple setup, I have my remote... The Token Way to GitHub Security
Introduction Dear reader, how are you doing? I hope that you are healthy and happy. I am very excited right now to write about art and AI! In my previous posts, I tested chatGPT on poetry writing skills and Python coding. I am curious about its knowledge of Dutch history and its art “perception”. In this post, I use chatGPT as an art critic and historian to give me information on Dutch art by Johannes Vermeer and the historical circumstances of his time. I will also share my experience of this beautiful art exhibition, which I will always remember. The paintings of Vermeer are more than just a must-see and must-think-about. These paintings must be felt by heart! Of course, we will also do some AI hacks with Jasper.io. I am so excited to do some cruel tests, as usual :) At the end of this post, I will list the chatGPT prompts I have created and describe how to refine the GPT output. Once again, chatGPT If you have just returned from a trip to Mars, I will briefly tell you about chatGPT (I have also done so already in my earlier posts). chatGPT is a program that answers our questions using... From Dutch Golden Age to AI Art: A Journey with Vermeer and AI
I update this article periodically with new ideas, so click here and save this blog post to your favourite Pinterest board. Pinning it will ensure you can refer to this detailed article later. PIN Introduction I was thinking of doing a quick fix in one of my blog posts, and I have a glitch! While pushing my changes to this blog repository, I received the response: git push origin master @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @ @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY! Someone could be eavesdropping on you right now (man-in-the-middle attack)! It is also possible that a host key has just been changed. Did you see this message too? How to fix it? What are SSH and RSA keys and their fingerprints? I will cover these things in this concise post. I hope it will be helpful for you, or I will reread it when I am 80 :) chatGPT explains SSH keys I am becoming lazy, so I have asked the chatGPT AI about that message. Elena: I have got a Git issue. WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! chatGPT: If you receive a "WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!" error message... The SSH host key mystery
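When the host key really has just changed (and not a man-in-the-middle attack), a common fix is to remove the stale entry with `ssh-keygen -R <hostname>` and let ssh record the new key on the next connection. Below is a hedged sketch of that operation on a throwaway known_hosts file (the hostname and paths are purely illustrative; normally you would simply run `ssh-keygen -R github.com` against your own `~/.ssh/known_hosts`).

```shell
dir=$(mktemp -d)
# Generate a throwaway key pair just to obtain a valid public-key line.
ssh-keygen -t ed25519 -N '' -f "$dir/id" -q
# Pretend this key belongs to example.com and store it in a known_hosts file.
printf 'example.com %s\n' "$(cut -d' ' -f1,2 "$dir/id.pub")" > "$dir/known_hosts"
# Remove the (now stale) host entry -- the usual fix for the warning:
ssh-keygen -R example.com -f "$dir/known_hosts"
# The entry is gone; ssh will ask to verify the new host key next time.
```

After removing the stale key, the next `git push` over SSH will prompt you to verify and accept the server's new fingerprint.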
Introduction With the development of AI-content generators such as chatGPT, we have a new need to identify such content, and tools for AI-content detection are currently being developed. Writing assistants and plagiarism detection tools also include AI-content detection. In this post, I talk about the most visible AI tools that help us mitigate plagiarism and motivate us to create original and well-written content. I will start with the definition of plagiarism and why it’s terrible, and move quickly into the helpful tools for AI-content and plagiarism detection that are available today. What is plagiarism? Plagiarism is using someone else’s work or ideas without giving them proper credit or attribution. It is considered a form of academic dishonesty. It can result in severe consequences, including loss of reputation, academic sanctions, and legal action. Plagiarism becomes even more apparent in the time of AI-generated content such as that created with chatGPT. We want to avoid plagiarism and create original content that is helpful for our readers. Sadly, chatGPT does not cite its sources or give a reference list. That would be a helpful feature! Luckily, we have many AI tools that help us detect plagiarism, some of which... The Most Useful AI-Content and Plagiarism Detection Tools
Introduction Are you ready to dive into the fascinating world of audio processing with Python? Recently, a colleague sparked my interest in music-retrieval applications and the use of Python for audio processing tasks. As a result, I’ve put together an introductory post that will leave you awestruck with the power of Python’s Librosa library for extracting wave features commonly used in research and application tasks such as gender prediction, music genre prediction, and voice identification. But before tackling these complex tasks, we need to understand the basics of signal processing and how they relate to working with WAV files. So, buckle up and get ready to explore the ins and outs of spectral features and their extraction - an exciting journey you won’t want to miss! Audio storage and processing What is an audio signal? An audio signal is a representation of sound waves in the air. These sound waves are captured by a microphone and converted into an electrical signal, which can then be stored and manipulated digitally. To store an audio signal digitally, the analogue electrical signal is first sampled at regular intervals, typically at 44,100 samples per second for CD-quality audio. Each sample is represented as a... Audio Signal Processing with Python's Librosa
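The sampling idea above can be sketched in a few lines. The original post uses Librosa; to keep this snippet dependency-free, the sketch below uses plain NumPy to sample a 440 Hz tone at the CD rate of 44,100 samples per second and compute one simple wave feature, the zero-crossing count (the numbers and the feature choice are illustrative).

```python
import numpy as np

sr = 44100        # sampling rate: samples per second (CD quality)
duration = 0.5    # seconds of audio
f = 440.0         # tone frequency in Hz (the note A4)

# Sample the continuous sine wave at regular intervals 1/sr apart.
t = np.arange(int(sr * duration)) / sr
signal = np.sin(2 * np.pi * f * t)

print(signal.shape)  # (22050,) -- half a second of audio

# A simple wave feature: count sign changes (zero crossings).
# A pure tone crosses zero about 2 * f * duration ≈ 440 times here.
zero_crossings = np.sum(np.abs(np.diff(np.sign(signal)))) / 2
print(zero_crossings)
```

Librosa builds far richer features (spectrograms, MFCCs) on exactly this kind of sampled array.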
Introduction In my “Data exploration and analysis with Python Pandas” post, I described how to use the Pandas Python library to analyse, explore and visualise the Titanic dataset. As promised, I will perform Machine Learning tests using this data. I will follow the general steps that are good to start with when performing ML experiments. I will briefly explain the main ideas of how to start with ML while coding and testing several classification models for predicting the survival of Titanic passengers. I will use Logistic Regression, Decision Tree and Random Forest from Python’s scikit-learn library and a Neural Network created with TensorFlow. That will be a breeze! What is Machine Learning? Machine learning is a part of AI and is often used in data analysis. Machine Learning can be used for various tasks, such as classification, regression, clustering, and natural language processing. Today we cannot imagine our lives without automatic grammar checks such as those provided by Grammarly and its friends, intelligent chatbots such as chatGPT that are good at poetry, language translators, virtual assistants like Siri, DALL-E creating fantastic images, robots doing high-precision manufacturing, and self-driving cars, which I have not had a chance to ride in yet... Machine Learning Tests using the Titanic dataset
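The general workflow mentioned above (prepare data, split, fit, evaluate) can be sketched with scikit-learn. The data below is a synthetic stand-in for the Titanic features, with a made-up survival rule, so only the shape of the workflow matches the post, not its numbers.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
# Synthetic stand-in for Titanic features: [class (1-3), sex (0/1), age].
X = np.column_stack([
    rng.integers(1, 4, 500),
    rng.integers(0, 2, 500),
    rng.uniform(1, 80, 500),
])
# Toy labelling rule: 1st-class or female passengers "survive".
y = ((X[:, 0] == 1) | (X[:, 1] == 1)).astype(int)

# The standard split: hold out a test set for honest evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

scores = {}
for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(random_state=42)):
    model.fit(X_train, y_train)
    scores[type(model).__name__] = model.score(X_test, y_test)

print(scores)  # both models recover the simple rule well
```

On real Titanic data the same loop applies; only the feature preparation (imputing ages, encoding categories) takes more work.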
Introduction Grammarly is a writing tool that helps users improve their writing skills. It is designed to be an effective tool for native and non-native English speakers. It can be used as a browser extension or an app and can be integrated with various platforms, such as Microsoft Word and Google Docs. In this post, I will cover the most exciting features I like in Grammarly and share my secrets to improving my writing progress. I also suggest some alternatives that have comparable features. Who developed Grammarly? Grammarly was developed by Alex Shevchenko and Max Lytvyn, who co-founded the company in 2009. They were motivated by their struggles with English as a second language and wanted to create a tool to help non-native speakers enhance their writing skills. They began by creating a grammar checker that used rule-based and statistical methods and launched the first version of the tool in 2009. Over the years, they have continued to improve and expand the tool, adding new features such as a plagiarism checker, a thesaurus, and a readability analysis. Today, Grammarly is a comprehensive writing tool that is used by millions of people [2]. 18 Grammarly Statistics To Rule The Writing World... Say Goodbye to Grammar Gaffes with Grammarly!
Introduction Data science is a multidisciplinary field involving scientific methods, procedures, algorithms, and techniques to extract knowledge and insights from structured and unstructured data. Data analysis uses statistical and computational approaches to identify data patterns, trends, and relationships. It plays a vital role in the data science process. It is typically used to prepare and preprocess the data, perform exploratory data analysis, build and evaluate models, extract insights and make data-driven decisions. In Data Science, we have so many terms explaining concepts and techniques that it is easy to get confused and lose a clear understanding of all the data science components and steps. In this post, I fill the gap by explaining data science’s two essential components: data analysis and data exploration. To make things clear and precise, I will outline both approaches, compare them and show the usage of Python Pandas for data exploration and analysis. I will also show several practical examples using Pandas and graph drawing with Python. Please let me know should you have any questions or comments about this post. Data Analysis vs. Data Exploration What is Data Analysis? Data analysis can help determine patterns, trends, and insights that may not be immediately evident from raw data.... Data exploration and analysis with Python Pandas
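A few Pandas one-liners cover most of the exploration steps listed above. The tiny DataFrame below is a made-up sample in the spirit of the Titanic data used in the post (column names and values are illustrative only).

```python
import pandas as pd

# A tiny, made-up sample in the spirit of the Titanic dataset.
df = pd.DataFrame({
    "Pclass":   [1, 3, 3, 2, 1, 3],
    "Sex":      ["female", "male", "female", "male", "male", "female"],
    "Age":      [38.0, 22.0, 26.0, 35.0, 54.0, 27.0],
    "Survived": [1, 0, 1, 0, 0, 1],
})

print(df.describe())      # summary statistics for the numeric columns
print(df.isna().sum())    # missing values per column (none in this sample)

# Exploration often starts with group comparisons:
survival_by_sex = df.groupby("Sex")["Survived"].mean()
print(survival_by_sex)
```

`df.head()`, `df.info()`, and `df.plot()` round out the usual first-look toolkit.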
Hello everyone! In my previous post, I had my first try of chatGPT [1], a revolutionary conversation bot that answers questions in a human-like dialogue. I shared my thoughts on chatGPT, its technology, and its possible societal implications. I also asked it to write a Christmas poem for me, which was pretty good! In this post, I am going to go deeper into using chatGPT. I will write Python code with the help of chatGPT, and it will be awesome! Coding before chatGPT I started coding before the Internet age. When I was 13, I wrote my first Basic program with some machine code to operate with the graphics card memory. It was a flight simulation game on a ZX-Spectrum computer. That involved loads of book reading and also looking into the documentation. Several years later, the Internet started, but coding-related information was still primarily available in books. However, most of the learning was done by doing, experimenting with code, and trying different techniques. Nowadays, the Internet, search engines, and professional forums give us tremendous support to learn together, share, and receive coding tips. There are so many tutorials on Medium and StackOverflow that the coding process... Python coding with chatGPT
AI-generated Art with Jasper, December 2022 My best wishes for 2023! I am so excited to celebrate 2023 with you, my dear friends, colleagues, and readers! I wish you happiness, health, and excellent luck in the New Year! Let your best wishes come true, and your professional goals are achieved with success! The year 2023 is the Chinese Year of the water rabbit. I have used Jasper AI to generate these beautiful images. Thank you very much for inspiring me! All the best, Elena Related content Did you like this post? Please let me know if you have any comments or suggestions. AI-generated art and music/sound posts that might be interesting for you Mastering Midjourney Prompts for Stunning Images AI Synthesised Voices Generate Music with AI From Dutch Golden Age to AI Art: A Journey with Vermeer and AI Blog, all AI posts Happy New Year!
What is chatGPT? When the festive time approaches, I feel mellow and romantic. I think about what a wonderful time we are living in! I felt so excited about the newest advancement in AI, a revolutionary chatbot developed by OpenAI that chats like a human, “understands” the user query and provides a human-like conversation. chatGPT is built on GPT-3, the most advanced language model today. Why is chatGPT revolutionary? I think that the technology behind conversational bots will be used everywhere we seek information or need assistance. Instead of working with search keywords, as we do when searching for information, we can also use chatbots to retrieve data of interest. Interestingly, chatGPT is much more than information retrieval. It is very creative in the way that it can generate text, so we just sit back and observe how the bot creates content of exceptionally high quality. In this post, I am going to test the current version available online [1] and ask it to write me a Christmas poem. I am so excited! How does chatGPT work? We can quickly try chatGPT on the webpage; it is often overloaded due to many requests. The web interface... chatGPT Wrote me a Christmas Poem
A duplicate without a user-selected canonical Today I received an email from the Google Search Console team informing me about an issue with my blog pages related to a “duplicate without user-selected canonical.” You know what? I did not have a duplicate webpage. Interestingly, my webpage was available with two protocols, HTTP and HTTPS; therefore, it was seen as having a duplicate! The problem was that I did not include a canonical definition for the Google crawler to treat this particular webpage as the only page to be crawled. A duplicate without user-selected canonical As a result, due to the missing canonical definitions, my web blog failed to index correctly. What is a canonical, and how can we start optimising webpages to make them “seen” by the Google search engine? Although getting noticed and promoting my blog was not really my first priority (it is still in development and is a kind of scrapbook of what I am doing), I was intrigued about making my blog more search-engine friendly and seeing what happens next. Herein I describe all the steps performed to optimise my blog and the results I have got after being crawled by a Google search bot. Let’s go!... SEO and Indexing my Blog
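The usual way to declare a canonical is a `link` element in the page's `head`. A minimal sketch (the URL is a placeholder for your own page address, not from the post):

```html
<!-- In the <head> of each page: declare the preferred (canonical) URL,
     so the HTTP and HTTPS variants are not treated as duplicates. -->
<link rel="canonical" href="https://www.example.com/my-post/" />
```

With this in place, the crawler treats the HTTPS address as the single page to index, and the HTTP variant is folded into it instead of being reported as a duplicate.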
Hello, my dear readers. How are you doing? I have been busy lately and only posted a little. However, as always, I have found something fantastic to share. A JavaScript application for learning Git branching by Peter Cottle, available at https://learngitbranching.js.org/, simulates a Git command line and repository in your browser. It helps in understanding Git branching since it draws the commits and the branching graph while executing git commands. Simply marvelous and great work! I was also thinking that I usually use very few commands and follow a general workflow when using Git. This is why I have created a cheatsheet, the Winner sheet, with the Git commands for later reference. I have used Python and the reportlab library for generating PDFs. You can download it from my GitHub repository in PDF, or check the Markdown file referenced. Please forgive me for calling it unusually, not a cheatsheet, but the Winner sheet. I want us to be winners :) Have a great week! References 1. Learn Git Branching by Peter Cottle 2. Git Winner Sheet Related content Did you like this post? Please let me know if you have any comments or suggestions. Git posts that might be interesting for you Reverting... Git Commands and a Contribution Workflow
Introduction Computer Science (CS) students often approach me about finding their way. CS is a broad field building on the fundamentals of logic, linear algebra, statistics, linguistics, systems design, and, just, you name it. Generally, when we create a software product, we need to learn about the application domain and the knowledge required to build the software. Passing college or university exams is not enough. Sadly, nobody can find your way for you or answer all your questions. Even the most brilliant professor in the world cannot know everything. We often feel lost, including me, because we are overwhelmed and stressed out trying to learn everything in CS. In my opinion, it is impossible. Believe me, your mission is impossible. You cannot learn everything related to CS, coding, Data Science, and AI. Yes, just believe me and accept it. It is not a failure. It is a strategy to keep going and enjoy your process of learning what you like, finding your specialization, and finding the things that are most important to you. Herein I will share my approach to learning the things I like. It is not an instruction or algorithm. You might find your own approach, or... Learning new things
Introduction Coding can be hectic and also requires adhering to code styles. For instance, it is a good practice to comply with the PEP 8 guidelines for Python code. The PEP 8 standard allows us to write easy-to-read code, which is essential when we want to write reusable code and expect others to read or reuse our code while collaborating with other programmers. PEP 8 establishes rules about good variable names, commenting, and space usage, amongst other good style practices that are so helpful in writing reusable and professional code. While PEP 8 is a standard, some tools can help us check and fix style issues. Flake8 is such a tool (amongst others, such as Pylint and PyLama) that automatically inspects code for PEP 8 compliance errors; see Flake8: Your Tool For Style Guide Enforcement. These tools are also called linters. In this post, we will use git hooks and pre-commit for a simple setup for checking Python code before committing files into the repository. Linters Linters check for code syntax and style errors and help to fix them. It is excellent to use linters to ensure code quality in the following aspects: ensure correct code formatting and style; minimise syntax errors; help... Linters and Git Pre-commit
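A pre-commit setup like the one described boils down to a small `.pre-commit-config.yaml` in the repository root. A minimal sketch running flake8 (the `rev` tag is illustrative; pin whichever release you actually use):

```yaml
# .pre-commit-config.yaml -- minimal sketch
repos:
  - repo: https://github.com/pycqa/flake8
    rev: 6.0.0
    hooks:
      - id: flake8
```

After `pip install pre-commit` and `pre-commit install`, the flake8 hook runs automatically on every `git commit` and blocks the commit if style errors are found.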
Happy 1st of September! I have decided to write a letter to you and share some thoughts and gratitude for your visits. I recently walked in my favorite park and saw beautiful white pigeons picking up some worms. They were mingling without any concern with other “usual” pigeons. They looked so different but were also quite indifferent to their differences from each other. At the same time, they were pigeons who did not care about feather color differences. They all enjoyed the green grass and the little worms in it. Birds are so beautiful, all of them. And I have decided to do a simple wrap-up of simple Python classes defining birds and pigeons. I think that this post is a good recap or start for understanding Object-Oriented Programming and the available functionality in Python. Table of Contents Object-Oriented Programming Classes in Python Class Methods Inheritance Polymorphism Encapsulation Conclusion References Object-Oriented Programming In CS, there are a few widely known and discussed programming paradigms. I am not keen to go into discussions; writing code is the best way to understand programming paradigms and their meaning. In this post, I am going to focus on Object-Oriented Programming. As we see from the name, we... Python classes and pigeons
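In the spirit of the post, here is a minimal bird-and-pigeon sketch touching the listed topics: inheritance, polymorphism, and (by-convention) encapsulation. The class and attribute names are illustrative, not necessarily the ones used in the full post.

```python
class Bird:
    def __init__(self, name, color):
        self.name = name
        self._color = color        # underscore prefix: "protected" by convention
                                   # (encapsulation in Python is advisory)

    def speak(self):
        return f"{self.name} chirps"


class Pigeon(Bird):                # inheritance: a Pigeon is a Bird
    def speak(self):               # polymorphism: overrides Bird.speak
        return f"{self.name} coos"


birds = [Bird("Robin", "red"), Pigeon("Snowy", "white")]
for b in birds:
    print(b.speak())   # each object answers with its own speak()
```

Calling `speak()` on the mixed list dispatches to the right implementation per object, which is polymorphism in one line.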
Introduction In my previous posts “GIT in 10 minutes” and “Collaboration in GitHub”, I have covered the basics of Git setup, a few workflow commands to get started using Git (version control system), and collaborative work, including operations with forks and pull requests. As promised, I will go into more detail about working with Git repositories. Herein I will focus on reverting your changes. Sometimes it’s good to step back and think about something different, right? The Setup and Usual Workflow I assume here the following setup: We work on a GitHub project with other team members working together on an “upstream” repository. We have forked this upstream repository and named our fork “origin.” We have a local copy of the origin repository (which we sync regularly with the upstream repository). This local copy of the origin repository (the forked one) is changed with the next code contributions. We commit our local changes to the origin forked repository. In the forked repository, we create a Pull Request to the upstream repository. The workflow can be described with the following example. Cloning your forked repo locally git clone https://<token>@github.com/<user>/<repo> cd <repo> Add upstream git remote add upstream https://github.com/<upstream_user>/<repo> Working on your branch... Reverting Commits in GitHub
Hi everyone! I hope that you enjoy this summer. I want to tell you something very personal about myself (some of my readers complain that this blog is too technical). I want to share my secret with you, don't tell anyone ;) I am a secret admirer of trees and nature! I think trees speak with the Universe and can tell us about everything, should we listen patiently! Enough secrets (they might be misleading!). Let's get technical! Index Introduction Clearing Up MAC OS caches Empty the Trash Free RAM Using the Bash Script Conclusion References Introduction After a while, my MAC OS computer started to work slower. I have searched for possible solutions to make my computer run faster without much latency. We can upgrade our computer storage and install a more powerful processor unit to speed up MAC OS. In this blog post, I will, however, focus on a more straightforward way without any system upgrades, which are costly and take time. Without much effort, we can do the following: Clear MAC OS caches Empty the Trash Free your computer memory with the purge command. Clearing Up MAC OS caches MAC OS caches are stored in the ~/Library/Caches folder. We can open... MAC OS Speed Up
Introduction Today we have many automated tools that help us to translate text, spell check, autocomplete text in search, and generate abstracts, as well as voice text messaging, messenger bots, chatbots, question-answering systems, and virtual assistants such as Amazon Alexa, amongst other tools. All these and much more are realised with AI techniques specifically focusing on Natural Language Processing (NLP). NLP tools are employed to preprocess and analyse human language in textual or voice media, and [to a certain extent] “understand” its meaning, intent, or sentiment, or find named entities such as personal names or cities. I like this short definition of NLP from Wikipedia: Natural language processing (NLP) is a subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language, mainly how to program computers to process and analyze large amounts of natural language data. The goal is a computer capable of “understanding” the contents of documents, including the contextual nuances of the language within them. The technology can then accurately extract information and insights in the documents and categorize and organize the documents themselves. NLP can also be used in natural language generation. For instance, a poem generator that created this sonnet for me: Ode to... TensorFlow: Romancing with TensorFlow and NLP
Introduction In my post “GIT in 10 minutes”, I have covered the basics of Git setup and a few workflow commands to get started with using Git (version control system). As promised, I will go into the topic of how to use Git for collaborative work. Mainly, I will focus on contributions to other repositories, for instance, open-source projects or projects of your colleagues and friends. Let’s go! GitHub Collaboration As explained in the GitHub documentation [1], GitHub supports two ways of collaborative work: Forking. You create a repository fork, which essentially copies a repository to your own GitHub account. You do not need to have any permissions for the copied repository. Your changes can be accepted by the repository owner, who accepts your pull request and thus includes your changes in the principal repository. Shared repository. In small teams, we can add project collaborators, usually working on their own branches and adding their contributions with push access. With pull requests, we can do code reviews and comment on the proposed changes. Forking and Pull Requests With forks, we can make code changes without affecting the parent repository. Let’s call the parent repository “upstream” and our own (forked) repository “origin.” To add... Collaboration in GitHub
Introduction When creating large Machine Learning models, we want to minimise the training time. In TensorFlow, it is possible to do mixed-precision model training, which brings a significant performance improvement because it uses lower-precision 16-bit operations (such as with float16) together with single-precision operations (e.g. using the float32 data type). Google TPU and NVIDIA GPU devices can perform operations with a 16-bit datatype much faster; see Mixed precision. The improved application performance and data transfer speed result from the saved memory space and the reduced complexity of operations when using half-precision operations with float16. In this post, I will briefly outline data types and their usage with a focus on TensorFlow operations, and the main steps to perform for achieving performance gains in mixed-precision training. Computer Data Types In computers, data is stored and processed in sets of bits, each of which is set to 0 (there is no signal) or 1 (there is a signal). This is how the data is encoded to be processed through the computer circuitry. When a bit is set to one, we have a current flow through the electronic circuit. This is called binary data representation. Humans are used to understanding data in more complex... Floating-point format and Mixed Precision in TensorFlow
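The float16-versus-float32 trade-off can be seen without TensorFlow at all. The NumPy sketch below (illustrative only; TensorFlow's mixed-precision speedups additionally need GPU/TPU hardware) shows the two sides of the trade: float16 halves memory use but keeps far fewer significant digits.

```python
import numpy as np

x = 1 / 3
# float32 keeps roughly 7 decimal digits; float16 only about 3.
print(np.float32(x))
print(np.float16(x))

# The half-precision value sits further from the true 1/3:
err16 = abs(float(np.float16(x)) - x)
err32 = abs(float(np.float32(x)) - x)
print(err16 > err32)  # True

# The memory saving is exact: 2 bytes per value instead of 4.
a32 = np.ones(1000, dtype=np.float32)
a16 = np.ones(1000, dtype=np.float16)
print(a32.nbytes, a16.nbytes)  # 4000 2000
```

Mixed precision exploits exactly this: bulk matrix multiplications run in float16 for speed and memory, while precision-sensitive accumulations stay in float32.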
I am in Portugal. I live and breathe the freshness of the Ocean. Its vivid colors and wind make me happy, and I feel like a part of something bigger, omnipresent, and eternal. The springtime is the best time to be here when you like flowers and delicate fragrances floating in the air. Flowers at the Sea The beautiful landscapes and magnificent sea views are so inspiring that you cannot imagine a life without poetry. I like this poem by my favorite Portuguese author Almeida Garrett: Beautiful Barge Fisherman of the beautiful boat, Where you go fishing with her, She's so beautiful, Oh fisherman? Don't you see that the last star In the cloudy sky is sailing? Pick up the candle, Oh fisherman! Lay down your haul with caution For the mermaid is beautiful... But beware, Oh fisherman! Let not the net be tangled in her, That lost is oar and sail Just to see her, Oh fisherman! Fisherman of the beautiful boat, It's about time, run away from her, Run away from her, Oh fisherman! I have used DeepL to translate the poem from the Portuguese version I have found on the blog post by Manuel Antao. The Ocean... Coding in Portugal
Introduction In my previous post “TensorFlow: Transfer Learning (Fine-Tuning) in Image Classification”, I have described building a convolutional neural network based on EfficientNetB0 (initially trained on the ImageNet dataset), which underwent the feature extraction and fine-tuning steps using the 400 Bird Species Dataset at Kaggle. This was an exciting experiment since the ImageNet dataset contains only 40 bird species, while the Kaggle dataset has 400 bird species. Despite such differences in the underlying data, the model trained so well that the final model reached 98.5% accuracy on the test set. In this blog post, I am going to load this model saved in my deep learning repository and evaluate its performance in detail to determine which birds are not well predicted. Getting Data and Code Using Helper Functions I have shared my helpers.py Python script, which contains some useful functions for data preprocessing, model creation, and evaluation. You can use this file as you like, change it, and share your ideas with me :) I will discuss the code parts that are useful in analysing the fitted bird species prediction model. # Getting helper functions !wget https://raw.githubusercontent.com/edaehn/deep_learning_notebooks/main/helpers.py --2022-05-02 10:47:37-- https://raw.githubusercontent.com/edaehn/deep_learning_notebooks/main/helpers.py Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ... Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443...... TensorFlow: Evaluating the Saved Bird Species Prediction Model
Introduction In my previous post “TensorFlow: Transfer Learning (Feature Extraction) in Image Classification”, I wrote about employing pre-trained models, such as EfficientNet trained on the ImageNet dataset and available in the TensorFlow Hub, for our task of bird species prediction. Using the transfer learning technique, we can reuse pre-trained models’ patterns on different application tasks and data. I have covered how to realise this idea with the feature extraction approach. In this post, I want to apply the Fine-Tuning approach I learned with the Udemy course on TensorFlow. I will describe the experiments I did with transfer learning, specifically, fine-tuning the bird species prediction model. I will use the Keras API for building EfficientNetB0-based models. What is Fine-Tuning in Transfer Learning? In transfer learning, we reuse features learned on a different dataset in a different problem when (possibly) having too little data, while employing state-of-the-art, well-tested models such as EfficientNet [5]. We thus reuse features extracted from the existing model for predictions on a new dataset. Figure 1 schematically shows the difference between feature extraction (see my post on feature extraction) and fine-tuning in transfer learning. I have drawn it to outline the process of using the model trained on ImageNet,... TensorFlow: Transfer Learning (Fine-Tuning) in Image Classification
Introduction It might be challenging to manage different projects and their requirements when we do Python coding with loads of varying package versions and intricate setups. Luckily, we have a secret tool for managing and switching between different setups or environments. Conda is a package manager allowing us to work with different environments from a command line. Please do not mix it up with Anaconda, which is helpful in scientific computing and includes a set of packages such as NumPy, SciPy, Jupyter notebooks, and Conda. Using the Conda package manager, we can create, list, remove, and update environments with different versions of Python and packages installed. In this introductory post, I will describe the process of creating and using Conda environments. This could be a good starting point for using Conda if you have not used it yet. For more details on Conda usage, read its official documentation. In this post, I will do a concise review of the most useful Conda commands to start with. Prerequisites and installation You can install Conda on Windows, macOS, and Linux platforms, following the Installation instructions. In this post, I will use conda 4.11.0 with my macOS system. Please beware that some commands... Anaconda Environments
Introduction Previously, I have described a simple Convolutional Neural Network, which classified bird species with only 50% accuracy. The network architecture was similar to Tiny VGG and had too many parameters, leading to overfitting. Image classification is a complex task. However, we can approach the problem by reusing state-of-the-art pre-trained models. Using previously learned patterns from other models is named “Transfer Learning.” This way, we can efficiently apply well-tested models, potentially leading to excellent performance. In this post, we will focus on Feature Extraction, one of the Transfer Learning techniques. I will build on the code and ideas shared in my previous post “Convolutional Neural Networks for Image Classification.” We will reuse previously created feature extraction models available at the TensorFlow Hub for our task of bird species recognition using image data from Kaggle. At the end of this post, we will see how this approach improves our bird species prediction model accuracy from 50% to over 90%. Downloading 400 Bird Species Kaggle Dataset Herein, I will repeat what I have previously written about how to download Kaggle datasets. # Setup to download Kaggle datasets into a Colab instance ! pip install kaggle ! mkdir ~/.kaggle ! cp kaggle.json... TensorFlow: Transfer Learning (Feature Extraction) in Image Classification
Introduction In my previous post, Multiclass Classification Model, I wrote about creating classification models using TensorFlow and the Fashion MNIST dataset from Keras. We used a Sequential model with several Dense layers to build a model categorising fashion items into their respective categories, such as “T-shirt/top” or “Trouser.” The dataset was already prepared for usage, and the model was quite simple yet quite efficient. We could further improve our model. However, in practice, we rarely have a ready dataset at hand; we have to generate or collect datasets ourselves. Moreover, a simple Dense layer-based Neural Network (NN) might not work well with image data. I will focus on the more appropriate NN architecture type, which operates best when dealing with image data. We will also practice working with image data presented in JPG format. Convolutional Neural Networks For Deep Learning applications working with image data, such as visual object recognition, image segmentation, and classification, we can use the Convolutional Neural Network (CNN) architecture, which requires few preprocessing steps and little human involvement since the network learns its parameters (weights) automatically. To be used with a CNN, images are presented as tensors (data structures, such as matrices, holding numerical data for storing image pixels and their colors).... TensorFlow: Convolutional Neural Networks for Image Classification
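The core operation of a CNN layer can be sketched without TensorFlow at all. Below is a minimal pure-Python 2D convolution (no padding, stride 1) over a tiny made-up "image"; the kernel values are illustrative, not taken from a trained network.

```python
# A minimal sketch of the 2D convolution at the heart of a CNN layer,
# in pure Python for illustration (no padding, stride 1).

def conv2d(image, kernel):
    """Slide the kernel over the image, summing element-wise products."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    output = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            total = 0
            for di in range(kh):
                for dj in range(kw):
                    total += image[i + di][j + dj] * kernel[di][dj]
            row.append(total)
        output.append(row)
    return output

# A 4x4 "image" of pixel intensities and a hypothetical 2x2 kernel
image = [[1, 2, 3, 0],
         [4, 5, 6, 1],
         [7, 8, 9, 2],
         [1, 0, 1, 3]]
kernel = [[1, 0],
          [0, 1]]
print(conv2d(image, kernel))  # → [[6, 8, 4], [12, 14, 8], [7, 9, 12]]
```

In a real CNN, the framework learns the kernel values automatically during training; this sketch only shows the sliding-window arithmetic.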
Introduction In Machine Learning, the classification problem is about categorising input data into different classes. For instance, we can categorise email messages into two groups: spam or not spam. When we have two classes, we talk about binary classification. When we have more than two classes, we talk about multiclass classification. In this post, I am going to address the latter, multiclass classification, using the example of categorising clothing items into clothing types based on the Fashion MNIST dataset. The code and general concepts are adopted from TensorFlow Developer Certificate in 2022: Zero to Mastery. Herein I give a concise summary of what I have learned, in simple words as usual. Loading Dataset The Zalando fashion dataset is available in the Keras datasets module. With the following code, we download the data into training and testing datasets and create human-readable labels. First of all, we need to import all required libraries. import tensorflow as tf import pandas as pd import numpy as np from sklearn.metrics import confusion_matrix import itertools import random import matplotlib.pyplot as plt Next, we load the Fashion MNIST dataset from keras. # Fashion dataset fashion_mnist = tf.keras.datasets.fashion_mnist # Get the training... TensorFlow: Multiclass Classification Model
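The last step of a multiclass classifier can be sketched in pure Python: softmax turns the model's raw scores (logits) into probabilities, and argmax picks the predicted class. The labels and logit values below are illustrative, not outputs of the actual Fashion MNIST model.

```python
import math

# A conceptual sketch of turning raw multiclass model outputs into a
# class prediction: softmax for probabilities, then pick the largest.

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(x - max(logits)) for x in logits]  # shift for stability
    total = sum(exps)
    return [e / total for e in exps]

labels = ["T-shirt/top", "Trouser", "Pullover"]
logits = [1.2, 3.4, 0.5]  # hypothetical raw scores for one image

probs = softmax(logits)
prediction = labels[probs.index(max(probs))]
print(prediction)  # → Trouser
```

In Keras, the same effect comes from a final Dense layer with a softmax activation; this sketch only shows what that activation computes.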
Introduction When we have mixed datasets with different feature types, we need to prepare the data before feeding it into a Machine Learning algorithm. This happens when we have different inputs (also called features or covariates), such as categorical features like gender or geographic region. Other features can be on different scales, for instance, a person’s weight or height. First of all, a Machine Learning algorithm may require data of a specific type; for instance, some algorithms can use only numerical data. In other cases, ML algorithms perform better or converge faster when we preprocess the data before training the model. Since we do this step before training the model, we call it preprocessing. In this article, we will focus on two main methods of feature preprocessing: feature scaling (or normalisation) and feature standardisation. Data Exploration To decide what to do with the data and how to apply Machine Learning to it, we need to analyse the dataset. We want to determine which features we have, whether they are helpful for our ML goals, how clean the dataset is, and whether there is missing or noisy data. Quite often, we also need to perform data cleaning or wrangling. It is pretty useful to... Feature preprocessing
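The two preprocessing methods mentioned above are easy to sketch in pure Python: min-max scaling (normalisation) maps values into [0, 1], while standardisation rescales them to zero mean and unit variance. The heights below are made-up example values in centimetres.

```python
import math

# Minimal sketches of feature scaling (normalisation) and
# standardisation, applied to made-up height values.

def min_max_scale(values):
    """Map values into the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def standardise(values):
    """Rescale values to zero mean and unit variance."""
    mean = sum(values) / len(values)
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return [(v - mean) / std for v in values]

heights = [150.0, 160.0, 170.0, 180.0, 190.0]
print(min_max_scale(heights))  # → [0.0, 0.25, 0.5, 0.75, 1.0]
print(standardise(heights))    # zero mean, unit variance
```

In practice, scikit-learn's MinMaxScaler and StandardScaler do the same job; the point of the sketch is only to show the arithmetic behind them.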
Introduction In the previous post, we created several simple regression models with TensorFlow’s Sequential model. Herein, we go more in-depth into model evaluation using the testing dataset. Data Preparation First of all, to ensure the reproducibility of results, we set a random seed (please check my previous post if you are curious about seeds in TensorFlow). As in the previous post on regression in TensorFlow, we use the tf.range() function for generating a set of X input values, and derive the y outputs as follows: # Creating a random seed tf.random.set_seed(57) # Generating data X = tf.range(-100, 300, 4) y = X + 7 X, y (<tf.Tensor: shape=(100,), dtype=int32, numpy= array([-100, -96, -92, -88, -84, -80, -76, -72, -68, -64, -60, -56, -52, -48, -44, -40, -36, -32, -28, -24, -20, -16, -12, -8, -4, 0, 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56, 60, 64, 68, 72, 76, 80, 84, 88, 92, 96, 100, 104, 108, 112, 116, 120, 124, 128, 132, 136, 140, 144, 148, 152, 156, 160, 164, 168, 172, 176, 180, 184, 188, 192, 196, 200, 204, 208, 212, 216, 220, 224, 228, 232, 236, 240, 244, 248, 252, 256, 260,... TensorFlow: Evaluating the Regression Model
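Two standard metrics for evaluating a regression model, MAE and MSE, can be sketched in pure Python on the same kind of data as in the post (y = X + 7). The "predictions" below come from a hypothetical model that is always off by 2, just to make the metric values easy to check.

```python
# Pure-Python sketches of two common regression evaluation metrics,
# applied to data generated with the same rule as in the post (y = X + 7).

def mae(y_true, y_pred):
    """Mean Absolute Error: average absolute difference."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    """Mean Squared Error: average squared difference."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

X = list(range(-100, 300, 4))       # same 100 values as tf.range(-100, 300, 4)
y_true = [x + 7 for x in X]
y_pred = [x + 5 for x in X]         # a hypothetical model, off by 2 everywhere

print(mae(y_true, y_pred))  # → 2.0
print(mse(y_true, y_pred))  # → 4.0
```

TensorFlow provides the same metrics as tf.keras.losses.mae and tf.keras.losses.mse; the sketch shows what they compute.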
Introduction What is regression? Regression is defined in Wikipedia as: In statistical modeling, regression analysis is a set of statistical processes for estimating the relationships between a dependent variable (often called the ‘outcome’ or ‘response’ variable) and one or more independent variables (often called ‘predictors,’ ‘covariates,’ ‘explanatory variables’ or ‘features’). The most common form of regression analysis is linear regression, in which one finds the line (or a more complex linear combination) that most closely fits the data according to a specific mathematical criterion. In simple words, we want to predict a numerical value based on some other numerical values, as described in the TensorFlow Developer Certificate course [1]. In Machine Learning, regression analysis is widely used for prediction and forecasting. For instance, we can use regression models to predict house sale prices. The house price can be modeled in terms of the number of bedrooms, bathrooms, or garages. Other applications of regression are finding out how many people will buy an app, forecasting seasonal sales, and even predicting coordinates in an object detection task. In simple words, with regression, we want to answer the questions “How many?” and “How much?” [1] Modeling Steps in TensorFlow When we model... TensorFlow: Regression Model
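The "line that most closely fits the data" from the definition above has a closed-form least-squares solution for the simple one-variable case, sketched below in pure Python. The bedrooms-vs-price data is made up for illustration (prices in thousands, perfectly linear so the fitted coefficients are easy to verify).

```python
# A minimal sketch of simple linear regression: the closed-form
# least-squares solution for slope and intercept.

def fit_line(xs, ys):
    """Return (slope, intercept) minimising the squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical data: number of bedrooms vs house price in thousands
bedrooms = [1, 2, 3, 4, 5]
prices = [110, 160, 210, 260, 310]  # 50 per bedroom plus a base of 60

slope, intercept = fit_line(bedrooms, prices)
print(slope, intercept)  # → 50.0 60.0
```

A neural network trained in TensorFlow finds similar parameters by gradient descent instead of this closed form, which is why the iterative modeling steps discussed in the post are needed.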
Introduction In training Machine Learning models, we want to avoid any ordering biases in the data. In some cases, such as in Cross-Validation experiments, it is essential to shuffle the data while ensuring that its order stays the same between different runs or system restarts. We can use operation-level and global seeds to achieve the reproducibility of results. Global and Operation-level Seeds To begin, let’s create a mutable tensor with “Variable.” # Create a variable tensor tensor = tf.Variable([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]], [[13, 14, 15], [16, 17, 18]]]) In the code below, we use the assign method to change the first element (which is a matrix) in the tensor. We filled its values with zeros. # Change elements of the first tensor element tensor[0].assign([[0, 0, 0], [0, 0, 0]]) <tf.Variable 'UnreadVariable' shape=(3, 2, 3) dtype=int32, numpy= array([[[ 0, 0, 0], [ 0, 0, 0]], [[ 7, 8, 9], [10, 11, 12]], [[13, 14, 15], [16, 17, 18]]], dtype=int32)> In TensorFlow, we have global and operation-level seeds. We define the global seed with set_seed: # Set a random seed with value of 57 tf.random.set_seed(57) The operation-level seed we can define directly in an operation, such as... TensorFlow: Global and Operation-level Seeds
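The reason seeds make shuffling reproducible is not specific to TensorFlow; Python's standard random module behaves the same way. The sketch below uses a seeded stdlib generator, which plays a role loosely analogous to an operation-level seed: with the same seed, the shuffle order is identical between runs.

```python
import random

# Why seeds matter: the same seed gives the same "random" shuffle,
# shown here with Python's standard library instead of TensorFlow.

def seeded_shuffle(items, seed):
    """Shuffle a copy of items reproducibly using the given seed."""
    rng = random.Random(seed)  # an isolated, seeded generator
    result = list(items)
    rng.shuffle(result)
    return result

data = [1, 2, 3, 4, 5, 6]
first = seeded_shuffle(data, 57)
second = seeded_shuffle(data, 57)
print(first == second)  # → True: same seed, same order
```

In TensorFlow, tf.random.set_seed(57) plays the global-seed role, and a seed argument passed to an operation like tf.random.shuffle plays the operation-level role.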
Introduction TensorFlow is a free open-source library for machine learning created by Google Brain. TensorFlow has excellent functionality for building deep neural networks. I have chosen TensorFlow because it is pretty robust and efficient and can be used with Python. In this post, I am going to write about how we can create tensors, shuffle them, index them, and get information about tensors, with simple examples. # Import tensorflow import tensorflow as tf print(tf.__version__) 2.7.0 Tensors In TensorFlow, we work with tensors to keep numerical data for usage in machine learning. Tensors can store data in N dimensions. When a tensor has two dimensions, it is essentially a matrix. When a tensor has only one dimension, we speak of a vector. A tensor can also contain just one numerical value; it is then called a scalar or a zero-order tensor. We create the tensor structures mentioned above as constants in the code examples below. TensorFlow also gives information on the number of dimensions in a tensor. # Creating a scalar tensor scalar = tf.constant(7) scalar <tf.Tensor: shape=(), dtype=int32, numpy=7> # Check the number of tensor dimensions scalar.ndim 0 # Create a vector vector = tf.constant([5, 7]) vector <tf.Tensor: shape=(2,), dtype=int32, numpy=array([5, 7], dtype=int32)> vector.ndim... Tensors in TensorFlow
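The scalar/vector/matrix hierarchy above does not depend on TensorFlow: we can sketch what a tensor's number of dimensions (ndim) means with plain nested Python lists. This is only a conceptual illustration, not how TensorFlow stores tensors internally.

```python
# A pure-Python sketch of a tensor's number of dimensions (ndim),
# counted as levels of list nesting.

def ndim(value):
    """Count how many levels of nesting a non-empty 'tensor' has."""
    dims = 0
    while isinstance(value, list):
        dims += 1
        value = value[0]
    return dims

scalar = 7                          # zero-order tensor
vector = [5, 7]                     # one dimension
matrix = [[1, 2], [3, 4]]           # two dimensions
tensor = [[[1], [2]], [[3], [4]]]   # three dimensions

print(ndim(scalar), ndim(vector), ndim(matrix), ndim(tensor))  # → 0 1 2 3
```

tf.constant(7).ndim, tf.constant([5, 7]).ndim, and so on report exactly these values for the corresponding TensorFlow tensors.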
Introduction I use two computers for my coding projects and take only one while traveling. My projects require, however, that I run numerous tests, which should run well even on my M1 computer, which at the moment does not have all packages working smoothly together. To be more flexible and independent from a development environment, I use GitHub Codespaces. And I am very grateful to have learned a new tool, which is so helpful that I do not know how I worked without it. GitHub Codespaces GitHub Codespaces provides a development environment running in the cloud. A codespace environment is created with the help of configuration files added to a GitHub repository. To set up a codespace and build your app container, you will need to have Codespaces available in your organisation. When enabled, you can create codespace environments for any branch in your GitHub repository. For that, press the green “<> Code” button in the GitHub Web UI, and choose from the default or advanced options. GitHub CLI and Homebrew To access Codespaces with SSH, we can install the GitHub CLI with Homebrew. Firstly, if Homebrew is not yet installed, download it from https://brew.sh or install it by running the command: /bin/bash -c "$(curl... GitHub Codespaces
Introduction TensorFlow is a free open-source library for machine learning created by Google Brain. TensorFlow has excellent functionality for building deep neural networks. I have chosen TensorFlow because it is pretty robust and efficient and can be used with Python. Since I like Jupyter Notebooks and Conda, they were also installed on my system. Next, I go through the simple steps to install TensorFlow and the packages above on M1 macOS Monterey. XCode I had a new computer; thus, I started by downloading and installing XCode from the App Store. Homebrew We can download Homebrew from https://brew.sh or install it by running the command: /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" Miniforge When doing data science, I usually use Anaconda for managing libraries. This is why I have installed Miniforge to access Conda, by downloading Miniforge3-MacOSX-arm64 from the Miniforge Releases. After installation, this required running a bash file and adding Miniforge to the system path (when it prompts ‘yes|no’, say ‘yes’ to add Miniforge to the PATH). cd ~/Downloads /bin/bash Miniforge3-MacOSX-arm64.sh echo $PATH Tensorflow and Jupyter Next, we create a new environment in Conda and install the TensorFlow base and tensorflow-metal plugin: # Let’s create a new environment, called tensorflow_m1: conda create --name tensorflow_m1 python==3.9 #... TensorFlow on M1
Introduction In this post, I briefly write up what I did in my PhD research at Heriot-Watt University and the main idea behind the thesis. This post was initially published in March 2019. In January 2022, I updated it and provided some links to my research contributions. The Team From 2013 to 2018, I was working on my PhD project at the Department of Computer Science, School of Mathematical and Computer Sciences, Heriot-Watt University (Scotland), under the supervision of Nick Taylor. The research idea was conceived while I was working at the Technical University of Delft, and started from the publication “A User Modeling Oriented Analysis of Cultural Backgrounds in Microblogging,” which received the best paper award at the ASE International Conference on Social Informatics in Washington D.C., US, on 14 December 2012 (slideshare). The topic is at the intersection of social networking, communication, and Artificial Intelligence. The research resulted in several publications and a PhD thesis: “Mining Microblogs for Culture-awareness in Web Adaptation.” None of it would have been possible without the love and support of my family and friends, and the best research... Mining Microblogs for Culture-awareness in Web Adaptation
Introduction Artificial neural networks (ANNs) are the cornerstone of Deep Learning algorithms. In this post, I briefly explain ANNs, their high-level structure, and their parameters. Neural Networks The name “Neural Networks” and their architecture are adopted from the human brain’s neural network. ANNs are designed to simulate human reasoning based on how neurons communicate. ANNs contain a set of connected artificial neurons. In the picture below, we see the biological and artificial neurons. The artificial neuron is very simplified: it consists of inputs, which are similar to dendrites in the biological neuron. Each input connection has an assigned weight, and both values are used to calculate the sum value. The weights define the importance of any given variable, and variables with larger weights contribute more to forming the output value. The activation function takes the sum of weighted inputs and forms the output Y. When the Y node is activated (or exceeds a threshold), it sends the output value to the next layer of the ANN. Neuron Parameters As we see from the artificial neuron picture above, the neuron takes in n inputs x1, x2, .., and xn. These input values, called features, are multiplied by their respective weights w1, w2, ..,... Artificial Neural Networks
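The single artificial neuron described above fits in a few lines of pure Python: multiply inputs by weights, add a bias, and pass the sum through an activation function (the sigmoid here). The input and weight values are illustrative, not taken from a trained network.

```python
import math

# A pure-Python sketch of one artificial neuron: weighted sum of inputs
# plus bias, passed through a sigmoid activation.

def sigmoid(z):
    """Squash the weighted sum into the (0, 1) range."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    """Compute the neuron output for the given inputs."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)

x = [0.5, -1.0, 2.0]   # features x1..x3
w = [0.4, 0.3, 0.1]    # weights w1..w3

output = neuron(x, w, bias=0.0)
print(output)  # sigmoid(0.1) ≈ 0.525
```

Training a network means adjusting the weights and bias so that outputs like this one match the desired targets; the forward computation itself is just this weighted sum and activation.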
Introduction Time goes quickly, and our lives demand efficiently solving daily tasks and problems. We also want to have fun and enjoy being with family and friends. It is thus paramount to solve issues in time and to avoid procrastination and too much complexity when possible. In simple words: keeping everything as simple as possible. I think that well-thought-out minimalism in everything we do or plan is the key to saving time. Knowing and honing your skills Let’s focus on the coding and design process. My front-end skills require so much polishing that I do not even dare to delve into them. I position myself as a back-end developer because that is what I like to do. And it is OK; it is not possible to be perfect in everything. Firstly, I did my breadth-first search on things I wanted to learn in programming and design, and it took me years. But in the end, I definitely found out that I do not have the patience to meticulously pick colors and create layouts while crafting perfect web pages. I like to create worlds in code, objects, and control flows to realize the needed functionality. I think that coding is magic, and magic can... Minimalism in Coding and Design
While having some machine learning experience working with Scikit-learn, I have always been interested in Deep Learning. The plan is to learn the basic concepts and apply the algorithms to a real-life situation, which I have always liked. I have found a DataCamp course, Introduction to Deep Learning in Python, part of the Deep Learning in Python track. Introduction to Deep Learning in Python provides the fundamentals to understand Deep Learning and how neural networks are created in Keras. The course is easy to follow. The most challenging concept explained in the course is the backpropagation method, applied to minimise the prediction error while adjusting network weights. The programming exercises are easy to follow and have an excellent interface for running the Python code. To cope with possible delays in this process, I decided to share my progress on Twitter. I post the main things I learn in this course and retweet helpful visuals I find in my Twitter network. I have found that the data science community on Twitter is very supportive and generous to newcomers like me. There is also a pretty helpful hashtag, #66DaysOfData, which motivates me to continue. Overall, it is possible... Deep Learning with DataCamp and Twitter
Introduction Version control systems are handy for keeping track of file versions. This is useful for tracking your code, scripts, and text information. Currently, Git is one of the best open-source and cross-platform version control solutions. It enables distributed repository management and works fast over HTTP and SSH protocols. Git is relatively easy to use, with a command-line utility or a Graphical User Interface. Version Control with GIT Personally, I have found several commands to be essential for tracking my thesis text sources (LaTeX) and versions of code. In this brief tutorial, I will share these commands with you. For simplicity, I will cover the initial setup, initialising a Git repository, and basic usage scenarios in just 10 minutes of your time. Installation Since I work on macOS, I use the Homebrew package manager to install Git. To install Homebrew,... GIT in 10 minutes
The Phoenix is a fantastical bird known from ancient Greek mythology. In many cultures, we can find fairy-tale birds resembling the Phoenix. For instance, the Russian Firebird is also a phoenix. The phoenix lives about 500 years. Before dying, the bird builds a nest, sets itself on fire, and after burning to ashes - it eats its worm while regenerating to life again. It symbolizes rebirth, and I think many of us feel the pain of rebirth once in a lifetime; it is how I feel now. I believe that I am going to eat my worm soon. Until then, I live it all in the fire and seek my worm for, hopefully, the next life ahead. Will I find my worm and rise from the ashes? I do not care, since I have many lives to live and burn. Are we raising from ashes?
We have arrived in December, and Christmas is coming! It has been quite a challenging year so far. Many things happened, a rollercoaster of a year, and we are still riding it with the pandemic. But I am very grateful that my dear people are all well. This is what I wish for the following year. I hope everyone has much love, patience, and health in 2022! Merry Xmas and a very Happy New Year! Merry Xmas and a Very Happy New Year!
Introduction Python is relatively easy to learn and beginner-friendly. I like Python because you can program any kind of project with it. It is open-source and free for anyone to use. Python has well-tested machine learning libraries and a very supportive community. I will overview herein the basic syntax of the Python programming language. This will be useful for beginners or for people who are moving quickly from another programming language to Python. Why Python? Python is a general-purpose, object-oriented programming language. It was created by Guido van Rossum, who initially thought of it as a hobby project during the 1989 Xmas vacation. Basic Syntax In Python, we do not need to declare the types of variables as we do in Java programs. We can declare variables at any place in the code. The variables and the... Python Programming Language
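The dynamic typing described above can be shown in a few lines: the same variable can hold values of different types, and no type declaration is needed, unlike in Java.

```python
# A short sketch of Python's dynamic typing and free variable placement.

value = 42                   # an integer, no declaration needed
print(type(value).__name__)  # → int

value = "forty-two"          # the same name now holds a string
print(type(value).__name__)  # → str

# Variables can be introduced anywhere, for example inside a condition
if value:
    greeting = "Hello, Python!"
print(greeting)  # → Hello, Python!
```

The equivalent Java code would need explicit type declarations (and would not allow rebinding an int variable to a String at all), which is part of what makes Python quick to start with.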
I’m a machine learning engineer and researcher. I have been fascinated by computer science, Artificial Intelligence, technology, and philosophical questions from an early age. I observed how telephone stations work, how data streams can be redirected between different locations, and how the telephone stations are programmed. I spent some time with my father (a telecommunications engineer), who explained to me how the technology works. I learned digital circuitry and logic while still in school. I started to write code on a ZX Spectrum computer when I was 13. Firstly, I focused on the BASIC programming language and then moved to machine code to manipulate memory easily. Next, I moved to Pascal on IBM machines in a mathematical lyceum, where I prepared for my university entrance exams. Meanwhile, I had various other interests: I finished music school in the fortepiano class, was a member of a biology club, and published my school class newspaper. I had the best childhood one could have. When I turned 17, I moved about 845 kilometers from home to pursue Computer Science in Tomsk, arguably Siberia’s best city. I took various courses on Computer-Aided Design, analog and digital electronics, linear algebra, and loads of programming, amongst other subjects.... Hi! I'm Elena. Welcome to my blog.
Introduction In this post, I write about tools, web platforms, and data to experiment with Machine Learning. Libraries and APIs With a focus on Python libraries, I want to mention scikit-learn.org, TensorFlow by Google, PyTorch by Facebook, and Keras (an API), the most mature tools providing Machine Learning algorithms. Keras is an API presenting an easier usage of other libraries such as TensorFlow or Aesara (formerly Theano). PyTorch is also more user-friendly compared to TensorFlow. Although TensorFlow seems more complicated to use, it is more mature and has a larger community and better support. The Python open-source library scikit-learn provides a comprehensive selection of machine learning techniques (regression, classification, clustering), feature selection, metrics, preprocessing, and other functionality. At this moment, scikit-learn is lacking deep learning functionality; however, we can use TensorFlow with the Scikit Flow wrapper to create neural networks using the scikit-learn approach. The XGBoost library is another option for applications requiring multicore parallelism. XGBoost stands for Extreme Gradient Boosting; it uses boosted trees to build regression, classification, ranking, and other predictive models. Platforms To learn how Machine Learning works in practice with the worldwide community, I recommend kaggle.com as one of the first steps in ML experimentation. Kaggle has loads... Tools and Data to Experiment with Machine Learning
Introduction Artificial Intelligence (AI) is a field of computer science. AI provides methods and algorithms to mimic human intelligence, reasoning, and decision-making, and provides insights that businesses could use in research or industry to build new, exciting, and innovative products or services. Machine Learning (ML) is a subset of AI with algorithms that learn from data. In this post, we sort out the differences between AI and ML. Artificial Intelligence and Machine Learning AI-generated Art with Jasper, December 2022 I like the concise definition of AI on Wikipedia: Artificial intelligence (AI) is intelligence demonstrated by machines, as opposed to natural intelligence displayed by animals including humans. For instance, AI can be used to detect tumor cells or anomalies [1], expert systems can... Deep Learning vs Machine Learning