How has the meaning of the word KPI evolved over time?

Key Performance Indicators, or KPIs, are a critical aspect of measuring the success of any organization, team or individual. Initially, the term KPI referred to a set of metrics used to evaluate the performance of an organization, with a focus on financial and operational goals. However, over time, the meaning of the term KPI has evolved and expanded beyond these traditional definitions. In this post, we will explore how the meaning of the word KPI has changed (or can be changed).

Traditionally, Key Performance Indicators were metrics used to measure the performance of an organization, team, or individual. For example, in a sales team, KPIs could include the number of leads generated, the conversion rate, and the average deal size. Similarly, in a manufacturing organization, KPIs could include metrics such as production output, quality control, and inventory turnover. These metrics were used to evaluate the performance of the organization against its strategic goals, to identify areas of improvement and measure progress over time.

However, with the changing dynamics of the workplace, the meaning of KPI has shifted to include a broader range of metrics. For example, one interpretation of KPI is to Keep People Involved. This refers to the importance of involving employees in the decision-making process, engaging them in the work, and empowering them to take ownership of their roles. By involving employees in the decision-making process, they feel more invested in the work, and the organization benefits from their insights and perspectives. This can lead to improved employee engagement, increased innovation, and ultimately, better business outcomes.

Another interpretation of KPI is to Keep People Interested. This refers to the importance of creating an environment that is engaging and stimulating for employees. This includes providing opportunities for learning and development, recognizing and rewarding employee contributions, and fostering a culture of creativity and innovation. By keeping employees interested and engaged, organizations can retain top talent, build stronger teams, and drive innovation.

Another interpretation of KPI is to Keep People Informed. This refers to the importance of providing employees with the information they need to make informed decisions and perform their jobs effectively. This includes communicating organizational goals, sharing key performance metrics, and providing regular feedback and performance reviews. By keeping employees informed, organizations can improve transparency, build trust, and foster a culture of accountability.

Finally, another interpretation of KPI is to Keep People Inspired. This refers to the importance of creating a workplace that is inspiring and motivating for employees. This includes providing a sense of purpose and meaning in the work, recognizing and celebrating successes, and creating a culture of inclusivity and diversity. By keeping employees inspired, organizations can improve employee satisfaction, reduce turnover, and drive better business outcomes.

In conclusion, the meaning of the word KPI has evolved over time, from a traditional focus on financial and operational metrics to a broader range of metrics that focus on people and culture. By embracing these expanded interpretations of KPI, organizations can build stronger teams, retain top talent, drive innovation, and ultimately, achieve better business outcomes.

How do marketing strategies need to adapt in 2023?

Not everyone is aware, but for a while I was involved in (mostly the technical side of) marketing campaigns and solutions for some pretty big, well-known companies – covering some pretty groundbreaking campaigns that used the newest technologies of their day, everything from Facebook applications to spatial computing with Adobe Flash and a webcam (like kicking virtual soccer balls projected by an overhead projector). As a result, I have kept following technology and other trends in marketing strategies too – hence this post summing up what I see as the trends for 2023.


Marketing strategies are constantly evolving with the advent of new technologies, changing consumer behavior, and the emergence of new trends. As we enter 2023, businesses must adapt to these changes and develop innovative marketing strategies to stay ahead of the competition. In this post, we will discuss the top marketing strategies of 2023.

  • Personalization
    Personalization has been a buzzword in marketing for a while now, but it will continue to be an important strategy in 2023. With the vast amount of data available, companies can now create highly personalized experiences for their customers. By using customer data to tailor their marketing messages, businesses can improve customer engagement, loyalty, and sales.
  • Influencer Marketing
    Influencer marketing has been around for a while, but it is only going to get bigger in 2023. As consumers become more skeptical of traditional advertising, they are turning to influencers for recommendations and reviews. In fact, studies have shown that consumers trust influencers more than traditional celebrities or brands. By partnering with influencers, businesses can reach new audiences and build credibility with their target market.
  • Video Marketing
    Video marketing has been growing in popularity over the past few years, and it shows no signs of slowing down in 2023. With the rise of platforms like TikTok and Instagram Reels, businesses must create engaging video content that resonates with their target audience. By incorporating video into their marketing strategy, businesses can increase brand awareness, engagement, and conversions.
  • Voice Search Optimization
    Voice search is becoming more prevalent as more consumers use voice-enabled devices like Amazon Echo and Google Home (and given the recent turn of events, Cortana might even be back, riding Microsoft’s growing AI push?). In 2023, businesses must optimize their content for voice search to ensure that they appear in voice search results. This includes using natural language, answering questions concisely, and optimizing for long-tail keywords.
  • Artificial Intelligence (AI)
    AI is transforming the way businesses approach marketing. From chatbots to predictive analytics, AI can help businesses streamline their marketing efforts and deliver personalized experiences to their customers. In 2023, more businesses will adopt AI-powered marketing solutions to automate repetitive tasks, analyze customer data, and improve their marketing ROI.
  • Social Media Advertising
    Social media advertising has been a staple in many businesses’ marketing strategies, and it will continue to be in 2023. As social media platforms continue to grow in popularity, businesses must invest in social media advertising to reach their target audience. This includes creating targeted ads, running influencer campaigns, and leveraging user-generated content. I am actually a big fan of the last one – e.g. even I have advertised a weight-loss solution that way before 😀
  • Metaverse Marketing
    The metaverse is a virtual world where users can interact with each other and digital objects in real-time. As more businesses enter the metaverse, they must develop marketing strategies that cater to this new environment. This includes creating branded experiences that align with their overall brand image, developing virtual products or services, and partnering with influencers in the metaverse.
  • Spatial Computing
    Spatial computing involves the use of technology to blend the digital and physical worlds. This technology is becoming increasingly important in marketing, as it allows businesses to create immersive experiences for their customers. By leveraging spatial computing, businesses can create virtual showrooms, interactive product demos, and AR/VR experiences that engage customers and drive conversions.
  • Location-Based Marketing
    Location-based marketing involves targeting customers based on their physical location. With the rise of spatial computing, businesses can now use this technology to deliver personalized experiences to customers based on their location. For example, a retail store can use spatial computing to create an AR experience that highlights their products when a customer walks past their store.
  • 3D Product Visualization
    As more businesses move into the metaverse and leverage spatial computing, they must also create 3D product visualizations that accurately represent their products in a virtual environment. This includes creating 3D models, textures, and animations that are optimized for virtual environments.
  • Virtual Events
    Virtual events have become increasingly popular over the past two to three years (see: the pandemic 🙂), and they will continue to be an important marketing strategy in 2023. By using metaverse and spatial computing technologies, businesses can create immersive virtual events that engage customers and generate buzz. This includes creating virtual conference spaces, virtual product launches, and virtual trade shows.

In conclusion, businesses must adapt to the ever-changing marketing landscape to stay ahead of the competition. By incorporating these marketing strategies into their overall marketing plan, businesses can increase their brand awareness, engagement, and conversions in 2023. Specifically, in the area I am most interested in, the rise of metaverse and spatial computing technologies offers new opportunities for businesses to create engaging marketing experiences for their customers.

How are database structures really driving performance?

Preamble – I was still in high school when the fascinating world of data structures first amazed me: for each problem, there is a particular structure that fits best – heaps, trees, colored trees, oh my?! My interest in these structures stayed with me through the college years too. When working at Compaq as an Oracle administrator for National Healthcare, I put many of my learnings (and teachings) to a real test – e.g. I had to run and optimize queries that sometimes accessed record counts of up to a billion, and the end users expected a timely answer. That version of Oracle had some level of manual optimization you could play around with, so I ended up on long nightly calls with Oracle product groups to help me fine-tune some queries to conjure the results up in (sometimes) a week.


In today’s digital world, databases have become an essential part of modern computing. They are used to store and manage vast amounts of data and are an integral part of many software applications. Over time, various structures have been developed to optimize the performance of databases. In this post, we will take a closer look at some of the key structures driving modern databases.

  1. Skip List
    The skip list is a probabilistic data structure used to implement an ordered set or map. It is essentially a linked list with additional pointers that “skip” some elements, providing a faster search time. Skip lists are useful for maintaining a sorted index in memory, and are commonly used in high-performance databases and search engines.
  2. Hash Index
    A hash index is a data structure that maps keys to values using a hash function. The hash function takes the key as input and returns a value that determines where the entry is stored in the index (collisions are handled by the index’s collision strategy). Hash indexes are fast for exact-match lookups, but not well suited for range queries or sorting.
  3. SSTable
    SSTable stands for “Sorted String Table.” It is a file format used to store data in a sorted order. SSTables are immutable and append-only, which means that once written, they cannot be modified. This makes them very efficient for read operations, as data can be read sequentially without the need for complex index structures.
  4. LSM Tree
    LSM stands for “Log-Structured Merge.” The LSM tree is a data structure that uses a combination of in-memory and on-disk structures to store data. New data is first stored in an in-memory data structure called a memtable. Once the memtable becomes too large, it is flushed to disk in an SSTable format. Over time, multiple SSTables are merged together to form a single larger SSTable. The LSM tree is very efficient for write-intensive workloads, as it minimizes disk I/O operations and can handle large write volumes.
  5. B-Tree
    The B-tree is a balanced tree data structure that is commonly used in databases to store and retrieve data. B-trees are optimized for disk-based storage and are designed to minimize disk I/O operations. They work by splitting nodes when they become too full, allowing for fast insertions and deletions while maintaining a balanced tree structure.
  6. Inverted Index
    An inverted index is a data structure used to index text data, such as in a search engine. It works by creating a mapping of each unique word in a document to the documents that contain that word. This allows for fast full-text searches and is commonly used in search engines and document management systems (a short sketch follows after this list).
  7. Suffix Tree
    A suffix tree is a data structure used to store and index strings. It works by creating a tree structure that represents all possible suffixes of a string. Suffix trees are useful for text processing and are commonly used in natural language processing and bioinformatics.
  8. R-Tree
    An R-tree is a spatial index data structure used to index points or rectangles in space. It works by dividing space into smaller rectangles and indexing them based on their position. R-trees are useful for geographic information systems, image processing, and other applications that deal with spatial data.
  9. Bloom Filter
    A Bloom filter is a probabilistic data structure used to test whether an element is a member of a set. It works by hashing each element and setting corresponding bits in a bit array. Bloom filters are space-efficient and provide fast lookups but may produce false positives (never false negatives) – see the sketch after this list.
  10. Cuckoo Hashing
    Cuckoo hashing is a hash table scheme that resolves collisions by kicking an existing key out of its slot and rehashing it into its alternate location. It works by using two hash functions (typically over two tables) and can provide worst-case constant-time lookups and very fast insertions.
  11. Fractal Tree
    A fractal tree index is a B-tree-like structure that buffers pending changes in its internal nodes and flushes them down in batches, making writes to disk much cheaper. It is designed to provide fast insertions, deletions, and lookups, and can handle data sets that are too large to fit into memory.
  12. Bitmap(ped) Index
    A bitmapped index is a data structure used to index data that can be represented as bitmaps (not bitmap as a picture – bitmap as a map of bits). It works by mapping each possible value of a column to a bit in a bitmap, and then performing bitwise operations to filter data.
  13. Tries
    A trie, also known as a prefix tree, is a data structure used to store and search for strings. It works by storing each character of a string in a tree structure, with each path from the root to a leaf representing a unique string. Back around 2003, we managed to fine-tune a trie-based algorithm to beat Microsoft’s pessimistic implementation in .NET 1.1 nearly 100 fold 🙂 (a minimal trie sketch also follows after this list).
  14. HyperLogLog
    HyperLogLog is a probabilistic data structure used to estimate the cardinality of a set. It works by using a hash function to map elements into a large number of “buckets” and tracking, for each bucket, the longest run of leading zero bits observed; the cardinality is then estimated from those values. HyperLogLog provides a space-efficient way to estimate the size of very large data sets.
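
To make a few of these structures more concrete, here are some minimal, illustrative Python sketches. First, an inverted index (item 6, as referenced above): a toy word-to-document-id mapping, not a search-engine implementation – the sample documents are made up for this post.

```python
from collections import defaultdict

# Toy corpus: document id -> text (purely illustrative data).
documents = {
    1: "the quick brown fox",
    2: "the lazy dog",
    3: "quick thinking saves the day",
}

# Build the inverted index: word -> set of document ids containing that word.
inverted_index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.lower().split():
        inverted_index[word].add(doc_id)

def search(*words):
    """Full-text AND query: intersect posting sets instead of scanning every document."""
    postings = [inverted_index.get(w.lower(), set()) for w in words]
    return set.intersection(*postings) if postings else set()

print(search("quick"))         # {1, 3}
print(search("quick", "the"))  # {1, 3}
```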

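Next, a Bloom filter (item 9, referenced above). The bit-array size, the number of hash functions, and the salted SHA-256 trick are simplifying assumptions for illustration, not a tuned production design.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: each element sets k bits; lookups may yield false positives."""

    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8 + 1)

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests (a simplifying choice).
        for salt in range(self.num_hashes):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        # False means "definitely not in the set"; True can be a false positive.
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

bf = BloomFilter()
bf.add("user:42")
print(bf.might_contain("user:42"))  # True
print(bf.might_contain("user:99"))  # Usually False, occasionally a false positive
```
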
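Finally, a tiny trie (item 13, referenced above) – just enough to show how shared prefixes become shared paths; this is a plain illustration, not the 2003 algorithm mentioned in the list.

```python
class TrieNode:
    def __init__(self):
        self.children = {}   # character -> TrieNode
        self.is_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def starts_with(self, prefix):
        """Return True if any inserted word starts with the given prefix."""
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return True

t = Trie()
for w in ("index", "indexes", "indirection"):
    t.insert(w)
print(t.starts_with("inde"))  # True
print(t.starts_with("xyz"))   # False
```
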
These are just a few examples of modern structures that are used to optimize the performance of databases. As data sets continue to grow and evolve, new structures will likely be developed to meet the needs of modern computing – e.g. we haven’t touched the structures needed for quantum computing yet 🙂 In conclusion, modern databases rely on a variety of key structures to optimize performance and efficiently store and retrieve data. From skip lists and hash indexes to B-trees and inverted indexes, each data structure has its strengths and weaknesses, and choosing the right one for a particular application requires careful consideration of the specific use case.

Metrics to avoid while comparing developers

Is it a false assumption that writing more code and making more code changes is better? Let’s see.

As the software development industry continues to evolve, the need for measuring productivity has increased as well. Managers often use metrics such as lines of code or commit count to gauge the performance of developers. However, using these metrics can be counterproductive and often leads to negative consequences for both developers and the company. In this post, we will discuss why lines of code and commit count are bad metrics, and what a developer should be measured on instead.

Lines of Code and Commit Count: Flawed Metrics

One of the most common metrics used to measure a developer’s productivity is the number of lines of code they produce. The idea behind this metric is simple: the more code a developer writes, the more productive they are. Similarly, commit count is another metric that is used to measure productivity. A commit is a snapshot of code that a developer makes to a code repository. The more commits a developer makes, the more productive they are presumed to be.

However, both of these metrics suffer from several flaws. Firstly, the number of lines of code or commits a developer produces does not take into account the quality of the code. A developer could write a thousand lines of code, but if those lines are poorly written, buggy, and difficult to maintain, the developer is not being productive at all. Similarly, a developer could make hundreds of commits, but if they are not adding any value to the project, they are not being productive.
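
As a toy illustration (the task and both functions are made up for this post), here are two Python snippets that do exactly the same thing; by a lines-of-code metric the first author looks several times more “productive”, while the second version is the one you would actually want to maintain:

```python
# Version A: ~10 lines that count the even numbers in a list.
def count_even_numbers(numbers):
    count = 0
    index = 0
    while index < len(numbers):
        value = numbers[index]
        remainder = value % 2
        if remainder == 0:
            count = count + 1
        index = index + 1
    return count

# Version B: 2 lines, identical behaviour, easier to read and maintain.
def count_even(numbers):
    return sum(1 for n in numbers if n % 2 == 0)

assert count_even_numbers([1, 2, 3, 4]) == count_even([1, 2, 3, 4]) == 2
```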

Secondly, these metrics do not consider the context of the project. The number of lines of code or commits required for a small project is vastly different from that of a large, complex project. A developer working on a small project could write a few hundred lines of code and be done with it, while a developer working on a larger project could write thousands of lines of code, but still be far from completing the project. Comparing the productivity of these two developers based solely on lines of code or commit count is not a fair assessment.

Thirdly, these metrics can lead to unhealthy competition among developers. When developers are measured based on the number of lines of code or commits they produce, they may feel pressured to write more code than necessary, even if it means compromising on quality. This can lead to a culture where developers are encouraged to prioritize quantity over quality, leading to technical debt, poor code maintainability, and increased project costs in the long run.

A Better Metric for Measuring Developer Productivity

So, if lines of code and commit count are flawed metrics, what should a developer be measured on instead? The answer lies in measuring the value a developer adds to the project. The value a developer adds is a combination of several factors, including the quality of their code, their ability to meet project goals, their collaboration with team members, and their contribution to the project’s overall success.

Measuring value can be tricky, but some of the ways to measure it include measuring the impact of a developer’s code on the project, the number of bugs they fix, the number of customer tickets they resolve, and the feedback they receive from team members and stakeholders. These metrics provide a more comprehensive view of a developer’s performance and their contribution to the project’s success.

Another important metric to consider is the developer’s ability to learn and grow. The technology landscape is constantly evolving, and developers who can learn and adapt to new technologies are more valuable to the company. Measuring a developer’s ability to learn new skills, their participation in training programs, and their involvement in open-source projects can provide insights into their potential to grow and contribute to the company’s long-term success.

In conclusion, lines of code and commit count are flawed metrics for measuring developer productivity. Instead, companies should focus on measuring the value a developer adds to the project. What tools can help with that?

There are several tools that can be used to measure the metrics that truly matter for developers and the success of a project. Here are some of the tools that can help measure good metrics:

Code review tools – These tools can help measure the quality of code written by developers. They can identify bugs, code smells, and other issues that could impact the project. Some popular code review tools include SonarQube, Code Climate, and Crucible.

Agile project management tools – These tools can help measure the progress of a project and ensure that developers are meeting project goals. Agile project management tools like Jira, Trello, and Asana can be used to track the progress of sprints, measure the velocity of the team, and identify areas where improvements can be made.

Feedback tools – These tools can be used to measure the impact of a developer’s work on the project. They can collect feedback from stakeholders, customers, and team members to provide insights into the value that a developer is adding to the project. Some popular feedback tools include SurveyMonkey, Google Forms, and Typeform.

Analytics tools – These tools can help measure the performance of a project and identify areas where improvements can be made. They can track metrics such as user engagement, conversion rates, and page load times to provide insights into the overall success of the project. Some popular analytics tools include Google Analytics, Mixpanel, and Kissmetrics.

Learning and development tools – These tools can be used to measure a developer’s ability to learn and grow. They can track participation in training programs, involvement in open-source projects, and certifications obtained to provide insights into a developer’s potential to contribute to the company’s long-term success. Some popular learning and development tools include Udemy, Coursera, and LinkedIn Learning.

In summary, using tools that focus on measuring quality, progress, feedback, performance, and learning can provide a more comprehensive view of a developer’s performance and the success of a project. Companies should consider using a combination of these tools to measure the metrics that truly matter for developers and the success of their projects. 

Mono or Micro? Repo or Service?

The monolith vs microservices debate is one of the most discussed topics in software architecture. It is a debate about how to structure software applications and services to be more efficient, scalable, and manageable. While some argue in favor of a monolithic architecture, others prefer a microservices-based approach. To understand this debate, it’s important to examine the theory of complexity, which can provide insights into the benefits and drawbacks of each approach.

At its core, the theory of complexity is concerned with the behavior of complex systems. It considers the various components and their interactions, and how they affect the system as a whole. The theory of complexity can be applied to software architecture by examining the different levels of complexity in software systems: component complexity, system complexity, and anthropotechnic complexity.

Component complexity refers to the complexity of individual software components, such as classes, functions, and modules. In a monolithic architecture, all components are tightly coupled, meaning they are dependent on one another. This creates a high level of component complexity, as any change to one component can have unintended consequences for other components. In contrast, a microservices architecture separates components into smaller, independent services. This reduces component complexity by limiting the interactions between components and making it easier to manage and maintain each service individually.

System complexity refers to the complexity of the interactions between components in a software system. In a monolithic architecture, system complexity can be high because all components are interdependent. Any change to one component can have ripple effects throughout the system, making it difficult to manage and scale. In contrast, a microservices architecture reduces system complexity by isolating each service and limiting the interactions between them. This makes it easier to manage and scale the system as a whole.

Anthropotechnic complexity refers to the complexity of human interactions with software systems. This includes issues related to usability, maintainability, and scalability. In a monolithic architecture, anthropotechnic complexity can be high because any changes to the system can have a wide range of impacts on the user experience. This can make it difficult to maintain and scale the system. In contrast, a microservices architecture reduces anthropotechnic complexity by allowing for more focused and targeted changes to individual services. This makes it easier to maintain and scale the system without negatively impacting the user experience.

The monorepo vs polyrepo debate is another topic that is closely related to the monolith vs microservices debate in software architecture. The monorepo debate is concerned with how to manage the codebase of a software system. A monorepo, short for monolithic repository, is a single repository that contains all the code for a software system. In contrast, a polyrepo (multiple repositories) is a set of separate repositories, each containing the code for a different part of the system.

Like the monolith vs microservices debate, the monorepo vs polyrepo debate is also connected to the theory of complexity. In a monorepo, all the code for a system is located in a single repository, which can make it easier to manage and maintain the codebase. This is because developers have a single point of reference for the entire system, which can make it easier to understand how different parts of the system work together. Additionally, a monorepo can make it easier to manage dependencies between different parts of the system, as all the dependencies can be managed in a single repository.

However, a monorepo can also have drawbacks in terms of complexity. For example, if the codebase becomes too large, it can be difficult to manage and build, which can lead to longer build times and slower development cycles. Additionally, if there are multiple teams working on different parts of the system, conflicts can arise when multiple developers are working on the same codebase.

In contrast, a polyrepo can reduce some of the complexity associated with managing a large codebase. By separating the codebase into multiple repositories, developers can more easily manage and build each part of the system independently. This can also make it easier for multiple teams to work on different parts of the system without conflicts.

However, a polyrepo can also have its own drawbacks. For example, managing dependencies between different parts of the system can be more difficult in a polyrepo because there are multiple repositories to manage. Additionally, it can be more difficult to understand how different parts of the system work together because the code is located in separate repositories.

In conclusion, the monorepo vs polyrepo debate is supported by the theory of complexity, as both approaches have their own benefits and drawbacks in terms of managing the complexity of a software system. Ultimately, the choice between a monorepo or polyrepo depends on the specific needs of the development team and the software system they are working on. Similarly, the monolith vs microservices debate is also connected to the theory of complexity, which provides insights into the benefits and drawbacks of each approach. Monolithic architectures have higher levels of components and system complexity, which can make them more difficult to manage and scale. In contrast, microservices architectures reduce complexity at both the component and system levels, making them more manageable and scalable. Additionally, microservices architectures also reduce anthropotechnic complexity by allowing for more focused and targeted changes to individual services. Ultimately, the choice between a monolithic or microservices-based architecture depends on the specific needs of the software system and the goals of the development team.

Welcome Microsoft to the supercomputer top 20!

Supercomputers have been around since the 1960s, and they have played a crucial role in advancing scientific research and technological progress. But as computing power becomes more accessible and distributed, many people question whether supercomputers are still relevant today. In this post, I will explore the current state of supercomputing and examine their continued relevance in the modern era.

First, it’s essential to understand what makes a computer “super.” Supercomputers are designed to handle highly complex and computationally intensive tasks that require massive amounts of data processing, storage, and analysis. These tasks can include weather forecasting, molecular modeling, simulation of physical phenomena, and data-intensive machine learning applications. Supercomputers use multiple processors and parallel processing techniques to perform calculations at incredibly high speeds, often measured in quadrillions of floating-point operations per second (FLOPS).

Supercomputers have traditionally been the domain of government agencies, research institutions, and large corporations with deep pockets. However, the development of cloud computing and the rise of distributed computing have made high-performance computing more accessible to a broader range of users. Cloud providers like AWS, Azure, and GCP now offer access to supercomputing resources on a pay-per-use basis, making it easier for researchers and startups to leverage this technology.

Despite the increased accessibility of high-performance computing, supercomputers remain essential for several reasons. Firstly, they enable breakthrough scientific research that would be impossible without their computational power. From the discovery of the Higgs boson to simulations of the human brain, supercomputers have been instrumental in many groundbreaking discoveries. These discoveries have led to advances in medicine, engineering, and other fields that benefit society as a whole.

Supercomputers are still necessary for certain types of applications that require extreme processing power. For example, weather forecasting models require vast amounts of data processing to accurately predict future weather patterns. Similarly, molecular modeling and simulation require enormous computational resources to simulate complex chemical reactions accurately. These applications are often beyond the capabilities of traditional computing resources and require the use of supercomputers.

In conclusion, supercomputers are still relevant today and will continue to be so for the foreseeable future. While the rise of cloud computing and distributed computing has made high-performance computing more accessible, supercomputers remain essential for breakthrough scientific research, data-intensive applications, and national security and defense. As technology continues to advance, supercomputers will continue to play a crucial role in shaping the world around us.

So, welcome, Microsoft, to the top 20!

Testing: the black sheep of computer science

Testing is an essential aspect of computer science that ensures the reliability and effectiveness of software products. It involves the process of evaluating software applications to identify errors, bugs, and defects before deployment. Despite its critical role in software development, testing has often been considered the black sheep of computer science. In this post, we explore why testing has been treated as such and the importance of changing this perception.

[Image: a black sheep with glasses, typing on a computer]

One of the reasons testing has been regarded as the black sheep of computer science is that it is often viewed as a mundane and repetitive task. Developers are usually more interested in the creative aspects of programming, such as designing and implementing new features. Testing, on the other hand, involves verifying that the code works as intended, which can be time-consuming and tedious.

Another reason is that testing is often seen as an afterthought in the development process. Many organizations prioritize delivering new features quickly, often at the expense of testing. This approach can lead to software products with multiple bugs and defects that can result in costly consequences, including downtime, data breaches, and loss of customer trust.

Furthermore, testing is often a complex and challenging task that requires a deep understanding of the software system, the application domain, and various testing techniques. Testing professionals must be skilled in designing and executing tests, analyzing test results, and communicating their findings to developers and stakeholders effectively.

Another issue that contributes to the black sheep status of testing is the lack of recognition and appreciation for the work that testing professionals do. Many people outside the software development process view testing as a straightforward task that anyone can perform, and as such, they don’t appreciate the skills and expertise that testing professionals bring to the table.

Changing the perception of testing is crucial, as it plays a critical role in the success of software products. Effective testing helps to identify defects early in the development process, reducing the time and costs associated with fixing bugs and defects after deployment. It also ensures that software products are reliable, secure, and meet the needs of the end-users.

To change the perception of testing, organizations must prioritize testing in the software development process. They should invest in training and hiring skilled testing professionals, provide the necessary tools and resources, and encourage collaboration between developers and testers. Additionally, organizations should recognize and appreciate the value of testing and the contributions of testing professionals.

While programming and testing are both important parts of software development, testing requires a different set of skills and knowledge than programming. In this post, we will discuss why testing requires better skills than programming and provide examples to support this claim.

Critical thinking and problem-solving skills: Testing requires testers to think critically and identify potential issues and edge cases that developers may have missed. This involves analyzing requirements and design documents, exploring the software system, and evaluating different test scenarios to ensure that the software meets the specified requirements. For example, testers may have to simulate different user behaviors, test for compatibility with different platforms and devices, and evaluate performance and scalability under different loads. These tasks require testers to have excellent problem-solving skills and the ability to think critically and creatively.

Domain knowledge: Testers need to have a good understanding of the domain in which the software is being developed. This includes knowledge of the business processes, user workflows, industry regulations, and technical constraints that affect the software system. For example, testers working on a healthcare application must have a good understanding of the medical terminology, healthcare workflows, and regulatory requirements related to the application. This knowledge helps testers to identify potential issues, create relevant test scenarios, and evaluate the software system’s performance accurately.

Attention to detail: Testing requires a high level of attention to detail to identify issues that might otherwise go unnoticed. Testers need to be meticulous in their work, thoroughly reviewing software requirements, design documents, and code to ensure that the software functions as expected. For example, testers may need to verify that the software performs the intended function, handles errors and exceptions correctly, and provides appropriate feedback to users. These tasks require testers to be detail-oriented and have excellent organizational skills.

Communication skills: Testers need to communicate effectively with developers, project managers, and other stakeholders. They must be able to articulate their findings clearly, report bugs, and explain complex issues to non-technical stakeholders. For example, testers may need to write detailed bug reports, provide test results, and participate in project meetings to discuss issues and solutions. These tasks require testers to have excellent communication skills, both written and verbal.

There have been several high-profile cases where more testing could have prevented a software-related catastrophe. Here are some examples:

The Therac-25 radiation therapy machine: In the 1980s, a software defect in the Therac-25 radiation therapy machine caused several patients to receive lethal doses of radiation. The defect was caused by a race condition that occurred when operators changed settings too quickly. The manufacturer had not performed adequate testing on the machine, and the software was not designed to detect the error condition. If more testing had been performed, the defect could have been detected and corrected before the machine was released to the market.
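
To illustrate what a race condition looks like in practice (a generic Python toy, emphatically not the Therac-25 code), two threads doing an unsynchronized read-modify-write on shared state can interleave and silently lose updates:

```python
import threading
import time

counter = 0  # shared state, deliberately unprotected

def unsafe_increment(times):
    global counter
    for _ in range(times):
        current = counter       # read
        time.sleep(0)           # encourage a thread switch mid-update
        counter = current + 1   # write back a possibly stale value

threads = [threading.Thread(target=unsafe_increment, args=(10_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# We "expect" 20000, but lost updates usually leave the total lower.
# The fix is to serialize the read-modify-write, e.g. with a threading.Lock().
print(counter)
```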

The Ariane 5 rocket launch: In 1996, the maiden flight of the Ariane 5 rocket ended in disaster when the rocket veered off course and self-destructed. The cause of the failure was traced to a software error in the rocket’s inertial reference system: a 64-bit floating-point value was converted to a 16-bit signed integer, the value was too large to fit, and the resulting overflow caused the software to shut down. If more testing had been performed, the software defect could have been detected and corrected before the launch.

The Volkswagen emissions scandal: In 2015, it was discovered that Volkswagen had installed software in their diesel vehicles that detected when the vehicles were undergoing emissions testing and activated a “cheat mode” that reduced emissions to pass the test. During normal driving, the vehicles emitted up to 40 times the legal limit of nitrogen oxide. If more independent, real-world emissions testing had been performed on the vehicles, the cheating software could have been detected much earlier.

The Equifax data breach: In 2017, Equifax suffered a massive data breach that exposed the personal information of over 143 million people. The breach was caused by a software vulnerability in an open-source web application framework that Equifax was using. The vulnerability had been patched months before the breach, but Equifax had not performed adequate testing to ensure that the patch had been applied to all of their systems. If more testing had been performed, the vulnerability could have been detected and corrected before the breach occurred.

These examples highlight the importance of thorough testing in software development. In each case, more testing could have prevented a catastrophic outcome. It’s crucial for organizations to prioritize testing in the software development process, invest in skilled testing professionals, and perform thorough testing to ensure the reliability and security of their software products.


In conclusion, while programming and testing both require specific skills, testing requires better skills than programming in critical thinking, problem-solving, domain knowledge, attention to detail, and communication skills. These skills are essential for identifying potential issues, designing relevant test scenarios, and evaluating the software system’s performance accurately. Therefore, organizations must recognize the importance of testing and invest in hiring and training skilled testing professionals to ensure the delivery of high-quality software products.

Emerging Technologies SIG (Zenith) Proposal Meeting on 3/29

Without further ado, I am happy to announce that, based on the proceedings following the Why focus on Emerging Technologies in Financial companies post, and thanks to FINOS management, namely Gabriele Columbro (Executive Director of FINOS) and Maurizio Pillitu (CTO at FINOS), we are ready to have a ‘proposal’ kickoff meeting for the Emerging Technologies SIG. It will provide a forum for FINOS members and the wider fintech community to discuss, collaborate on, and develop new and innovative solutions that can help drive the financial industry forward.

The SIG, if approved, would host regular meetings, events, webinars, workshops and other activities to help members stay up-to-date with the latest trends and developments in emerging technologies. We do encourage all FINOS members who are interested in emerging technologies to join our kickoff session below and become part of this exciting new community. Together, we can help to drive innovation and advance the fintech industry in fresh and exciting ways!

[Image: proposed logo for the Zenith group]

Proposal to create a FINOS Emerging Technologies Special Interest Group

Proposing the creation of a FINOS Emerging Technologies Special Interest Group (Zenith). The purpose of the Zenith SIG would be to explore and promote the adoption of new and innovative technologies in the financial services industry. The proposed goals of the SIG are to:

  1. identify and evaluate emerging technologies that have the potential to transform the sector, and
  2. share best practices, use cases, and insights with the broader community in the form of webinars, podcasts, and articles.

To gather interest and commitment, FINOS is organizing an initial exploratory meeting – which will also help to prepare for the SIG approval submission (to FINOS Board of Directors) – on Wednesday 29th of March at 10:00 US/Eastern. Agendas and conference info can be found in the issues.

License

Copyright 2023 Fintech Open Source Foundation

Distributed under the Apache License, Version 2.0.

SPDX-License-Identifier: Apache-2.0


Details of the upcoming meeting:

Google Calendar Link

Date: March 29th, 2023, at 10am EST / 3pm GMT

Location: Zoom

Agenda:

  •  Convene & roll call (5mins)
  •  Display FINOS Antitrust Policy summary slide
  •  Review Meeting Notices
  •  Presenting the SIG draft charter
  •  Acceptance / review of the charter
  •  AOB, Q&A & Adjourn (5mins)

Has the age of the Digital Aristotle arrived?

Another hugely successful topic was my dive into teaching methods and how they are changing now and will change in the future – so here we are again 🙂 Over the past few years, the world has witnessed a significant shift towards digitalization, and the education sector is no exception. With the rise of the digital Aristotle, teaching has undergone a revolution, transforming the way we learn and acquire knowledge. Digital Aristotle is a concept that refers to the use of Artificial Intelligence (AI) and Machine Learning (ML) algorithms to mimic the personalized, one-on-one teaching style of the Greek philosopher Aristotle, making education more personalized and effective.

[Image: a digitized Aristotle bust with the Matrix's green letters in the background]

In the past, education was primarily delivered in a traditional classroom setting, with teachers lecturing to a group of students. This one-size-fits-all approach to education had its limitations, as students have different learning styles and abilities. However, with the introduction of digital Aristotle, the teaching approach has become more personalized and tailored to the individual needs of each student. AI and ML algorithms can analyze student data to identify their strengths and weaknesses, and then provide customized learning paths to help students improve their performance.

One of the significant benefits of digital Aristotle is that it provides students with instant feedback. Traditional teaching methods often relied on exams and assignments to assess a student’s knowledge, but this approach had limitations in terms of providing timely feedback. However, digital Aristotle can analyze a student’s performance in real-time and provide immediate feedback, allowing students to identify their strengths and weaknesses and improve their performance in real-time.

Another advantage of digital Aristotle is that it allows for a more interactive and engaging learning experience. Traditional teaching methods often relied on passive learning, where students were expected to sit and listen to a teacher’s lecture. However, digital Aristotle uses interactive simulations, videos, and gamification to make learning more engaging and fun, which can improve students’ motivation and retention of information.

Moreover, digital Aristotle can also help teachers to be more effective in their teaching. By analyzing data from students’ performance, teachers can identify areas where students need more assistance and provide targeted interventions. Additionally, digital Aristotle can also assist teachers in grading assignments, reducing the time they spend on grading and allowing them to focus on other aspects of teaching.

Digital Aristotle has also revolutionized the concept of online learning. With the COVID-19 pandemic, online learning has become the norm, and digital Aristotle has made it more effective. Online learning can be challenging for students who need personalized attention, but digital Aristotle can provide customized learning paths to help students learn at their own pace.

As usual, there is another side – the digital Aristotle is not a perfect solution (yet?). While the sections above highlight its benefits, there are potential downsides to relying on AI and ML for education. One possible way to mitigate them is to acknowledge the limitations of the digital Aristotle and to emphasize the importance of human interaction and guidance in education.

There are also ethical implications of using AI and ML in education, such as privacy concerns and potential biases in algorithmic decision-making. One way to mitigate these is to implement transparent and accountable AI systems and to regularly monitor and audit their performance.

You might feel that I am questioning the role of teachers in a digital Aristotle-driven education system – while the system can help teachers be more effective, it does not by itself define their role. One way to address this is to emphasize the importance of teachers in guiding and supporting students’ learning, and to treat the digital Aristotle as a tool that enhances, rather than replaces, traditional teaching methods.

Overall, it is important to consider both the benefits and potential drawbacks of digital Aristotle and to continually evaluate and improve the technology’s performance to ensure that it aligns with ethical and educational standards. In conclusion, digital Aristotle is a game-changer in the education sector, providing personalized learning experiences and making education more accessible and effective. While there are concerns about the role of AI and ML in education, it is clear that digital Aristotle has the potential to transform the way we learn and acquire knowledge. As the world continues to digitize, digital Aristotle is likely to become an essential tool in education, helping students and teachers alike to achieve their full potential.

What can help you with ADHD besides INCUP?

Even for me it is a surprise how much interest my post on INCUP (the combination of Interest, Novelty, Challenge, Urgency, and Passion) and ADHD has received recently. So I figured I would dive deeper and look at other approaches, besides INCUP, for Attention Deficit Hyperactivity Disorder (ADHD) – a neurodevelopmental disorder that affects millions of people worldwide. People with ADHD struggle with inattention, hyperactivity, and impulsivity. While medication is a common treatment option for ADHD, there are other tools and strategies that can be used to manage symptoms. In this post, we will discuss what you can use besides INCUP to help with your ADHD.

Cognitive Behavioral Therapy (CBT)

CBT is a type of therapy that focuses on changing negative thoughts and behaviors. It can be used to treat ADHD by helping individuals develop coping strategies and changing their perspective on their symptoms. CBT can also help individuals develop time-management skills and organization strategies, which can be helpful for managing ADHD symptoms.

Exercise

Exercise is an effective way to manage ADHD symptoms. It helps to increase dopamine levels, which can improve focus and attention. Exercise also helps to reduce hyperactivity and impulsivity. Individuals with ADHD should aim to incorporate regular exercise into their daily routine.

Mindfulness Meditation

Mindfulness meditation can help individuals with ADHD develop a greater awareness of their thoughts and behaviors. It can also help to reduce stress and anxiety, which can worsen ADHD symptoms. Mindfulness meditation can be practiced anywhere, and there are many resources available online to help individuals get started.

Diet

A healthy diet can help to manage ADHD symptoms. Foods that are high in protein and complex carbohydrates can help to improve focus and attention. Omega-3 fatty acids, found in fish and nuts, have also been shown to improve ADHD symptoms. On the other hand, processed foods, sugary drinks, and caffeine should be avoided, as they can worsen symptoms.

Sleep

Getting enough sleep is essential for managing ADHD symptoms. Lack of sleep can worsen symptoms such as inattention, hyperactivity, and impulsivity. Individuals with ADHD should aim to get 7-9 hours of sleep per night and develop a consistent sleep routine.

Time Management

Developing time-management skills is crucial for managing ADHD symptoms. Individuals with ADHD should develop a schedule or planner to help them stay organized and prioritize tasks. Breaking down tasks into smaller, manageable steps can also be helpful.

Support Groups

Joining a support group can be a great way to connect with others who are also managing ADHD. Support groups provide a safe and supportive environment for individuals to share their experiences and learn from others. Many support groups are available online, making them accessible to anyone.


In conclusion, while medication may be a common treatment option for ADHD, there are many other tools and strategies available to manage symptoms. Cognitive-behavioral therapy, exercise, mindfulness meditation, diet, sleep, time management, and support groups are just a few examples of what can be used besides INCUP to help individuals with ADHD manage their symptoms. It’s important to remember that everyone’s experience with ADHD is unique, and what works for one person may not work for another. Therefore, it’s essential to explore different options and find what works best for you.