Metrics to avoid while comparing developers

Is it a false assumption that writing more code and making more code changes is better? Let’s see.

As the software development industry continues to evolve, the need for measuring productivity has increased as well. Managers often use metrics such as lines of code or commit count to gauge the performance of developers. However, using these metrics can be counterproductive and often leads to negative consequences for both developers and the company. In this post, we will discuss why lines of code and commit count are bad metrics, and what a developer should be measured on instead.

Lines of Code and Commit Count: Flawed Metrics

One of the most common metrics used to measure a developer’s productivity is the number of lines of code they produce. The idea behind this metric is simple: the more code a developer writes, the more productive they are. Similarly, commit count is another metric used to measure productivity. A commit is a recorded set of changes that a developer saves to a code repository. The more commits a developer makes, the more productive they are presumed to be.

However, both of these metrics suffer from several flaws. Firstly, the number of lines of code or commits a developer produces does not take into account the quality of the code. A developer could write a thousand lines of code, but if the code is poorly written, buggy, and difficult to maintain, they have not been productive at all. Similarly, a developer could make hundreds of commits, but if those commits do not add any value to the project, they are not being productive.
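
As a toy illustration (hypothetical code, not from any real project), the two functions below behave identically, yet a lines-of-code metric would score the verbose one several times higher:

```python
# Two behaviorally identical implementations: a lines-of-code metric
# rewards the longer one, even though it adds no extra value.

def sum_of_squares_verbose(numbers):
    total = 0
    for n in numbers:
        square = n * n
        total = total + square
    return total

def sum_of_squares_concise(numbers):
    return sum(n * n for n in numbers)

print(sum_of_squares_verbose([1, 2, 3, 4]))  # 30
print(sum_of_squares_concise([1, 2, 3, 4]))  # 30
```

If anything, the shorter version is easier to review and maintain, which a raw line count actively penalizes.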

Secondly, these metrics do not consider the context of the project. The number of lines of code or commits required for a small project is vastly different from that of a large, complex project. A developer working on a small project could write a few hundred lines of code and be done with it, while a developer working on a larger project could write thousands of lines of code, but still be far from completing the project. Comparing the productivity of these two developers based solely on lines of code or commit count is not a fair assessment.

Thirdly, these metrics can lead to unhealthy competition among developers. When developers are measured based on the number of lines of code or commits they produce, they may feel pressured to write more code than necessary, even if it means compromising on quality. This can lead to a culture where developers are encouraged to prioritize quantity over quality, leading to technical debt, poor code maintainability, and increased project costs in the long run.

A Better Metric for Measuring Developer Productivity

So, if lines of code and commit count are flawed metrics, what should a developer be measured on instead? The answer lies in measuring the value a developer adds to the project. That value is a combination of several factors, including the quality of their code, their ability to meet project goals, their collaboration with team members, and their contribution to the project’s overall success.

Measuring value can be tricky, but useful proxies include the impact of a developer’s code on the project, the number of bugs they fix, the number of customer tickets they resolve, and the feedback they receive from team members and stakeholders. These metrics provide a more comprehensive view of a developer’s performance and their contribution to the project’s success.

Another important metric to consider is the developer’s ability to learn and grow. The technology landscape is constantly evolving, and developers who can learn and adapt to new technologies are more valuable to the company. Measuring a developer’s ability to learn new skills, their participation in training programs, and their involvement in open-source projects can provide insights into their potential to grow and contribute to the company’s long-term success.

In conclusion, lines of code and commit count are flawed metrics for measuring developer productivity. Instead, companies should focus on measuring the value a developer adds to the project. How can that value be measured in practice?

There are several tools that can be used to measure the metrics that truly matter for developers and the success of a project. Here are some of the tools that can help measure good metrics:

Code review tools – These tools can help measure the quality of code written by developers. They can identify bugs, code smells, and other issues that could impact the project. Some popular code review tools include SonarQube, Code Climate, and Crucible.

Agile project management tools – These tools can help measure the progress of a project and ensure that developers are meeting project goals. Agile project management tools like Jira, Trello, and Asana can be used to track the progress of sprints, measure the velocity of the team, and identify areas where improvements can be made.
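
Under the hood, the velocity these tools report is essentially just the average of story points completed per sprint; a minimal sketch with hypothetical numbers:

```python
def velocity(completed_points_per_sprint):
    """Average story points completed per sprint (0.0 for no history)."""
    if not completed_points_per_sprint:
        return 0.0
    return sum(completed_points_per_sprint) / len(completed_points_per_sprint)

# Hypothetical last four sprints of a team
print(velocity([21, 18, 25, 20]))  # 21.0
```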

Feedback tools – These tools can be used to measure the impact of a developer’s work on the project. They can collect feedback from stakeholders, customers, and team members to provide insights into the value that a developer is adding to the project. Some popular feedback tools include SurveyMonkey, Google Forms, and Typeform.

Analytics tools – These tools can help measure the performance of a project and identify areas where improvements can be made. They can track metrics such as user engagement, conversion rates, and page load times to provide insights into the overall success of the project. Some popular analytics tools include Google Analytics, Mixpanel, and Kissmetrics.

Learning and development tools – These tools can be used to measure a developer’s ability to learn and grow. They can track participation in training programs, involvement in open-source projects, and certifications obtained to provide insights into a developer’s potential to contribute to the company’s long-term success. Some popular learning and development tools include Udemy, Coursera, and LinkedIn Learning.

In summary, using tools that focus on measuring quality, progress, feedback, performance, and learning can provide a more comprehensive view of a developer’s performance and the success of a project. Companies should consider using a combination of these tools to measure the metrics that truly matter for developers and the success of their projects. 

Mono or Micro? Repo or Service?

The monolith vs microservices debate is one of the most discussed topics in software architecture. It is a debate about how to structure software applications and services to be more efficient, scalable, and manageable. While some argue in favor of a monolithic architecture, others prefer a microservices-based approach. To understand this debate, it’s important to examine the theory of complexity, which can provide insights into the benefits and drawbacks of each approach.

At its core, the theory of complexity is concerned with the behavior of complex systems. It considers the various components and their interactions, and how they affect the system as a whole. The theory of complexity can be applied to software architecture by examining the different levels of complexity in software systems: component, system, and anthropotechnic complexity.

Component complexity refers to the complexity of individual software components, such as classes, functions, and modules. In a monolithic architecture, all components are tightly coupled, meaning they are dependent on one another. This creates a high level of component complexity, as any change to one component can have unintended consequences for other components. In contrast, a microservices architecture separates components into smaller, independent services. This reduces component complexity by limiting the interactions between components and making it easier to manage and maintain each service individually.

System complexity refers to the complexity of the interactions between components in a software system. In a monolithic architecture, system complexity can be high because all components are interdependent. Any change to one component can have ripple effects throughout the system, making it difficult to manage and scale. In contrast, a microservices architecture reduces system complexity by isolating each service and limiting the interactions between them. This makes it easier to manage and scale the system as a whole.
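
To make this concrete with a simplified model (my own illustration, not a formal result): in a fully coupled system of n components there are n(n-1)/2 potential pairwise interactions, while splitting the components into isolated services caps the interactions inside each service. Inter-service calls add their own complexity, which this sketch deliberately ignores:

```python
def potential_interactions(n):
    """Pairwise interaction paths in a fully coupled system of n components."""
    return n * (n - 1) // 2

# Monolith: 12 tightly coupled components
print(potential_interactions(12))      # 66 potential interactions

# The same 12 components split into 4 isolated services of 3 each
print(4 * potential_interactions(3))   # 12 potential interactions
```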

Anthropotechnic complexity refers to the complexity of human interactions with software systems. This includes issues related to usability, maintainability, and scalability. In a monolithic architecture, anthropotechnic complexity can be high because any changes to the system can have a wide range of impacts on the user experience. This can make it difficult to maintain and scale the system. In contrast, a microservices architecture reduces anthropotechnic complexity by allowing for more focused and targeted changes to individual services. This makes it easier to maintain and scale the system without negatively impacting the user experience.

The monorepo vs polyrepo debate is another topic closely related to the monolith vs microservices debate in software architecture. It is concerned with how to manage the codebase of a software system. A monorepo, short for monolithic repository, is a single repository that contains all the code for a software system. In contrast, a polyrepo splits the code across a set of separate repositories, one for each part of the system.

Like the monolith vs microservices debate, the monorepo vs polyrepo debate is also connected to the theory of complexity. In a monorepo, all the code for a system is located in a single repository, which can make it easier to manage and maintain the codebase. This is because developers have a single point of reference for the entire system, which can make it easier to understand how different parts of the system work together. Additionally, a monorepo can make it easier to manage dependencies between different parts of the system, as all the dependencies can be managed in a single repository.

However, a monorepo can also have drawbacks in terms of complexity. For example, if the codebase becomes too large, it can be difficult to manage and build, which can lead to longer build times and slower development cycles. Additionally, if there are multiple teams working on different parts of the system, conflicts can arise when multiple developers are working on the same codebase.

In contrast, a polyrepo can reduce some of the complexity associated with managing a large codebase. By separating the codebase into multiple repositories, developers can more easily manage and build each part of the system independently. This can also make it easier for multiple teams to work on different parts of the system without conflicts.

However, a polyrepo can also have its own drawbacks. For example, managing dependencies between different parts of the system can be more difficult in a polyrepo because there are multiple repositories to manage. Additionally, it can be more difficult to understand how different parts of the system work together because the code is located in separate repositories.

In conclusion, the monorepo vs polyrepo debate can be framed through the theory of complexity, as both approaches have their own benefits and drawbacks in terms of managing the complexity of a software system. Ultimately, the choice between a monorepo and a polyrepo depends on the specific needs of the development team and the software system they are working on.

Similarly, the monolith vs microservices debate is connected to the theory of complexity, which provides insights into the benefits and drawbacks of each approach. Monolithic architectures carry higher levels of component and system complexity, which can make them more difficult to manage and scale. Microservices architectures reduce complexity at both the component and system levels, making them more manageable and scalable, and they also reduce anthropotechnic complexity by allowing for more focused and targeted changes to individual services. Ultimately, the choice between a monolithic and a microservices-based architecture depends on the specific needs of the software system and the goals of the development team.

Welcome Microsoft to the Supercomputers top #20!

Supercomputers have been around since the 1960s, and they have played a crucial role in advancing scientific research and technological progress. But as computing power becomes more accessible and distributed, many people question whether supercomputers are still relevant today. In this post, I will explore the current state of supercomputing and examine its continued relevance in the modern era.

First, it’s essential to understand what makes a computer “super.” Supercomputers are designed to handle highly complex and computationally intensive tasks that require massive amounts of data processing, storage, and analysis. These tasks can include weather forecasting, molecular modeling, simulation of physical phenomena, and data-intensive machine learning applications. Supercomputers use multiple processors and parallel processing techniques to perform calculations at incredibly high speeds, often measured in quadrillions of floating-point operations per second (FLOPS).
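
To put “quadrillions of FLOPS” in perspective: a quadrillion is 10^15, i.e. one petaFLOPS. A quick back-of-the-envelope sketch (idealized, ignoring memory and I/O bottlenecks):

```python
PETA = 1e15  # one quadrillion; 1 petaFLOPS when counting operations per second

def seconds_for(operations, flops):
    """Idealized time to finish a workload at a sustained floating-point rate."""
    return operations / flops

# A hypothetical 10-petaFLOPS machine crunching 10^21 floating-point operations
print(seconds_for(1e21, 10 * PETA))  # 100000.0 seconds, a bit over a day
```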

Supercomputers have traditionally been the domain of government agencies, research institutions, and large corporations with deep pockets. However, the development of cloud computing and the rise of distributed computing have made high-performance computing more accessible to a broader range of users. Cloud providers like AWS, Azure, and GCP now offer access to supercomputing resources on a pay-per-use basis, making it easier for researchers and startups to leverage this technology.

Despite the increased accessibility of high-performance computing, supercomputers remain essential for several reasons. Firstly, they enable breakthrough scientific research that would be impossible without their computational power. From the discovery of the Higgs boson to the simulation of the human brain, supercomputers have been instrumental in many groundbreaking discoveries. These discoveries have led to advances in medicine, engineering, and other fields that benefit society as a whole.

Supercomputers are still necessary for certain types of applications that require extreme processing power. For example, weather forecasting models require vast amounts of data processing to accurately predict future weather patterns. Similarly, molecular modeling and simulation require enormous computational resources to simulate complex chemical reactions accurately. These applications are often beyond the capabilities of traditional computing resources and require the use of supercomputers.

In conclusion, supercomputers are still relevant today and will continue to be so for the foreseeable future. While the rise of cloud computing and distributed computing has made high-performance computing more accessible, supercomputers remain essential for breakthrough scientific research, data-intensive applications, and national security and defense. As technology continues to advance, supercomputers will continue to play a crucial role in shaping the world around us.

So, welcome, Microsoft, to the top 20!

Testing: the black sheep of computer science

Testing is an essential aspect of computer science that ensures the reliability and effectiveness of software products. It involves the process of evaluating software applications to identify errors, bugs, and defects before deployment. Despite its critical role in software development, testing has often been considered the black sheep of computer science. In this post, we explore why testing has been treated as such and the importance of changing this perception.

Black sheep with glasses typing on a computer

One of the reasons testing has been regarded as the black sheep of computer science is that it is often viewed as a mundane and repetitive task. Developers are usually more interested in the creative aspects of programming, such as designing and implementing new features. Testing, on the other hand, involves verifying that the code works as intended, which can be time-consuming and tedious.

Another reason is that testing is often seen as an afterthought in the development process. Many organizations prioritize delivering new features quickly, often at the expense of testing. This approach can lead to software products with multiple bugs and defects that can result in costly consequences, including downtime, data breaches, and loss of customer trust.

Furthermore, testing is often a complex and challenging task that requires a deep understanding of the software system, the application domain, and various testing techniques. Testing professionals must be skilled in designing and executing tests, analyzing test results, and communicating their findings to developers and stakeholders effectively.

Another issue that contributes to the black sheep status of testing is the lack of recognition and appreciation for the work that testing professionals do. Many people outside the software development process view testing as a straightforward task that anyone can perform, and as such, they don’t appreciate the skills and expertise that testing professionals bring to the table.

Changing the perception of testing is crucial, as it plays a critical role in the success of software products. Effective testing helps to identify defects early in the development process, reducing the time and costs associated with fixing bugs and defects after deployment. It also ensures that software products are reliable, secure, and meet the needs of the end-users.

To change the perception of testing, organizations must prioritize testing in the software development process. They should invest in training and hiring skilled testing professionals, provide the necessary tools and resources, and encourage collaboration between developers and testers. Additionally, organizations should recognize and appreciate the value of testing and the contributions of testing professionals.

While programming and testing are both important parts of software development, testing requires a different set of skills and knowledge than programming. In this post, we will discuss why testing demands stronger skills than programming and provide examples to support this claim.

Critical thinking and problem-solving skills: Testing requires testers to think critically and identify potential issues and edge cases that developers may have missed. This involves analyzing requirements and design documents, exploring the software system, and evaluating different test scenarios to ensure that the software meets the specified requirements. For example, testers may have to simulate different user behaviors, test for compatibility with different platforms and devices, and evaluate performance and scalability under different loads. These tasks require testers to have excellent problem-solving skills and the ability to think critically and creatively.
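
A classic illustration of such an edge case (hypothetical code): the happy-path check below passes on both versions, while the boundary test a tester would write exposes the off-by-one bug:

```python
def pages_needed_buggy(items, page_size):
    # Bug: integer division silently drops the last partial page.
    return items // page_size

def pages_needed(items, page_size):
    # Correct: round up so a partial final page still counts.
    return -(-items // page_size)

# Happy path: both versions agree, so a developer might stop testing here.
print(pages_needed_buggy(20, 10), pages_needed(20, 10))  # 2 2

# Edge case: 21 items need 3 pages, but the buggy version says 2.
print(pages_needed_buggy(21, 10), pages_needed(21, 10))  # 2 3
```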

Domain knowledge: Testers need to have a good understanding of the domain in which the software is being developed. This includes knowledge of the business processes, user workflows, industry regulations, and technical constraints that affect the software system. For example, testers working on a healthcare application must have a good understanding of the medical terminology, healthcare workflows, and regulatory requirements related to the application. This knowledge helps testers to identify potential issues, create relevant test scenarios, and evaluate the software system’s performance accurately.

Attention to detail: Testing requires a high level of attention to detail to identify issues that might otherwise go unnoticed. Testers need to be meticulous in their work, thoroughly reviewing software requirements, design documents, and code to ensure that the software functions as expected. For example, testers may need to verify that the software performs the intended function, handles errors and exceptions correctly, and provides appropriate feedback to users. These tasks require testers to be detail-oriented and have excellent organizational skills.

Communication skills: Testers need to communicate effectively with developers, project managers, and other stakeholders. They must be able to articulate their findings clearly, report bugs, and explain complex issues to non-technical stakeholders. For example, testers may need to write detailed bug reports, provide test results, and participate in project meetings to discuss issues and solutions. These tasks require testers to have excellent communication skills, both written and verbal.

There have been several high-profile cases where more testing could have prevented a software-related catastrophe. Here are some examples:

The Therac-25 radiation therapy machine: In the 1980s, a software defect in the Therac-25 radiation therapy machine caused several patients to receive lethal doses of radiation. The defect was caused by a race condition that occurred when operators changed settings too quickly. The manufacturer had not performed adequate testing on the machine, and the software was not designed to detect the error condition. If more testing had been performed, the defect could have been detected and corrected before the machine was released to the market.
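
The failure pattern behind that race condition is check-then-act: a value is validated, changed in the window before use, and the stale validation is trusted. A deterministic sketch of the pattern (purely illustrative, not the actual Therac-25 code):

```python
class DoseController:
    """Simulates a check-then-act race: validation and use are not atomic."""

    MAX_SAFE_DOSE = 100

    def __init__(self):
        self.dose = 0

    def set_dose(self, value):
        self.dose = value

    def fire_unsafe(self, edit_during_window=None):
        validated = self.dose <= self.MAX_SAFE_DOSE   # step 1: check
        if edit_during_window is not None:
            self.dose = edit_during_window            # operator edits in the gap
        if validated:                                 # step 2: act on a stale check
            return self.dose
        raise ValueError("dose rejected")

controller = DoseController()
controller.set_dose(50)
# A dangerous value sneaks through because validation already happened.
print(controller.fire_unsafe(edit_during_window=5000))  # 5000
```

The fix is to make the check and the act atomic (re-validate at the moment of use, or hold a lock across both steps), and a stress test that rapidly edits settings could have surfaced the gap.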

The Ariane 5 rocket launch: In 1996, the maiden flight of the Ariane 5 rocket ended in disaster when the rocket veered off course and self-destructed. The cause of the failure was traced to a software error in the rocket’s inertial reference system: a 64-bit floating-point value representing horizontal velocity was converted to a 16-bit signed integer, the Ariane 5’s faster trajectory produced a value too large to fit, and the resulting overflow shut the software down. The code had been reused from the Ariane 4, whose flight profile never produced such values. If more testing had been performed against Ariane 5 flight profiles, the defect could have been detected and corrected before the launch.
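
The core failure fits in a few lines: a 64-bit floating-point value was converted to a 16-bit signed integer whose range the faster trajectory exceeded. A sketch with an explicit range check (the velocity values here are illustrative, not flight data):

```python
INT16_MIN, INT16_MAX = -32768, 32767

def to_int16(x):
    """Convert a float to a 16-bit signed integer, failing loudly on overflow."""
    if not INT16_MIN <= x <= INT16_MAX:
        raise OverflowError(f"{x} does not fit in a signed 16-bit integer")
    return int(x)

print(to_int16(20000.0))   # 20000: fine on an Ariane 4-like trajectory
try:
    to_int16(65000.0)      # a higher horizontal velocity, as on Ariane 5
except OverflowError as err:
    print("caught:", err)
```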

The Volkswagen emissions scandal: In 2015, it was discovered that Volkswagen had installed software in their diesel vehicles that detected when the vehicles were undergoing emissions testing and activated a “cheat mode” that reduced emissions to pass the test. During normal driving, the vehicles emitted up to 40 times the legal limit of nitrogen oxides. Unlike the other cases, this was deliberate deception rather than a defect, but independent, on-road emissions testing could have detected the cheat before millions of vehicles reached the market.

The Equifax data breach: In 2017, Equifax suffered a massive data breach that exposed the personal information of over 143 million people. The breach was caused by a known vulnerability in Apache Struts, an open-source web application framework that Equifax was using. A patch had been available for months before the breach, but Equifax had not verified that it had been applied to all of their systems. If more testing had been performed, the unpatched systems could have been detected and fixed before the breach occurred.
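
The missed step is easy to state: verify that every system runs a version at or above the patched release. A minimal sketch (the inventory and version numbers are hypothetical):

```python
def unpatched_hosts(inventory, patched_version):
    """Return hosts still running a version older than the patched release."""
    def as_tuple(version):
        return tuple(int(part) for part in version.split("."))
    return [host for host, version in inventory.items()
            if as_tuple(version) < as_tuple(patched_version)]

# Hypothetical fleet: one host missed the framework patch
fleet = {"web-1": "2.3.32", "web-2": "2.3.30", "web-3": "2.3.32"}
print(unpatched_hosts(fleet, "2.3.32"))  # ['web-2']
```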

These examples highlight the importance of thorough testing in software development. In each case, more testing could have prevented a catastrophic outcome. It’s crucial for organizations to prioritize testing in the software development process, invest in skilled testing professionals, and perform thorough testing to ensure the reliability and security of their software products.


In conclusion, while programming and testing both require specific skills, testing requires better skills than programming in critical thinking, problem-solving, domain knowledge, attention to detail, and communication skills. These skills are essential for identifying potential issues, designing relevant test scenarios, and evaluating the software system’s performance accurately. Therefore, organizations must recognize the importance of testing and invest in hiring and training skilled testing professionals to ensure the delivery of high-quality software products.

Emerging Technologies SIG (Zenith) Proposal Meeting on 3/29

Without further ado, I am happy to announce that, building on the discussion that followed the Why focus on Emerging Technologies in Financial companies post, and thanks to FINOS management, namely Gabriele Columbro (Executive Director of FINOS) and Maurizio Pillitu (CTO at FINOS), we are ready to have a ‘proposal’ kickoff meeting for the Emerging Technologies SIG. It will provide a forum for FINOS members and the wider fintech community to discuss, collaborate on, and develop new and innovative solutions that can help drive the financial industry forward.

The SIG, if approved, would host regular meetings, events, webinars, workshops and other activities to help members stay up-to-date with the latest trends and developments in emerging technologies. We do encourage all FINOS members who are interested in emerging technologies to join our kickoff session below and become part of this exciting new community. Together, we can help to drive innovation and advance the fintech industry in fresh and exciting ways!

Proposed logo for the Zenith group

Proposal to create a FINOS Emerging Technologies Special Interest Group

We are proposing the creation of a FINOS Emerging Technologies Special Interest Group (Zenith). The purpose of the Zenith SIG would be to explore and promote the adoption of new and innovative technologies in the financial services industry. The proposed goals of the SIG are to:

  1. identify and evaluate emerging technologies that have the potential to transform the sector
  2. share best practices, use cases, and insights with the broader community in the form of webinars, podcasts, and articles.

To gather interest and commitment, FINOS is organizing an initial exploratory meeting – which will also help to prepare for the SIG approval submission (to FINOS Board of Directors) – on Wednesday 29th of March at 10:00 US/Eastern. Agendas and conference info can be found in the issues.

License

Copyright 2023 Fintech Open Source Foundation

Distributed under the Apache License, Version 2.0.

SPDX-License-Identifier: Apache-2.0


Details of the upcoming meeting:

Google Calendar Link

Date: March 29th, 2023, at 10am US/Eastern / 3pm UK time

Location: Zoom

Agenda:

  •  Convene & roll call (5mins)
  •  Display FINOS Antitrust Policy summary slide
  •  Review Meeting Notices
  •  Presenting the SIG draft charter
  •  Review and acceptance of the charter
  •  AOB, Q&A & Adjourn (5mins)

The age of Digital Aristotle arrives?

Another hugely successful topic was my dive into teaching methods and how they are changing now and into the future – so here we are again 🙂 Over the past few years, the world has witnessed a significant shift towards digitalization, and the education sector is no exception. With the rise of the digital Aristotle, teaching has undergone a revolution, transforming the way we learn and acquire knowledge. Digital Aristotle is a concept that refers to using Artificial Intelligence (AI) and Machine Learning (ML) algorithms to mimic the personal tutoring style of the Greek philosopher Aristotle, making education more personalized and effective.

A digitized Aristotle bust with the Matrix's green letters in the background

In the past, education was primarily delivered in a traditional classroom setting, with teachers lecturing to a group of students. This one-size-fits-all approach to education had its limitations, as students have different learning styles and abilities. However, with the introduction of digital Aristotle, the teaching approach has become more personalized and tailored to the individual needs of each student. AI and ML algorithms can analyze student data to identify their strengths and weaknesses, and then provide customized learning paths to help students improve their performance.
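
A toy sketch of that idea (the topics and mastery scores are hypothetical): serve the next exercise from the topic where the student’s measured mastery is lowest:

```python
def next_topic(mastery):
    """Pick the topic with the lowest mastery score for the next exercise."""
    return min(mastery, key=mastery.get)

student = {"algebra": 0.9, "geometry": 0.4, "statistics": 0.7}
print(next_topic(student))  # geometry
```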

One of the significant benefits of digital Aristotle is that it provides students with instant feedback. Traditional teaching methods often relied on exams and assignments to assess a student’s knowledge, an approach with limitations in terms of timeliness. Digital Aristotle, however, can analyze a student’s performance in real time and provide immediate feedback, allowing students to identify their strengths and weaknesses and improve as they learn.

Another advantage of digital Aristotle is that it allows for a more interactive and engaging learning experience. Traditional teaching methods often relied on passive learning, where students were expected to sit and listen to a teacher’s lecture. However, digital Aristotle uses interactive simulations, videos, and gamification to make learning more engaging and fun, which can improve students’ motivation and retention of information.

Moreover, digital Aristotle can also help teachers to be more effective in their teaching. By analyzing data from students’ performance, teachers can identify areas where students need more assistance and provide targeted interventions. Additionally, digital Aristotle can also assist teachers in grading assignments, reducing the time they spend on grading and allowing them to focus on other aspects of teaching.

Digital Aristotle has also revolutionized the concept of online learning. With the COVID-19 pandemic, online learning has become the norm, and digital Aristotle has made it more effective. Online learning can be challenging for students who need personalized attention, but digital Aristotle can provide customized learning paths to help students learn at their own pace.

As usual, there is another side – the digital Aristotle is not a perfect solution (yet?). The benefits above come with potential downsides of relying on AI and ML for education. One way to mitigate them is to acknowledge the limitations of digital Aristotle and to keep emphasizing the importance of human interaction and guidance in education.

There are also ethical implications of using AI and ML in education, such as privacy concerns and potential biases in algorithmic decision-making. One way to mitigate these is by implementing transparent and accountable AI systems and regularly monitoring and auditing their performance.

You might feel that I am questioning the role of teachers in a digital Aristotle-driven education system. While the system can help teachers be more effective, teachers remain central: they guide and support students’ learning, with digital Aristotle serving as a tool to enhance, not replace, traditional teaching methods.

Overall, it is important to consider both the benefits and potential drawbacks of digital Aristotle and to continually evaluate and improve the technology’s performance to ensure that it aligns with ethical and educational standards. In conclusion, digital Aristotle is a game-changer in the education sector, providing personalized learning experiences and making education more accessible and effective. While there are concerns about the role of AI and ML in education, it is clear that digital Aristotle has the potential to transform the way we learn and acquire knowledge. As the world continues to digitize, digital Aristotle is likely to become an essential tool in education, helping students and teachers alike to achieve their full potential.

What can help you with ADHD besides INCUP?

Even I am surprised by how much interest my post on INCUP (the combination of Interest, Novelty, Challenge, Urgency, and Passion) and ADHD has received recently. So I figured I’d dive deeper and look at other tools besides INCUP for managing Attention Deficit Hyperactivity Disorder (ADHD), a neurodevelopmental disorder that affects millions of people worldwide. People with ADHD struggle with inattention, hyperactivity, and impulsivity. While medication is a common treatment option, there are other tools and strategies that can help manage symptoms. In this post, we will discuss what you can use besides INCUP to help with your ADHD.

Cognitive Behavioral Therapy (CBT)

CBT is a type of therapy that focuses on changing negative thoughts and behaviors. It can be used to treat ADHD by helping individuals develop coping strategies and changing their perspective on their symptoms. CBT can also help individuals develop time-management skills and organization strategies, which can be helpful for managing ADHD symptoms.

Exercise

Exercise is an effective way to manage ADHD symptoms. It helps to increase dopamine levels, which can improve focus and attention. Exercise also helps to reduce hyperactivity and impulsivity. Individuals with ADHD should aim to incorporate regular exercise into their daily routine.

Mindfulness Meditation

Mindfulness meditation can help individuals with ADHD develop a greater awareness of their thoughts and behaviors. It can also help to reduce stress and anxiety, which can worsen ADHD symptoms. Mindfulness meditation can be practiced anywhere, and there are many resources available online to help individuals get started.

Diet

A healthy diet can help to manage ADHD symptoms. Foods that are high in protein and complex carbohydrates can help to improve focus and attention. Omega-3 fatty acids, found in fish and nuts, have also been shown to improve ADHD symptoms. On the other hand, processed foods, sugary drinks, and caffeine should be avoided, as they can worsen symptoms.

Sleep

Getting enough sleep is essential for managing ADHD symptoms. Lack of sleep can worsen symptoms such as inattention, hyperactivity, and impulsivity. Individuals with ADHD should aim to get 7-9 hours of sleep per night and develop a consistent sleep routine.

Time Management

Developing time-management skills is crucial for managing ADHD symptoms. Individuals with ADHD should develop a schedule or planner to help them stay organized and prioritize tasks. Breaking down tasks into smaller, manageable steps can also be helpful.

Support Groups

Joining a support group can be a great way to connect with others who are also managing ADHD. Support groups provide a safe and supportive environment for individuals to share their experiences and learn from others. Many support groups are available online, making them accessible to anyone.


In conclusion, while medication may be a common treatment option for ADHD, there are many other tools and strategies available to manage symptoms. Cognitive-behavioral therapy, exercise, mindfulness meditation, diet, sleep, time management, and support groups are just a few examples of what can be used besides INCUP to help individuals with ADHD manage their symptoms. It’s important to remember that everyone’s experience with ADHD is unique, and what works for one person may not work for another. Therefore, it’s essential to explore different options and find what works best for you.

F# for financial applications? Morphir-dotnet!

F# is a functional programming language that has gained significant traction over the years, and for good reason. It offers a range of features that make it a versatile and efficient language to use.

It really is a powerful language that can be used for a wide range of applications. One of the key advantages of F# is its portability to WebAssembly, a binary instruction format that can be executed by web browsers. With F#, it is possible to write code that compiles to WebAssembly, allowing developers to build web applications in F#. This makes F# a valuable tool for building modern web applications.

Another advantage of F# is its integration with WebSharper and Fable. WebSharper is a web development framework that provides a rich set of tools for building web applications. Fable is a compiler that allows F# code to be compiled to JavaScript. Together, these tools give developers an efficient way to build web applications in F#.

F# also has the ability to transpile to other languages: developers can write code in F# and then emit it as another language, such as JavaScript or Python. This makes F# a versatile language that can be used across a wide range of platforms.
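As a minimal sketch of what this looks like in practice (the module and function names below are purely illustrative, not from any real project): plain F# code with no .NET-only dependencies can be compiled by the regular .NET toolchain, and the very same source can be transpiled by Fable to run elsewhere.

```fsharp
// Illustrative module: plain F# with no .NET-only dependencies,
// so a transpiler like Fable can also emit it for other runtimes.
module Pricing

// Discount a future cash flow back to today's value.
let presentValue (rate: float) (periods: int) (amount: float) : float =
    amount / ((1.0 + rate) ** float periods)

// Behaves the same whether run on .NET or after transpilation.
printfn "%.2f" (presentValue 0.05 10 1000.0)  // prints 613.91
```

The point is not this particular function, but that keeping business logic in ordinary, dependency-free F# is what makes the write-once, target-many workflow possible.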

One example of renewed interest in F# is the development of Morphir, a language tool from FINOS and Morgan Stanley. Morphir is a domain-specific language designed to help developers build financial applications. It is built on Elm, can consume Bosque and other business-logic frontends and DSLs, and offers a range of features that make it a powerful tool for building financial applications.

One of the key features of Morphir is its ability to generate code automatically. Developers can write code in Morphir and then generate code in another language, such as Scala, TypeScript, and now, once again, F#. This makes it easy for developers to build financial applications in different languages.

In conclusion, F# is an amazing language that offers a range of features that make it a valuable tool for developers. Its portability to WebAssembly, its integration with WebSharper and Fable, and its ability to transpile to other languages make it a versatile language that can be used for a wide range of applications. Additionally, tools such as Morphir demonstrate how F# can be used to build powerful and efficient domain-specific languages. As the popularity of functional programming languages continues to grow, it is clear that F# is a language that is here to stay – and bloom.

That’s why I am happy to announce that we are looking for inspired parties to join this work – please do drop a note to morphir at FINOS dot org, if you feel interested!

Are you a boss? Or better yet, are you a leader?

Leadership and management are two distinct concepts, but they are often used interchangeably. While both roles involve leading and guiding a team towards achieving common goals, the methods and approaches taken can be vastly different. In particular, the difference between a boss and a leader is often overlooked, even though it can have a significant impact on the success of a team or organization.

One of the key differences between a boss and a leader is their attitude towards their team members. A boss tends to be more authoritarian and hierarchical, seeking to maintain a strict chain of command. They often look for reasons to say no and may issue orders and demands in a way that makes it clear they expect loyalty and obedience from their subordinates. A leader, on the other hand, is more collaborative and open-minded. They look for reasons to say yes and work to build a sense of trust and camaraderie with their team. A leader gives direction and takes responsibility, but they do so in a way that encourages their team members to take ownership of their work and contribute to the group’s success.

Another key difference between a boss and a leader is their approach to decision-making. A boss may make decisions unilaterally, without much input from their team members. They may also be hesitant to take responsibility for any negative outcomes that result from their decisions. A leader, on the other hand, is more likely to seek input and feedback from their team members before making a decision. They take responsibility for the decisions they make and are willing to admit when they have made a mistake. They also work to create an environment in which their team members feel comfortable speaking up and sharing their opinions.

One of the most important differences between a boss and a leader is their focus on the people they lead. A boss may view themselves as the most important person in the room, and their actions and decisions may reflect that belief. They may prioritize their own interests over those of their team members, leading to resentment and a lack of trust. A leader, on the other hand, recognizes that their success is directly tied to the success of their team members. They work to create an environment in which everyone feels valued and appreciated, and they make a conscious effort to recognize and reward the contributions of their team members.

Leadership is a vital aspect of any organization, and a good leader can make a significant difference in the success of a team. While the differences between a boss and a leader may seem subtle, they can have a profound impact on the team’s productivity, morale, and overall success.

Another key difference between a boss and a leader is the way they handle conflicts and challenges. A boss may adopt a confrontational approach, trying to enforce their authority and control over the team. This can create a tense and hostile work environment, where team members feel unsupported and undervalued. A leader, on the other hand, takes a collaborative approach, working with their team members to find solutions to the challenges they face. They encourage open communication and seek to understand the perspectives of everyone involved. This can lead to a more positive and supportive work environment, where team members feel empowered and motivated.

Additionally, a boss may be focused solely on the short-term goals of the organization, while a leader is more likely to take a long-term view. A boss may prioritize quick wins and immediate results, even if they are not sustainable in the long term. A leader, on the other hand, takes a more strategic approach, considering the long-term implications of their decisions and actions. They prioritize building relationships and developing their team members, recognizing that this will ultimately lead to more sustainable success for the organization.

The difference between a boss and a leader is significant. While a boss may be effective in certain situations, a leader is more likely to inspire trust and loyalty from their team members. A leader is more collaborative, open-minded, and focused on the success of their team. They work to create an environment in which everyone feels valued and appreciated, and they take responsibility for their decisions and actions. By understanding these differences, individuals in leadership positions can work to develop the skills and traits necessary to be an effective leader, rather than simply a boss.

In conclusion, being a leader is not just about being in a position of authority; it’s about using that authority to inspire and empower others. The differences are groundbreaking, and it’s important for individuals in leadership positions to understand and embrace these differences. By doing so, they can create a more positive, productive, and supportive work environment, where everyone feels valued and motivated to contribute to the success of the team.

The teaching entanglement

As an educator or mentor, engaging your students or mentees is crucial to their learning and development. Active engagement helps to build a connection between the student and the material being taught, allowing for better understanding and retention. However, it can be a challenge to keep students engaged, especially in today’s digital age where distractions are plentiful. Here are some tips on how to more actively engage your students and mentees:

Create a welcoming environment

The first step in engaging your students or mentees is to create a welcoming environment. This means being approachable, supportive, and non-judgmental. When students feel comfortable and safe, they are more likely to participate actively in discussions and ask questions. You can create a welcoming environment by starting each session with a brief introduction or icebreaker activity, and by encouraging everyone to participate.

  • Start the class or session with an icebreaker question, such as “What is something interesting you learned or did over the weekend?”
  • Make yourself approachable and available for questions or concerns before and after class or session.
  • Use positive body language, such as smiling, making eye contact, and nodding to show you are actively listening.

Use a variety of teaching methods

Using a variety of teaching methods can help to keep students engaged and interested in the material. Some students may be visual learners, while others may be more auditory or kinesthetic learners. By incorporating different teaching methods such as lectures, discussions, hands-on activities, and multimedia presentations, you can cater to different learning styles and keep everyone engaged.

  • Use multimedia presentations such as PowerPoint, videos, or podcasts to supplement lectures.
  • Incorporate hands-on activities, such as group discussions, case studies, or role-playing exercises.
  • Use online educational platforms, such as Kahoot or Quizlet, to create interactive quizzes or games.

Provide feedback and praise

Providing regular feedback and praise is a great way to keep students motivated and engaged. Positive feedback can boost confidence and encourage students to continue working hard. Be specific with your feedback and praise, highlighting specific accomplishments or improvements. This can also help students understand what they are doing well and what areas they need to work on.

  • Instead of just saying “good job,” be specific with your praise. For example, “Great job on the research project! Your analysis of the data was really insightful.”
  • Provide constructive feedback that identifies areas of strength and areas for improvement.
  • Encourage students to self-reflect on their progress and set goals for the future.

Incorporate real-life examples

Incorporating real-life examples and case studies can help students see how the material they are learning applies to the real world. This can make the material more interesting and relevant, and help students understand its practical applications. You can also encourage students to share their own experiences or ask them to research real-life examples related to the topic being discussed.

  • Use current events or news articles to illustrate how the material being taught applies to real-world situations.
  • Use case studies or success stories to show how individuals or organizations have applied the concepts being taught.
  • Ask students to research and share their own examples of real-world applications related to the topic.

Encourage collaboration and group work

Collaboration and group work can help to keep students engaged and foster a sense of community in the classroom. By working together, students can learn from one another, share their perspectives, and develop teamwork skills. You can encourage collaboration by assigning group projects or activities, and by setting clear expectations for participation and teamwork.

  • Assign group projects that require students to work together to solve a problem or create a product.
  • Set clear expectations for participation and teamwork, such as assigning roles or responsibilities.
  • Provide opportunities for students to give and receive feedback to each other to improve collaboration skills.

Be flexible and adaptable

It is important to be flexible and adaptable in your teaching or mentoring style. Each student or mentee is unique and may have different learning styles, needs, and preferences. By being open to feedback and willing to adjust your teaching style as needed, you can better engage your students and help them achieve their goals.

  • Consider using different teaching methods if a particular method is not engaging students.
  • Be open to feedback from students and adjust teaching or mentoring style accordingly.
  • Modify assignments or projects to better meet the needs of individual students or mentees.

Use technology to enhance engagement

It is essential to follow technology trends and adapt to them. Whether it is embedding your slides into your Zoom background, so listeners don’t have to split their focus between the slides and your body language, or using other online services, you can do a lot to increase the value of your teaching.

  • Use interactive whiteboards or smartboards to display multimedia presentations or virtual simulations.
  • Use online platforms, such as Google Classroom or Blackboard, to post resources, assignments, and communicate with students outside of class.
  • Use educational apps or software to gamify learning and make it more engaging.

Make learning relevant to students’ interests and goals

This is about understanding your audience and adapting to them – no material is “one size fits all”. Start with enough interaction to learn their views, and make sure you apply that knowledge on the fly.

  • Incorporate students’ interests and hobbies into class discussions or projects.
  • Create assignments that are relevant to their future career goals or personal aspirations.
  • Encourage students to share their personal experiences or perspectives related to the material being taught.

Use storytelling and humor

Nothing keeps a message fresher than a well-timed joke. It can be built into the material or be ad hoc – humor and storytelling work wonders for making the material stick longer.

  • Use storytelling to illustrate complex concepts or to make a point.
  • Use humor, when appropriate, to make the material more relatable and memorable – this might be a good time to use memes.
  • Use anecdotes or personal stories to make connections between the material being taught and real-world situations.

Provide opportunities for reflection and self-assessment

Labs and other tools for trying out newly formed skills are essential to make sure the material stays with students. This does not necessarily mean that you as a teacher need to do the evaluating – providing a way to self-assess is in many cases more useful.

  • Use reflection questions or journal prompts to help students think more deeply about the material and their own learning.
  • Use self-assessment tools, such as rubrics or checklists, to help students evaluate their own work and progress.
  • Encourage students to set their own learning goals and track their progress towards achieving them.

In conclusion, actively engaging students and mentees is essential to their learning and development. There are many ways to actively engage students and mentees, including creating a welcoming environment, using a variety of teaching methods, providing feedback and praise, incorporating real-life examples, encouraging collaboration, being flexible and adaptable, using technology, making learning relevant to students’ interests and goals, using storytelling and humor, and providing opportunities for reflection and self-assessment. By incorporating these strategies, you can help your students and mentees stay motivated and interested in the material, leading to better learning outcomes and success.