As always, it is about balancing employee training and treatment. The sentiment behind the statement “train people well enough so they can leave, treat them well enough so they don’t want to” is one of balance and foresight. A company or organization that invests in its employees and helps them grow both professionally and personally has the potential to reap significant rewards. By training its workers well and creating a positive work environment, a business can increase its chances of retaining valuable employees and improve overall morale.
The first half of the statement, “train people well enough so they can leave,” implies that companies should help their employees acquire the skills and knowledge they need to succeed in their careers, regardless of whether they stay with the organization. By doing so, the company is providing them with opportunities for growth and advancement that will benefit both the employee and the company in the long run. This kind of investment in employee development can help establish the company as a desirable place to work and create a reputation for excellence in the industry.
The second half of the statement, “treat them well enough so they don’t want to,” highlights the importance of creating a positive work environment. Employees who feel valued and appreciated are more likely to be productive and motivated, and they are also less likely to leave the company in search of a better work environment. A business that provides its employees with a supportive and inclusive atmosphere, fair pay and benefits, and opportunities for growth and advancement is more likely to retain its best workers and attract new talent.
The key to success, then, is finding the right balance between training and treatment. A company that provides excellent training but fails to create a positive work environment is likely to experience high turnover, as employees seek better opportunities elsewhere. On the other hand, a company that provides a great work environment but fails to invest in employee development will likely struggle to retain its best workers as they seek out more challenging and rewarding career opportunities.
In conclusion, the statement “train people well enough so they can leave, treat them well enough so they don’t want to” underscores the importance of investing in employee development and creating a positive work environment. By doing so, companies can increase the chances of retaining their best workers, improve morale, and establish a reputation for excellence in the industry. And for me, the training I was lucky to give happened, many times, to be the TAP…
Today’s post is inspired by Matt Shuster, who asked about my opinion on DORA vs Agile pipelines. So, let’s look at the basics first: measuring the performance of an agile delivery pipeline requires a combination of metrics that focus on both efficiency and effectiveness. Here are a few metrics that are commonly used for this purpose (a small sketch after the list shows how some of them might be computed):
Lead time: The time from when a feature is requested to when it is delivered to the customer.
Cycle time: The time it takes to complete a specific task or feature from start to finish.
Throughput: The number of features delivered per unit of time.
Defect density: The number of defects per unit of delivered code.
Deployment frequency: The frequency of code releases to production.
Time to restore service: The time it takes to restore service after a production failure.
User satisfaction: Feedback from users on the quality and functionality of the delivered features.
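Here is the small sketch mentioned above; the work-item records and field names are entirely made up for illustration, and a real team would pull the equivalent data from its issue tracker:

```python
from datetime import datetime

# Hypothetical work items exported from an issue tracker; the field names
# (requested, started, delivered, defects, loc) are illustrative only.
items = [
    {"requested": "2023-01-02", "started": "2023-01-05", "delivered": "2023-01-12", "defects": 1, "loc": 400},
    {"requested": "2023-01-03", "started": "2023-01-09", "delivered": "2023-01-16", "defects": 0, "loc": 250},
    {"requested": "2023-01-10", "started": "2023-01-11", "delivered": "2023-01-20", "defects": 2, "loc": 600},
]

def days(a: str, b: str) -> int:
    return (datetime.fromisoformat(b) - datetime.fromisoformat(a)).days

lead_times = [days(i["requested"], i["delivered"]) for i in items]   # request -> delivered to customer
cycle_times = [days(i["started"], i["delivered"]) for i in items]    # work started -> finished

period_days = days(min(i["requested"] for i in items),
                   max(i["delivered"] for i in items))
throughput = len(items) / (period_days / 7)                          # features per week
defect_density = sum(i["defects"] for i in items) / (sum(i["loc"] for i in items) / 1000)  # defects per KLOC

print(f"avg lead time:  {sum(lead_times) / len(lead_times):.1f} days")
print(f"avg cycle time: {sum(cycle_times) / len(cycle_times):.1f} days")
print(f"throughput:     {throughput:.1f} features/week")
print(f"defect density: {defect_density:.1f} defects/KLOC")
```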
As with all metrics, these should be monitored regularly and used to continuously improve the delivery pipeline by identifying bottlenecks, optimizing workflows, and reducing waste. Additionally, as Agile is not one-size-fits-all, it’s important to regularly reassess and adjust the metrics used to ensure they accurately reflect the goals and priorities of the organization.
On the other hand, let’s quickly look at DORA. The DORA (Accelerate) framework is a set of four metrics that provide a comprehensive view of the performance of an organization’s software delivery process. The four metrics are:
Lead time: The time it takes to go from code committed to code successfully running in production.
Deployment frequency: The number of times per day that code is successfully deployed to production.
Mean time to recovery: The average time it takes to restore service after an incident.
Change failure rate: The percentage of changes that result in a production failure.
These metrics align well with the metrics commonly used to measure the performance of an agile delivery pipeline and can be used in a complementary manner to validate the software architecture. For example, a low lead time and high deployment frequency indicate that the delivery pipeline is efficient and streamlined, while a low change failure rate and mean time to recovery indicate that the architecture is robust and reliable.
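As an illustration only (the deployment and incident records below are hypothetical, and none of these names come from any DORA tooling), the four metrics could be computed along these lines:

```python
from datetime import datetime

def hours(a: str, b: str) -> float:
    return (datetime.fromisoformat(b) - datetime.fromisoformat(a)).total_seconds() / 3600

# Hypothetical records: each deployment notes when the change was committed,
# when it reached production, and whether it caused a failure.
deployments = [
    {"committed": "2023-02-01T09:00", "deployed": "2023-02-01T15:00", "failed": False},
    {"committed": "2023-02-01T11:00", "deployed": "2023-02-02T10:00", "failed": True},
    {"committed": "2023-02-02T08:00", "deployed": "2023-02-02T12:00", "failed": False},
]
# Hypothetical incidents: when service broke and when it was restored.
incidents = [
    {"start": "2023-02-02T10:05", "restored": "2023-02-02T10:50"},
]

days_observed = 2  # observation window (assumed)

lead_time = sum(hours(d["committed"], d["deployed"]) for d in deployments) / len(deployments)
deployment_frequency = len(deployments) / days_observed
mttr = sum(hours(i["start"], i["restored"]) for i in incidents) / len(incidents)
change_failure_rate = 100 * sum(d["failed"] for d in deployments) / len(deployments)

print(f"lead time:            {lead_time:.1f} h (commit -> production)")
print(f"deployment frequency: {deployment_frequency:.1f} deploys/day")
print(f"mean time to recover: {mttr:.1f} h")
print(f"change failure rate:  {change_failure_rate:.0f} %")
```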
I promised a bake off, so, here we are 🙂 Comparing the two: metrics used to validate a software architecture and the DORA framework provide different but complementary perspectives on the performance of an organization’s software delivery process.
On one hand, metrics such as lead time, cycle time, throughput, and defect density focus on efficiency and effectiveness of the delivery pipeline. They help to measure the time taken to complete a task, the speed at which features are delivered, and the quality of the delivered code. These metrics provide insight into the processes and workflows used in the delivery pipeline and help identify areas for improvement.
On the other hand, the DORA framework provides a comprehensive view of the performance of an organization’s software delivery process by focusing on four key metrics: lead time, deployment frequency, mean time to recovery, and change failure rate. These metrics help to measure the speed and reliability of the delivery pipeline and provide insight into the resilience and stability of the software architecture.
So, which of them to use? By using both sets of metrics together, organizations can get a complete picture of their delivery pipeline performance and identify areas for improvement in both architecture and processes. This can help ensure that the architecture supports the needs of the organization and the goals of the delivery pipeline, while also providing a way to continually assess and optimize performance over time. For example, metrics such as lead time and cycle time can highlight bottlenecks and inefficiencies in the delivery pipeline, while metrics such as change failure rate and mean time to recovery can highlight weaknesses in the architecture that may be contributing to production failures.
In summary, using metrics to validate a software architecture and using the DORA framework together provides a comprehensive view of the performance of an organization’s software delivery process and helps to identify areas for improvement in both architecture and processes. As you probably figured out, I like case studies and tools, so… here we are 🙂
Netflix: Netflix uses a combination of metrics, including lead time and cycle time, to measure the performance of its delivery pipeline. They use this data to continuously optimize their processes and improve their architecture, resulting in a highly efficient and effective delivery pipeline.
Amazon: Amazon uses a combination of metrics, including deployment frequency and mean time to recovery, to measure the performance of its delivery pipeline. By regularly monitoring these metrics, Amazon has been able to achieve a high level of reliability and stability in its software architecture, allowing them to quickly and effectively respond to incidents and restore service.
Spotify: Spotify uses a combination of metrics, including lead time and throughput, to measure the performance of its delivery pipeline. By using these metrics to continuously optimize their processes and improve their architecture, Spotify has been able to increase the speed and efficiency of its delivery pipeline, allowing them to deliver high-quality features to users faster.
Google: Google uses a combination of metrics, including lead time, deployment frequency, and mean time to recovery, to measure the performance of its delivery pipeline. By using these metrics to continuously improve its processes and architecture, Google has been able to achieve a high level of reliability and stability in its delivery pipeline, allowing it to deliver high-quality features and updates to users quickly and efficiently.
Microsoft: Microsoft uses a combination of metrics, including lead time and cycle time, to measure the performance of its delivery pipeline. By using these metrics to continuously optimize its processes and improve its architecture, Microsoft has been able to increase the speed and efficiency of its delivery pipeline, allowing it to deliver high-quality features and updates to users faster.
Shopify: Shopify uses a combination of metrics, including deployment frequency, mean time to recovery, and change failure rate, to measure the performance of its delivery pipeline. By using these metrics to continuously improve its processes and architecture, Shopify has been able to achieve a high level of reliability and stability in its delivery pipeline, allowing it to deliver high-quality features and updates to users quickly and efficiently.
Airbnb: Airbnb uses a combination of metrics, including lead time, deployment frequency, and mean time to recovery, to measure the performance of its delivery pipeline. By using these metrics to continuously improve its processes and architecture, Airbnb has been able to achieve a high level of reliability and stability in its delivery pipeline, allowing it to deliver high-quality features and updates to users quickly and efficiently.
These case studies demonstrate the importance of regularly measuring and analyzing performance metrics to validate a software architecture and improve the delivery pipeline. By using a combination of metrics and regularly reassessing and adjusting their approach, organizations can continuously improve their delivery pipeline and ensure that their architecture supports the needs of the organization and the goals of the delivery pipeline. And speaking of tools – there are various tools and software that can be used to measure the DORA framework measures. Some popular options include:
Datadog: Datadog provides real-time monitoring and analytics for cloud-scale infrastructure, applications, and logs. It can be used to track key performance indicators, including lead time, deployment frequency, mean time to recovery, and change failure rate, and generate reports and alerts based on that data.
New Relic: New Relic is a performance management platform that provides real-time visibility into application performance. It can be used to track and analyze key performance indicators, such as lead time, deployment frequency, and mean time to recovery, and generate reports and alerts based on that data.
Splunk: Splunk is a software platform for searching, analyzing, and visualizing machine-generated big data. It can be used to track and analyze key performance indicators, such as lead time, deployment frequency, and mean time to recovery, and generate reports and alerts based on that data.
AppDynamics: AppDynamics is an application performance management solution that provides real-time visibility into the performance of applications and infrastructure. It can be used to track and analyze key performance indicators, such as lead time, deployment frequency, and mean time to recovery, and generate reports and alerts based on that data.
Prometheus: Prometheus is an open-source systems monitoring and alerting toolkit. It can be used to track and analyze key performance indicators, such as lead time, deployment frequency, and mean time to recovery, and generate reports and alerts based on that data (see the sketch after this list).
InfluxDB: InfluxDB is an open-source time series database. It can be used to track and analyze key performance indicators, such as lead time, deployment frequency, and mean time to recovery, and generate reports and alerts based on that data.
Grafana: Grafana is an open-source data visualization and analysis platform. It can be used to track and analyze key performance indicators, such as lead time, deployment frequency, and mean time to recovery, and generate reports and alerts based on that data.
Nagios: Nagios is an open-source IT infrastructure monitoring solution. It can be used to track and analyze key performance indicators, such as lead time, deployment frequency, and mean time to recovery, and generate reports and alerts based on that data.
JIRA: JIRA is a project and issue tracking software. It can be used to track the lead time and cycle time of the delivery pipeline by monitoring the time it takes for work items to move through the various stages of the development process.
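Here is the sketch mentioned for the Prometheus route: a deployment pipeline could expose DORA-style counters and histograms through the prometheus_client Python library, and dashboards or alert rules could then be pointed at the resulting endpoint. The metric names and the pipeline hook are made up for this illustration and are not part of any standard:

```python
# pip install prometheus_client
import time
from prometheus_client import Counter, Histogram, start_http_server

# Hypothetical metric names - adapt them to your own pipeline.
deployments_total = Counter("deployments_total", "Successful production deployments")
failed_changes_total = Counter("failed_changes_total", "Deployments that caused a production failure")
lead_time_hours = Histogram(
    "lead_time_hours",
    "Commit-to-production lead time in hours",
    buckets=(1, 4, 8, 24, 72, 168),
)

def record_deployment(lead_time_h: float, failed: bool) -> None:
    """Hypothetical hook: call this from the deployment pipeline after each release."""
    deployments_total.inc()
    lead_time_hours.observe(lead_time_h)
    if failed:
        failed_changes_total.inc()

if __name__ == "__main__":
    start_http_server(8000)               # metrics served at http://localhost:8000/metrics
    record_deployment(6.5, failed=False)  # example release
    time.sleep(60)                        # keep the process alive so the endpoint can be scraped
```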
These are just a few examples of software tools that can be used to measure and track DORA framework metrics. The specific tool or combination of tools used will depend on the needs of the organization and the size and complexity of the delivery pipeline. For agile, we have our set of tools too; I picked only some of them (and yes, JIRA is on both lists):
Trello: Trello is a visual project management tool that can be used to track and visualize the progress of work items through the different stages of the development process.
Asana: Asana is a team collaboration tool that can be used to track and visualize the progress of work items through the different stages of the development process.
JIRA: JIRA is a project and issue tracking software that can be used to track and visualize the progress of work items through the different stages of the development process.
Clubhouse (now Shortcut): Clubhouse is a project management tool specifically designed for agile teams. It can be used to track the progress of work items through the different stages of the development process and visualize the flow of work through the delivery pipeline.
Pivotal Tracker: Pivotal Tracker is an agile project management tool that can be used to track and visualize the progress of work items through the different stages of the development process.
I hope this helped answer the DORA vs Agile metrics question, with the answer being: use both, together.
Some call it penny-pinching, some call it watching my cash-flow, but everyone is trying to avoid the price creep that might happen if you are using the cloud. An honest mistake can lead to terrible effects – and if you don’t know your own architecture well enough, this can be disastrous. So, here are some strategies to effectively avoid creeping cloud hosting prices:
Monitor usage and costs regularly: Use tools such as Azure Cost Management, AWS Cost Explorer or Google Cloud’s billing dashboard to keep track of resource utilization and costs. This will help you identify areas where you can reduce spending, such as underutilized instances or overpriced storage services.
Use reserved instances or committed use contracts: These provide a significant discount compared to on-demand pricing by committing to use a specific amount of resources for a set period of time, e.g., 1 or 3 years (see the back-of-the-envelope sketch after this list).
Take advantage of spot instances: Spot instances (Azure Spot Virtual Machines or AWS EC2 Spot Instances) are built on unused capacity that is made available at a discount to users willing to risk having their instances reclaimed when capacity is needed or the spot price rises. Yes, it requires thinking through your microservices infrastructure and your circuit breakers, but the savings can be tremendous.
Optimize your infrastructure: Right-sizing instances, using auto-scaling groups, and using managed services like Azure SQL Database or AWS RDS instead of running your own database servers can help reduce costs.
Use managed services: For me, this is probably the biggest pet peeve: using managed services like Azure Functions, AWS Lambda or Google Cloud Functions instead of running your own servers can greatly reduce costs and eliminate the need for infrastructure management. Believe it or not, you would not be able to optimize a custom solution, whether written by you or built on some third-party component, to be as cost effective as what the actual CSP has already done – they are motivated to make it cost effective so they can squeeze more onto the same hardware, it is guaranteed to be compatible with their other services, authentication works out of the box, and so on.
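Here is the back-of-the-envelope sketch promised above; the hourly rates and instance counts are invented for illustration and are not real price-list numbers:

```python
# Hypothetical hourly rates for one mid-sized VM (illustrative only, not real prices).
on_demand_rate = 0.40   # $/hour, pay-as-you-go
reserved_rate = 0.25    # $/hour equivalent with a 1-year commitment
spot_rate = 0.12        # $/hour, interruptible capacity

hours_per_month = 730
instances = 20

def monthly_cost(rate: float, count: int) -> float:
    return rate * hours_per_month * count

baseline = monthly_cost(on_demand_rate, instances)
# Example split: 12 steady instances on reservations, 8 interruptible workers on spot.
blended = monthly_cost(reserved_rate, 12) + monthly_cost(spot_rate, 8)

print(f"all on-demand:       ${baseline:,.0f}/month")
print(f"reserved + spot mix: ${blended:,.0f}/month")
print(f"estimated saving:    {100 * (1 - blended / baseline):.0f}%")
```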
You can use a combination of various tools to achieve this, just look at these examples:
Dropbox: Dropbox was able to save over 30% on its cloud hosting costs by using reserved instances and spot instances to reduce its spending on compute resources. In addition, Dropbox optimized its infrastructure to make it more cost efficient, including reducing the number of instances it was using, right-sizing its instances, and using auto-scaling groups to ensure that its resources were being used optimally. This combination of strategies allowed Dropbox to reduce its overall cloud hosting costs while maintaining the same level of service and reliability.
Capital One: Capital One reduced its cloud hosting costs by over 50% by using a combination of reserved instances, managed services, and a cost optimization program. Capital One adopted a proactive approach to cost optimization, monitoring its cloud usage and costs on a regular basis, and implementing cost optimization strategies when necessary. For example, Capital One used reserved instances to commit to a certain amount of resources over a set period of time, which allowed it to receive significant discounts compared to on-demand pricing. In addition, Capital One adopted managed services like AWS RDS and AWS Lambda to reduce the amount of infrastructure it needed to manage and maintain.
Expedia: Expedia reduced its cloud hosting costs by 20% by using reserved instances, auto-scaling, and by optimizing its infrastructure for cost efficiency. Expedia adopted a multi-pronged approach to cost optimization, which included committing to a certain amount of resources over a set period of time through reserved instances, using auto-scaling to ensure that its resources were being used optimally, and right-sizing its instances to reduce the amount of resources it was using. These strategies allowed Expedia to reduce its cloud hosting costs while maintaining the same level of service and reliability.
SoundCloud: SoundCloud reduced its cloud hosting costs by over 50% by moving from on-demand instances to reserved instances and by optimizing its infrastructure for cost efficiency. By using reserved instances, SoundCloud was able to commit to a certain amount of resources over a set period of time, which allowed it to receive significant discounts compared to on-demand pricing. In addition, SoundCloud optimized its infrastructure to reduce the amount of resources it was using and to ensure that its resources were being used optimally, which allowed it to further reduce its cloud hosting costs.
SmugMug: SmugMug reduced its cloud hosting costs by over 60% by using reserved instances, spot instances, and by optimizing its infrastructure for cost efficiency. SmugMug adopted a multi-pronged approach to cost optimization, which included using reserved instances to commit to a certain amount of resources over a set period of time, using spot instances to take advantage of unused EC2 instances that were made available at a discount, and optimizing its infrastructure to reduce the amount of resources it was using and to ensure that its resources were being used optimally. These strategies allowed SmugMug to reduce its cloud hosting costs while maintaining the same level of service and reliability.
Netflix: Netflix reduced its cloud hosting costs by over 80% through a combination of reserved instances, spot instances, and optimizing its infrastructure.
Of course, as always, it is important to note that these strategies and tools will vary based on the specific cloud provider and your use case, so it is important to carefully evaluate your options and choose the best solution for your needs. However, these case studies provide a good starting point and demonstrate the potential savings that can be achieved through a proactive approach to cloud hosting cost optimization.
There are many reasons to optimize your software architecture – you can make it cheaper to operate, cheaper to maintain, etc. So, let’s see how to do that 🙂 To optimize a software architecture, consider the following steps:
Identify performance bottlenecks: Use tools like profiling, logging, and monitoring to identify performance bottlenecks and inefficiencies in the current architecture (see the profiling sketch after this list).
Choose the right technology: Select the most appropriate technology for each component based on factors such as scalability, reliability, and maintainability.
Design for scalability: Consider design patterns such as microservices and serverless architectures to make the system scalable.
Automate testing: Use tools like automated testing frameworks to test the system and identify potential problems before they become critical issues.
Implement continuous delivery and deployment: Automate the delivery and deployment process to reduce the risk of downtime and improve the speed of delivery.
Monitor and measure: Use monitoring and analytics tools to track key metrics and make data-driven decisions about the architecture.
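Here is the profiling sketch referenced in the first step above, using Python’s built-in cProfile; the workload function is entirely made up and stands in for a real request handler or batch job:

```python
import cProfile
import pstats

# Hypothetical workload standing in for a real request handler or batch job.
def handle_request(n: int = 200_000) -> int:
    total = 0
    for i in range(n):
        total += i * i % 7
    return total

profiler = cProfile.Profile()
profiler.enable()
for _ in range(20):
    handle_request()
profiler.disable()

# Print the ten most expensive functions by cumulative time.
stats = pstats.Stats(profiler)
stats.sort_stats("cumulative").print_stats(10)
```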
The other big reason is to avoid failures. Without trying to cover all of them, here are some recent ones that were due to architectural inconsistencies and mishaps:
The 2017 Equifax data breach was caused by a failure in the company’s architecture, as they were using outdated software and had insufficient monitoring in place.
The 2010 and 2012 outages of Amazon’s S3 storage service were due to architectural limitations and lack of proper monitoring, resulting in widespread disruption and loss of data.
The 2014 AWS outage was caused by a failure in the architecture of the company’s Simple Queue Service (SQS) component, leading to widespread downtime and data loss.
The 2019 Capital One data breach was a result of a vulnerability in the company’s firewall configuration, which was caused by insufficient security testing and monitoring.
The 2013 Target data breach was caused by a failure in the company’s security architecture, as they were using outdated security software and had insufficient monitoring in place.
The 2016 Dyn DDoS attack was caused by a vulnerability in the company’s architecture, which allowed attackers to exploit a weakness in the domain name system (DNS) infrastructure and cause widespread disruption.
So, what is the solution? Besides being willing to sit down at the drawing board over and over, there is a set of tools you can use – caveat: using these tools will not save you from the drawing board, they will just make you aware of where you have to focus 🙂
Monitoring: New Relic, AppDynamics, Datadog, Nagios, Sensu, Prometheus, Zabbix, Icinga, Checkmk, etc.
So, in a nutshell – besides rolling up your sleeves and putting work into it, there is no magic wand that will help you here – each scenario and app is different, but you should know that you are not alone, and that there are tools out there to help you with the basics.
So, what is the value of blogging in 2023? Why did I not choose some other platform, like TikTok, for this presence? Blogging can be compared with other platforms such as:
Social Media platforms: Blogging is different from social media platforms such as Facebook, Twitter, and Instagram, which are primarily designed for sharing updates and interacting with others in a more personal and social context.
Video sharing platforms: Blogging is different from video sharing platforms such as YouTube, where the primary focus is on sharing video content.
News and media websites: Blogging is different from news and media websites, which focus on delivering news and information to a mass audience.
Online forums and discussion boards: Blogging is different from online forums and discussion boards, which are designed for community-based discussion and debate.
E-commerce platforms: Blogging is different from e-commerce platforms such as Amazon and Shopify, which are designed for buying and selling products online.
But blogging continues to have value in 2023 for several reasons (definitely not all of them apply to this blog 🙂):
Blogging allows for in-depth, comprehensive, and well-researched content creation, which can be useful for sharing knowledge, opinions, and expertise.
Blogs offer a platform for building a personal brand, and can establish one’s authority in a particular industry.
Blogs can drive traffic and provide a means of monetization through advertising and affiliate marketing – not this place, as I do not have any monetization on this site.
Blogs provide a permanent and accessible archive of content, whereas a platform like TikTok is more ephemeral, with a primary focus on short-form video content.
Blogging can be a source of leads and customers for businesses, especially through search engine optimization (SEO) efforts, although clearly, search is now moving away from being only website SEO driven.
Blogs provide a platform for community building and fostering engagement through comments, social shares, and email subscriptions.
Blogs can be a valuable tool for education and personal growth, with individuals able to share and reflect on their learning experiences.
Blogs can be used as a source of information and news, particularly in niche industries or communities.
Blogs can help individuals and organizations establish themselves as thought leaders and influencers in their industry or community.
Blogs can serve as a source of entertainment and inspiration, with bloggers sharing personal stories, travel experiences, and creative works.
Blogs can provide a platform for activism and social justice advocacy.
Blogs can be used to document and share personal journeys, such as weight loss or career changes.
Blogs can offer insights and advice on various topics, such as personal finance, relationships, or parenting.
Blogs can be a source of revenue for freelancers and independent content creators.
Blogs can be a valuable resource for job seekers, with many companies and organizations using blogs as a recruitment tool.
Blogs can be used as a tool for crisis communication and reputation management, with companies and organizations able to share their perspectives and respond to negative press.
Blogs can provide a platform for storytelling, allowing individuals and organizations to share their unique experiences and perspectives with a wider audience.
In a nutshell, many of the reasons above are why I chose to blog instead of being active on other platforms, although I do crosspost these posts to LinkedIn, Twitter and Facebook 🙂
Short update regarding the blog – I added some small new features based on suggestions, these are: share buttons at the bottom of the posts and the ability to subscribe to updates using the sidebar on the main page. Feel free to suggest other features!
As I have been embracing some new projects, and believing in Agile, I figured it would be valuable to be able to effectively calculate the value of what is being delivered, as this can be part of a KPI. I have seen some recent discussions on the value equation for new digital products. Some of them were based on the following:
Value = (Benefits – Costs) + (Ease of Use – Complexity) + (Trust – Risk)
Where:
Benefits: The value the product provides to the user, such as improved efficiency, convenience, or new capabilities.
Costs: The cost to the user to obtain and use the product, including monetary costs, time, and effort.
Ease of Use: The simplicity and intuitive nature of the product’s interface and functionality.
Complexity: The difficulty of using and understanding the product, as well as any necessary training or support.
Trust: The level of confidence the user has in the product, including its security, reliability, and reputation.
Risk: The potential negative consequences of using the product, such as loss of data, privacy breaches, or security vulnerabilities.
So, how to use this equation? Let’s take an example of a new digital product, a personal finance management app.
Benefits: The app helps the user track their expenses, set budgets, and manage their finances more effectively.
Costs: The app is free to download and use, but it has in-app purchases and subscriptions.
Ease of Use: The app has a user-friendly interface, easy navigation and simple to understand financial reports.
Complexity: The app is easy to set up and use, with minimal training required.
Trust: The app uses bank-level security measures to protect user data and has positive reviews and reputation in the market.
Risk: There is minimal risk associated with using the app, as user data is securely stored and there are no reported security breaches.
Which would translate to:
Value = (Effective financial management – In-app purchases and subscriptions) + (User-friendly interface – Minimal training) + (Secure and reputable – Minimal Risk)
Therefore, the value of this app is high: it provides substantial benefits such as effective financial management, it is easy to use, and it is secure and reputable, with minimal costs, complexity and risk.
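If you want something slightly more quantitative, here is a minimal sketch of the first equation with invented 0–10 scores for the finance-app example; the scores are assumptions for illustration, not measurements:

```python
# Invented 0-10 scores for the personal finance app example (illustrative only).
scores = {
    "benefits": 8,     # effective financial management
    "costs": 3,        # free app, but in-app purchases and subscriptions
    "ease_of_use": 9,  # user-friendly interface, simple reports
    "complexity": 2,   # minimal training required
    "trust": 8,        # bank-level security, good reputation
    "risk": 2,         # no reported breaches
}

def product_value(s: dict) -> int:
    # Value = (Benefits - Costs) + (Ease of Use - Complexity) + (Trust - Risk)
    return (s["benefits"] - s["costs"]) + (s["ease_of_use"] - s["complexity"]) + (s["trust"] - s["risk"])

print(f"value score: {product_value(scores)} (out of a possible 30)")
```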
If I were to replace the equation pieces with the usual items (branding, marketing, UX, onboarding and price), this would translate to (continuing the example of the personal finance management app):
Branding: The app has a strong brand that is easily recognizable and associated with financial management.
Marketing: The app is marketed effectively, with clear messaging that highlights its key features and benefits.
User Experience (UX): The app has a user-friendly interface and easy navigation that makes it simple to use.
Onboarding: The app has a smooth onboarding process that guides users through setting up and using the app effectively.
Price: The app is free to download, with in-app purchases and subscriptions available.
Value = (Strong brand + clear messaging + user-friendly interface + smooth onboarding) – In-app purchases and subscriptions
The value of this app is high as it has a strong brand, clear messaging, user-friendly interface, smooth onboarding and a reasonable pricing model, despite having in-app purchases and subscriptions.
As the first equation seems more balanced to me, I tend to use that, but I figure the second equation is the one more generally used. How do you calculate the value of a digital product?
I could say “What is technical debt and how to avoid it”, but that is less clickbaity 🙂 Working on some open source projects now (like Compose), our aim of avoiding introducing it sometimes even takes precedence over delivery velocity. Technical debt refers to the cost of maintaining and updating a codebase as it grows and evolves over time. It is often incurred when shortcuts are taken during development in order to meet deadlines or budget constraints. Some dreadful examples of technical debt include:
Using hacky or poorly-designed code that is difficult to understand or modify
Not properly commenting or documenting code
Not following established coding conventions or best practices
Not properly testing code
Not refactoring code as it becomes more complex
Fighting Technical Debt
To avoid technical debt, it is important to prioritize maintainability and readability in the codebase, as well as to properly plan and budget for ongoing maintenance and updates. Some fun strategies for avoiding technical debt include:
Following established coding conventions and best practices
Writing clear and well-commented code
Implementing automated testing and continuous integration
Regularly reviewing and refactoring code to ensure it remains maintainable and readable
Prioritizing long-term goals over short-term gains
Allowing enough time for code reviews, testing and documentation
Building and following a clear process with defined milestones
Building a culture of ownership and accountability of the codebase
By following these practices, organizations can minimize the amount of technical debt they incur, making their codebase easier to maintain and less expensive to update over time. So in my series of posts on Agile, it’s now time to see how these (debt and agile) are related.

They are closely related, as both concepts deal with the maintenance and evolution of software over time. Agile development is a methodology that emphasizes flexibility, collaboration, and rapid iteration, with a focus on delivering working software as quickly as possible. Technical debt, on the other hand, refers to the cost of maintaining and updating that software over time.

In an agile development environment, technical debt can be incurred when shortcuts are taken in order to meet deadlines or deliver features more quickly. For example, if a developer is under pressure to meet a sprint deadline, they might choose to use a hacky or poorly-designed solution that is quick to implement but difficult to maintain. This can lead to an accumulation of technical debt over time, as the codebase becomes more complex and harder to update.

However, when agile development is done correctly and with the right mindset, it can help to minimize technical debt. Agile development methodologies prioritize continuous improvement, which helps to keep the codebase maintainable and readable. Agile also encourages regular reviews and refactoring of code, which can help to identify and address issues with technical debt.
In summary, Agile development and technical debt are closely related, as both deal with the ongoing maintenance and evolution of software. Agile development methodologies can help to minimize technical debt by encouraging continuous improvement, regular reviews and refactoring, but it can also be a source of technical debt if not done properly. Therefore, it is important to balance the speed of delivery with the long-term maintainability of the codebase.
Luckily, there are several ways to fight technical debt:
Code review tools: These tools allow developers to review and approve code changes before they are merged into the codebase. Examples include Gerrit, Review Board, and Crucible, and of course the native tools of GitHub, GitLab, BitBucket and more.
Linting tools: These tools check the codebase for adherence to coding conventions and best practices. Examples include Roslyn, Resharper, ESLint and Pylint.
Refactoring tools: These tools assist developers in restructuring and reorganizing code to make it more maintainable. Examples include Resharper, Rascal and JDeodorant.
Automated testing tools: These tools automatically run tests on the codebase, ensuring that changes do not break existing functionality. Examples include XUnit, JUnit, TestNG, and Selenium.
Code coverage tools: These tools measure how much of the codebase is covered by automated tests, helping to identify areas where testing is insufficient. Examples include dotCover, Coverlet, Cobertura and Jacoco.
Code quality tools: These tools analyze the codebase and provide feedback on areas that need improvement. Examples include NDepend, SonarQube, and CodeClimate.
Dependency management tools: These tools help track the dependencies of a project and its sub-dependencies and ensure that the project is using the latest and most stable versions of the dependencies. Examples include Nuget, Maven, Gradle, and npm.
And this is the point where I am going to break – some – people. Believe it or not, sometimes technical debt can be useful too 🙂 Mainly, it allows an organization to quickly deliver a working product or feature, even if it may have some long-term maintenance costs. Some situations where technical debt might be useful include:
Rapid prototyping: When an organization needs to quickly create a prototype of a product or feature to test its feasibility, it may be acceptable to incur some technical debt in order to get the prototype up and running quickly.
Meeting tight deadlines: When a project has a tight deadline and there is not enough time to properly plan and implement all the features, taking shortcuts can be useful to deliver a working product in time.
Experimentation, Tryouts: Incurring some technical debt can be a good way to try out new technologies or features, as it allows an organization to quickly test the waters and see if they are viable before committing to a full-scale implementation.
Short-term projects: In some cases, technical debt may be acceptable for short-term projects that will be phased out or replaced in the near future.
It is important to note that technical debt should not be taken lightly, as it can quickly accumulate and become a burden on the development process. Therefore, it should be used strategically, with a clear plan to minimize and pay off the debt in the future. Furthermore, when incurring technical debt, it is important to have a clear plan for how to address and pay off that debt, and to communicate to the stakeholders both the risks of incurring it and the plan to pay it off.
So, to have a better idea of where you stand in the fight against it, you should (yes, Agile!) create KPIs around it and measure them 🙂 Technical debt can be measured in a number of ways, including:
Code review metrics: Code review metrics such as the number of comments, the number of issues raised, and the number of defects found can be used to measure the quality of the code and the ease of maintenance.
Test coverage metrics: Metrics such as test coverage, that is the percentage of the codebase that is covered by automated tests, can be used to measure the robustness and maintainability of the codebase.
Complexity metrics: Metrics such as cyclomatic complexity, which measures the complexity of a piece of code, can be used to identify areas of the codebase that are difficult to understand or maintain.
Refactoring metrics: Metrics such as the number of refactorings performed and the number of code smells identified and fixed can be used to measure the maintainability and readability of the codebase.
Technical debt ratios: These ratios express the relationship between the cost of maintaining the system and the cost of developing it. An example is the debt-to-equity ratio, which compares the estimated cost of paying off technical debt to the estimated value of the system after the debt is paid off (a small illustrative sketch follows below).
User feedback: User feedback on the ease of use, the speed and the reliability of the system can be used to measure the impact of the technical debt on the end-user – nothing beats a good hallway testing.
It is worth noting that there is no one-size-fits-all approach to measuring technical debt, and it may be necessary to use a combination of different metrics to get a comprehensive understanding of the state of the codebase. Additionally, measuring technical debt is not a one-time exercise; it should be done regularly, so that progress can be tracked and improvements can be made over time.
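Here is the small illustrative sketch mentioned above for a technical-debt ratio; the backlog items, remediation hours, and the 5% threshold are all invented, while tools such as SonarQube derive comparable ratios from static analysis rules rather than a hand-curated list:

```python
# Invented backlog of known debt items and their estimated remediation effort (hours).
debt_items = {
    "missing tests for billing module": 40,
    "duplicated validation logic": 20,
    "undocumented public API": 12,
    "hacky retry loop in importer": 8,
}

remediation_hours = sum(debt_items.values())
development_hours = 1200  # estimated effort to build the system so far (assumed)

# Debt ratio: cost of fixing the known debt relative to the cost of building the system.
debt_ratio = remediation_hours / development_hours

print(f"remediation effort: {remediation_hours} h")
print(f"debt ratio:         {debt_ratio:.1%}")
if debt_ratio > 0.05:
    print("debt ratio above the (arbitrary) 5% threshold - plan pay-down work")
```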
I’ll ask for your help, reader. I submitted an idea for a Special Interest Group in FINOS, which is to focus on Emerging Technologies – and I am looking for (special) interest in it 🙂
But you might ask – what benefits does using emerging technologies like spatial computing, quantum, etc. have for financial companies? Believe it or not, these technologies do have the potential to bring significant benefits to financial companies. Like:
Spatial computing, which involves the integration of virtual and augmented reality into everyday computing, can be used for financial companies to create immersive and interactive experiences for customers. For example, spatial computing can be used to create virtual branches, enabling customers to interact with banking services in a more natural and intuitive way.
Quantum computing has the potential to revolutionize the financial industry by providing much faster and more efficient ways to process complex financial data. Quantum computing can be used for financial companies to perform complex calculations, such as risk analysis, portfolio optimization, and fraud detection, much faster than traditional computers. Additionally, quantum computing can be used for secure communication and data encryption, which is important for financial companies in terms of security.
There are many other technologies that can be used, either specifically at a financial company or with benefits that reach the company through other means:
Artificial Intelligence: The development of systems that can perform tasks that would normally require human intelligence, such as learning, reasoning, and perception. Applications: predictive analytics, fraud detection, customer service automation.
Blockchain: A decentralized and distributed digital ledger used to record transactions across a network of computers. Applications: secure financial transactions, digital identity verification, supply chain management.
Internet of Things (IoT): The interconnectedness of everyday devices, such as smartphones, appliances, and vehicles, through the internet. Applications: smart cities, predictive maintenance, energy management.
Robotics and Robotic Process Automation (RPA): The use of machines to perform tasks that would normally require human intervention. Applications: automation of repetitive tasks, precision manufacturing, healthcare; automation of back-office tasks, process optimization, cost reduction.
Spatial Computing (also covered above): The use of computer-generated simulations to create immersive or interactive experiences. Applications: training, simulation, entertainment, education.
Quantum Computing (also covered above): A type of computing that uses quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. Applications: drug discovery, optimization, cryptography.
Neural Links: The implantation of electronic devices into the human brain, to enhance cognitive abilities or to control prosthetic limbs. Applications: medical treatment, brain-computer interfaces, cognitive enhancement.
Space Technologies: Advancements in satellite, rocket and space exploration technologies. Applications: Earth observation, telecommunications, scientific research, space tourism.
4D Printing: 3D printing with the added dimension of time, allowing the printed objects to change shape or properties over time.
Cloud Computing: The delivery of computing services, including storage, processing and software, over the internet. Applications: scalability, cost savings, data security, and business continuity.
Note that this is not a comprehensive list and there are many other emerging technologies that have the potential to disrupt various industries. The specific applications may vary depending on the particular technology and industry. And yes, many of these are now well-established technologies – but if I had given you this list a decade or two ago, you would not have said that many of them would ever become reality.
Nevertheless, the way I see it, these technologies have the potential to help financial companies increase efficiency, improve customer experiences, and enhance security, as well as providing opportunities for new revenue streams or business models – hence, for finding the open standards, for finding the common ground, and for understanding the regulatory and other implications, there is a clear benefit to having the Special Interest Group mentioned at the beginning.
To help start up the group, I wrote a set of ‘primers’ which can support its initial phase:
I already wrote about Agile earlier, but it is a rather important topic for me (I was involved with Agile Alliance very early on). And one of my favorite discussions on the topic is the value of estimation. Whether you are aiming to estimate to the hour, or doing only T-shirt estimation, you do know that estimation is an important part of agile processes because it helps teams plan and manage their work effectively. Estimations are used to determine the amount of time and effort required to complete a task or project, which helps teams prioritize their work and make informed decisions about how to allocate resources. Additionally, estimations help teams set realistic expectations and deadlines for themselves and their stakeholders. In agile methodologies like Scrum and Kanban, estimations are done using techniques such as story points, which are relative measures of complexity rather than absolute measures of time or effort. This allows teams to adapt to changes and uncertainty more easily.
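As a tiny illustration of how story-point estimates typically feed planning (the backlog and the historical velocities below are invented for the example), a team might forecast how many sprints a backlog needs like this:

```python
import math

# Invented backlog: remaining user stories and their story-point estimates.
backlog_points = [3, 5, 8, 2, 5, 13, 3, 8]

# Invented history: story points actually completed in the last few sprints.
recent_velocities = [18, 22, 12, 20]

remaining = sum(backlog_points)
avg_velocity = sum(recent_velocities) / len(recent_velocities)
worst_velocity = min(recent_velocities)

print(f"remaining work:     {remaining} points")
print(f"likely finish:      {math.ceil(remaining / avg_velocity)} sprints (average velocity)")
print(f"pessimistic finish: {math.ceil(remaining / worst_velocity)} sprints (slowest recent sprint)")
```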
On the other hand, there is a tribe of #NoEstimation practitioners, who say that the value of “no estimation” is that it can eliminate the need for time-consuming and potentially inaccurate estimation efforts, allowing teams to focus on delivering working software and satisfying customer needs. It also promotes a culture of continuous improvement, where teams focus on delivering small, incremental changes and learning from them, rather than trying to predict the future. In the “no estimation” approach, the team focuses on delivering smaller chunks of work, called “Minimum Viable Products” (MVPs), which provide the maximum value with minimum effort. This approach is based on the assumption that the customer or stakeholders don’t know exactly what they want, and the team should deliver small increments to gather feedback and iterate on the product. This way, the team can deliver the product with more accuracy and customer satisfaction. This approach is more suitable for companies that have a high degree of uncertainty and volatility in their environment and for projects that have a high degree of innovation.
A quick overview of the pros and cons for each:
Estimation – Pros:
Helps teams plan and manage their work effectively
Allows teams to prioritize their work and make informed decisions about resource allocation
Helps teams set realistic expectations and deadlines for themselves and their stakeholders
Useful for making decisions about project scope and feasibility
Estimation – Cons:
Can be time-consuming and potentially inaccurate
Can lead to unrealistic expectations and deadlines
Can be difficult to adapt to changes and uncertainty
Can lead to over-engineering or gold-plating
No Estimation – Pros:
Eliminates the need for time-consuming and potentially inaccurate estimation efforts
Promotes a culture of continuous improvement and learning
Suitable for projects with a high degree of innovation and uncertainty
Focuses on delivering smaller chunks of work, called “Minimum viable products” (MVP)
No Estimation – Cons:
May not be suitable for projects with well-defined requirements and constraints
May require a significant change in mindset and approach for some teams
May result in a lack of long-term planning or goals
May lead to delays if the customer or stakeholders need specific features or functionalities
So, where can the ideas of Estimation vs #NoEstimation be applied?
Estimation in a software development project: A software development team is tasked with building a new application for a client. The team uses agile methodologies such as Scrum, and they estimate the amount of time and effort required to complete each user story using story points. This allows them to prioritize their work, set realistic deadlines, and manage their resources effectively. However, this process can be time-consuming and may not always be accurate, especially if the requirements change or new information comes to light.
No Estimation in a product development project: A product development team is working on a new IoT device. They are using a “no estimation” approach. They are focused on delivering small, incremental changes and gathering feedback from customers and stakeholders. They deliver a Minimum Viable Product (MVP) that provides the maximum value with minimum effort. They are not trying to predict the future and are continuously improving their product based on customer feedback. This approach allows them to adapt to changes and uncertainty more easily, but it may not be suitable if the project has well-defined requirements and constraints.
Estimation in a construction project: A construction company is building a new skyscraper. They use estimation to determine the amount of time and resources required to complete the project. They use Gantt charts and the critical path method to plan and manage their work. However, this approach can be difficult to adapt to changes in weather, materials, or other factors, and it may lead to unrealistic expectations and deadlines.
No estimation in a research project: A research team is working on a new medical treatment. They use a “no estimation” approach and focus on conducting small experiments and gathering data. They are not trying to predict the outcome; instead, they are continuously learning and improving their research based on the data they gather. This approach allows them to adapt to new information and uncertainty more easily, but it may not be suitable if the project has specific deliverables and deadlines.
So, no, I haven’t given a straight answer on estimation, as there isn’t one. Agile is what you make of it; it is NOT one-size-fits-all 🙂
This is a topic I have touched on a few times earlier. In the current era, with many firms going through a harder period, in many cases resulting in layoffs, let’s see what I think makes a good leader. A good leader typically possesses some (most? all?) of the following characteristics:
Vision: They have a clear idea of where they want to take their organization or team and are able to effectively communicate it to others. Example: Steve Jobs and his vision for Apple, or Martin Luther King Jr. and his vision for a more just and equal society for African Americans in the United States.
Integrity: They are honest, ethical, and transparent in their actions and decisions. Example: Nelson Mandela and his commitment to fighting for justice and equality in South Africa, or Malala Yousafzai and her commitment to promoting education for girls and women, despite facing threats and violence.
Decisiveness: They are able to make difficult decisions quickly and effectively. Example: Winston Churchill and his leadership during World War II or Abraham Lincoln and his leadership during the Civil War and the abolition of slavery in the United States.
Emotional intelligence: They are able to understand and manage their own emotions, as well as the emotions of others. Example: Mahatma Gandhi and his ability to inspire peaceful civil disobedience or Mother Teresa and her ability to empathize with and serve the poorest of the poor in India.
Adaptability: They are able to adjust their approach and strategy as needed to achieve their goals. Example: Jack Welch and his ability to transform General Electric into a more efficient and profitable company (inventing Reverse Mentoring) or Barack Obama and his ability to navigate the complex political landscape and successfully implement policies such as the Affordable Care Act.
Strong communication skills: They can effectively communicate their vision and ideas to others, and also actively listen to feedback. Example: Jeff Bezos and his ability to communicate his vision for Amazon (I try to use many of his tricks like the year-start year-end memo) or Oprah Winfrey and her ability to connect with her audience and share her message through her talk show and media empire.
Empowerment: They give their team members the autonomy and resources they need to succeed. Example: Bill Gates and his leadership at Microsoft or Sir Richard Branson and his leadership style that focuses on giving his team members autonomy and support to achieve their goals.
Creativity: They are able to think outside the box and come up with new and innovative solutions. Example: Elon Musk and his companies SpaceX and Tesla or Steve Wozniak and his role in co-founding Apple and developing the first successful personal computer, the Apple I, or Andras Velvart, spatial computing and artificial intelligence ideaman.
Passion: They are passionate about their work and this enthusiasm is contagious. Example: Mark Zuckerberg and his passion for connecting people through Facebook or, again, Sir Richard Branson and his diverse business ventures in multiple industries, driven by his passion for entrepreneurship
Humility: They are self-aware, and comfortable in acknowledging their own mistakes and learning from them. Example: Sheryl Sandberg and her leadership at Facebook and LeanIn.org, or, again, Mark Zuckerberg, and his willingness to admit mistakes and take accountability for them, such as during the Cambridge Analytica scandal.
I won’t bring examples for the coming section 🙂 So, there are particular traits you will want to avoid if you want to be considered a good leader.
Arrogance: An overbearing sense of self-importance that can alienate others and make it difficult to work with them.
Lack of empathy: An inability to understand or care about the needs and feelings of others.
Indecision: A tendency to avoid making decisions or being unable to make them in a timely manner.
Lack of transparency: Being dishonest or withholding information from team members and stakeholders.
Inflexibility: An inability or unwillingness to adapt to changing circumstances or incorporate new ideas.
Poor communication: Inability to effectively communicate goals, instructions and feedback, or actively listen to others.
Micromanagement: An excessive need to control every aspect of a project or team, which can stifle creativity and innovation.
Self-interest: Prioritizing personal gain over the well-being of the team or organization.
Lack of accountability: Failure to take responsibility for one’s actions and decisions.
Lack of emotional intelligence: An inability to control one’s emotions or manage the emotions of others.