Does the use of video conferencing enhance remote work?

During the pandemic, everyone tried and: enjoyed / hated (underline the right one) doing video conferences. I consider myself an early pioneer of the tech: back in 2010 I requested one of the first cameras in the firm to enable better collaboration between the New York, India and Budapest teams. So, does the use of video conferencing truly enhance remote work? Should you call out your colleague who has not turned on their camera?

So, yes, video conferencing can enhance remote work by providing a way for remote workers to have face-to-face communication, collaborate on projects in real-time, and build stronger relationships with their colleagues. Video conferencing technology helps to overcome the lack of physical proximity and provides a sense of connection and engagement that is crucial for remote teams. However, it’s important to note that the success of video conferencing in enhancing remote work also depends on various factors such as the quality of the technology, internet connection, and the cultural and organizational support for remote work.

But as in most cases, there are two sides to the coin. While video conferencing can have benefits for remote work, it can also have some drawbacks. The following are some reasons why video conferencing may not enhance remote work:

  • Technical Issues: Video conferencing can be hindered by technical problems such as poor internet connectivity, audio and video quality, and compatibility issues. This can lead to frustration and decreased productivity for remote workers.
  • Lack of Privacy: Video conferencing can be intrusive and can make it difficult for remote workers to find the privacy they need to focus on their work. Additionally, remote workers may not feel comfortable having personal conversations or performing sensitive tasks on a video call.
  • Excessive Screen Time: Video conferencing can increase screen time, leading to eye strain and other physical problems. This can be especially challenging for remote workers who may be required to participate in multiple video calls throughout the day.
  • Culture & Interpersonal Challenges: Video conferencing can also create cultural and interpersonal challenges, especially for remote workers who may not be familiar with the norms and expectations of virtual communication. This can lead to miscommunication and decreased team morale.

In conclusion, while video conferencing can have benefits for remote work, it’s important to be mindful of its potential drawbacks and to take steps to minimize them in order to enhance remote work for all team members.

On the other hand (yes, it’s like a pendulum πŸ˜€ ), there are several reasons why you should turn on your camera during a remote video conference call:

  • Improved Communication: Having a visual component to the call helps to build stronger connections and facilitates better communication between participants. It allows for nonverbal cues such as facial expressions and body language to be visible, which can enhance understanding and foster a sense of presence.
  • Increased Engagement: Turning on your camera can increase engagement and participation in the call, as it allows you to be more present and attentive to the conversation.
  • Better Team Dynamics: Seeing each other on camera can help to build a more personal connection and improve team dynamics. This can be particularly important for remote teams who may not have the opportunity to interact in person.
  • Professionalism: In a professional setting, turning on your camera can demonstrate that you are fully engaged and respectful of others’ time.
  • Technical Considerations: In some cases, video conferencing software may require you to turn on your camera in order to access certain features or participate in certain activities.

Note: It’s important to consider the technical requirements and cultural norms of your team, as well as personal preferences and privacy concerns, when deciding whether or not to turn on your camera during a remote video conference call. Speaking of norms, similarly to how chat has its norms (see https://www.nohello.com/ ), video conferencing etiquette is a set of guidelines that help to ensure that remote meetings are productive, professional, and respectful of everyone’s time and space. Here are some video conferencing etiquette dos and don’ts:

Dos:
  • Test your technology before the call to ensure it is working properly.
  • Dress appropriately as you would for an in-person meeting.
  • Turn on your camera, if possible, to improve communication and engagement.
  • Mute your microphone when you are not speaking to reduce background noise.
  • Keep your surroundings tidy and professional looking.
  • Arrive on time and be prepared for the meeting.
  • Use the chat feature to share any important information or documents.
  • Be attentive and actively participate in the conversation.
  • Speak clearly and use appropriate language and tone.
  • End the call promptly when it has concluded.

Don’ts:
  • Do not multitask during the call, it shows lack of engagement and respect.
  • Do not eat or drink during the call.
  • Do not interrupt others when they are speaking.
  • Do not engage in personal or off-topic conversations.
  • Do not use inappropriate or offensive language.
  • Do not share confidential or sensitive information.
  • Do not use a distracting background or have a messy room in the background.
  • Do not ignore technical difficulties or dismiss them as unimportant.
  • Do not use your phone or other devices during the call.
  • Do not end the call abruptly or without warning.

Did I commit all of the don’ts at least once in my lifetime? Most probably – yes πŸ˜€ In conclusion, by following these video conferencing etiquette dos and don’ts, you can help to ensure that remote meetings are productive, professional, and respectful of everyone’s time and space. With the video on, it is probably even more important to have an actual agenda, to start and finish each section on time, and not to run over. Share materials beforehand, as you cannot be sure that the device the other side is connecting through would display yours adequately. Have your name and – if you wish – your pronouns in your display name; no one likes to address “iPhone9”. And try to engage more: polls, an active chat, Q&A and the like tend to be useful.

Of course, this is not an area without research. Research studies (like this HBR) on the use of cameras for remote calls have shown mixed results. Some studies have found that the use of cameras in remote calls can enhance communication, build stronger relationships, and increase engagement and participation. For example, a study by the University of California, Irvine found that video conferencing improved nonverbal communication, leading to more accurate understanding and more effective collaboration.

On the other hand, some studies have shown that the use of cameras in remote calls can be intrusive and can lead to increased self-consciousness, decreased comfort, and reduced communication quality. For example, a study by the University of Haifa found that remote participants who had their cameras turned on felt more self-conscious and reported lower levels of comfort and privacy compared to participants who had their cameras turned off.

So, I don’t think there is an easy answer for this question either…

The value of HATEOAS compared to traditional REST

During a discussion on how to return values from an API, I told someone to just use HATEOAS, and got a reaction like “and what is that?”. HATEOAS (Hypermedia as the Engine of Application State) is a constraint of REST APIs that adds an extra layer of information to the responses, allowing clients to dynamically discover and navigate the API. HATEOAS provides a more flexible and discoverable API compared to traditional REST queries that require clients to hardcode URLs for each endpoint.

Many people’s reaction to the above is “why not use GraphQL then?”. The answer is that HATEOAS and GraphQL serve different purposes. GraphQL provides a more efficient and flexible way of querying and manipulating data from an API, allowing clients to retrieve exactly the data they need in a single request. HATEOAS, on the other hand, focuses on providing a discoverable API and enabling clients to navigate it dynamically, without having to hardcode the API’s structure.

When compared to REST, traditional REST queries typically require the client to know the exact URL of the endpoint they want to request data from, and the client needs to send separate requests to retrieve different pieces of data. This can lead to inefficiencies, as the client may need to make multiple round trips to the server to gather all the data it needs.

In contrast, HATEOAS provides additional information in the API response that allows the client to dynamically discover and navigate the API, making the API more flexible and discoverable compared to traditional REST queries. In the same way, when calling an API to create an object, instead of returning just the new ID or the full object with all its details, it returns the REST URL that you can call to get details about the object – you get more than the ID, and less than the object πŸ˜€
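To make this concrete, here is a minimal sketch of what that flow could look like from a client’s point of view. The https://api.example.com root, the accounts resource and the HAL-style _links shape are made up for illustration; they are not any specific vendor’s API.

```python
# A minimal, hypothetical sketch: creating a resource against a HATEOAS-style
# API and following the links the server returns instead of hardcoding URLs.
# The base URL, the "accounts" resource and the HAL-like "_links" shape are
# all invented for illustration.
import requests

BASE_URL = "https://api.example.com"  # hypothetical API root

def create_account(payload: dict) -> dict:
    """POST a new account and return the hypermedia response body."""
    response = requests.post(f"{BASE_URL}/accounts", json=payload, timeout=10)
    response.raise_for_status()
    # Expected (assumed) shape:
    # {"id": "123", "_links": {"self": {"href": "https://api.example.com/accounts/123"},
    #                          "transactions": {"href": ".../accounts/123/transactions"}}}
    return response.json()

def follow(resource: dict, relation: str) -> dict:
    """Follow a named link relation advertised by the server."""
    href = resource["_links"][relation]["href"]
    detail = requests.get(href, timeout=10)
    detail.raise_for_status()
    return detail.json()

if __name__ == "__main__":
    created = create_account({"owner": "Alice", "currency": "USD"})
    # The client never builds the detail URL itself; it uses what the server returned.
    print(follow(created, "self"))
```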

Similarly, GraphQL provides a more efficient and flexible way of querying data compared to traditional REST queries. With GraphQL, the client can specify exactly what data it needs in a single request, reducing the number of round trips to the server and enabling the client to retrieve only the data it requires.

Overall, both HATEOAS and GraphQL offer improvements over traditional REST queries, with HATEOAS focusing on providing a more flexible and discoverable API and GraphQL providing a more efficient way of querying data.

Various languages have libraries that make producing and navigating such responses easy – however, you should note that this list is not exhaustive, and other libraries may also be available. Additionally, HATEOAS support can often be implemented manually in any language, as it is a design constraint rather than a specific technology.

  • C#: WebApi.Hal, HAL-CS
  • Java: Spring HATEOAS, JHAL
  • Python: Django Rest Framework, Tastypie

HATEOAS is useful in situations where the API clients need to discover and navigate the API dynamically, without having to hardcode the API’s structure. Here are a few examples and use cases:

  • SPA (Single Page Application) – An SPA is a web application that updates its content dynamically, without reloading the entire page. HATEOAS can be used to provide the necessary information to the SPA so that it can dynamically navigate and discover the API, without having to hardcode the URLs.
  • Mobile Applications – Mobile applications often have limited connectivity and may not have the latest information about the API. HATEOAS can be used to provide the necessary information to the mobile application so that it can dynamically discover and navigate the API, even when the network is slow or unreliable.
  • Microservices – In a microservices architecture, the APIs between services are often changed and updated frequently. HATEOAS can be used to provide the necessary information to the client so that it can dynamically discover and navigate the API, even when the underlying implementation changes.
  • Versioning – When a new version of an API is released, the URLs and endpoint names may change. HATEOAS can be used to provide the necessary information to the client so that it can dynamically discover and navigate the latest version of the API, without having to hardcode the URLs.

HATEOAS is a widely adopted constraint in RESTful API design and has been used by many companies and organizations. Here are a few examples – many other companies and organizations also use HATEOAS in their APIs. However, specific case studies are not readily available as the usage of HATEOAS is often part of the internal implementation details of an API and is not widely publicized.

  • Amazon Web Services (AWS) – AWS uses HATEOAS in many of its APIs, such as the Amazon S3 API, to allow clients to dynamically discover and navigate the API.
  • Netflix – Netflix uses HATEOAS in its APIs to allow clients to dynamically discover and navigate the API and to improve the discoverability and flexibility of its APIs.
  • Salesforce – Salesforce uses HATEOAS in its APIs to allow clients to dynamically discover and navigate the API and to improve the usability and efficiency of its APIs.
  • Twitter – Twitter uses HATEOAS in its APIs to allow clients to dynamically discover and navigate the API and to improve the usability and efficiency of its APIs.

“Train people well enough so they can leave. Treat them well enough so they don’t want to”

Like always, it is about balancing employee training and treatment. The sentiment behind the statement “train people well enough so they can leave, treat them well enough so they don’t want to” is one of balance and foresight. A company or organization that invests in its employees and helps them grow both professionally and personally has the potential to reap significant rewards. By training its workers well and creating a positive work environment, a business can increase its chances of retaining valuable employees and improve overall morale.

The first half of the statement, “train people well enough so they can leave,” implies that companies should help their employees acquire the skills and knowledge they need to succeed in their careers, regardless of whether they stay with the organization. By doing so, the company is providing them with opportunities for growth and advancement that will benefit both the employee and the company in the long run. This kind of investment in employee development can help establish the company as a desirable place to work and create a reputation for excellence in the industry.

The second half of the statement, “treat them well enough so they don’t want to,” highlights the importance of creating a positive work environment. Employees who feel valued and appreciated are more likely to be productive and motivated, and they are also less likely to leave the company in search of a better work environment. A business that provides its employees with a supportive and inclusive atmosphere, fair pay and benefits, and opportunities for growth and advancement is more likely to retain its best workers and attract new talent.

The key to success, then, is finding the right balance between training and treatment. A company that provides excellent training but fails to create a positive work environment is likely to experience high turnover, as employees seek better opportunities elsewhere. On the other hand, a company that provides a great work environment but fails to invest in employee development will likely struggle to retain its best workers as they seek out more challenging and rewarding career opportunities.

In conclusion, the statement “train people well enough so they can leave, treat them well enough so they don’t want to” underscores the importance of investing in employee development and creating a positive work environment. By doing so, companies can increase the chances of retaining their best workers, improve morale, and establish a reputation for excellence in the industry. And for me, the training I was lucky to give many times happened to be the TAP.

DORA and Agile – a bake off of delivery pipeline measurement techniques

Today’s post is inspired by Matt Shuster, who asked about my opinion on DORA vs Agile pipelines. So, let’s see the basics first; measuring the performance of an agile delivery pipeline requires a combination of metrics that focus on both efficiency and effectiveness. Here are a few metrics that are commonly used for this purpose:

  • Lead time: The time from when a feature is requested to when it is delivered to the customer.
  • Cycle time: The time it takes to complete a specific task or feature from start to finish.
  • Throughput: The number of features delivered per unit of time.
  • Defect density: The number of defects per unit of delivered code.
  • Deployment frequency: The frequency of code releases to production.
  • Time to restore service: The time it takes to restore service after a production failure.
  • User satisfaction: Feedback from users on the quality and functionality of the delivered features.

As with all metrics, these should be regularly monitored and used to continuously improve the delivery pipeline by identifying bottlenecks, optimizing workflows, and reducing waste. Additionally, as Agile is not one-size-fits-all, it’s important to regularly reassess and adjust the metrics used to ensure they accurately reflect the goals and priorities of the organization.
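As a simplified illustration of the first three metrics above (lead time, cycle time and throughput), here is a small sketch of how they could be computed from exported work items; the WorkItem fields are invented, and in practice they would map to whatever dates your own tracker exposes.

```python
# A minimal sketch: computing lead time, cycle time and throughput from
# exported work items. The WorkItem fields are invented; map them to the
# dates your own tracker exposes.
from dataclasses import dataclass
from datetime import date
from statistics import mean

@dataclass
class WorkItem:
    requested: date   # when the feature was asked for
    started: date     # when work actually began
    delivered: date   # when it reached the customer

def lead_time_days(items: list[WorkItem]) -> float:
    """Average request-to-delivery time."""
    return mean((i.delivered - i.requested).days for i in items)

def cycle_time_days(items: list[WorkItem]) -> float:
    """Average start-to-finish time."""
    return mean((i.delivered - i.started).days for i in items)

def throughput_per_week(items: list[WorkItem], weeks: float) -> float:
    """Delivered items per week over the observed period."""
    return len(items) / weeks

items = [
    WorkItem(date(2023, 1, 2), date(2023, 1, 9), date(2023, 1, 20)),
    WorkItem(date(2023, 1, 5), date(2023, 1, 16), date(2023, 1, 27)),
]
print(lead_time_days(items), cycle_time_days(items), throughput_per_week(items, weeks=4))
```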

On the other hand, let’s quickly look at DORA. The DORA (Accelerate) framework is a set of four metrics that provide a comprehensive view of the performance of an organization’s software delivery process. The four metrics are:

  • Lead time: The time it takes to go from code committed to code successfully running in production.
  • Deployment frequency: The number of times per day that code is successfully deployed to production.
  • Mean time to recovery: The average time it takes to restore service after an incident.
  • Change failure rate: The percentage of changes that result in a production failure.

These metrics align well with the metrics commonly used to measure the performance of an agile delivery pipeline and can be used in a complementary manner to validate the software architecture. For example, a low lead time and high deployment frequency indicate that the delivery pipeline is efficient and streamlined, while a low change failure rate and mean time to recovery indicate that the architecture is robust and reliable.
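As a similarly minimal sketch, here is how the four DORA keys could be computed from deployment and incident records, assuming you can export them from your CI/CD tool and incident tracker; the record shapes and numbers below are invented for illustration.

```python
# A minimal sketch: the four DORA keys computed from deployment and incident
# records. The record shapes and the numbers are invented; in practice they
# would come from your CI/CD tool and incident tracker.
from datetime import datetime
from statistics import mean

deployments = [
    # (commit time, time the change was live in production, caused a failure?)
    (datetime(2023, 2, 1, 9, 0),  datetime(2023, 2, 1, 15, 0), False),
    (datetime(2023, 2, 2, 10, 0), datetime(2023, 2, 2, 13, 0), True),
    (datetime(2023, 2, 3, 11, 0), datetime(2023, 2, 3, 12, 30), False),
]
incidents = [
    # (start of outage, service restored)
    (datetime(2023, 2, 2, 13, 5), datetime(2023, 2, 2, 14, 20)),
]
observed_days = 3

lead_time_hours = mean((live - commit).total_seconds() / 3600
                       for commit, live, _ in deployments)
deployment_frequency = len(deployments) / observed_days              # deploys per day
change_failure_rate = sum(failed for *_, failed in deployments) / len(deployments)
mean_time_to_recovery_hours = mean((restored - start).total_seconds() / 3600
                                   for start, restored in incidents)

print(lead_time_hours, deployment_frequency, change_failure_rate, mean_time_to_recovery_hours)
```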

I promised a bake-off, so here we are πŸ™‚ The short version of the comparison is that the agile delivery metrics above and the DORA framework provide different but complementary perspectives on the performance of an organization’s software delivery process.

On one hand, metrics such as lead time, cycle time, throughput, and defect density focus on efficiency and effectiveness of the delivery pipeline. They help to measure the time taken to complete a task, the speed at which features are delivered, and the quality of the delivered code. These metrics provide insight into the processes and workflows used in the delivery pipeline and help identify areas for improvement.

On the other hand, the DORA framework provides a comprehensive view of the performance of an organization’s software delivery process by focusing on four key metrics: lead time, deployment frequency, mean time to recovery, and change failure rate. These metrics help to measure the speed and reliability of the delivery pipeline and provide insight into the resilience and stability of the software architecture.

So, which of them to use? By using both sets of metrics together, organizations can get a complete picture of their delivery pipeline performance and identify areas for improvement in both architecture and processes. This can help ensure that the architecture supports the needs of the organization and the goals of the delivery pipeline, while also providing a way to continually assess and optimize performance over time. For example, metrics such as lead time and cycle time can highlight bottlenecks and inefficiencies in the delivery pipeline, while metrics such as change failure rate and mean time to recovery can highlight weaknesses in the architecture that may be contributing to production failures.

In summary, using metrics to validate a software architecture and using the DORA framework together provides a comprehensive view of the performance of an organization’s software delivery process and helps to identify areas for improvement in both architecture and processes. As you probably figured out, I like case studies and tools, so… here we are πŸ™‚

  • Netflix: Netflix uses a combination of metrics, including lead time and cycle time, to measure the performance of its delivery pipeline. They use this data to continuously optimize their processes and improve their architecture, resulting in a highly efficient and effective delivery pipeline.
  • Amazon: Amazon uses a combination of metrics, including deployment frequency and mean time to recovery, to measure the performance of its delivery pipeline. By regularly monitoring these metrics, Amazon has been able to achieve a high level of reliability and stability in its software architecture, allowing them to quickly and effectively respond to incidents and restore service.
  • Spotify: Spotify uses a combination of metrics, including lead time and throughput, to measure the performance of its delivery pipeline. By using these metrics to continuously optimize their processes and improve their architecture, Spotify has been able to increase the speed and efficiency of its delivery pipeline, allowing them to deliver high-quality features to users faster.
  • Google: Google uses a combination of metrics, including lead time, deployment frequency, and mean time to recovery, to measure the performance of its delivery pipeline. By using these metrics to continuously improve its processes and architecture, Google has been able to achieve a high level of reliability and stability in its delivery pipeline, allowing it to deliver high-quality features and updates to users quickly and efficiently.
  • Microsoft: Microsoft uses a combination of metrics, including lead time and cycle time, to measure the performance of its delivery pipeline. By using these metrics to continuously optimize its processes and improve its architecture, Microsoft has been able to increase the speed and efficiency of its delivery pipeline, allowing it to deliver high-quality features and updates to users faster.
  • Shopify: Shopify uses a combination of metrics, including deployment frequency, mean time to recovery, and change failure rate, to measure the performance of its delivery pipeline. By using these metrics to continuously improve its processes and architecture, Shopify has been able to achieve a high level of reliability and stability in its delivery pipeline, allowing it to deliver high-quality features and updates to users quickly and efficiently.
  • Airbnb: Airbnb uses a combination of metrics, including lead time, deployment frequency, and mean time to recovery, to measure the performance of its delivery pipeline. By using these metrics to continuously improve its processes and architecture, Airbnb has been able to achieve a high level of reliability and stability in its delivery pipeline, allowing it to deliver high-quality features and updates to users quickly and efficiently.

These case studies demonstrate the importance of regularly measuring and analyzing performance metrics to validate a software architecture and improve the delivery pipeline. By using a combination of metrics and regularly reassessing and adjusting their approach, organizations can continuously improve their delivery pipeline and ensure that their architecture supports the needs of the organization and the goals of the delivery pipeline. And speaking of tools – there are various tools and software that can be used to measure the DORA framework measures. Some popular options include:

  • Datadog: Datadog provides real-time monitoring and analytics for cloud-scale infrastructure, applications, and logs. It can be used to track key performance indicators, including lead time, deployment frequency, mean time to recovery, and change failure rate, and generate reports and alerts based on that data.
  • New Relic: New Relic is a performance management platform that provides real-time visibility into application performance. It can be used to track and analyze key performance indicators, such as lead time, deployment frequency, and mean time to recovery, and generate reports and alerts based on that data.
  • Splunk: Splunk is a software platform for searching, analyzing, and visualizing machine-generated big data. It can be used to track and analyze key performance indicators, such as lead time, deployment frequency, and mean time to recovery, and generate reports and alerts based on that data.
  • AppDynamics: AppDynamics is an application performance management solution that provides real-time visibility into the performance of applications and infrastructure. It can be used to track and analyze key performance indicators, such as lead time, deployment frequency, and mean time to recovery, and generate reports and alerts based on that data.
  • Prometheus: Prometheus is an open-source systems monitoring and alerting toolkit. It can be used to track and analyze key performance indicators, such as lead time, deployment frequency, and mean time to recovery, and generate reports and alerts based on that data.
  • InfluxDB: InfluxDB is an open-source time series database. It can be used to track and analyze key performance indicators, such as lead time, deployment frequency, and mean time to recovery, and generate reports and alerts based on that data.
  • Grafana: Grafana is an open-source data visualization and analysis platform. It can be used to track and analyze key performance indicators, such as lead time, deployment frequency, and mean time to recovery, and generate reports and alerts based on that data.
  • Nagios: Nagios is an open-source IT infrastructure monitoring solution. It can be used to track and analyze key performance indicators, such as lead time, deployment frequency, and mean time to recovery, and generate reports and alerts based on that data.
  • JIRA: JIRA is a project and issue tracking software. It can be used to track the lead time and cycle time of the delivery pipeline by monitoring the time it takes for work items to move through the various stages of the development process.

These are just a few examples of software tools that can be used to measure and track DORA framework metrics. The specific tool or combination of tools used will depend on the needs of the organization and the size and complexity of the delivery pipeline. For Agile, we have our set of tools too; I picked only some of them (and yes, JIRA is on both lists):

  • Trello: Trello is a visual project management tool that can be used to track and visualize the progress of work items through the different stages of the development process.
  • Asana: Asana is a team collaboration tool that can be used to track and visualize the progress of work items through the different stages of the development process.
  • JIRA: JIRA is a project and issue tracking software that can be used to track and visualize the progress of work items through the different stages of the development process.
  • Clubhouse: Clubhouse is a project management tool specifically designed for agile teams. It can be used to track the progress of work items through the different stages of the development process and visualize the flow of work through the delivery pipeline.
  • Pivotal Tracker: Pivotal Tracker is an agile project management tool that can be used to track and visualize the progress of work items through the different stages of the development process.

I hope this helped answer the DORA vs Agile metrics question, with the answer, as you may have guessed from the above, being: use both.

How to avoid the cloud hosting price creep?

Some call it penny-pinching, some call it watching the cash flow, but everyone is trying to avoid the price creep that can happen when you are using the cloud. An honest mistake can lead to terrible effects – and if you don’t know your own architecture well enough, this can be disastrous. So, here are some strategies to effectively avoid creeping cloud hosting prices:

  • Monitor usage and costs regularly: Use tools such as Azure Cost Management, AWS Cost Explorer or Google Cloud’s billing dashboard to keep track of resource utilization and costs. This will help you identify areas where you can reduce spending, such as underutilized instances or overpriced storage services (see the sketch after this list for one way to do this programmatically).
  • Use reserved instances or committed use contracts: These provide a significant discount compared to on-demand pricing by committing to use a specific amount of resources for a set period of time, e.g., 1 or 3 years.
  • Take advantage of spot instances: Spot instances are unused Azure or EC2 capacity made available at a discount to users willing to take the risk of having their instances terminated when the spot price increases. Yes, it requires thinking through your microservices infrastructure and your circuit breakers, but the savings can be tremendous.
  • Optimize your infrastructure: Right-sizing instances, using auto-scaling groups, and using managed services like Azure SQL Database or AWS RDS instead of running your own database servers can help reduce costs.
  • Use managed services: For me, this is probably the biggest pet peeve: using managed services like Azure Functions, AWS Lambda or Google Cloud Functions instead of running your own servers can greatly reduce costs and eliminate the need for infrastructure management. Believe it or not, you will not be able to optimize your custom solution (be it written by you or built on some third-party product) to be as cost effective as the CSP’s own implementation: they are interested in making it cost effective so they can pack more workloads onto the same hardware, it is guaranteed to be compatible with their other services, authentication works out of the box, and so on.
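As promised in the first bullet, here is a small sketch of monitoring costs programmatically, using the AWS Cost Explorer API via boto3. It assumes AWS credentials and a default region are configured and that Cost Explorer is enabled on the account; the 500-dollar threshold is just a made-up example.

```python
# A minimal sketch: pulling one month of cost per service from the AWS Cost
# Explorer API with boto3, and flagging anything above a (made-up) threshold.
# Assumes AWS credentials and a default region are configured and that Cost
# Explorer is enabled on the account.
import boto3

def monthly_cost_by_service(start: str, end: str):
    """Yield (service, cost) pairs for the YYYY-MM-DD window [start, end)."""
    ce = boto3.client("ce")
    response = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    for period in response["ResultsByTime"]:
        for group in period["Groups"]:
            yield group["Keys"][0], float(group["Metrics"]["UnblendedCost"]["Amount"])

if __name__ == "__main__":
    for service, amount in sorted(monthly_cost_by_service("2023-01-01", "2023-02-01"),
                                  key=lambda pair: pair[1], reverse=True):
        flag = "  <-- review" if amount > 500 else ""
        print(f"{service:45s} ${amount:10.2f}{flag}")
```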

You can use a combination of these strategies to achieve this; just look at these examples:

  • Dropbox: Dropbox was able to save over 30% on its cloud hosting costs by using reserved instances and spot instances to reduce its spending on compute resources. In addition, Dropbox optimized its infrastructure to make it more cost efficient, including reducing the number of instances it was using, right-sizing its instances, and using auto-scaling groups to ensure that its resources were being used optimally. This combination of strategies allowed Dropbox to reduce its overall cloud hosting costs while maintaining the same level of service and reliability.
  • Capital One: Capital One reduced its cloud hosting costs by over 50% by using a combination of reserved instances, managed services, and a cost optimization program. Capital One adopted a proactive approach to cost optimization, monitoring its cloud usage and costs on a regular basis, and implementing cost optimization strategies when necessary. For example, Capital One used reserved instances to commit to a certain amount of resources over a set period of time, which allowed it to receive significant discounts compared to on-demand pricing. In addition, Capital One adopted managed services like AWS RDS and AWS Lambda to reduce the amount of infrastructure it needed to manage and maintain.
  • Expedia: Expedia reduced its cloud hosting costs by 20% by using reserved instances, auto-scaling, and by optimizing its infrastructure for cost efficiency. Expedia adopted a multi-pronged approach to cost optimization, which included committing to a certain amount of resources over a set period of time through reserved instances, using auto-scaling to ensure that its resources were being used optimally, and right-sizing its instances to reduce the amount of resources it was using. These strategies allowed Expedia to reduce its cloud hosting costs while maintaining the same level of service and reliability.
  • SoundCloud: SoundCloud reduced its cloud hosting costs by over 50% by moving from on-demand instances to reserved instances and by optimizing its infrastructure for cost efficiency. By using reserved instances, SoundCloud was able to commit to a certain amount of resources over a set period of time, which allowed it to receive significant discounts compared to on-demand pricing. In addition, SoundCloud optimized its infrastructure to reduce the amount of resources it was using and to ensure that its resources were being used optimally, which allowed it to further reduce its cloud hosting costs.
  • SmugMug: SmugMug reduced its cloud hosting costs by over 60% by using reserved instances, spot instances, and by optimizing its infrastructure for cost efficiency. SmugMug adopted a multi-pronged approach to cost optimization, which included using reserved instances to commit to a certain amount of resources over a set period of time, using spot instances to take advantage of unused EC2 instances that were made available at a discount, and optimizing its infrastructure to reduce the amount of resources it was using and to ensure that its resources were being used optimally. These strategies allowed SmugMug to reduce its cloud hosting costs while maintaining the same level of service and reliability.
  • Netflix: Netflix reduced its cloud hosting costs by over 80% through a combination of reserved instances, spot instances, and optimizing its infrastructure.

Of course, as always, it is important to note that these strategies and tools will vary based on the specific cloud provider and your use case, so it is important to carefully evaluate your options and choose the best solution for your needs. However, these case studies provide a good starting point and demonstrate the potential savings that can be achieved through a proactive approach to cloud hosting cost optimization.

How to optimize your software architecture?

There are many reasons to optimize your software architecture – you can make it cheaper to operate, cheaper to maintain, and so on. So, let’s see how to do that πŸ™‚ To optimize software architecture, consider the following steps:

  • Identify performance bottlenecks: Use tools like profiling, logging, and monitoring to identify performance bottlenecks and inefficiencies in the current architecture (a small profiling sketch follows this list).
  • Choose the right technology: Select the most appropriate technology for each component based on factors such as scalability, reliability, and maintainability.
  • Design for scalability: Consider design patterns such as microservices and serverless architectures to make the system scalable.
  • Automate testing: Use tools like automated testing frameworks to test the system and identify potential problems before they become critical issues.
  • Implement continuous delivery and deployment: Automate the delivery and deployment process to reduce the risk of downtime and improve the speed of delivery.
  • Monitor and measure: Use monitoring and analytics tools to track key metrics and make data-driven decisions about the architecture.
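As referenced in the first step, here is a tiny profiling sketch using Python’s built-in cProfile on a deliberately slow, made-up function; the same idea applies with JProfiler, dotTrace and the other profilers listed further below.

```python
# A tiny sketch: profiling a deliberately slow, made-up function with the
# standard library's cProfile to see where the time goes.
import cProfile
import pstats

def slow_report(n: int) -> int:
    # Quadratic on purpose, so it shows up clearly in the profile.
    total = 0
    for i in range(n):
        total += sum(range(i))
    return total

if __name__ == "__main__":
    profiler = cProfile.Profile()
    profiler.enable()
    slow_report(2000)
    profiler.disable()
    # Show the five entries with the highest cumulative time.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```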

The other big reason is to avoid failures. Without trying to cover all of them, here are some well-known incidents that were due, at least in part, to architectural inconsistencies and mishaps:

  • The 2017 Equifax data breach was caused by a failure in the company’s architecture, as they were using outdated software and had insufficient monitoring in place.
  • The 2010 and 2012 outages of Amazon’s S3 storage service were due to architectural limitations and lack of proper monitoring, resulting in widespread disruption and loss of data.
  • The 2014 AWS outage was caused by a failure in the architecture of the company’s Simple Queue Service (SQS) component, leading to widespread downtime and data loss.
  • The 2019 Capital One data breach was a result of a vulnerability in the company’s firewall configuration, which was caused by insufficient security testing and monitoring.
  • The 2013 Target data breach was caused by a failure in the company’s security architecture, as they were using outdated security software and had insufficient monitoring in place.
  • The 2016 Dyn DDoS attack was caused by a vulnerability in the company’s architecture, which allowed attackers to exploit a weakness in the domain name system (DNS) infrastructure and cause widespread disruption.

So, what is the solution? Besides being willing to sit down at the drawing board over and over, there is a set of tools you can use – caveat: using these tools will not save you from the drawing board, they will just make you aware of where you have to focus πŸ™‚

  • Profiling: JProfiler, YourKit Java Profiler, VisualVM, dotTrace, Dynatrace, AQTime, OptimizeIt, Intel VTune, etc.
  • Logging: ELK stack, Logstash, Fluentd, Graylog, LogRhythm, Sumo Logic, Logentries, Papertrail, LogDNA, etc.
  • Monitoring: New Relic, AppDynamics, Datadog, Nagios, Sensu, Prometheus, Zabbix, Icinga, Checkmk, etc.

So, in a nutshell – besides rolling up your sleeves and putting work into it, there is no magic wand that will help you here. Each scenario and app is different, but you should know that you are not alone, and that there are tools out there to help you with the basics.

Why is this a blog and not a… TikTok channel, for example?

So, what is the value of blogging in 2023? Why did I not choose some other platform, like TikTok, for this presence? Blogging can be compared with other platforms such as:

  • Social Media platforms: Blogging is different from social media platforms such as Facebook, Twitter, and Instagram, which are primarily designed for sharing updates and interacting with others in a more personal and social context.
  • Video sharing platforms: Blogging is different from video sharing platforms such as YouTube, where the primary focus is on sharing video content.
  • News and media websites: Blogging is different from news and media websites, which focus on delivering news and information to a mass audience.
  • Online forums and discussion boards: Blogging is different from online forums and discussion boards, which are designed for community-based discussion and debate.
  • E-commerce platforms: Blogging is different from e-commerce platforms such as Amazon and Shopify, which are designed for buying and selling products online.

But blogging continues to have value in 2023 for several reasons (definitely not all of them apply to this blog πŸ˜€ ):

  • Blogging allows for in-depth, comprehensive, and well-researched content creation, which can be useful for sharing knowledge, opinions, and expertise.
  • Blogs offer a platform for building a personal brand, and can establish one’s authority in a particular industry.
  • Blogs can drive traffic and provide a means of monetization through advertising and affiliate marketing – not this place, as I do not have any monetization on this site.
  • Blogs provide a permanent and accessible archive of content, whereas a platform like TikTok is more ephemeral, with a primary focus on short-form video content.
  • Blogging can be a source of leads and customers for businesses, especially through search engine optimization (SEO) efforts, although clearly, search is now moving away from being only website SEO driven.
  • Blogs provide a platform for community building and fostering engagement through comments, social shares, and email subscriptions.
  • Blogs can be a valuable tool for education and personal growth, with individuals able to share and reflect on their learning experiences.
  • Blogs can be used as a source of information and news, particularly in niche industries or communities.
  • Blogs can help individuals and organizations establish themselves as thought leaders and influencers in their industry or community.
  • Blogs can serve as a source of entertainment and inspiration, with bloggers sharing personal stories, travel experiences, and creative works.
  • Blogs can provide a platform for activism and social justice advocacy.
  • Blogs can be used to document and share personal journeys, such as weight loss or career changes.
  • Blogs can offer insights and advice on various topics, such as personal finance, relationships, or parenting.
  • Blogs can be a source of revenue for freelancers and independent content creators.
  • Blogs can be a valuable resource for job seekers, with many companies and organizations using blogs as a recruitment tool.
  • Blogs can be used as a tool for crisis communication and reputation management, with companies and organizations able to share their perspectives and respond to negative press.
  • Blogs can provide a platform for storytelling, allowing individuals and organizations to share their unique experiences and perspectives with a wider audience.

In a nutshell, many of the reasons above are why I chose to blog instead of being active on other platforms; although I do crosspost these posts to LinkedIn, Twitter and Facebook πŸ™‚

The value equation for new digital products

Short update regarding the blog – I added some small new features based on suggestions, these are: share buttons at the bottom of the posts and the ability to subscribe to updates using the sidebar on the main page. Feel free to suggest other features!


As I have been embracing some new projects, and believing in Agile, I figured it would be valuable to be able to effectively calculate their value, as this can be part of a KPI. I have seen some recent discussions on the value equation for new digital products. Some of them were based on the following:

Value = (Benefits – Costs) + (Ease of Use – Complexity) + (Trust – Risk)

Where:

  • Benefits: The value the product provides to the user, such as improved efficiency, convenience, or new capabilities.
  • Costs: The cost to the user to obtain and use the product, including monetary costs, time, and effort.
  • Ease of Use: The simplicity and intuitive nature of the product’s interface and functionality.
  • Complexity: The difficulty of using and understanding the product, as well as any necessary training or support.
  • Trust: The level of confidence the user has in the product, including its security, reliability, and reputation.
  • Risk: The potential negative consequences of using the product, such as loss of data, privacy breaches, or security vulnerabilities.

So, how to use this equation? Let’s take an example of a new digital product, a personal finance management app.

  • Benefits: The app helps the user track their expenses, set budgets, and manage their finances more effectively.
  • Costs: The app is free to download and use, but it has in-app purchases and subscriptions.
  • Ease of Use: The app has a user-friendly interface, easy navigation and simple to understand financial reports.
  • Complexity: The app is easy to set up and use, with minimal training required.
  • Trust: The app uses bank-level security measures to protect user data and has positive reviews and reputation in the market.
  • Risk: There is minimal risk associated with using the app, as user data is securely stored and there are no reported security breaches.

Which would translate to:

Value = (Effective financial management – In-app purchases and subscriptions) + (User-friendly interface – Minimal training) + (Secure and reputable – Minimal Risk)

Therefore, the value of this app is high: it provides significant benefits such as effective financial management, it is easy to use, and it is secure and reputable, with minimal costs, complexity and risk.
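If you want to make this comparable across products, one option is to score each factor on a 0-10 scale and plug the scores into the equation. The sketch below does that for the personal finance app example; the scores are my invented illustration, not a standard model.

```python
# A minimal sketch: scoring each factor of the equation on a 0-10 scale.
# The scores below are invented for the personal finance app example.
def product_value(benefits, costs, ease_of_use, complexity, trust, risk):
    return (benefits - costs) + (ease_of_use - complexity) + (trust - risk)

finance_app = {
    "benefits": 9,      # effective financial management
    "costs": 3,         # free, but with in-app purchases and subscriptions
    "ease_of_use": 8,   # user-friendly interface, simple reports
    "complexity": 2,    # minimal training required
    "trust": 8,         # bank-level security, good reputation
    "risk": 2,          # no reported breaches
}

# 18 out of a possible range of -30 to +30 for this scoring scale.
print(product_value(**finance_app))
```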

If I were to replace the equation pieces with the usual items (branding, marketing, UX, onboarding and price), this would translate to (continuing the example of the personal finance management app):

  • Branding: The app has a strong brand that is easily recognizable and associated with financial management.
  • Marketing: The app is marketed effectively, with clear messaging that highlights its key features and benefits.
  • User Experience (UX): The app has a user-friendly interface and easy navigation that makes it simple to use.
  • Onboarding: The app has a smooth onboarding process that guides users through setting up and using the app effectively.
  • Price: The app is free to download, with in-app purchases and subscriptions available.

Value = (Strong brand + clear messaging + user-friendly interface + smooth onboarding) – In-app purchases and subscriptions

The value of this app is high as it has a strong brand, clear messaging, user-friendly interface, smooth onboarding and a reasonable pricing model, despite having in-app purchases and subscriptions.

As the first equation seems more balanced to me, I tend to use that one, but I figured the second is the one more generally used. How do you calculate the value of a digital product?

The value (?) of Technical Debt

I could say “What is technical debt and how to avoid it”, but that is less clickbaity πŸ˜€ Working on some open source projects now (like Compose), our aim to avoid introducing it sometimes overtakes delivery velocity as well. Technical debt refers to the cost of maintaining and updating a codebase as it grows and evolves over time. It is often incurred when shortcuts are taken during development in order to meet deadlines or budget constraints. Some dreadful examples of technical debt include:

  • Using hacky or poorly-designed code that is difficult to understand or modify
  • Not properly commenting or documenting code
  • Not following established coding conventions or best practices
  • Not properly testing code
  • Not refactoring code as it becomes more complex

Fighting Technical Debt

To avoid technical debt, it is important to prioritize maintainability and readability in the codebase, as well as to properly plan and budget for ongoing maintenance and updates. Some fun strategies for avoiding technical debt include:

  • Following established coding conventions and best practices
  • Writing clear and well-commented code
  • Implementing automated testing and continuous integration
  • Regularly reviewing and refactoring code to ensure it remains maintainable and readable
  • Prioritizing long-term goals over short-term gains
  • Allowing enough time for code reviews, testing and documentation
  • Building and following a clear process with defined milestones
  • Building a culture of ownership and accountability of the codebase

By following these practices, organizations can minimize the amount of technical debt they incur, making their codebase easier to maintain and less expensive to update over time. So, in my series of posts on Agile, it’s now time to see how these two (debt and agile) are related. They are closely related, as both concepts deal with the maintenance and evolution of software over time. Agile development is a methodology that emphasizes flexibility, collaboration, and rapid iteration, with a focus on delivering working software as quickly as possible. Technical debt, on the other hand, refers to the cost of maintaining and updating that software over time.

In an agile development environment, technical debt can be incurred when shortcuts are taken in order to meet deadlines or deliver features more quickly. For example, if a developer is under pressure to meet a sprint deadline, they might choose to use a hacky or poorly-designed solution that is quick to implement but difficult to maintain. This can lead to an accumulation of technical debt over time, as the codebase becomes more complex and harder to update. However, when agile development is done correctly and with the right mindset, it can help to minimize technical debt. Agile development methodologies prioritize continuous improvement, which helps to keep the codebase maintainable and readable. Agile also encourages regular reviews and refactoring of code, which can help to identify and address issues with technical debt.

In summary, Agile development and technical debt are closely related, as both deal with the ongoing maintenance and evolution of software. Agile development methodologies can help to minimize technical debt by encouraging continuous improvement, regular reviews and refactoring, but it can also be a source of technical debt if not done properly. Therefore, it is important to balance the speed of delivery with the long-term maintainability of the codebase.

Luckily, there are several ways to fight technical debt:

  • Code review tools: These tools allow developers to review and approve code changes before they are merged into the codebase. Examples include Gerrit, Review Board, and Crucible, and of course the native tools of GitHub, GitLab, BitBucket and more.
  • Linting tools: These tools check the codebase for adherence to coding conventions and best practices. Examples include Roslyn, Resharper, ESLint and Pylint.
  • Refactoring tools: These tools assist developers in restructuring and reorganizing code to make it more maintainable. Examples include Resharper, Rascal and JDeodorant.
  • Automated testing tools: These tools automatically run tests on the codebase, ensuring that changes do not break existing functionality. Examples include XUnit, JUnit, TestNG, and Selenium.
  • Code coverage tools: These tools measure how much of the codebase is covered by automated tests, helping to identify areas where testing is insufficient. Examples include dotCover, Coverlet, Cobertura and Jacoco.
  • Code quality tools: These tools analyze the codebase and provide feedback on areas that need improvement. Examples include NDepend, SonarQube, and CodeClimate.
  • Dependency management tools: These tools help track the dependencies of a project and its sub-dependencies and ensure that the project is using the latest and most stable versions of the dependencies. Examples include Nuget, Maven, Gradle, and npm.

And this is the point where I am going to break – some – people. Believe it or not, sometimes technical debt can be useful too πŸ˜€ Mainly, it allows an organization to quickly deliver a working product or feature, even if it may have some long-term maintenance costs. Some situations where technical debt might be useful include:

  • Rapid prototyping: When an organization needs to quickly create a prototype of a product or feature to test its feasibility, it may be acceptable to incur some technical debt in order to get the prototype up and running quickly.
  • Meeting tight deadlines: When a project has a tight deadline and there is not enough time to properly plan and implement all the features, taking shortcuts can be useful to deliver a working product in time.
  • Experimentation, Tryouts: Incurring some technical debt can be a good way to try out new technologies or features, as it allows an organization to quickly test the waters and see if they are viable before committing to a full-scale implementation.
  • Short-term projects: In some cases, technical debt may be acceptable for short-term projects that will be phased out or replaced in the near future.

It is important to note that technical debt should not be taken lightly, as it can quickly accumulate and become a burden on the development process. Therefore, it should be used strategically, with a clear plan for how to address and pay off the debt in the future, and it is just as important to communicate to the stakeholders both the risks of incurring technical debt and the plan to pay it off.

So, to have a better idea of where you stand in the fight against it, you should (yes, Agile!) create KPIs around it and measure them πŸ™‚ Technical debt can be measured in a number of ways, including:

  • Code review metrics: Code review metrics such as the number of comments, the number of issues raised, and the number of defects found can be used to measure the quality of the code and the ease of maintenance.
  • Test coverage metrics: Metrics such as test coverage, that is the percentage of the codebase that is covered by automated tests, can be used to measure the robustness and maintainability of the codebase.
  • Complexity metrics: Metrics such as cyclomatic complexity, which measures the complexity of a piece of code, can be used to identify areas of the codebase that are difficult to understand or maintain.
  • Refactoring metrics: Metrics such as the number of refactorings performed and the number of code smells identified and fixed can be used to measure the maintainability and readability of the codebase.
  • Technical debt ratios: These ratios express the relationship between the cost of maintaining the system and the cost of developing it. An example is the debt-to-equity ratio, which compares the estimated cost of paying off technical debt to the estimated value of the system after the debt is paid off.
  • User feedback: User feedback on the ease of use, the speed and the reliability of the system can be used to measure the impact of the technical debt on the end-user – nothing beats good hallway testing.

It is worth noting that there is no one-size-fits-all approach to measuring technical debt, and it may be necessary to use a combination of different metrics to get a comprehensive understanding of the state of the codebase. Additionally, it is important to note that measuring technical debt is not a one-time process, but it should be done regularly, so that progress can be tracked, and improvements can be made over time.
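As a minimal sketch of how a few of these measurements could be combined into a trackable KPI, here is one way to compute a debt ratio and a weighted score; the field names, weights and thresholds are invented for illustration, and the point is having a repeatable number to track sprint over sprint rather than these exact values.

```python
# A minimal sketch: combining a few debt signals into a ratio and a weighted
# score. Field names, weights and thresholds are invented; the value is in
# tracking the same number sprint over sprint.
from dataclasses import dataclass

@dataclass
class DebtSnapshot:
    test_coverage: float      # 0.0 - 1.0, from your coverage tool
    avg_cyclomatic: float     # average complexity, from a code analyzer
    open_code_smells: int     # from a code quality tool
    remediation_days: float   # estimated effort to pay the debt off
    development_days: float   # estimated effort spent building the system

def debt_ratio(s: DebtSnapshot) -> float:
    """Estimated remediation cost relative to development cost (lower is better)."""
    return s.remediation_days / s.development_days

def debt_score(s: DebtSnapshot) -> float:
    """Weighted 0-100 score (higher means more debt pressure)."""
    coverage_gap = 1.0 - s.test_coverage
    return round(
        40 * coverage_gap
        + 30 * min(s.avg_cyclomatic / 20.0, 1.0)
        + 30 * min(s.open_code_smells / 200.0, 1.0),
        1,
    )

snapshot = DebtSnapshot(test_coverage=0.62, avg_cyclomatic=9.5,
                        open_code_smells=140, remediation_days=35, development_days=400)
print(debt_ratio(snapshot), debt_score(snapshot))
```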

Why focus on Emerging Technologies in Financial companies

I’ll ask for your help, reader. I submitted an idea for a Special Interest Group in FINOS, which is to focus on Emerging Technologies – and I am looking for (special) interest in it πŸ™‚

So, if you can, do comment, like, share, etc. the Special Interest Group – Emerging Technologies link πŸ™‚

Horizon next, from 2020, by Deloitte

But you might ask – what benefits does using emerging technologies like spatial computing, quantum computing, etc. have for financial companies? Believe it or not, these technologies do have the potential to bring significant benefits. For example:

  • Spatial computing, which involves the integration of virtual and augmented reality into everyday computing, can be used by financial companies to create immersive and interactive experiences for customers. For example, spatial computing can be used to create virtual branches, enabling customers to interact with banking services in a more natural and intuitive way.
  • Quantum computing has the potential to revolutionize the financial industry by providing much faster and more efficient ways to process complex financial data. Quantum computing can be used by financial companies to perform complex calculations, such as risk analysis, portfolio optimization, and fraud detection, much faster than traditional computers. Additionally, quantum computing can be used for secure communication and data encryption, which is important for financial companies in terms of security.

There are many other technologies that can be used, either specifically at a financial company, or that benefit the company through other means:

  • Artificial Intelligence: The development of systems that can perform tasks that would normally require human intelligence, such as learning, reasoning, and perception. Applications: predictive analytics, fraud detection, customer service automation.
  • Blockchain: A decentralized and distributed digital ledger used to record transactions across a network of computers. Applications: secure financial transactions, digital identity verification, supply chain management.
  • Internet of Things (IoT): The interconnectedness of everyday devices, such as smartphones, appliances, and vehicles, through the internet. Applications: smart cities, predictive maintenance, energy management.
  • Robotics and Robotics Process Automation (RPA): The use of machines to perform tasks that would normally require human intervention. Applications: automation of repetitive tasks, precision manufacturing, healthcare; automation of back-office tasks, process optimization, cost reduction.
  • Spatial Computing (also above): The use of computer-generated simulations to create immersive or interactive experiences. Applications: training, simulation, entertainment, education.
  • Quantum Computing (also above): A type of computing that uses quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. Applications: drug discovery, optimization, cryptography.
  • Neural Links: The implantation of electronic devices into the human brain, to enhance cognitive abilities or to control prosthetic limbs. Applications: medical treatment, brain-computer interfaces, cognitive enhancement.
  • Space Technologies: Advancements in satellite, rocket and space exploration technologies. Applications: Earth observation, telecommunications, scientific research, space tourism.
  • 4D Printing: 3D printing with the added dimension of time, allowing the printed objects to change shape or properties over time. Applications: smart materials, adaptive structures, self-assembling structures.
  • Biotechnology: The use of living organisms, cells or their derivatives to create products and technologies. Applications: medical treatments, crop improvement, biofuels, biosensors.
  • 5G & 6G: The fifth / sixth generation of mobile networks, characterized by higher speeds, lower latency, and greater capacity for connected devices. Applications: enhanced mobile broadband, massive internet of things, critical communication.
  • Graphene: A single layer of carbon atoms arranged in a hexagonal lattice; it is a strong, light and highly conductive material. Applications: energy storage, electronics, composites, sensors, etc.
  • Natural Language Processing (NLP): The ability of computers to understand and process human language. Applications: chatbot customer service, sentiment analysis, document analysis.
  • Advanced Data Analytics: A combination of data visualization, statistical analysis, machine learning and other techniques to extract insights from data. Applications: risk management, fraud detection, customer behavior analysis.
  • Cloud Computing: The delivery of computing services, including storage, processing and software, over the internet. Applications: scalability, cost savings, data security, and business continuity.

Note that this is not a comprehensive list and there are many other emerging technologies that have the potential to disrupt various industries. The specific applications may vary depending on the particular technology and industry. And yes, many of these are by now well-established technologies – but if I had given you this list a decade or two ago, you would not have said that many of them would ever become reality.

Nevertheless, the way I see it, these technologies have the potential to help financial companies increase efficiency, improve customer experiences, and enhance security, as well as provide opportunities for new revenue streams or business models – hence I imagine that for finding the open standards, finding the common ground, and understanding the regulatory and other implications, there is a clear benefit to having the Special Interest Group mentioned at the beginning.

To help start up the group, I wrote a set of ‘primers’ which can help its initial phase: