Sherlock vs AI

In the ever-evolving landscape of technology, the timeless brilliance of Sherlock Holmes stands as an icon of deductive reasoning and astute observation. However, in an era dominated by the rapid advancement of artificial intelligence (AI), one can’t help but wonder: How would Sherlock Holmes fare in a world where machines possess unparalleled analytical capabilities?

The essence of Sherlock Holmes lies in his exceptional powers of deduction, honed through acute observation and logical reasoning. His ability to glean intricate details from seemingly mundane observations sets him apart. Yet, in today’s context, AI algorithms are designed to process vast amounts of data, analyze patterns, and derive conclusions at an unprecedented speed and scale. Could even Holmes match the computational prowess of these AI systems?

Holmes’ methodology, though founded on human intuition and meticulous analysis, might find formidable competition in AI. Machine learning models excel at recognizing complex patterns across extensive datasets, making them proficient in tasks like data analysis, fraud detection, and even diagnosing medical conditions. The speed and accuracy with which AI algorithms operate often surpass human capabilities, potentially outshining Holmes in areas requiring sheer volume and velocity of data processing.

However, where Sherlock Holmes might maintain an edge is in his uniquely human traits. His intuition, creativity, and emotional intelligence provide a depth of understanding that machines, with their rigid programming, struggle to replicate. Holmes’ ability to empathize, understand motives, and perceive nuances of human behavior goes beyond raw data analysis—an area where AI still struggles to excel.

Moreover, Holmes’ capacity for imaginative leaps and “out-of-the-box” thinking, often fueled by his vast knowledge across diverse subjects, remains an aspect where AI currently lags. While AI can process and analyze information, it lacks the inherent curiosity and improvisational skills that enable Holmes to connect seemingly unrelated dots and arrive at groundbreaking conclusions.

In a hypothetical competition against AI, Holmes might harness technology as an ally rather than a rival. He’d likely leverage AI’s capabilities to sift through vast amounts of data swiftly, allowing him more time to focus on the human elements of his investigations. Utilizing AI tools for data collection and initial analysis could free Holmes to delve deeper into the psychological and emotional aspects of cases, areas where his true expertise lies.

Ultimately, the contrast between Sherlock Holmes and AI isn’t a matter of one replacing the other. Instead, it highlights the synergy between human intellect and technological advancement. Holmes’ methods, while classic and rooted in human cognition, could be augmented and enhanced by the precision and speed offered by AI, creating a formidable partnership that marries the best of both worlds.

In a world dominated by AI, Sherlock Holmes’ legacy endures not just as a master detective but as a testament to the enduring value of human intuition, creativity, and the unending quest for knowledge—a beacon guiding us in navigating the ever-expanding realm of technology.

Well Architected for Industry

Microsoft has announced a new set of principles called “Well-Architected for Industry” that provides prescriptive guidance to improve the quality of industry solution deployments. The framework consists of five pillars of architectural excellence: reliability, cost optimization, operational excellence, performance efficiency, and security. These guiding principles can be used to improve the quality of industry cloud workloads.

The framework is intended to provide substantial benefits to partners and customers. For instance, a partner hired to configure Microsoft Sustainability Manager can leverage the framework to design reliable and scalable architecture to build their customer’s carbon emission calculations and reporting workloads. By incorporating security practices, the partner can ensure robust data protection and compliance with industry standards. Additionally, the framework’s performance optimization guidance enables the partner to enhance the application’s responsiveness and user experience.

The Well-Architected for Industry framework starts with Microsoft Cloud for Sustainability. It is a comprehensive and extensible solution that brings together a set of environmental, social, and governance (ESG) capabilities from Microsoft Azure, Microsoft 365, Microsoft Dynamics 365, and solutions from our global ecosystem of partners. Microsoft Sustainability Manager, a Microsoft Cloud for Sustainability solution, is being expanded to give customers fuller visibility into their environmental impact across carbon, water, and waste. New capabilities will help customers create a comprehensive ESG data estate and prepare them to meet new reporting requirements.

The Well-Architected for Industry framework provides context-relevant implementation guidance that supports the entire solution development lifecycle, from design and architecture through to operational monitoring. Project managers, solution architects, developers, operations team members, and system administrators can all benefit from applying these pillars to their workloads.

Another industry Microsoft is starting with is financial services. The Microsoft Cloud for Financial Services provides capabilities to manage financial services data at scale and makes it easier for financial services organizations to deliver differentiated experiences, empower employees, and combat financial crime. It also facilitates security, compliance, and interoperability. The Microsoft Cloud for Financial Services is an end-to-end solution designed for even the most complex control frameworks and regulatory requirements. By integrating existing and new capabilities in Microsoft 365, Azure, Dynamics 365, and Microsoft Power Platform, it unlocks unprecedented value. With Microsoft Cloud for Financial Services, financial institutions and insurers of all sizes can optimize costs, reduce time to value, enhance collaboration, and use data and AI to deliver more impactful business outcomes – whether transforming customer and client experiences, empowering employees, managing risk, or modernizing core systems. The security, compliance, and scale of the Microsoft Cloud, combined with our global partner ecosystem, provide a trusted foundation for efficient operations today and sustainable growth tomorrow.

The five pillars of architectural excellence in the “Well-Architected for Industry” framework are:

  1. Reliability: The ability of a system to recover from failures and continue to function.
  2. Cost Optimization: The ability to run systems efficiently and cost-effectively.
  3. Operational Excellence: The ability to operate and monitor systems to deliver business value and to continually improve supporting processes and procedures.
  4. Performance Efficiency: The ability to use computing resources efficiently to meet system requirements and to maintain that efficiency as demand changes and technologies evolve.
  5. Security: The ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.

These pillars provide prescriptive guidance to improve the quality of industry solution deployments.

It’s not too late to get FSOSD certified for FREE!

Make this your Year End Technology Improvement promise!

Are you a developer who wants to contribute to open source projects in the financial industry? Do you want to demonstrate your skills and knowledge of open source best practices, licensing, ethics, and regulation? If so, you might be interested in the FINOS Financial Services Certified Open Source Developer (FSOSD) certification.

The FSOSD certification is a new credential offered by the Fintech Open Source Foundation (FINOS) and the Linux Foundation Training and Certification. It is designed for software engineers, open source developers, DevOps engineers, FinOps practitioners, or information security teams working in financial institutions, fintech, or technology vendors and consultants operating in the financial industry.

The FSOSD certification exam covers the following domains and competencies:

  • Ethics and Behavior
  • Open Source Licensing
  • Consuming Open Source
  • Contributing to Open Source
  • Regulatory Impact on Open Source
  • And more!

A certified FSOSD will have demonstrated a firm conceptual understanding of how open source contribution should occur within the finance industry. They will also have the practical skills to use open source tools and platforms, such as GitHub, GitLab, and FINOS projects.

The good news is that the FSOSD certification exam is currently in beta, which means you can take it at a discounted price (free of charge!) and help shape the future of the certification. The beta exam is an online, proctored, multiple-choice exam with 60 questions. You will have 90 minutes to complete it. The beta exam is available until January 4, 2024, or until all seats are filled.

To register for the beta exam, you need a special code that gives you an 80% discount on the exam fee. To get your code, please reach out to Rob Moffat. You can then use this code on the exam registration page. Hurry, because the code is valid for a limited number of seats, and they might fill up quickly.

If you successfully pass the beta exam, you will receive your FSOSD certification and a free t-shirt from the Linux Foundation. You will also be among the first to join the elite group of certified open source developers in the financial industry – and save the $250 it normally costs to take the exam.

To prepare for the beta exam, you should review the exam details and the skills outline. You should also have hands-on experience with open source projects in the financial industry. You can find some learning resources on the FINOS website and you can also check out the FINOS Open Source Readiness program for more guidance and support.

Don’t miss this opportunity to get certified as a FINOS Financial Services Open Source Developer. Register for the beta exam today and show the world your open source skills and knowledge. Good luck!

Bridging the understanding / explanation gap

Communication is a complex dance where understanding meets explanation. However, there exists a paradox: one person’s inability to comprehend might seem like another’s inability to elucidate. This paradox often leads to frustration, misinterpretation, and a breakdown in communication. Understanding this paradox and finding solutions becomes pivotal in fostering effective communication.

The Paradox Unveiled

Imagine a scenario where Alice explains a concept to Bob. Despite Alice’s efforts to articulate clearly, Bob struggles to grasp the idea. From Alice’s perspective, she believes she has explained it thoroughly. However, Bob’s confusion persists, leading Alice to perceive his inability to understand as a lack of effort or attentiveness.

Conversely, from Bob’s viewpoint, Alice’s explanation seems convoluted or unclear. His inability to comprehend might stem from various factors—lack of context, differing cognitive styles, or even the complexity of the subject matter itself. Bob’s frustration mounts as he perceives Alice’s inability to explain in a way he can comprehend.

The Solution: Bridging the Gulf

  1. Empathy and Patience: Cultivating empathy and patience is crucial. Both parties must acknowledge that their perspectives differ, and patience is key to understanding these differences.
  2. Active Listening: Encourage active listening to enhance comprehension. Both the explainer and the listener should engage in attentive listening to bridge the gap between understanding and explanation.
  3. Adaptability in Communication: Flexibility in communication styles is paramount. Explainers must adjust their approach to match the listener’s preferences, whether it’s visual aids, storytelling, or practical examples.
  4. Clarification and Feedback: Encourage a feedback loop where the listener can ask for clarification without hesitation. Similarly, explainers should seek feedback to ensure their message is being understood.
  5. Simplify and Analogize: Complex concepts can be simplified by breaking them down into smaller, relatable parts. Analogies or real-life examples often aid in conveying abstract ideas effectively.
  6. Contextualization: Providing context helps in understanding. Explainers should ensure that the necessary background information is provided, allowing the listener to connect the dots more easily.

Conclusion

The paradox of understanding and explanation is a common challenge in interpersonal communication. It’s crucial to recognize that these perceived inadequacies aren’t necessarily due to a lack of effort or capability on either side. By fostering empathy, patience, active listening, adaptability, clarification, simplification, and contextualization, individuals can bridge this gap and foster clearer, more effective communication.

Understanding and explaining are symbiotic processes that require collaboration and a willingness to adapt. By implementing these strategies, we can move towards a communication landscape where understanding isn’t an elusive concept but a shared reality.

Why Benchmarking Your Code Matters

In the realm of programming, optimizing code for performance is akin to fine-tuning a well-oiled machine. One of the critical tools in achieving this optimization is benchmarking. Benchmarking refers to the process of measuring your code’s performance, comparing it against baselines or alternative implementations, and iterating to improve its efficiency. Here are compelling reasons why benchmarking your code is an indispensable practice:

Performance Optimization

Benchmarking serves as a compass to navigate the complexities of code performance. By assessing execution times, memory usage, and other metrics, developers can pinpoint bottlenecks and areas for improvement. This data-driven approach empowers them to fine-tune algorithms, optimize data structures, and enhance overall system performance.

Identifying and Eliminating Bottlenecks

In intricate software systems, inefficiencies often lurk in unexpected places. Benchmarking helps pinpoint these bottlenecks, enabling developers to focus their efforts on specific areas that require optimization. Whether it’s inefficient loops, memory leaks, or suboptimal algorithms, benchmarking sheds light on these areas, allowing for targeted enhancements.

Comparison against Alternatives

When faced with multiple implementation choices or algorithms, benchmarking provides empirical evidence to support decision-making. It facilitates a side-by-side comparison, allowing developers to select the most efficient solution based on actual performance metrics rather than theoretical assumptions.
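To make this concrete, here is a rough sketch of such a side-by-side comparison using only the .NET standard library’s Stopwatch. The two implementations and all names here are illustrative, and a hand-rolled loop like this is only a first approximation – it ignores many pitfalls (JIT tiering, GC pauses, statistical noise) that a dedicated benchmarking library handles for you:

```csharp
using System;
using System.Diagnostics;
using System.Linq;

class QuickCompare
{
    // Two hypothetical implementations of the same task: summing squares.
    static long SumSquaresLoop(int n)
    {
        long total = 0;
        for (int i = 0; i < n; i++) total += (long)i * i;
        return total;
    }

    static long SumSquaresLinq(int n) =>
        Enumerable.Range(0, n).Sum(i => (long)i * i);

    static void Main()
    {
        const int n = 1_000_000;
        const int runs = 20;

        // Warm up both paths so JIT compilation doesn't skew the first timing.
        SumSquaresLoop(n);
        SumSquaresLinq(n);

        var sw = Stopwatch.StartNew();
        for (int r = 0; r < runs; r++) SumSquaresLoop(n);
        sw.Stop();
        Console.WriteLine($"loop: {sw.Elapsed.TotalMilliseconds / runs:F3} ms/run");

        sw.Restart();
        for (int r = 0; r < runs; r++) SumSquaresLinq(n);
        sw.Stop();
        Console.WriteLine($"LINQ: {sw.Elapsed.TotalMilliseconds / runs:F3} ms/run");
    }
}
```

Averaging over multiple runs after a warm-up pass is the minimum needed to get numbers stable enough to compare; anything beyond that is exactly where a purpose-built tool earns its keep.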

Preventing Performance Regressions

As software evolves, new features and changes might inadvertently introduce performance regressions. Regular benchmarking acts as a safety net by detecting any performance degradation early in the development cycle. This proactive approach helps maintain and even enhance the overall performance of the codebase.

Validating Optimization Efforts

When implementing optimizations, it’s crucial to verify their impact. Benchmarking allows developers to validate whether their optimizations result in actual performance improvements. This validation ensures that the efforts put into enhancing the codebase yield tangible benefits.

Enhancing User Experience

Efficient code directly translates to a better user experience. Faster load times, smoother interactions, and optimized resource usage contribute to a more responsive and enjoyable software experience for users.

Cultivating a Performance-Oriented Culture

By integrating benchmarking into the development process, teams foster a culture that prioritizes performance. It encourages continuous improvement, knowledge sharing, and a collective effort toward creating efficient and high-performing software.

In conclusion, benchmarking is not merely a tool; it’s a guiding principle for creating high-performance software. By regularly evaluating and optimizing code through benchmarking, developers pave the way for enhanced efficiency, better user experiences, and a more robust foundation for future innovations.

So I’m a .NET guy – what should I use for .NET?

Unlocking Code Performance Excellence with BenchmarkDotNet

In the quest for optimal code performance within the .NET ecosystem, BenchmarkDotNet stands tall as an indispensable tool, offering a suite of features that elevate code benchmarking to a whole new level. Here’s why BenchmarkDotNet reigns supreme as the go-to library for performance measurement and optimization:

Simplicity and Ease of Use

BenchmarkDotNet simplifies the benchmarking process with its intuitive syntax and straightforward setup. Its attribute-based model allows developers to create benchmarks quickly, focusing on the code’s performance rather than wrestling with complex configurations.
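As a minimal sketch of what that attribute-based model looks like – assuming a console project referencing the BenchmarkDotNet NuGet package, with the benchmark methods themselves being purely illustrative:

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

// Each [Benchmark] method is measured; [MemoryDiagnoser] adds allocation stats.
[MemoryDiagnoser]
public class StringJoinBenchmarks
{
    private readonly string[] _parts = { "alpha", "beta", "gamma", "delta" };

    [Benchmark(Baseline = true)]
    public string Concat()
    {
        var s = string.Empty;
        foreach (var p in _parts) s += p;
        return s;
    }

    [Benchmark]
    public string Join() => string.Join(string.Empty, _parts);
}

public class Program
{
    // Runs all [Benchmark] methods and prints a summary table.
    public static void Main(string[] args) =>
        BenchmarkRunner.Run<StringJoinBenchmarks>();
}
```

Marking one method as the baseline makes the summary report the others as ratios against it, which is usually the number you actually care about when choosing between implementations.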

Extensive Platform Support

Supporting a wide array of platforms and frameworks within the .NET ecosystem, BenchmarkDotNet is versatile. Whether targeting .NET Framework, .NET Core, or the latest .NET platforms, it seamlessly adapts to various environments, ensuring consistent and reliable benchmarking across different setups.

Robust Statistical Analysis

Beyond merely measuring execution times, BenchmarkDotNet excels in statistical analysis. It provides robust statistical summaries, aiding developers in making informed decisions based on reliable metrics. This statistical prowess ensures accuracy in identifying performance variations and trends.

Flexible Configuration Options

Despite its simplicity, BenchmarkDotNet offers a rich set of configuration options. Developers can fine-tune benchmarks by customizing iterations, changing run settings, adjusting environment variables, and much more. This flexibility empowers users to tailor benchmarks to their specific needs.

Automatic Infrastructure Management

BenchmarkDotNet takes care of the benchmarking infrastructure, automatically handling warm-up periods, overhead calculations, and other intricacies involved in accurate performance measurements. This automation allows developers to focus solely on writing and optimizing their code.

Visualizations and Reporting

The library provides comprehensive visualizations and detailed reports, making it effortless to interpret benchmark results. Through clear charts, graphs, and statistical summaries, developers gain deeper insights into their code’s performance characteristics.

Active Community and Continuous Development

Benefiting from an active community and continuous development, BenchmarkDotNet receives regular updates, improvements, and bug fixes. Its community-driven nature ensures ongoing support and a constantly evolving toolset that adapts to the latest advancements in the .NET ecosystem.

Integration with CI/CD Pipelines

BenchmarkDotNet seamlessly integrates with Continuous Integration and Continuous Deployment (CI/CD) pipelines. This integration facilitates automated performance testing, ensuring that code changes don’t degrade performance inadvertently.

In essence, BenchmarkDotNet isn’t just a benchmarking library; it’s a comprehensive toolkit designed to elevate code performance to new heights within the .NET ecosystem. Its user-friendly nature, robust statistical analysis, and adaptability make it an invaluable asset for developers striving to create high-performance and efficient software. With BenchmarkDotNet, optimizing code performance becomes not just a goal, but an achievable reality.

So, in a nutshell – do benchmark, and if you are on .NET, do it with BenchmarkDotNet!

Tablee des Chefs 2023 – fundraising gala!

There is a range of organizations I help by giving back – mostly using my skills – such as Spectry, Skool, and La Tablee des Chefs. The latter is a Canadian organization that encourages youth to learn culinary skills and become aware of nutritional information regarding food preparation – it engages 150+ schools across Canada.

Morgan Stanley, through Code to Give, committed to this organization two years ago to build a mobile application that shares information and learning plans, creates a social network, and maintains engagement with the youth. It offers advanced recipe search by cuisine type, dietary concerns, ingredients, and more. The application also provides up-to-date information on the calculated ecological impact of meals and lets you substitute ingredients to see how that changes the carbon footprint and other environmental factors.

At their annual Montreal fundraising gala, we were honored as participants in acknowledgment of Morgan Stanley’s contributions. Jonathan Brunette, Keval Prabhudesai, and Olivier Beau were there in person, while Ferenc Hubicsak and I followed the reactions from the digital sidelines.

The fundraising brought in over $500,000 CAD in one night – a nice accomplishment for sure!

FINOS All Community Call Recap & memes

Last week, https://finos.org had their yearly All Community Call Recap – please do watch the recording and check out the deck.

So, as I was watching the recap and preparing for my section (together with Keith O’Donnell, we were talking about https://zenith.finos.org ), I could not miss that I was mentioned once… twice… three times… up to five times in 60 minutes. And I wasn’t the only one spotting this – after the third mention I started to get memes from people; let me share a little selection here 🙂

I also want to mention Dov Katz, Brian Ingenito, Stephen Goldbaum, Rita Chaturvedi, Matthew Bain, Mimi Flynn, Alvin Shih, Amol Shukla, Paul Stevenson, Attila Mihaly, Ferenc Hubicsak, and more, who have contributed significantly to FINOS from Morgan Stanley.

So, looking forward to a fun 2024, in my new roles at FINOS (Open Source Readiness, InnerSource, Emerging Technologies, and Technical Oversight Committee), and driving Big Boost Mondays in NYC – see you at an upcoming session (like the one tonight)!

Embrace your self-worth value

This is far from the first time I’m touching on the topic of imposter syndrome – you know, this might actually be my way of fighting it.


Understanding Your Intrinsic Worth: Mel Robbins’ Quote

In a world where validation often comes from external sources, Mel Robbins’ quote stands as a poignant reminder of self-value and self-perception. “There will always be someone who can’t see your worth. Don’t let it be you.” These words encapsulate a profound truth that echoes through the corridors of self-esteem and personal validation.

Acknowledging External Perceptions

The quote touches upon the inevitability of encountering individuals who may fail to recognize your true worth. Society, with its myriad perspectives and biases, often dictates value through its own lens, sometimes disregarding the essence of an individual’s true worth. It could be a boss overlooking your contributions, a peer undermining your efforts, or even societal norms imposing unrealistic standards.

The Power of Self-Perception

However, the crux lies in one’s own perception. The pivotal message within Robbins’ words is the importance of self-assessment and self-acknowledgment. It’s about not internalizing or adopting the limited perceptions of others as a measure of personal value.

Cultivating Self-Worth

Cultivating a strong sense of self-worth involves understanding one’s strengths, acknowledging accomplishments, and embracing imperfections. It’s about recognizing that inherent value isn’t contingent upon external validation. Instead, it springs from a deep understanding and appreciation of one’s unique qualities, experiences, and potential.

Empowerment through Self-Validation

Robbins’ quote is a call to action—an invitation to empower oneself by taking charge of one’s narrative. By reframing self-perception and detaching it from external judgments, individuals can harness the power to recognize and appreciate their worth irrespective of others’ opinions.

Conclusion

Mel Robbins’ quote encapsulates a profound truth about self-worth—one that resonates across cultures and times. It serves as a guiding beacon, encouraging individuals to embrace their intrinsic value, irrespective of the limitations others may impose. It’s a reminder to be the guardian of one’s own worth, refusing to let external perceptions cloud the understanding of one’s true essence. Ultimately, it’s an affirmation of the importance of self-belief in navigating the intricacies of life.

Strategies for combating confirmation bias

Confirmation bias, the tendency to seek out information that aligns with our beliefs while ignoring contradictory evidence, is a pervasive cognitive bias affecting decision-making and critical thinking. Overcoming this bias requires intentional strategies and mindfulness. Here’s a guide on combating confirmation bias:


1. Recognize Your Biases

Acknowledging that everyone has biases is the first step. Be aware of your predispositions and understand how they influence your perceptions and judgments.

2. Seek Diverse Perspectives

Actively expose yourself to different viewpoints and sources of information. Engage with diverse opinions, cultures, and ideologies to broaden your understanding.

3. Question Your Assumptions

Challenge your own beliefs by asking critical questions. Assess the evidence objectively and consider alternative explanations before drawing conclusions.

4. Practice Mindful Evaluation

Evaluate information systematically. Fact-check sources, cross-verify information, and question the reliability of data before accepting it as true.

5. Encourage Constructive Disagreement

Embrace debates and discussions with people who hold opposing views. Constructive disagreement fosters learning and helps in uncovering blind spots.

6. Develop Critical Thinking Skills

Enhance your critical thinking abilities by learning logic, reasoning, and scientific methods. Apply these skills to analyze information objectively.

7. Stay Open-Minded

Cultivate an open-minded approach. Be willing to change your opinions in the face of new, compelling evidence. Flexibility in thinking is crucial.

8. Pause and Reflect

Before making decisions or forming conclusions, take a moment to reflect. Consider whether your judgments are influenced by bias and consciously reassess them.

9. Create Decision-Making Processes

Establish decision-making frameworks that encourage deliberation, multiple perspectives, and scrutiny to minimize the impact of biases.

10. Continuous Learning

Embrace a continuous learning mindset. Stay curious, be receptive to new information, and adapt your beliefs based on reliable evidence.

Conclusion

By acknowledging and actively addressing confirmation bias, individuals can enhance their decision-making processes and foster a more inclusive and informed worldview. Embracing diverse perspectives, cultivating critical thinking skills, and maintaining an open-minded approach are key in combating confirmation bias.

Remember, while complete elimination of bias might be impossible, consistent efforts to mitigate it can lead to more rational and objective decision-making.

Dangers of Stochastic Parrots: Can Language Models Be Too Big?

In recent years, language models have seen exponential growth in size and complexity, leading to the development of immensely powerful AI systems capable of generating human-like text. However, this advancement hasn’t come without concerns, and one of the significant debates revolves around the concept of “Stochastic Parrots” – the potential dangers of excessively large language models.

The Rise of Stochastic Parrots

Stochastic Parrots refer to AI models that, despite their remarkable capabilities, may merely mimic human language without true comprehension or critical thinking. As models grow in size and absorb more data, they become proficient at regurgitating information without genuinely understanding it. This phenomenon raises profound ethical, societal, and technical questions.

Ethical Concerns

The ethical concerns surrounding these expansive models are multifaceted. Firstly, they can perpetuate biases present in the data they are trained on, leading to biased or discriminatory outputs. Moreover, the environmental impact of training and maintaining colossal models cannot be ignored, as they demand immense computational resources, contributing significantly to carbon emissions.

Societal Implications

Language models wield tremendous influence in various domains, from content generation to decision-making processes. However, an overreliance on these models can result in a decrease in critical thinking and creativity among users. Moreover, the propagation of misinformation or malicious content at an unprecedented scale poses a serious threat to society.

Technical Challenges

Apart from ethical and societal issues, the sheer size of these models presents technical challenges. Handling, fine-tuning, and deploying such massive systems require substantial computational power and expertise, limiting access to smaller organizations or researchers with fewer resources.

Addressing the Challenges

Addressing the dangers posed by Stochastic Parrots requires a multi-pronged approach. This includes developing techniques to mitigate biases in training data, promoting transparency and accountability in AI systems, and fostering research into smaller, more efficient models that balance performance with ethical considerations.

Conclusion

While large language models undoubtedly showcase the immense potential of AI, they also come with inherent risks and challenges. As we continue to push the boundaries of AI technology, it becomes imperative to weigh the benefits against the risks and work towards the responsible and ethical development of these powerful tools.

In the pursuit of innovation, it’s crucial to ensure that the evolution of language models aligns with the broader goals of societal well-being, ethical usage, and environmental sustainability.