STEM or humanities – why did we choose STEM earlier in our lives?

Talent is a natural aptitude or skill that an individual possesses in a particular subject or field. While some people may show signs of talent from a young age, others may not develop their talent until later in life. The age at which talent becomes visible in various subjects can vary significantly, depending on several factors.

Mathematics

Talent in mathematics can become visible as early as preschool age. Children who are able to count, understand basic shapes and patterns, and solve simple math problems may show an aptitude for mathematics. As children get older, their ability to perform complex calculations and solve problems may indicate a natural talent for mathematics. However, it is important to note that a lack of early mathematical ability does not necessarily mean that a child does not have the potential to excel in the subject.

Music

Talent in music can also become visible at a young age. Children who show an interest in singing, playing an instrument, or dancing may have a natural talent for music. Some children may demonstrate an excellent sense of rhythm or pitch, while others may be able to play by ear. With training and practice, these skills can develop into impressive musical abilities. However, some musicians do not discover their talent until later in life, and may not begin to pursue music seriously until their teenage or adult years.

Sports

Talent in sports can be visible from a young age as well. Children who demonstrate traits such as agility, coordination, strength, and endurance may have a natural aptitude for a particular sport. As children get older, their skill in the sport may improve, and they may begin to excel in competitions. However, it is important to note that not all talented athletes will pursue their sport professionally, and many may choose to focus on other areas.

Art

Talent in art can be visible from a young age, as children who enjoy drawing, painting, or sculpting may show an aptitude for art. As children get older, their skill in these areas may improve, and they may begin to develop their own unique style. However, talent in art is not always visible at a young age, and some artists may not discover their talent until later in life.

Language

Talent in language can be visible at a young age, as children who are able to communicate effectively may have a natural talent for language. This can include traits such as an excellent vocabulary, an ability to express themselves clearly, and an understanding of grammar and syntax. As children get older, their language skills may improve, and they may become fluent in multiple languages. However, some individuals may not discover their talent for language until later in life, and may choose to pursue language learning as a hobby or career.

What about humanities?

In addition to the STEM (science, technology, engineering, and math) fields, talent in the humanities can also become visible at different ages. The humanities include subjects such as history, literature, philosophy, and language.

History

Talent in history can become visible at a young age, as children who enjoy learning about the past may have a natural talent for the subject. This can include an ability to remember and connect historical events, analyze their causes and consequences, and draw conclusions based on historical evidence. As students get older, their skill in history may improve, and they may begin to specialize in particular areas, such as ancient history or modern political history.

Literature

Talent in literature can also be visible at a young age, as children who enjoy reading and have a strong grasp of language may show a natural aptitude for the subject. This can include an ability to analyze literary works, interpret their themes and symbolism, and express their ideas through writing. As students get older, they may begin to specialize in particular genres or time periods, such as Victorian literature or postcolonial literature.

Philosophy

Talent in philosophy may not become visible until later in life, as this subject requires abstract thinking and complex reasoning skills. However, some individuals may show an interest in philosophical questions from a young age, and may have a natural talent for exploring and debating ethical, metaphysical, or epistemological issues. As students get older, they may begin to specialize in particular areas of philosophy, such as existentialism or feminist philosophy.

Language (again)

Talent in language can also apply to the humanities, as individuals who have a strong grasp of language may be able to excel in subjects such as literature, history, or philosophy. In addition, talent in language learning can be visible at a young age, as children who are able to learn and retain new vocabulary and grammar rules quickly may have a natural aptitude for language. As students get older, they may become fluent in multiple languages and may choose to pursue careers in translation, interpretation, or language teaching.

Conclusion

In conclusion, the age at which talent becomes visible varies significantly across subjects and individuals: some people show signs of talent from a young age, while others do not discover theirs until later in life. It is important to encourage and support individuals who show a natural aptitude for the humanities, as these subjects play an important role in understanding the human experience and shaping our society and culture. It is equally important to remember that talent is not the only factor that determines success in a particular subject or field; hard work, dedication, and practice also play a significant role in achieving excellence.

Why open our wallets?

As technology continues to evolve, the concept of money is also changing. Gone are the days when people would carry physical cash and coins in their wallets. Instead, digital wallets are becoming increasingly popular. These wallets store digital currency, allowing individuals to make purchases and transfer funds electronically. However, many digital wallets are proprietary, meaning that they are owned and operated by private companies. This is where the world needs an open-source digital wallet.

An open-source digital wallet is a wallet that is freely available to anyone to use and modify. It is developed through a collaborative effort, with a community of developers contributing to its creation and maintenance. Open-source wallets operate on an open network, which means that anyone can participate and contribute. This is in contrast to proprietary wallets, which are closed systems, controlled by a single company.

There are several reasons why the world needs an open-source digital wallet. Firstly, open-source wallets promote transparency. With proprietary wallets, users have little visibility into how their data is being used. This lack of transparency can lead to concerns over data privacy and security. However, with an open-source wallet, users can examine the code, ensuring that their data is being stored and used ethically.

Secondly, open-source wallets promote innovation. With a community of developers working on the same project, new features can be added quickly, and bugs can be identified and fixed promptly. This creates a more robust and flexible product, allowing users to customize their wallet to suit their needs. This promotes innovation and competition, as new ideas can be tested and improved upon.

Thirdly, open-source wallets promote interoperability. With proprietary wallets, users are often restricted to a specific platform or network. This can create barriers for users who want to transfer funds or use their wallet with other services. However, with an open-source wallet, users have the flexibility to connect with different networks and services, creating a more seamless user experience.

Lastly, open-source wallets promote inclusivity. With proprietary wallets, users often need to meet certain requirements to access the service. This can exclude individuals who do not have a particular bank account or mobile phone. However, with an open-source wallet, anyone with internet access can participate, promoting financial inclusion and accessibility.

In conclusion, the world needs an open-source digital wallet. Open-source wallets promote transparency, innovation, interoperability, and inclusivity, creating a more robust and flexible product. As technology continues to evolve, an open-source wallet is crucial to promote transparency and innovation, ensuring that digital currency is used ethically and securely.

For further details, do check out an article on the same topic at FINOS.

If you want to have knowledge, collect the dots. If you want experience, connect the dots.

Knowledge and experience are two essential components that shape our perception of the world around us. They help us to make sense of things, understand new concepts, and form opinions. However, there is a fundamental difference between knowledge and experience. Knowledge is collecting dots of information, while experience is connecting dots of information.

Let us explore this statement in detail. Knowledge is simply the accumulation of information. We collect bits of information from various sources such as books, the internet, or conversations with others. We store this information in our minds and use it to understand the world. However, knowledge alone is not sufficient. We must also know how to use this information effectively. This is where experience comes in.

Experience involves using the information we have collected to connect the dots. We use our knowledge to make connections between different ideas, concepts, and experiences. This is what allows us to develop a deeper understanding of the world around us. Experience allows us to see how different pieces of information fit together to form a larger picture.

For example, consider a student who is studying history. They may have read various books and articles about a particular event. They may have memorized the names of key figures, dates, and locations. However, if they have no experience connecting these dots of information, they may not fully understand the significance of the event. It is only when they can connect the dots and see how all the pieces fit together that they can truly appreciate the event’s historical importance.

Similarly, consider a chef who is learning to cook. They may have collected information about various ingredients, cooking techniques, and recipes. However, it is only when they have the experience of cooking and experimenting with different ingredients and techniques that they can truly master the art of cooking.

In addition to connecting dots of information, experience is also built upon learning from failures. Failure is an essential part of the learning process and allows us to gain new insights and perspectives. When we make mistakes, we are forced to reflect on our actions, identify what went wrong, and make changes to improve in the future.

Experience teaches us to view mistakes as opportunities for growth, rather than as failures. When we have experience, we know that mistakes are a natural part of the learning process, and we use them as stepping stones to achieve success. We can also recognize when we have made mistakes and take steps to correct them.

On the other hand, knowledge is recognizing mistakes. When we have knowledge, we have a greater understanding of the concepts and information we have collected. We can identify when something doesn’t make sense, or when there is a flaw in our reasoning. Knowledge helps us to recognize our mistakes, and it also allows us to understand the implications of those mistakes.

It is important to note that knowledge alone is not enough to succeed. We must also have experience to truly learn from our mistakes and grow as individuals. When we combine knowledge and experience, we can achieve a deeper understanding of the world around us.

For example, consider an entrepreneur who is starting a new business. They may have knowledge of the industry, the market, and the competition, but without experience, they may struggle to achieve success. Through experience, they can learn from their mistakes, adapt their strategies, and ultimately achieve their goals.

The story of David and Goliath from the Bible is a classic example of how experience can trump knowledge. Goliath, a giant warrior, was considered invincible due to his immense size and strength. In contrast, David, a young shepherd boy, had no formal military training. However, David had something that Goliath did not – hard-won practical experience.

David had spent years fighting off wild animals to protect his sheep. He had honed his skills with a sling and stone, and he knew how to use them to his advantage. When he faced Goliath on the battlefield, David used his experience to his advantage. He knew that he couldn’t defeat Goliath head-on, so he used his knowledge of his own strengths and Goliath’s weaknesses to outsmart him.

Goliath, on the other hand, had knowledge – knowledge of his own strength and prowess in combat. He had years of formal training, but he relied too heavily on this knowledge. He believed that his size and strength alone would be enough to defeat any opponent.

In the end, David’s experience trumped Goliath’s knowledge. He used his experience with a sling and stone to strike Goliath in the head, ultimately defeating him. This story teaches us that knowledge is important, but experience is what gives us wisdom and the ability to use that knowledge effectively.

In our own lives, we can apply this lesson by recognizing the importance of both knowledge and experience. Knowledge gives us the foundation we need to understand the world around us, while experience helps us to apply that knowledge in meaningful ways. By combining knowledge and experience, we can achieve greater success and wisdom, just like David did on the battlefield.

In conclusion, knowledge and experience are both essential components of learning and growth. Knowledge helps us to recognize mistakes, while experience allows us to learn from those mistakes and grow as individuals. When we combine knowledge and experience, we can achieve a deeper understanding of the world around us and achieve success in our personal and professional lives. So, don’t be afraid to make mistakes – embrace them as opportunities for growth and learning!

Emerging Technologies SIG series – How can quantum computing be useful for financial companies?

To provide additional information related to the Emerging Technologies SIG of the FINOS/Linux Foundation, I am starting a miniseries of posts that go deeper into some of the technologies mentioned there. If you are interested in participating, please add your remarks at the Special Interest Group – Emerging Technologies item on the FINOS project board.


Quantum computing is a relatively new technology that has the potential to revolutionize various industries. One such industry that stands to benefit greatly from quantum computing is finance. In this post, we will discuss the value of quantum computing for financial companies.

Firstly, let’s define what quantum computing is. Unlike classical computing, which uses binary digits (bits) to represent information, quantum computing uses quantum bits (qubits). A qubit can exist in a superposition of states, which allows quantum computers to perform certain classes of calculations exponentially faster than classical computers.
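To make "superposition" a little more concrete, here is a minimal pure-Python sketch (not from the original post, and no substitute for a real quantum SDK) of a single qubit as a pair of complex amplitudes, with the Hadamard gate that puts a basis state into an equal superposition:

```python
import math

# A qubit state is a pair of complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1; measurement yields 0 with probability |alpha|^2.
def hadamard(state):
    """Apply the Hadamard gate, which maps a basis state to an equal superposition."""
    alpha, beta = state
    s = 1 / math.sqrt(2)
    return (s * (alpha + beta), s * (alpha - beta))

def probabilities(state):
    """Probabilities of measuring 0 and 1."""
    alpha, beta = state
    return (abs(alpha) ** 2, abs(beta) ** 2)

zero = (1 + 0j, 0 + 0j)       # the |0> basis state
plus = hadamard(zero)         # equal superposition: both outcomes have probability 0.5
```

Applying `hadamard` a second time returns the state to |0> exactly – a small illustration of the reversible, interference-driven arithmetic that quantum algorithms exploit at scale.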

Now let’s explore how financial companies can benefit from quantum computing. Financial companies deal with large amounts of data on a daily basis, and quantum computing can help them analyze this data more efficiently. For example, quantum computing can be used for portfolio optimization, risk management, fraud detection, and pricing derivatives. These tasks require complex calculations that are time-consuming for classical computers, but quantum computers can perform them much faster.

Another area where quantum computing can be valuable for financial companies is in cryptography. Quantum computing can potentially break many of the current encryption methods used to secure financial data. However, quantum computing can also be used to create new encryption methods that are more secure. This means that financial companies can use quantum computing to protect their sensitive data from hackers.
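To see why factoring-based cryptography is exposed, here is a toy RSA sketch (textbook-sized primes, purely illustrative – never use numbers this small in practice). The entire secret is recoverable by whoever can factor n, which is exactly the task Shor's algorithm on a sufficiently large quantum computer would make tractable:

```python
# Toy RSA: security rests entirely on the difficulty of factoring n = p * q.
p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)    # Euler's totient, kept secret
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e (Python 3.8+)

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

ciphertext = encrypt(42)
assert decrypt(ciphertext) == 42
# Anyone who factors n recovers phi, and from it the private key d.
```

This is why "harvest now, decrypt later" is a real concern: data encrypted today with factoring-based schemes could be read once large quantum computers exist, motivating the post-quantum encryption methods mentioned above.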

Moreover, quantum computing can help financial companies develop and test new financial models. Traditional models often rely on simplifying assumptions that may not accurately reflect real-world scenarios. Quantum computing can enable financial companies to model complex systems more accurately, leading to better decision-making.

Finally, quantum computing can help financial companies improve their customer service. For example, quantum computing can be used to analyze customer data and provide personalized recommendations based on the customer’s financial goals and risk appetite.

Unpinking time – while quantum computing has enormous potential, there are still several limitations that need to be addressed before it can be widely adopted by financial companies. Here are some of the current limitations and how researchers are working to mitigate them:

  • Limited number of qubits: Currently, quantum computers have a limited number of qubits, which restricts the complexity of problems that can be solved. However, researchers are working to increase the number of qubits and improve their stability and coherence. This will enable quantum computers to perform more complex calculations.
  • Error correction: Quantum computers are prone to errors due to environmental factors such as temperature fluctuations and electromagnetic interference. Error correction is a significant challenge in quantum computing, but researchers are developing new techniques to mitigate errors and improve the reliability of quantum computers.
  • Quantum algorithms: There is a lack of quantum algorithms for financial applications. Researchers are working on developing new quantum algorithms that can solve specific financial problems. These algorithms will enable financial companies to take advantage of the computational power of quantum computers.
  • Cost: Quantum computers are expensive to build and maintain. Currently, only a few large companies and research institutions have access to quantum computers. However, the cost is expected to decrease as the technology matures and becomes more widespread.
  • Integration with classical computing: Quantum computers are not yet fully compatible with classical computing, which is essential for financial companies to use quantum computing effectively. Researchers are developing hybrid classical-quantum computing systems to enable seamless integration between the two computing paradigms.
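The error-correction point above is easiest to grasp through its classical ancestor, the 3-bit repetition code. Real quantum codes (such as the surface code) are far more involved because qubits cannot simply be copied, but the core idea is the same: encode redundantly, then correct by majority. A sketch:

```python
# Classical 3-bit repetition code: an analogy for quantum error correction.
def encode(bit):
    """Encode one logical bit as three physical bits."""
    return [bit, bit, bit]

def noisy_flip(codeword, index):
    """Flip one bit, simulating an environmental error."""
    flipped = list(codeword)
    flipped[index] ^= 1
    return flipped

def decode(codeword):
    """Majority vote corrects any single bit-flip."""
    return 1 if sum(codeword) >= 2 else 0

corrupted = noisy_flip(encode(1), 0)   # one physical bit corrupted
recovered = decode(corrupted)          # the logical bit survives
```

The cost is overhead: three physical bits per logical bit here, and thousands of physical qubits per logical qubit in projected fault-tolerant quantum machines – which is one reason the qubit counts above matter so much.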

In a nutshell, while there are still several limitations to quantum computing, researchers are working hard to mitigate these limitations. As the technology continues to develop, we can expect to see more financial companies investing in quantum computing to gain a competitive edge in the industry.

In conclusion, quantum computing has the potential to bring significant value to financial companies. By using quantum computing, financial companies can process large amounts of data more efficiently, improve their security measures, develop more accurate financial models, and provide better customer service. As the technology continues to develop, we can expect to see more financial companies investing in quantum computing to stay competitive in an increasingly digital world.

Emerging Technologies SIG series – What are the benefits of IoT for financial institutions?

To provide additional information related to the Emerging Technologies SIG of the FINOS/Linux Foundation, I am starting a miniseries of posts that go deeper into some of the technologies mentioned there. If you are interested in participating, please add your remarks at the Special Interest Group – Emerging Technologies item on the FINOS project board.


The Internet of Things (IoT) is a term used to describe the interconnected network of devices, sensors, and machines that can communicate with each other through the internet. It has the potential to revolutionize many industries, including the financial sector.

In the financial industry, IoT technology can be used to improve operational efficiency, enhance customer experiences, and provide new revenue streams. Here are some examples of how IoT can be used at financial institutions:

  • Smart ATMs: ATMs can be equipped with IoT sensors to monitor their health, detect malfunctions, and trigger maintenance alerts. Smart ATMs can also use location-based data to offer personalized services and promotions to customers.
  • Payment wearables: IoT-enabled wearables such as smartwatches and fitness trackers can be used for contactless payments. Payment data can be transmitted securely to the financial institution over encrypted channels, and some implementations also explore blockchain-based settlement.
  • Fraud detection: IoT sensors can be installed in bank branches and ATMs to detect fraudulent activities in real-time. Sensors can also be used to monitor customer behavior and detect unusual transactions.
  • Asset tracking: Financial institutions can use IoT sensors to track the location and condition of assets such as vehicles, equipment, and inventory. This can help optimize asset usage and reduce the risk of theft or loss.
  • Insurance telematics: IoT sensors can be used to collect data on driving behavior, such as speed, acceleration, and braking. This data can be used by insurance companies to offer personalized policies and rewards for safe driving.
  • Predictive maintenance: IoT sensors can be used to monitor the health of financial equipment such as servers and ATMs. Predictive maintenance can help identify potential issues before they become major problems, reducing downtime and repair costs.
  • Personalized banking: IoT data can be used to offer personalized banking experiences based on individual customer needs and preferences. For example, banks can use data from wearable devices to offer personalized financial advice and investment recommendations.

IoT technology can also be used to improve security and compliance in the financial industry. For example, IoT sensors can be used to monitor physical security, such as access control and surveillance. IoT technology can also be used to monitor compliance with regulations such as anti-money laundering (AML) and know-your-customer (KYC) rules.

Getting off the pink clouds: while IoT technology has many benefits, there are also some limitations that need to be considered. Here are some of the current limitations of IoT and how they can be mitigated:

  • Security: IoT devices can be vulnerable to cyber-attacks, which can compromise sensitive data and cause significant damage. To mitigate this risk, it is essential to implement strong security measures such as encryption, authentication, and access controls. Regular software updates and patching can also help keep IoT devices secure.
  • Interoperability: IoT devices from different vendors may use different communication protocols, which can make it challenging to integrate them into a cohesive system. To mitigate this challenge, standards such as the Open Connectivity Foundation (OCF) and the Industrial Internet Consortium (IIC) have been established to promote interoperability between different IoT devices.
  • Data privacy: IoT devices collect a significant amount of data, which can raise privacy concerns. To mitigate this risk, it is essential to implement robust data privacy policies and practices, such as data encryption, anonymization, and secure storage.
  • Complexity: IoT systems can be complex to deploy and manage, which can make it challenging for organizations to derive value from them. To mitigate this challenge, it is essential to work with experienced vendors and consultants who can help design, deploy, and manage IoT systems.
  • Scalability: IoT systems can generate vast amounts of data, which can strain network bandwidth and storage capacity. To mitigate this challenge, it is essential to implement scalable architectures that can handle large volumes of data. Cloud-based solutions such as AWS IoT and Azure IoT can help organizations scale their IoT systems as needed.

So, while IoT technology has many benefits, it also presents some challenges that need to be addressed. By implementing robust security measures, promoting interoperability, protecting data privacy, simplifying system complexity, and implementing scalable architectures, organizations can mitigate these challenges and realize the full potential of IoT technology. One of the most pressing items is interoperability, as many of the other challenges (such as security, scalability, and privacy) can be driven from it.

We are lucky: There are several open standards for IoT that are designed to promote interoperability between different devices and systems. Open standards are important because they allow devices and systems from different vendors to communicate with each other seamlessly, which can help avoid vendor lock-in and promote innovation. Here are some of the open standards for IoT:

  • MQTT (Message Queuing Telemetry Transport): MQTT is a lightweight messaging protocol that is designed for use in IoT applications. It is used for sending messages between devices, and it is designed to be efficient and reliable.
  • CoAP (Constrained Application Protocol): CoAP is a protocol that is used for low-power and low-bandwidth IoT devices. It is designed to be lightweight and easy to implement, making it ideal for use in resource-constrained devices.
  • OCF (Open Connectivity Foundation): OCF is a consortium of companies that are working to create open standards for IoT interoperability. OCF’s goal is to create a common IoT framework that can be used by all IoT devices and systems.
  • ZigBee: ZigBee is a wireless standard that is designed for low-power, low-bandwidth IoT devices. It is used for creating mesh networks, and it is designed to be secure and reliable.
  • LoRaWAN (Long Range Wide Area Network): LoRaWAN is a wireless protocol that is designed for long-range IoT devices. It is used for creating wide-area networks, and it is designed to be low-power and low-cost.
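As a small taste of how MQTT achieves loose coupling between devices, here is a sketch of topic-filter matching per the wildcard rules in the MQTT specification ('+' matches exactly one topic level, '#' matches all remaining levels). The function and the bank-flavored topic names are illustrative, not taken from any particular client library:

```python
def topic_matches(topic_filter, topic):
    """Return True if an MQTT topic matches a subscription filter,
    following the '+' (single-level) and '#' (multi-level) wildcard rules."""
    f_levels = topic_filter.split("/")
    t_levels = topic.split("/")
    for i, f in enumerate(f_levels):
        if f == "#":
            return True                     # multi-level wildcard: matches the rest
        if i >= len(t_levels):
            return False                    # filter is longer than the topic
        if f != "+" and f != t_levels[i]:
            return False                    # literal level must match exactly
    return len(f_levels) == len(t_levels)   # no trailing unmatched topic levels

topic_matches("bank/atm/+/status", "bank/atm/42/status")   # matches: '+' covers one level
topic_matches("bank/#", "bank/branch/3/sensors/door")      # matches: '#' covers the rest
topic_matches("bank/atm/+/status", "bank/atm/42/cash")     # does not match
```

Subscribing with wildcards like these is what lets a monitoring service receive telemetry from every ATM without either side knowing about the other in advance – the decoupling that makes the protocol attractive for fleets of devices.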

By using open standards like these, IoT devices and systems can communicate with each other seamlessly, regardless of the vendor or technology used. This can help promote innovation and avoid vendor lock-in, which can ultimately benefit both organizations and consumers.

In conclusion, IoT technology has the potential to transform the financial industry by improving operational efficiency, enhancing customer experiences, and providing new revenue streams. Financial institutions that embrace IoT technology will be better equipped to meet the needs of their customers and stay ahead of the competition, and it can even help create additional value and sustainable businesses – do check out the journey of HB Antwerp and Botswana.

Emerging Technologies SIG series – Spatial Computing and Creative Designs

To provide additional information related to the Emerging Technologies SIG of the FINOS/Linux Foundation, I am starting a miniseries of posts that go deeper into some of the technologies mentioned there. If you are interested in participating, please add your remarks at the Special Interest Group – Emerging Technologies item on the FINOS project board.


The rise of spatial computing and the metaverse has opened up a new realm of possibilities for creative design. Spatial computing refers to the use of digital technologies to create experiences that integrate the physical and digital worlds, while the metaverse is a term used to describe a collective virtual shared space. In this post, we will explore how creative design is different in spatial computing and the metaverse compared to traditional creative design.

Firstly, spatial computing and the metaverse require a different approach to design. In traditional design, the focus is often on creating a visual representation of a product or experience. However, in spatial computing and the metaverse, the design process must consider the interaction between the user and the environment. This means that designers must think about how the user will move through the space, how they will interact with objects, and how they will engage with other users.

Secondly, spatial computing and the metaverse offer new opportunities for immersive experiences. Creative design in these contexts can involve the use of augmented reality, virtual reality, and mixed reality technologies to create interactive and engaging experiences that go beyond what is possible with traditional design. For example, a virtual art installation in the metaverse could allow users to explore and interact with the artwork in ways that would not be possible in the physical world.

Thirdly, spatial computing and the metaverse allow for a more collaborative and participatory approach to design. In traditional design, the designer creates a product or experience for the user, but in spatial computing and the metaverse, the user is an active participant in the design process. This means that designers must be open to feedback and willing to make changes based on user input. It also means that users can contribute to the design process by creating their own content and experiences within the metaverse.

Finally, spatial computing and the metaverse require a different set of technical skills and tools for creative design. In traditional design, designers may use tools like Adobe Photoshop or Illustrator to create visual designs, but in spatial computing and the metaverse, designers may need to use software like Unity or Unreal Engine to create interactive environments. Designers must also have a strong understanding of 3D modeling, animation, and game design principles.

In a quick summary, creative design in spatial computing and the metaverse offers new opportunities and challenges for designers. It requires a different approach to design, a focus on immersive experiences, a more collaborative process, and a different set of technical skills and tools. As these technologies continue to evolve, creative design in spatial computing and the metaverse will become increasingly important in creating engaging and memorable experiences for users. Still, as of today, spatial computing remains an emerging field that combines digital technology with the physical environment to create new interactive and immersive experiences. So, what are some unique examples of creative design in spatial computing?

  • Virtual Real Estate: One of the most unique applications of spatial computing is the creation of virtual real estate. This involves creating digital spaces that can be bought and sold, just like physical real estate. These spaces can be used for a variety of purposes, such as virtual art galleries, music venues, or even digital storefronts for online businesses.
  • Augmented Reality Advertising: Augmented reality (AR) technology allows designers to overlay digital content onto the physical environment, creating an interactive and immersive experience. AR advertising can be used to create engaging and memorable experiences for customers. For example, a clothing retailer could create an AR app that allows customers to see how a particular outfit would look on them before making a purchase.
  • Virtual Museums and Galleries: Spatial computing allows designers to create immersive virtual museums and galleries that can be accessed from anywhere in the world. This not only makes art more accessible to a wider audience but also allows for new forms of engagement and interaction with the artwork. For example, virtual museums could allow visitors to interact with exhibits, providing additional information, or even allowing them to create their own artwork within the digital space.
  • Spatial Audio: Spatial audio is a technology that allows designers to create soundscapes that are tailored to the physical environment. This can be used to create immersive audio experiences that match the visual environment, creating a more complete sensory experience. For example, in a virtual reality game set in a forest, the spatial audio could be designed to make the player feel like they are actually surrounded by the sounds of nature.
  • Mixed Reality Performance: Mixed reality combines elements of virtual and physical reality, creating a seamless and interactive experience. In mixed reality performance, for example, performers can interact with virtual objects and environments in real time. This allows for new forms of storytelling and audience engagement, creating a more immersive and interactive experience for the audience.
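
The spatial audio idea above can be made concrete with a small sketch. This is not taken from any particular engine; it is a minimal, assumed model of two common ingredients, inverse-distance attenuation and constant-power left/right panning, for a single sound source on a 2D plane:

```python
import math

def spatial_gains(listener, source, max_distance=50.0):
    """Return (left_gain, right_gain) for a source relative to a listener.

    listener and source are (x, y) positions; the listener faces +y.
    Gains fall off with the inverse of distance and are panned by the
    horizontal angle to the source (simple constant-power pan law).
    """
    dx = source[0] - listener[0]
    dy = source[1] - listener[1]
    dist = math.hypot(dx, dy)
    # Inverse-distance attenuation, clamped so very near sources don't blow up.
    attenuation = 1.0 / max(dist, 1.0)
    if dist > max_distance:
        attenuation = 0.0
    # Pan angle: -pi/2 (hard left) .. +pi/2 (hard right).
    angle = math.atan2(dx, dy)
    pan = max(-1.0, min(1.0, angle / (math.pi / 2)))
    # Constant-power panning keeps perceived loudness steady across the arc.
    left = attenuation * math.cos((pan + 1.0) * math.pi / 4)
    right = attenuation * math.sin((pan + 1.0) * math.pi / 4)
    return left, right
```

A real spatial audio pipeline layers much more on top (head-related transfer functions, reverb matched to the room geometry), but the core of "sound that matches the environment" is exactly this kind of per-source gain computation.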

In conclusion, spatial computing provides designers with a new and exciting canvas for creative design. From virtual real estate to mixed reality performance, the possibilities for innovation and creativity are endless. As this technology continues to evolve, we can expect to see even more unique examples of creative design in spatial computing.

Emerging Technologies SIG series – What is cognitive AI (and how is it different from ChatGPT and co)

To provide additional information related to the Emerging Technologies SIG of the FINOS/Linux Foundation, I am starting a miniseries of posts going deeper into some of the technologies mentioned there. If you are interested in participating, please add your remarks at the Special Interest Group – Emerging Technologies item on the FINOS project board.


Cognitive AI and ChatGPT are two different types of artificial intelligence (AI) that operate in distinct ways. While ChatGPT is a large language model designed to generate human-like responses to textual prompts, cognitive AI is a more general term that refers to AI systems that are designed to emulate human cognitive functions such as perception, reasoning, and decision-making.

Cognitive AI is a type of AI that is modeled after the way that the human brain processes information. These systems are designed to recognize patterns, make predictions, and learn from experience, much like humans do. Cognitive AI systems can be used in a variety of applications, including speech and image recognition, natural language processing, and decision support.

One of the key differences between cognitive AI and ChatGPT is the scope of their abilities. While ChatGPT is primarily focused on generating human-like responses to textual prompts, cognitive AI systems are designed to be more flexible and adaptable, capable of handling a wider range of tasks.

Cognitive AI systems are typically more complex than ChatGPT, as they require advanced algorithms and data structures to support their functionality. They also typically require more data to train, as they need to learn from a wider range of inputs and experiences.

Another key difference between cognitive AI and ChatGPT is their level of explainability. ChatGPT generates responses based on statistical patterns found in large datasets, which can make it difficult to understand how it arrives at a particular response. Cognitive AI, on the other hand, is designed to be more transparent and explainable, with clear pathways for understanding how it arrives at its conclusions.
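To illustrate what "clear pathways for understanding how it arrives at its conclusions" can mean in practice, here is a deliberately tiny, hand-written decision procedure (the loan scenario and its field names are invented for illustration) that records every step of its reasoning alongside its answer:

```python
def explainable_loan_decision(applicant):
    """Toy decision procedure that records its reasoning as it goes.

    Unlike a black-box statistical model, every step on the path to the
    conclusion is captured and can be shown to a human reviewer.
    applicant: dict with 'income', 'debt', 'years_employed' (made-up fields).
    """
    trace = []
    ratio = applicant["debt"] / applicant["income"]
    trace.append(f"debt-to-income ratio = {ratio:.2f}")
    if ratio > 0.5:
        trace.append("ratio above 0.5 threshold -> decline")
        return "decline", trace
    trace.append("ratio within 0.5 threshold")
    if applicant["years_employed"] < 1:
        trace.append("less than 1 year employed -> refer to human")
        return "refer", trace
    trace.append("employment history acceptable -> approve")
    return "approve", trace
```

Real explainable-AI work is about recovering traces like this from far more complex models, but the goal is the same: a conclusion plus the chain of reasoning that produced it.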

In terms of their applications, cognitive AI has a broader range of potential uses than ChatGPT. For example, cognitive AI can be used in healthcare to analyze patient data and make diagnoses, in finance to analyze market trends and make investment decisions, and in manufacturing to optimize production processes. While ChatGPT and cognitive AI are both forms of artificial intelligence, they operate in distinct ways and have different capabilities.

ChatGPT is primarily focused on generating human-like responses to textual prompts, while cognitive AI is designed to emulate human cognitive functions such as perception, reasoning, and decision-making. Cognitive AI is more complex and adaptable than ChatGPT, with a broader range of potential applications; it also requires more data to train, and is typically more transparent and explainable.

There are a number of examples of cognitive AI systems that are currently in use or in development. Some examples include:

  • IBM Watson: IBM Watson is a cognitive AI system that uses natural language processing and machine learning algorithms to understand and analyze large amounts of unstructured data, such as medical records, research papers, and social media posts.
  • Google DeepMind: Google DeepMind builds cognitive AI systems that use deep learning algorithms to analyze and interpret complex data, such as images and videos. Its systems have been used in a number of applications, including healthcare, finance, and gaming.
  • Microsoft Cortana: Microsoft Cortana is a cognitive AI system that uses natural language processing and machine learning algorithms to understand and respond to user queries. It is integrated into a number of Microsoft products, including Windows and Xbox.
  • Amazon Alexa: Amazon Alexa is a cognitive AI system that uses natural language processing and machine learning algorithms to understand and respond to user requests. It is integrated into a number of Amazon products, including the Echo and Fire TV.
  • Tesla Autopilot: Tesla Autopilot is a cognitive AI system that uses machine learning algorithms to analyze data from sensors and cameras in order to navigate and control a vehicle. It is designed to assist drivers and improve safety on the road.

These are just a few examples of the many cognitive AI systems that are currently in use or in development. As the field of AI continues to evolve, we can expect to see even more sophisticated and powerful cognitive AI systems emerge in a wide range of industries and applications.

Cognitive AI is a rapidly evolving field, with new developments and advancements being made all the time. Here are some of the ways in which cognitive AI is expected to evolve in the near future:

  • Increased focus on explainability: As cognitive AI becomes more widely used, there is a growing demand for systems that are transparent and explainable. This means that AI systems will need to be designed in a way that allows humans to understand how they arrive at their conclusions and decisions.
  • Improved natural language processing: One of the key challenges in cognitive AI is developing systems that can understand and generate human language with a high degree of accuracy. As natural language processing technology continues to improve, we can expect to see more sophisticated and natural interactions between humans and cognitive AI systems.
  • Greater integration with human workers: While some people have expressed concerns about AI replacing human workers, many experts believe that cognitive AI will actually work in tandem with human workers, augmenting their abilities and providing new opportunities for collaboration.
  • Advancements in machine learning: Machine learning is a key component of cognitive AI, and ongoing research is expected to lead to new algorithms and approaches that improve the accuracy and effectiveness of these systems.
  • Applications in new industries and contexts: As cognitive AI continues to evolve, we can expect to see it being used in new industries and contexts, such as education, entertainment, and environmental monitoring.

Overall, the future of cognitive AI looks very promising, with ongoing advancements and developments opening up new possibilities for how we can use these systems to improve our lives and solve complex problems. However, it will be important to ensure that these systems are developed and deployed in a responsible and ethical manner, with careful consideration given to their potential impact on society and the environment. Coming down from the pink clouds again: while cognitive AI has made significant progress in recent years, there are still several limitations that need to be addressed in order for these systems to reach their full potential. Here are some of the current limitations of cognitive AI and the plans to overcome them:

  • Lack of transparency and interpretability: One of the biggest challenges facing cognitive AI is the lack of transparency and interpretability in how these systems arrive at their decisions. This makes it difficult for humans to trust and understand the results produced by AI systems. Researchers are working on developing techniques to increase transparency and interpretability, such as creating visualizations of the decision-making process or providing clear explanations for the reasoning behind a decision.
  • Data bias: Cognitive AI systems are only as good as the data they are trained on. If the data is biased or incomplete, the AI system will also be biased and incomplete. Researchers are working on developing techniques to address bias in data, such as collecting more diverse data and using algorithms that can detect and correct for bias.
  • Limited context awareness: Cognitive AI systems are currently limited in their ability to understand and interpret contextual information, such as social cues or situational factors. Researchers are working on developing techniques to improve context awareness, such as using deep learning algorithms to analyze context-rich data sources.
  • Computational limitations: Cognitive AI systems require a significant amount of computational power and storage capacity in order to function effectively. Researchers are working on developing more efficient algorithms and hardware to address these computational limitations.
  • Ethical considerations: The use of cognitive AI raises a number of ethical considerations, such as privacy, security, and bias. Researchers and policymakers are working on developing ethical guidelines and frameworks to ensure that these systems are developed and deployed in a responsible and ethical manner.
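
The data-bias point above can be made concrete with one of the simplest fairness checks: compare the positive-outcome rate across groups in the training data (often called a demographic parity gap). This sketch is illustrative, not a complete bias audit:

```python
def demographic_parity_gap(records, group_key, outcome_key):
    """Largest difference in positive-outcome rate between any two groups.

    records: list of dicts. A gap near 0 suggests similar outcome rates
    across groups; a large gap flags the dataset (or a model's output)
    for closer review. This is one simple check among many, not proof
    of fairness or bias on its own.
    """
    totals, positives = {}, {}
    for rec in records:
        g = rec[group_key]
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + int(bool(rec[outcome_key]))
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates
```

Running this over both the training data and a model's predictions is a cheap first step toward the bias detection and correction techniques the research community is developing.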

In conclusion, while there are still some limitations to cognitive AI, researchers and developers are actively working on developing new techniques and technologies to address these challenges. As cognitive AI continues to evolve, we can expect to see these systems become more sophisticated, accurate, and useful in a wide range of applications.

Announcing the 2023 Financial Services Autism Hackathon

Mission Statement

Bring developers from the Financial Services industry together to demonstrate how innovative Microsoft technologies can transform autism treatment by tackling real world use cases provided by families and creating lasting open-source projects for the community.

What does the Hackathon entail? It is:

  • 2 days
  • 70+ developers
  • 5 use cases
    • And infinite possibilities to help people using your technology skills!

Background

The first Financial Services Autism Hackathon was held in 2018. It was founded by Leo Junquera, working at Microsoft at the time, and Peter Smulovics from Morgan Stanley, as a grassroots effort to combine the unique value proposition of Microsoft (technology innovation) with the technology talent which exists in Financial Services, and to direct it towards an industry in dire need of technology innovation.

Dates and Location

The event takes place April 19-20. It is a hybrid event, with the in-person venue at the EY Wavespace in New York City.

Who should sign up?

A combination of roles, filled by participants like yourself, makes the hackathon successful each year.

  • Developers: Work on the digital transformation and demonstrate how new technologies can be applied to the issues facing the autism community.
  • UX Designers: People with autism see the world differently – help them see the world! Design interfaces and interaction patterns, in both the digital and the physical world, that enable them to overcome differences and to participate in the digital transformation of the autism community!
  • Business Analysts: Translate the use cases gathered from families with autistic children, behaviorists in the field, experts in the community, and service providers; enable better interaction between the parties; provide hallway test cases; and help find the voice that makes the results presentable.

Registration information

If you are among the companies or individuals already pre-registered by domain/name, just visit the registration URL and log in with an existing account or apply for a new one. If you are not, please do drop a note to Michelle.Ng@ey.com to add your company’s domain to the list 🙂 You can sign up as an individual or as a group. You will be aligned with a use case and a team, and provided with training resources if you need to prepare for the event.

How should I prepare?

We will provide training sessions so that you are armed with the skills needed for success! The training will cover key technologies (Cloud, GitHub, IoT, Machine Learning, AI, etc.). We will also arrange Autism Awareness sessions relating to each of the use cases.

Use Cases

Use Case 1: IOT Data Collection

Dynamic programs require adaptations to data collection that existing systems do not support. In the end, teams frequently fall back to pen, paper, and mechanical devices such as hand tally counters. Can we take a different approach to this problem using an IoT pattern? Can we find a way to simplify data collection using simple mechanisms, and use the cloud to store the data and classify it after the fact using modern tools and technologies? Can there be a repository of patterns to match the individual needs of learners and care providers?
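
As a thought-starter for this use case, the "simple mechanism now, classify in the cloud later" pattern could look something like the sketch below: each press of a tally device emits a minimal JSON event, and aggregation happens after the fact server-side. Every field name here is illustrative, not part of any agreed design:

```python
import json
from collections import Counter
from datetime import datetime, timezone

def tally_event(device_id, behavior, count=1):
    """One event from a simple counting device, as JSON ready to publish
    to a cloud ingestion queue (field names are illustrative)."""
    return json.dumps({
        "device": device_id,
        "behavior": behavior,
        "count": count,
        "ts": datetime.now(timezone.utc).isoformat(),
    })

def summarize(events):
    """After-the-fact aggregation in the cloud: totals per behavior."""
    totals = Counter()
    for raw in events:
        event = json.loads(raw)
        totals[event["behavior"]] += event["count"]
    return dict(totals)
```

The point of the shape is that the device stays as dumb as a hand tally counter; all the flexibility (relabeling, reclassifying, matching patterns to individual learners) lives in the cloud where it can change without touching hardware.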

Use Case 2: Transform Learner Analysis with Video and AI

Data collection is essential to successful outcomes for learners, but analysts are missing one of the most critical pieces of data when reviewing progress and creating programs: video of the learner. Social media companies know how important and compelling this information is, but it is not being used to help people with autism more effectively. This use case looks to develop a platform to capture this information, integrate it with data from other sources, and enrich the analysis with AI. This is an ambitious use case to transform the industry using a vital form of information which is currently missing.

Use Case 3: Using ML To Transform Learner Outcomes

There is a long-held belief that each individual case of autism is so unique that comparisons cannot be drawn to other cases, but marketing companies are able to use data to target the right message to us at the right time to sell their product. The therapies used to help learners with autism are data intensive. Can we use this data to transform outcomes for learners using modern technology and methods?

Use Case 4: HoloLens Skills Training

This use case will focus on the use of HoloLens for helping to teach job skills to people with autism, particularly those aging out of the supports provided in the school environment. Unemployment for those with autism is significant and the supports in the work environment are limited. The use case will use HoloLens augmented reality to help with basic job skills by presenting visual cues on how to complete tasks such as stocking shelves or preparing food. This scenario should also address the issue of moving from 1:1 support to 1:n as a student becomes an adult.

Use Case 5: Metaverse Social Practice

Developing social relationships with peers can be one of the biggest challenges for people with autism, despite a strong desire to form them. Social programs can be hard to practice due to limited opportunities, which can lead to disappointing outcomes in the few interactions that do occur, and to stress and frustration. The Metaverse offers an opportunity to practice social interactions in a fully immersive environment, and the ability to formulate successful programs which can be replicated at scale. This use case will lay the groundwork for a social interaction in the Metaverse between a subject and an AI-generated peer.

Emerging Technologies SIG series – What is Space Technology and how is it relevant outside of Space?

To provide additional information related to the Emerging Technologies SIG of the FINOS/Linux Foundation, I am starting a miniseries of posts going deeper into some of the technologies mentioned there. If you are interested in participating, please add your remarks at the Special Interest Group – Emerging Technologies item on the FINOS project board.


Space technology has advanced tremendously over the past few decades and has become an essential tool for many industries. From satellite communications to weather forecasting, space technology has significantly impacted many sectors outside of the space industry. In this article, we will explore the relevance of space technology in various industries and look at some examples and case studies.

Communication

One of the most significant contributions of space technology to industries outside of space is in the field of communication. Satellites have revolutionized communication by making it possible to connect people across the globe. The use of satellites has enabled the provision of internet services, global positioning systems (GPS), and satellite phones. In remote areas where traditional communication methods are not available, satellite communication has become a critical tool for many industries.

Example: The Iridium satellite constellation is an excellent example of how space technology has impacted communication. The constellation consists of 66 satellites that provide voice and data communication services globally. The system has been used in several industries, including aviation, maritime, and government.

Agriculture

Space technology has also made significant contributions to the agricultural industry. It has enabled farmers to monitor crop growth, soil moisture, and weather patterns, leading to improved crop yields and reduced costs.

Example: The European Space Agency’s (ESA) Sentinel-2 satellite constellation is a prime example of space technology’s impact on agriculture. The satellites provide high-resolution imagery of agricultural land, which enables farmers to monitor their crops’ growth and health. This information helps farmers make better decisions on when to plant, water, and harvest their crops.
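
Crop monitoring from imagery like Sentinel-2's typically relies on vegetation indices, the best known being NDVI: (NIR − Red) / (NIR + Red), computed per pixel from the near-infrared and red reflectance bands (B8 and B4 on Sentinel-2). A minimal sketch, with the stress threshold chosen purely for illustration:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel.

    nir and red are surface-reflectance values (e.g. Sentinel-2 bands
    B8 and B4). NDVI ranges from -1 to 1; healthy, dense vegetation
    typically scores high (roughly 0.6+), bare soil sits near 0.1-0.2,
    and water comes out negative.
    """
    if nir + red == 0:
        return 0.0
    return (nir - red) / (nir + red)

def flag_stressed_pixels(nir_band, red_band, threshold=0.3):
    """Return indices of pixels whose NDVI falls below a stress threshold."""
    return [i for i, (n, r) in enumerate(zip(nir_band, red_band))
            if ndvi(n, r) < threshold]
```

Flagged pixels are exactly the "which part of my field needs attention" signal that lets farmers target watering and inspection instead of treating a whole field uniformly.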

Disaster Management

Space technology has proven to be an essential tool in disaster management. Satellites provide crucial information that helps emergency responders make informed decisions during natural disasters such as hurricanes, earthquakes, and wildfires.

Example: During the 2010 earthquake in Haiti, satellite imagery was used to assess the damage and identify areas that required emergency assistance. This information was crucial in guiding the rescue and relief efforts.

Transportation

The use of space technology has also led to significant advancements in the transportation industry. Satellite data is used to monitor and manage traffic flow, improving road safety and reducing travel time.

Example: The global positioning system (GPS) is an excellent example of how space technology has impacted the transportation industry. GPS is used in navigation systems in cars, ships, and airplanes, making it easier for people to navigate and reach their destination.
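
Once GPS has produced a pair of coordinates, most navigation features reduce to computing distances between them. The standard tool is the haversine formula, shown here as a self-contained sketch on a spherical Earth:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two GPS coordinates.

    Uses the haversine formula on a spherical Earth (mean radius
    ~6371 km), which is accurate to within about 0.5% -- plenty for
    routing, ETA, and geofencing use cases.
    """
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))
```

For example, New York to London comes out around 5,570 km; production systems needing sub-metre accuracy would use an ellipsoidal model instead, but the haversine form is the workhorse of everyday GPS applications.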

Energy

Space technology has also contributed to the energy industry. Satellites provide data that helps energy companies locate new sources of energy and monitor their operations.

Example: The NASA Earth Observing System Data and Information System (EOSDIS) provides data that helps energy companies monitor their operations. The system provides data on land cover, vegetation, and weather patterns that help energy companies manage their operations effectively.

Space technology has made significant contributions to industries outside of the space industry. From communication to disaster management, the use of satellites has revolutionized various industries, improving efficiency, and reducing costs. The examples and case studies mentioned above show how space technology has made a positive impact on many industries. As technology continues to evolve, it will be interesting to see how space technology will continue to shape the future of these industries. So, you can probably understand why I wrote about digital twinning, 4D printing, even neural links before – but why space tech?

What about the financial industry?

Space technology has also impacted the finance industry in significant ways. Satellites are being used to provide critical data that helps financial institutions to make informed decisions on investments and risk management. The use of satellite technology has also led to the development of new financial products.

The use of satellite imagery and data has led to the development of crop insurance products for farmers. Insurance companies are using satellite imagery to assess crop yields and losses, which enables them to provide crop insurance products to farmers. This information helps farmers manage their risks and protect their investments.

Another example is the use of satellite data to track economic activity. Satellites can provide information on shipping, transportation, and manufacturing activities, which is useful in making investment decisions. Hedge funds and asset managers are using satellite data to gain insights into economic activity, giving them an edge in the market.

Space technology is also used in the banking sector. Banks are using satellite imagery and data to assess the risk of lending to certain areas. The data can provide insights into natural disasters, land use, and infrastructure, which is useful in assessing the risk of lending to a particular area.

In conclusion, space technology has revolutionized the finance industry, providing critical data that is useful in making investment decisions and managing risks. The use of satellite data is expected to increase as the technology continues to evolve, leading to the development of new financial products and services. The finance industry is just one example of how space technology is impacting various industries, and it is exciting to see how it will shape the future.

What does the future hold?

And again, not everything is in pink clouds 🙂 While space technology has made significant contributions to various industries, there are still some limitations and shortcomings that need to be addressed. Here are some of the current limitations and how they are being mitigated:

Cost: One of the main limitations of space technology is the cost associated with building, launching, and maintaining satellites. The cost of building and launching a satellite can be in the range of hundreds of millions of dollars, making it difficult for some industries to afford.

Mitigation: One way to mitigate the cost is through partnerships and collaborations. Several companies are partnering to share the cost of building and launching satellites. There is also a trend towards smaller, cheaper satellites, known as CubeSats, which are easier to build and launch. The use of reusable rockets, such as those developed by SpaceX, can also reduce the cost of launching satellites.

Technology Limitations: Space technology is continually evolving, and there are still some technological limitations that need to be addressed. For example, the current satellite communication technology has limitations in terms of bandwidth and speed.

Mitigation: The development of new technologies, such as quantum communication, could overcome some of these limitations. Quantum communication is a secure and fast method of communication that uses the principles of quantum mechanics.

Orbital Debris: The amount of space debris in orbit is increasing, which poses a threat to the operation of satellites and spacecraft. Orbital debris can collide with satellites, causing damage and potentially leading to the loss of the satellite.

Mitigation: Efforts are underway to mitigate the amount of space debris. Satellites are being designed with built-in propulsion systems that can help them avoid collisions. There are also initiatives to remove space debris from orbit, such as the European Space Agency’s Clean Space Initiative.

Data Security: The data transmitted through satellites is vulnerable to interception and hacking, which can pose a security threat to industries that rely on satellite technology.

Mitigation: The use of encryption and other security measures can mitigate the risk of data interception and hacking. There are also efforts to develop secure satellite communication systems, such as quantum communication, which are highly resistant to hacking.
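
One concrete security measure in that family is message authentication: attaching a keyed hash (HMAC) to each telemetry frame so a ground station can detect tampering in transit. A minimal sketch using Python's standard library (note this provides integrity and authenticity only; confidentiality would additionally require encryption such as AES-GCM):

```python
import hmac
import hashlib

def sign_telemetry(key: bytes, payload: bytes) -> bytes:
    """Prefix a telemetry payload with an HMAC-SHA256 tag so the receiver
    can verify the frame is authentic and unmodified in transit."""
    return hmac.new(key, payload, hashlib.sha256).digest() + payload

def verify_telemetry(key: bytes, frame: bytes):
    """Return the payload if the tag checks out, else None."""
    tag, payload = frame[:32], frame[32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    # compare_digest runs in constant time, resisting timing attacks.
    return payload if hmac.compare_digest(tag, expected) else None
```

Real satellite links layer key management and encryption on top, but the verify-before-trust shape is the same.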

In conclusion, while space technology has made significant contributions to various industries, there are still limitations and shortcomings that need to be addressed. Efforts are underway to mitigate these limitations through partnerships, the development of new technologies, and initiatives to reduce space debris and improve data security. As technology continues to evolve, it is expected that these limitations will be addressed, leading to further advancements in space technology and its impact on various industries. 

Emerging Technologies SIG series – What is neural linking?

To provide additional information related to the Emerging Technologies SIG of the FINOS/Linux Foundation, I am starting a miniseries of posts going deeper into some of the technologies mentioned there. If you are interested in participating, please add your remarks at the Special Interest Group – Emerging Technologies item on the FINOS project board.


Neural links, also known as brain-computer interfaces (BCIs), are emerging technologies that enable communication between the human brain and an external device or system. These technologies have the potential to revolutionize fields such as healthcare, entertainment, education, and communication. In this article, we will explore the current state of neural links and examine some examples and case studies that demonstrate their potential.

They work by detecting and interpreting the electrical signals that are generated by the brain. These signals can be used to control external devices, such as computers or prosthetic limbs, or to receive sensory input, such as visual or auditory information. The most advanced neural links currently available are invasive, meaning that they require surgery to implant electrodes directly into the brain. However, there is ongoing research into non-invasive methods, such as using scalp electrodes or magnetic stimulation.
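
To give a feel for "interpreting electrical signals", one common non-invasive BCI pattern checks how much power the measured signal carries at a few known frequencies (for instance, the flicker rates of on-screen targets in SSVEP-style interfaces). The Goertzel algorithm does exactly this more cheaply than a full FFT; the sketch below is a textbook implementation, not code from any specific BCI product:

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Signal power at one target frequency (Goertzel algorithm).

    A cheap alternative to a full FFT when an interface only needs to
    check a handful of known frequencies in a window of EEG samples.
    """
    n = len(samples)
    k = round(n * target_hz / sample_rate)  # nearest frequency bin
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2
```

Comparing the power at each candidate frequency and picking the strongest is, in simplified form, how a flicker-based interface decides which target the user is attending to.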

One of the most promising applications of neural links is in the field of healthcare. For example, neural links can be used to help patients with spinal cord injuries regain movement and control of their limbs. A study published in Nature in 2016 demonstrated that a patient with quadriplegia was able to control a robotic arm using a neural link, allowing him to perform tasks such as pouring water into a cup and stirring it with a spoon.

Another example of the potential of neural links in healthcare is their use in treating neurological disorders such as Parkinson’s disease. A study published in The Lancet in 2018 showed that patients with Parkinson’s who received deep brain stimulation via a neural link experienced significant improvements in their symptoms compared to those who received standard treatment.

Neural links also have the potential to transform entertainment and communication. For example, imagine being able to experience a movie or video game directly in your brain, without the need for a screen or speakers. This could be achieved through a neural link that delivers sensory input, such as visual and auditory information, directly to the brain. In 2018, a company called Neurable demonstrated a prototype of a virtual reality game that could be controlled using a neural link, allowing players to use their thoughts to interact with the virtual environment. Or imagine being able to log into an application, create and approve a financial transaction, etc. using just a neural link. Together with technologies like ChatGPT/GPT, this could open a new way of work, communication, life.

In the field of education, neural links could be used to enhance learning by providing students with personalized feedback and assistance. For example, a neural link could detect when a student is struggling with a particular concept and provide them with additional resources or support. In addition, neural links could be used to create more immersive and engaging educational experiences, such as virtual field trips or interactive simulations.

However, there are also concerns about the ethical and societal implications of neural links. One concern is the potential for neural links to be used for surveillance or mind control. Another concern is the potential for neural links to widen the gap between those who can afford the technology and those who cannot.

A company we cannot miss from any comparison on the topic is Neuralink – a company founded by Elon Musk in 2016 with the goal of developing neural links that are safe, affordable, and easy to use. Unlike most other neural link technologies, Neuralink aims to create a minimally invasive system that can be implanted in the brain without requiring major surgery. The system consists of tiny threads, thinner than a human hair, that are implanted using a custom robot. The threads are connected to a small device called the “Link” that is implanted behind the ear and can communicate wirelessly with external devices.

Neuralink’s ultimate goal is to enable humans to merge with artificial intelligence, creating a symbiotic relationship that enhances our cognitive abilities and enables us to keep up with the rapid pace of technological progress. While this vision is still a long way off, Neuralink has made significant progress in developing its technology. In 2020, the company demonstrated a prototype of its neural link system in pigs, showing that the technology is capable of transmitting signals from the brain to a computer. While there is still much work to be done, Neuralink has the potential to revolutionize the field of neural links and transform the way we interact with technology.

Despite the promise of neural links, and even beside the ones mentioned above, there are still several limitations and shortcomings that need to be addressed before they can become widely used. Some of these limitations and plans to remediate them are as follows:

  • Invasiveness: Most current neural links require invasive surgery to implant electrodes directly into the brain, which carries significant risks and limitations. Non-invasive methods, such as using scalp electrodes or magnetic stimulation, are being researched to overcome this limitation.
  • Scalability: Current neural links are limited in terms of the number of neurons they can record or stimulate at once. This limits their ability to provide precise and detailed control over external devices. Research is being conducted to develop more scalable systems that can record or stimulate a larger number of neurons.
  • Longevity: Neural links are currently limited in terms of their lifespan, as the electrodes can degrade over time or become displaced. Research is being conducted to develop more durable and longer-lasting materials for neural links.
  • Cost: Current neural links are expensive and not affordable for most people. Research is being conducted to develop more affordable and accessible neural link technologies.
  • Ethics: There are ethical concerns regarding the use of neural links, particularly regarding issues such as privacy, autonomy, and consent. These concerns need to be addressed to ensure that the use of neural links is ethical and does not violate individuals’ rights.

To address these limitations and shortcomings, ongoing research and development are being conducted in the field of neural links. Researchers are exploring new materials and technologies to make neural links more durable, scalable, and affordable. They are also working on developing non-invasive methods for implanting neural links and addressing ethical concerns related to their use. With continued research and development, it is expected that neural links will become more accessible, affordable, and practical for widespread use in the future. One way to remediate these limitations faster is to have open standards for neural links – at present, there are no widely accepted open standards for neural link technologies. Most companies and researchers in the field are working on proprietary systems that are not interoperable with one another. This lack of standardization can create issues such as limited compatibility between different systems and limited access to data.

However, there are efforts underway to establish open standards for neural link technologies. The IEEE Standards Association, for example, has launched a working group to develop a standard for brain-machine interface devices. The aim of this standard is to provide guidelines for designing, testing, and evaluating these devices to ensure that they are safe, effective, and reliable. The standard is being developed with input from experts in academia, industry, and regulatory agencies.

The creation of open standards for neural link technologies could have significant benefits for the field. It could increase interoperability between different systems, making it easier for researchers to collaborate and share data. It could also lead to more rapid innovation and development of new neural link technologies, as companies and researchers could build on existing standards rather than starting from scratch. However, the development of open standards will require collaboration and agreement among a wide range of stakeholders, including researchers, companies, and regulatory agencies.