Emerging Technologies SIG (Zenith) Proposal Meeting on 3/29

Without further ado, I am happy to announce that, building on the discussion around the Why focus on Emerging Technologies in Financial companies post, and thanks to FINOS management, namely Gabriele Columbro (Executive Director of FINOS) and Maurizio Pillitu (CTO at FINOS), we are ready to hold a ‘proposal’ kickoff meeting for the Emerging Technologies SIG. It will provide a forum for FINOS members and the wider fintech community to discuss, collaborate on, and develop new and innovative solutions that can help drive the financial industry forward.

The SIG, if approved, would host regular meetings, events, webinars, workshops, and other activities to help members stay up to date with the latest trends and developments in emerging technologies. We encourage all FINOS members who are interested in emerging technologies to join our kickoff session below and become part of this exciting new community. Together, we can help drive innovation and advance the fintech industry in fresh and exciting ways!

Proposed logo for the Zenith group

Proposal to create a FINOS Emerging Technologies Special Interest Group

We propose the creation of a FINOS Emerging Technologies Special Interest Group (Zenith). The purpose of the Zenith SIG would be to explore and promote the adoption of new and innovative technologies in the financial services industry. The proposed goals of the SIG are to:

  1. identify and evaluate emerging technologies that have the potential to transform the sector, and
  2. share best practices, use cases, and insights with the broader community in the form of webinars, podcasts, and articles.

To gather interest and commitment, FINOS is organizing an initial exploratory meeting – which will also help to prepare for the SIG approval submission (to the FINOS Board of Directors) – on Wednesday, 29th of March at 10:00 US/Eastern. Agendas and conference info can be found in the issues.

License

Copyright 2023 Fintech Open Source Foundation

Distributed under the Apache License, Version 2.0.

SPDX-License-Identifier: Apache-2.0


Details of the upcoming meeting:

Google Calendar Link

Date: March 29th, 2023, at 10:00 US/Eastern / 15:00 UK time

Location: Zoom

Agenda:

  •  Convene & roll call (5 mins)
  •  Display FINOS Antitrust Policy summary slide
  •  Review Meeting Notices
  •  Present the SIG draft charter
  •  Review and acceptance of the charter
  •  AOB, Q&A & Adjourn (5 mins)

Emerging Technologies SIG series – How can quantum computing be useful for financial companies?

To provide additional information related to the Emerging Technologies SIG of the FINOS/Linux Foundation, I am starting a miniseries of posts going deeper into some of the technologies mentioned there. If you are interested in participating, please add your remarks at the Special Interest Group – Emerging Technologies item on the FINOS project board.


Quantum computing is a relatively new technology that has the potential to revolutionize various industries. One such industry that stands to benefit greatly from quantum computing is finance. In this post, we will discuss the value of quantum computing for financial companies.

Firstly, let’s define what quantum computing is. Unlike classical computing, which uses binary digits (bits) to represent information, quantum computing uses quantum bits (qubits). Qubits can exist in multiple states at the same time, allowing quantum computers to perform certain calculations exponentially faster than classical computers.
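
To make the idea of superposition concrete, here is a minimal, illustrative sketch in plain Python/NumPy (a simulation, not a real quantum program): a single qubit is a two-element state vector, and applying a Hadamard gate puts it into an equal superposition of 0 and 1.

```python
import numpy as np

# A qubit state is a length-2 complex vector; |0> = [1, 0].
ket0 = np.array([1.0, 0.0], dtype=complex)

# The Hadamard gate maps |0> to an equal superposition of |0> and |1>.
H = (1 / np.sqrt(2)) * np.array([[1, 1],
                                 [1, -1]], dtype=complex)

state = H @ ket0

# Measurement probabilities are the squared amplitudes.
probs = np.abs(state) ** 2
print(probs)  # [0.5 0.5] - equal chance of measuring 0 or 1
```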

Now let’s explore how financial companies can benefit from quantum computing. Financial companies deal with large amounts of data on a daily basis, and quantum computing can help them analyze this data more efficiently. For example, quantum computing can be used for portfolio optimization, risk management, fraud detection, and pricing derivatives. These tasks require complex calculations that are time-consuming for classical computers, but quantum computers can perform them much faster.
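
To see why faster search matters here, consider how quickly a brute-force portfolio selection blows up classically: with n assets there are 2^n candidate subsets. The toy sketch below (all returns and risks are invented for illustration) does exactly that exhaustive search; quantum approaches such as QAOA aim to explore such combinatorial spaces more efficiently.

```python
import itertools
import numpy as np

rng = np.random.default_rng(42)

n_assets = 12                                 # already 2**12 = 4096 candidate portfolios
returns = rng.normal(0.05, 0.02, n_assets)    # toy expected returns
risks = rng.normal(0.10, 0.03, n_assets)      # toy standalone risks

best_score, best_subset = -np.inf, ()
for subset in itertools.product([0, 1], repeat=n_assets):
    mask = np.array(subset, dtype=bool)
    if not mask.any():
        continue
    # Naive objective: total return minus a risk penalty.
    score = returns[mask].sum() - 0.5 * risks[mask].sum()
    if score > best_score:
        best_score, best_subset = score, subset

print(best_subset, round(best_score, 4))      # doubling n_assets squares the work
```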

Another area where quantum computing can be valuable for financial companies is cryptography. Quantum computers could eventually break many of the encryption methods currently used to secure financial data. However, quantum technology also enables new approaches to secure communication, such as quantum key distribution, and is driving the adoption of quantum-resistant (post-quantum) encryption. This means that financial companies can use these advances to protect their sensitive data from attackers.

Moreover, quantum computing can help financial companies develop and test new financial models. Traditional models often rely on simplifying assumptions that may not accurately reflect real-world scenarios. Quantum computing can enable financial companies to model complex systems more accurately, leading to better decision-making.

Finally, quantum computing could help financial companies improve their customer service. For example, it could be used to analyze customer data and provide personalized recommendations based on the customer’s financial goals and risk appetite.

Unpinking time: while quantum computing has enormous potential, several limitations still need to be addressed before it can be widely adopted by financial companies. Here are some of the current limitations and how researchers are working to mitigate them:

  • Limited number of qubits: Currently, quantum computers have a limited number of qubits, which restricts the complexity of problems that can be solved. However, researchers are working to increase the number of qubits and improve their stability and coherence. This will enable quantum computers to perform more complex calculations.
  • Error correction: Quantum computers are prone to errors due to environmental factors such as temperature fluctuations and electromagnetic interference. Error correction is a significant challenge in quantum computing, but researchers are developing new techniques to mitigate errors and improve the reliability of quantum computers.
  • Quantum algorithms: There is a lack of quantum algorithms for financial applications. Researchers are working on developing new quantum algorithms that can solve specific financial problems. These algorithms will enable financial companies to take advantage of the computational power of quantum computers.
  • Cost: Quantum computers are expensive to build and maintain. Currently, only a few large companies and research institutions have access to quantum computers. However, the cost is expected to decrease as the technology matures and becomes more widespread.
  • Integration with classical computing: Quantum computers are not yet fully compatible with classical computing, which is essential for financial companies to use quantum computing effectively. Researchers are developing hybrid classical-quantum computing systems to enable seamless integration between the two computing paradigms.

In a nutshell, while there are still several limitations to quantum computing, researchers are working hard to mitigate these limitations. As the technology continues to develop, we can expect to see more financial companies investing in quantum computing to gain a competitive edge in the industry.

In conclusion, quantum computing has the potential to bring significant value to financial companies. By using quantum computing, financial companies can process large amounts of data more efficiently, improve their security measures, develop more accurate financial models, and provide better customer service. As the technology continues to develop, we can expect to see more financial companies investing in quantum computing to stay competitive in an increasingly digital world.

Emerging Technologies SIG series – What are the benefits of IoT for financial institutions?

To provide additional information related to the Emerging Technologies SIG of the FINOS/Linux Foundation, I am starting a miniseries of posts going deeper into some of the technologies mentioned there. If you are interested in participating, please add your remarks at the Special Interest Group – Emerging Technologies item on the FINOS project board.


The Internet of Things (IoT) is a term used to describe the interconnected network of devices, sensors, and machines that can communicate with each other through the internet. It has the potential to revolutionize many industries, including the financial sector.

In the financial industry, IoT technology can be used to improve operational efficiency, enhance customer experiences, and provide new revenue streams. Here are some examples of how IoT can be used at financial institutions:

  • Smart ATMs: ATMs can be equipped with IoT sensors to monitor their health, detect malfunctions, and trigger maintenance alerts. Smart ATMs can also use location-based data to offer personalized services and promotions to customers.
  • Payment wearables: IoT-enabled wearables such as smartwatches and fitness trackers can be used for contactless payments. Payment data can be transmitted securely to the financial institution; some implementations also explore blockchain technology for this.
  • Fraud detection: IoT sensors can be installed in bank branches and ATMs to detect fraudulent activities in real-time. Sensors can also be used to monitor customer behavior and detect unusual transactions.
  • Asset tracking: Financial institutions can use IoT sensors to track the location and condition of assets such as vehicles, equipment, and inventory. This can help optimize asset usage and reduce the risk of theft or loss.
  • Insurance telematics: IoT sensors can be used to collect data on driving behavior, such as speed, acceleration, and braking. This data can be used by insurance companies to offer personalized policies and rewards for safe driving.
  • Predictive maintenance: IoT sensors can be used to monitor the health of financial equipment such as servers and ATMs. Predictive maintenance can help identify potential issues before they become major problems, reducing downtime and repair costs (a minimal sketch follows this list).
  • Personalized banking: IoT data can be used to offer personalized banking experiences based on individual customer needs and preferences. For example, banks can use data from wearable devices to offer personalized financial advice and investment recommendations.
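
To make the predictive-maintenance idea above concrete, here is a minimal, hypothetical sketch: it flags a reading for service when it drifts beyond a tolerance band around the rolling average of recent readings (the sensor values and thresholds are invented for illustration).

```python
from collections import deque

def drift_monitor(readings, window=5, tolerance=0.2):
    """Yield (index, value) for readings that drift more than
    `tolerance` (relative) from the rolling average of the last window."""
    history = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(history) == window:
            baseline = sum(history) / window
            if abs(value - baseline) > tolerance * baseline:
                yield i, value
        history.append(value)

# Toy motor-temperature readings from an ATM cash dispenser (invented data).
temps = [40.1, 40.3, 40.0, 40.2, 40.4, 40.3, 40.1, 49.8, 50.2, 40.2]
for i, value in drift_monitor(temps):
    print(f"reading {i}: {value} degC - schedule a maintenance check")
```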

IoT technology can also be used to improve security and compliance in the financial industry. For example, IoT sensors can be used to monitor physical security, such as access control and surveillance. IoT technology can also be used to monitor compliance with regulations such as anti-money laundering (AML) and know-your-customer (KYC) rules.

Getting off the pink clouds: while IoT technology has many benefits, there are also some limitations that need to be considered. Here are some of the current limitations of IoT and how they can be mitigated:

  • Security: IoT devices can be vulnerable to cyber-attacks, which can compromise sensitive data and cause significant damage. To mitigate this risk, it is essential to implement strong security measures such as encryption, authentication, and access controls. Regular software updates and patching can also help keep IoT devices secure.
  • Interoperability: IoT devices from different vendors may use different communication protocols, which can make it challenging to integrate them into a cohesive system. To mitigate this challenge, standards bodies such as the Open Connectivity Foundation (OCF) and the Industrial Internet Consortium (IIC) have been established to promote interoperability between different IoT devices.
  • Data privacy: IoT devices collect a significant amount of data, which can raise privacy concerns. To mitigate this risk, it is essential to implement robust data privacy policies and practices, such as data encryption, anonymization, and secure storage.
  • Complexity: IoT systems can be complex to deploy and manage, which can make it challenging for organizations to derive value from them. To mitigate this challenge, it is essential to work with experienced vendors and consultants who can help design, deploy, and manage IoT systems.
  • Scalability: IoT systems can generate vast amounts of data, which can strain network bandwidth and storage capacity. To mitigate this challenge, it is essential to implement scalable architectures that can handle large volumes of data. Cloud-based solutions such as AWS IoT and Azure IoT can help organizations scale their IoT systems as needed.

So, while IoT technology has many benefits, it also presents some challenges that need to be addressed. By implementing robust security measures, promoting interoperability, protecting data privacy, simplifying system complexity, and implementing scalable architectures, organizations can mitigate these challenges and realize the full potential of IoT technology. One of the most pressing items is interoperability, as many of the other items (like security, scalability, and privacy) can be driven from it.

We are lucky: There are several open standards for IoT that are designed to promote interoperability between different devices and systems. Open standards are important because they allow devices and systems from different vendors to communicate with each other seamlessly, which can help avoid vendor lock-in and promote innovation. Here are some of the open standards for IoT:

  • MQTT (Message Queuing Telemetry Transport): MQTT is a lightweight messaging protocol designed for use in IoT applications. It is used for sending messages between devices and is designed to be efficient and reliable (see the sketch after this list).
  • CoAP (Constrained Application Protocol): CoAP is a protocol that is used for low-power and low-bandwidth IoT devices. It is designed to be lightweight and easy to implement, making it ideal for use in resource-constrained devices.
  • OCF (Open Connectivity Foundation): OCF is a consortium of companies that are working to create open standards for IoT interoperability. OCF’s goal is to create a common IoT framework that can be used by all IoT devices and systems.
  • ZigBee: ZigBee is a wireless standard that is designed for low-power, low-bandwidth IoT devices. It is used for creating mesh networks, and it is designed to be secure and reliable.
  • LoRaWAN (Long Range Wide Area Network): LoRaWAN is a wireless protocol that is designed for long-range IoT devices. It is used for creating wide-area networks, and it is designed to be low-power and low-cost.
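
As a quick taste of MQTT from the list above, here is a minimal publish/subscribe sketch using the paho-mqtt client (assuming paho-mqtt 1.x and a test broker reachable on localhost:1883; the topic name is made up for illustration).

```python
import time
import paho.mqtt.client as mqtt

BROKER, PORT = "localhost", 1883         # assumed local test broker
TOPIC = "branch42/atm7/temperature"      # hypothetical topic name

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()}")

sub = mqtt.Client()                      # paho-mqtt 1.x style constructor
sub.on_message = on_message
sub.connect(BROKER, PORT)
sub.subscribe(TOPIC)
sub.loop_start()                         # handle network traffic in the background

pub = mqtt.Client()
pub.connect(BROKER, PORT)
pub.publish(TOPIC, "40.2")               # lightweight telemetry message
pub.disconnect()

time.sleep(1)                            # give the broker a moment to deliver
sub.loop_stop()
sub.disconnect()
```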

By using open standards like these, IoT devices and systems can communicate with each other seamlessly, regardless of the vendor or technology used. This can help promote innovation and avoid vendor lock-in, which can ultimately benefit both organizations and consumers.

In conclusion, IoT technology has the potential to transform the financial industry by improving operational efficiency, enhancing customer experiences, and providing new revenue streams. Financial institutions that embrace IoT technology will be better equipped to meet the needs of their customers and stay ahead of the competition, and can even help create additional value and sustainable businesses – do check out the journey of HB Antwerp and Botswana.

Emerging Technologies SIG series – Spatial Computing and Creative Designs

To provide additional information related to the Emerging Technologies SIG of the FINOS/Linux Foundation, I am starting a miniseries of posts going deeper into some of the technologies mentioned there. If you are interested in participating, please add your remarks at the Special Interest Group – Emerging Technologies item on the FINOS project board.


The rise of spatial computing and the metaverse has opened up a new realm of possibilities for creative design. Spatial computing refers to the use of digital technologies to create experiences that integrate the physical and digital worlds, while the metaverse is a term used to describe a collective virtual shared space. In this post, we will explore how creative design is different in spatial computing and the metaverse compared to traditional creative design.

Firstly, spatial computing and the metaverse require a different approach to design. In traditional design, the focus is often on creating a visual representation of a product or experience. However, in spatial computing and the metaverse, the design process must consider the interaction between the user and the environment. This means that designers must think about how the user will move through the space, how they will interact with objects, and how they will engage with other users.

Secondly, spatial computing and the metaverse offer new opportunities for immersive experiences. Creative design in these contexts can involve the use of augmented reality, virtual reality, and mixed reality technologies to create interactive and engaging experiences that go beyond what is possible with traditional design. For example, a virtual art installation in the metaverse could allow users to explore and interact with the artwork in ways that would not be possible in the physical world.

Thirdly, spatial computing and the metaverse allow for a more collaborative and participatory approach to design. In traditional design, the designer creates a product or experience for the user, but in spatial computing and the metaverse, the user is an active participant in the design process. This means that designers must be open to feedback and willing to make changes based on user input. It also means that users can contribute to the design process by creating their own content and experiences within the metaverse.

Finally, spatial computing and the metaverse require a different set of technical skills and tools for creative design. In traditional design, designers may use tools like Adobe Photoshop or Illustrator to create visual designs, but in spatial computing and the metaverse, designers may need to use software like Unity or Unreal Engine to create interactive environments. Designers must also have a strong understanding of 3D modeling, animation, and game design principles.

In a quick summary, creative design in spatial computing and the metaverse offers new opportunities and challenges for designers. It requires a different approach to design, a focus on immersive experiences, a more collaborative process, and a different set of technical skills and tools. As these technologies continue to evolve, creative design in spatial computing and the metaverse will become increasingly important in creating engaging and memorable experiences for users. Still, as of today, spatial computing is an emerging field that combines digital technology with the physical environment to create new interactive and immersive experiences. So, what are some unique examples of creative design in spatial computing?

  • Virtual Real Estate: One of the most unique applications of spatial computing is the creation of virtual real estate. This involves creating digital spaces that can be bought and sold, just like physical real estate. These spaces can be used for a variety of purposes, such as virtual art galleries, music venues, or even digital storefronts for online businesses.
  • Augmented Reality Advertising: Augmented reality (AR) technology allows designers to overlay digital content onto the physical environment, creating an interactive and immersive experience. AR advertising, for example, can be used to create engaging and memorable experiences for customers. For example, a clothing retailer could create an AR app that allows customers to see how a particular outfit would look on them before making a purchase.
  • Virtual Museums and Galleries: Spatial computing allows designers to create immersive virtual museums and galleries that can be accessed from anywhere in the world. This not only makes art more accessible to a wider audience but also allows for new forms of engagement and interaction with the artwork. For example, virtual museums could allow visitors to interact with exhibits, providing additional information, or even allowing them to create their own artwork within the digital space.
  • Spatial Audio: Spatial audio is a technology that allows designers to create soundscapes that are tailored to the physical environment. This can be used to create immersive audio experiences that match the visual environment, creating a more complete sensory experience. For example, in a virtual reality game set in a forest, the spatial audio could be designed to make the player feel like they are actually surrounded by the sounds of nature.
  • Mixed Reality Performance: Mixed reality combines elements of virtual and physical reality, creating a seamless and interactive experience. In mixed reality performance, for example, performers can interact with virtual objects and environments in real-time. This allows for new forms of storytelling and audience engagement, creating a more immersive and interactive experience for the audience.

In conclusion, spatial computing provides designers with a new and exciting canvas for creative design. From virtual real estate to mixed reality performance, the possibilities for innovation and creativity are endless. As this technology continues to evolve, we can expect to see even more unique examples of creative design in spatial computing.

Emerging Technologies SIG series – What is cognitive AI (and how is it different from ChatGPT and co)?

To provide additional information related to the Emerging Technologies SIG of the FINOS/Linux Foundation, I am starting a miniseries of posts going deeper into some of the technologies mentioned there. If you are interested in participating, please add your remarks at the Special Interest Group – Emerging Technologies item on the FINOS project board.


Cognitive AI and ChatGPT are two different types of artificial intelligence (AI) that operate in distinct ways. While ChatGPT is a large language model designed to generate human-like responses to textual prompts, cognitive AI is a more general term that refers to AI systems that are designed to emulate human cognitive functions such as perception, reasoning, and decision-making.

Cognitive AI is a type of AI that is modeled after the way that the human brain processes information. These systems are designed to recognize patterns, make predictions, and learn from experience, much like humans do. Cognitive AI systems can be used in a variety of applications, including speech and image recognition, natural language processing, and decision support.

One of the key differences between cognitive AI and ChatGPT is the scope of their abilities. While ChatGPT is primarily focused on generating human-like responses to textual prompts, cognitive AI systems are designed to be more flexible and adaptable, capable of handling a wider range of tasks.

Cognitive AI systems are typically more complex than ChatGPT, as they require advanced algorithms and data structures to support their functionality. They also typically require more data to train, as they need to learn from a wider range of inputs and experiences.

Another key difference between cognitive AI and ChatGPT is their level of explainability. ChatGPT generates responses based on statistical patterns found in large datasets, which can make it difficult to understand how it arrives at a particular response. Cognitive AI, on the other hand, is designed to be more transparent and explainable, with clear pathways for understanding how it arrives at its conclusions.
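
The contrast is easier to see with a toy example. The sketch below (weights and features are invented for illustration) shows the kind of transparent scoring that explainable systems aim for: every decision can be decomposed into named, per-feature contributions, something a purely statistical text generator cannot readily offer.

```python
# A transparent, rule-like scorer: every conclusion can be traced
# back to per-feature contributions. Weights and features are invented.
weights = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}

def explain_score(applicant):
    contributions = {k: weights[k] * applicant[k] for k in weights}
    total = sum(contributions.values())
    decision = "approve" if total > 0 else "decline"
    return decision, contributions

decision, parts = explain_score(
    {"income": 1.2, "debt_ratio": 0.9, "years_employed": 3.0}
)
print(decision)  # approve
for feature, contribution in sorted(parts.items(), key=lambda p: -abs(p[1])):
    print(f"  {feature}: {contribution:+.2f}")
```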

In terms of their applications, cognitive AI has a broader range of potential uses than ChatGPT. For example, cognitive AI can be used in healthcare to analyze patient data and make diagnoses, in finance to analyze market trends and make investment decisions, and in manufacturing to optimize production processes. While ChatGPT and cognitive AI are both forms of artificial intelligence, they operate in distinct ways and have different capabilities.

In short, ChatGPT is primarily focused on generating human-like responses to textual prompts, while cognitive AI is designed to emulate human cognitive functions such as perception, reasoning, and decision-making. Cognitive AI is more complex and adaptable than ChatGPT, with a broader range of potential applications, but it also requires more data; in exchange, it is typically more transparent and explainable.

There are a number of examples of cognitive AI systems that are currently in use or in development. Some examples include:

  • IBM Watson: IBM Watson is a cognitive AI system that uses natural language processing and machine learning algorithms to understand and analyze large amounts of unstructured data, such as medical records, research papers, and social media posts.
  • Google DeepMind: DeepMind is Google’s AI research lab, whose systems use deep learning algorithms to analyze and interpret complex data, such as images and videos. Its work has been applied in a number of areas, including healthcare, finance, and gaming.
  • Microsoft Cortana: Microsoft Cortana is a cognitive AI system that uses natural language processing and machine learning algorithms to understand and respond to user queries. It is integrated into a number of Microsoft products, including Windows and Xbox.
  • Amazon Alexa: Amazon Alexa is a cognitive AI system that uses natural language processing and machine learning algorithms to understand and respond to user requests. It is integrated into a number of Amazon products, including the Echo and Fire TV.
  • Tesla Autopilot: Tesla Autopilot is a cognitive AI system that uses machine learning algorithms to analyze data from sensors and cameras in order to navigate and control a vehicle. It is designed to assist drivers and improve safety on the road.

These are just a few examples of the many cognitive AI systems that are currently in use or in development. As the field of AI continues to evolve, we can expect to see even more sophisticated and powerful cognitive AI systems emerge in a wide range of industries and applications.

Cognitive AI is a rapidly evolving field, with new developments and advancements being made all the time. Here are some of the ways in which cognitive AI is expected to evolve in the near future:

  • Increased focus on explainability: As cognitive AI becomes more widely used, there is a growing demand for systems that are transparent and explainable. This means that AI systems will need to be designed in a way that allows humans to understand how they arrive at their conclusions and decisions.
  • Improved natural language processing: One of the key challenges in cognitive AI is developing systems that can understand and generate human language with a high degree of accuracy. As natural language processing technology continues to improve, we can expect to see more sophisticated and natural interactions between humans and cognitive AI systems.
  • Greater integration with human workers: While some people have expressed concerns about AI replacing human workers, many experts believe that cognitive AI will actually work in tandem with human workers, augmenting their abilities and providing new opportunities for collaboration.
  • Advancements in machine learning: Machine learning is a key component of cognitive AI, and ongoing research is expected to lead to new algorithms and approaches that improve the accuracy and effectiveness of these systems.
  • Applications in new industries and contexts: As cognitive AI continues to evolve, we can expect to see it being used in new industries and contexts, such as education, entertainment, and environmental monitoring.

Overall, the future of cognitive AI looks very promising, with ongoing advancements and developments opening up new possibilities for how we can use these systems to improve our lives and solve complex problems. However, it will be important to ensure that these systems are developed and deployed in a responsible and ethical manner, with careful consideration given to their potential impact on society and the environment. Coming down from the pink clouds again: while cognitive AI has made significant progress in recent years, there are still several limitations that need to be addressed in order for these systems to reach their full potential. Here are some of the current limitations of cognitive AI and the plans to overcome them:

  • Lack of transparency and interpretability: One of the biggest challenges facing cognitive AI is the lack of transparency and interpretability in how these systems arrive at their decisions. This makes it difficult for humans to trust and understand the results produced by AI systems. Researchers are working on developing techniques to increase transparency and interpretability, such as creating visualizations of the decision-making process or providing clear explanations for the reasoning behind a decision.
  • Data bias: Cognitive AI systems are only as good as the data they are trained on. If the data is biased or incomplete, the AI system will also be biased and incomplete. Researchers are working on developing techniques to address bias in data, such as collecting more diverse data and using algorithms that can detect and correct for bias.
  • Limited context awareness: Cognitive AI systems are currently limited in their ability to understand and interpret contextual information, such as social cues or situational factors. Researchers are working on developing techniques to improve context awareness, such as using deep learning algorithms to analyze context-rich data sources.
  • Computational limitations: Cognitive AI systems require a significant amount of computational power and storage capacity in order to function effectively. Researchers are working on developing more efficient algorithms and hardware to address these computational limitations.
  • Ethical considerations: The use of cognitive AI raises a number of ethical considerations, such as privacy, security, and bias. Researchers and policymakers are working on developing ethical guidelines and frameworks to ensure that these systems are developed and deployed in a responsible and ethical manner.

In conclusion, while there are still some limitations to cognitive AI, researchers and developers are actively working on developing new techniques and technologies to address these challenges. As cognitive AI continues to evolve, we can expect to see these systems become more sophisticated, accurate, and useful in a wide range of applications.

Emerging Technologies SIG series – What is Space Technology and how is it relevant outside of Space?

To provide additional information related to the Emerging Technologies SIG of the FINOS/Linux Foundation, I am starting a miniseries of posts going deeper into some of the technologies mentioned there. If you are interested in participating, please add your remarks at the Special Interest Group – Emerging Technologies item on the FINOS project board.


Space technology has advanced tremendously over the past few decades and has become an essential tool for many industries. From satellite communications to weather forecasting, space technology has significantly impacted many sectors outside of the space industry. In this article, we will explore the relevance of space technology in various industries and look at some examples and case studies.

Communication

One of the most significant contributions of space technology to industries outside of space is in the field of communication. Satellites have revolutionized communication by making it possible to connect people across the globe. The use of satellites has enabled the provision of internet services, global positioning systems (GPS), and satellite phones. In remote areas where traditional communication methods are not available, satellite communication has become a critical tool for many industries.

Example: The Iridium satellite constellation is an excellent example of how space technology has impacted communication. The constellation consists of 66 satellites that provide voice and data communication services globally. The system has been used in several industries, including aviation, maritime, and government.

Agriculture

Space technology has also made significant contributions to the agricultural industry. It has enabled farmers to monitor crop growth, soil moisture, and weather patterns, leading to improved crop yields and reduced costs.

Example: The European Space Agency’s (ESA) Sentinel-2 satellite constellation is a prime example of space technology’s impact on agriculture. The satellites provide high-resolution imagery of agricultural land, which enables farmers to monitor their crops’ growth and health. This information helps farmers make better decisions on when to plant, water, and harvest their crops.

Disaster Management

Space technology has proven to be an essential tool in disaster management. Satellites provide crucial information that helps emergency responders make informed decisions during natural disasters such as hurricanes, earthquakes, and wildfires.

Example: During the 2010 earthquake in Haiti, satellite imagery was used to assess the damage and identify areas that required emergency assistance. This information was crucial in guiding the rescue and relief efforts.

Transportation

The use of space technology has also led to significant advancements in the transportation industry. Satellite data is used to monitor and manage traffic flow, improving road safety and reducing travel time.

Example: The global positioning system (GPS) is an excellent example of how space technology has impacted the transportation industry. GPS is used in navigation systems in cars, ships, and airplanes, making it easier for people to navigate and reach their destination.

Energy

Space technology has also contributed to the energy industry. Satellites provide data that helps energy companies locate new sources of energy and monitor their operations.

Example: The NASA Earth Observing System Data and Information System (EOSDIS) provides data that helps energy companies monitor their operations. The system provides data on land cover, vegetation, and weather patterns that help energy companies manage their operations effectively.

Space technology has made significant contributions to industries outside of the space industry. From communication to disaster management, the use of satellites has revolutionized various industries, improving efficiency, and reducing costs. The examples and case studies mentioned above show how space technology has made a positive impact on many industries. As technology continues to evolve, it will be interesting to see how space technology will continue to shape the future of these industries. So, you can probably understand why I wrote about digital twinning, 4D printing, even neural links before – but why space tech?

What about the financial industry?

Space technology has also impacted the finance industry in significant ways. Satellites are being used to provide critical data that helps financial institutions to make informed decisions on investments and risk management. The use of satellite technology has also led to the development of new financial products.

The use of satellite imagery and data has led to the development of crop insurance products for farmers. Insurance companies are using satellite imagery to assess crop yields and losses, which enables them to provide crop insurance products to farmers. This information helps farmers manage their risks and protect their investments.

Another example is the use of satellite data to track economic activity. Satellites can provide information on shipping, transportation, and manufacturing activities, which is useful in making investment decisions. Hedge funds and asset managers are using satellite data to gain insights into economic activity, giving them an edge in the market.

Space technology is also used in the banking sector. Banks are using satellite imagery and data to assess the risk of lending to certain areas. The data can provide insights into natural disasters, land use, and infrastructure, which is useful in assessing the risk of lending to a particular area.

In conclusion, space technology has revolutionized the finance industry, providing critical data that is useful in making investment decisions and managing risks. The use of satellite data is expected to increase as the technology continues to evolve, leading to the development of new financial products and services. The finance industry is just one example of how space technology is impacting various industries, and it is exciting to see how it will shape the future.

What does the future hold?

And again, not everything is pink clouds 🙂 While space technology has made significant contributions to various industries, there are still some limitations and shortcomings that need to be addressed. Here are some of the current limitations and how they are being mitigated:

Cost: One of the main limitations of space technology is the cost associated with building, launching, and maintaining satellites. The cost of building and launching a satellite can be in the range of hundreds of millions of dollars, making it difficult for some industries to afford.

Mitigation: One way to mitigate the cost is through partnerships and collaborations. Several companies are partnering to share the cost of building and launching satellites. There is also a trend towards smaller, cheaper satellites, known as CubeSats, which are easier to build and launch. The use of reusable rockets, such as those developed by SpaceX, can also reduce the cost of launching satellites.

Technology Limitations: Space technology is continually evolving, and there are still some technological limitations that need to be addressed. For example, the current satellite communication technology has limitations in terms of bandwidth and speed.

Mitigation: The development of new technologies, such as quantum communication, could overcome some of these limitations. Quantum communication is a secure and fast method of communication that uses the principles of quantum mechanics.

Orbital Debris: The amount of space debris in orbit is increasing, which poses a threat to the operation of satellites and spacecraft. Orbital debris can collide with satellites, causing damage and potentially leading to the loss of the satellite.

Mitigation: Efforts are underway to mitigate the amount of space debris. Satellites are being designed with built-in propulsion systems that can help them avoid collisions. There are also initiatives to remove space debris from orbit, such as the European Space Agency’s Clean Space Initiative.

Data Security: The data transmitted through satellites is vulnerable to interception and hacking, which can pose a security threat to industries that rely on satellite technology.

Mitigation: The use of encryption and other security measures can mitigate the risk of data interception and hacking. There are also efforts to develop secure satellite communication systems, such as quantum communication, which are highly resistant to hacking.

In conclusion, while space technology has made significant contributions to various industries, there are still limitations and shortcomings that need to be addressed. Efforts are underway to mitigate these limitations through partnerships, the development of new technologies, and initiatives to reduce space debris and improve data security. As technology continues to evolve, it is expected that these limitations will be addressed, leading to further advancements in space technology and its impact on various industries. 

Emerging Technologies SIG series – What is neural linking?

To provide additional information related to the Emerging Technologies SIG of the FINOS/Linux Foundation, I am starting a miniseries of posts going deeper into some of the technologies mentioned there. If you are interested in participating, please add your remarks at the Special Interest Group – Emerging Technologies item on the FINOS project board.


Neural links, also known as brain-computer interfaces (BCIs), are emerging technologies that enable communication between the human brain and an external device or system. These technologies have the potential to revolutionize fields such as healthcare, entertainment, education, and communication. In this article, we will explore the current state of neural links and examine some examples and case studies that demonstrate their potential.

Neural links work by detecting and interpreting the electrical signals that are generated by the brain. These signals can be used to control external devices, such as computers or prosthetic limbs, or to receive sensory input, such as visual or auditory information. The most advanced neural links currently available are invasive, meaning that they require surgery to implant electrodes directly into the brain. However, there is ongoing research into non-invasive methods, such as using scalp electrodes or magnetic stimulation.
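
As a toy illustration of the signal-processing step involved (a deliberately simplified sketch on a synthetic signal; real BCIs involve far more sophisticated decoding), the snippet below estimates the power in the 8-12 Hz alpha band of a fake "EEG" trace and turns it into a binary command.

```python
import numpy as np

fs = 256                               # sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)          # two seconds of synthetic "EEG"
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Power spectrum via FFT; keep only the non-negative frequencies.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2

# Average power in the alpha band (8-12 Hz), a classic BCI feature,
# compared against a higher-frequency reference band.
alpha = power[(freqs >= 8) & (freqs <= 12)].mean()
reference = power[(freqs >= 20) & (freqs <= 40)].mean()

command = "move_cursor" if alpha > 5 * reference else "idle"
print(command)  # move_cursor - the 10 Hz rhythm dominates
```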

One of the most promising applications of neural links is in the field of healthcare. For example, neural links can be used to help patients with spinal cord injuries regain movement and control of their limbs. A study published in Nature in 2016 demonstrated that a patient with quadriplegia was able to control a robotic arm using a neural link, allowing him to perform tasks such as pouring water into a cup and stirring it with a spoon.

Another example of the potential of neural links in healthcare is their use in treating neurological disorders such as Parkinson’s disease. A study published in The Lancet in 2018 showed that patients with Parkinson’s who received deep brain stimulation via a neural link experienced significant improvements in their symptoms compared to those who received standard treatment.

Neural links also have the potential to transform entertainment and communication. For example, imagine being able to experience a movie or video game directly in your brain, without the need for a screen or speakers. This could be achieved through a neural link that delivers sensory input, such as visual and auditory information, directly to the brain. In 2018, a company called Neurable demonstrated a prototype of a virtual reality game that could be controlled using a neural link, allowing players to use their thoughts to interact with the virtual environment. Or imagine being able to log into an application, create and approve a financial transaction, etc. using just a neural link. Together with technologies like ChatGPT/GPT, this could open a new way of work, communication, life.

In the field of education, neural links could be used to enhance learning by providing students with personalized feedback and assistance. For example, a neural link could detect when a student is struggling with a particular concept and provide them with additional resources or support. In addition, neural links could be used to create more immersive and engaging educational experiences, such as virtual field trips or interactive simulations.

However, there are also concerns about the ethical and societal implications of neural links. One concern is the potential for neural links to be used for surveillance or mind control. Another concern is the potential for neural links to widen the gap between those who can afford the technology and those who cannot.

A company we cannot miss from any comparison on this topic is Neuralink – founded by Elon Musk in 2016 with the goal of developing neural links that are safe, affordable, and easy to use. Unlike most other neural link technologies, Neuralink aims to create a minimally invasive system that can be implanted in the brain without requiring major surgery. The system consists of tiny threads, thinner than a human hair, that are implanted using a custom robot. The threads are connected to a small device called the “Link” that is implanted behind the ear and can communicate wirelessly with external devices.

Neuralink’s ultimate goal is to enable humans to merge with artificial intelligence, creating a symbiotic relationship that enhances our cognitive abilities and enables us to keep up with the rapid pace of technological progress. While this vision is still a long way off, Neuralink has made significant progress in developing its technology. In 2020, the company demonstrated a prototype of its neural link system in pigs, showing that the technology is capable of transmitting signals from the brain to a computer. While there is still much work to be done, Neuralink has the potential to revolutionize the field of neural links and transform the way we interact with technology.

Despite the promise of neural links, and beyond the concerns mentioned above, there are still several limitations and shortcomings that need to be addressed before they can become widely used. Some of these limitations, and plans to remediate them, are as follows:

  • Invasiveness: Most current neural links require invasive surgery to implant electrodes directly into the brain, which carries significant risks and limitations. Non-invasive methods, such as using scalp electrodes or magnetic stimulation, are being researched to overcome this limitation.
  • Scalability: Current neural links are limited in terms of the number of neurons they can record or stimulate at once. This limits their ability to provide precise and detailed control over external devices. Research is being conducted to develop more scalable systems that can record or stimulate a larger number of neurons.
  • Longevity: Neural links are currently limited in terms of their lifespan, as the electrodes can degrade over time or become displaced. Research is being conducted to develop more durable and longer-lasting materials for neural links.
  • Cost: Current neural links are expensive and not affordable for most people. Research is being conducted to develop more affordable and accessible neural link technologies.
  • Ethics: There are ethical concerns regarding the use of neural links, particularly regarding issues such as privacy, autonomy, and consent. These concerns need to be addressed to ensure that the use of neural links is ethical and does not violate individuals’ rights.

To address these limitations and shortcomings, ongoing research and development are being conducted in the field of neural links. Researchers are exploring new materials and technologies to make neural links more durable, scalable, and affordable. They are also working on developing non-invasive methods for implanting neural links and addressing ethical concerns related to their use. With continued research and development, it is expected that neural links will become more accessible, affordable, and practical for widespread use in the future. One way to remediate these limitations faster is to establish open standards for neural links – at present, there are no widely accepted open standards for neural link technologies. Most companies and researchers in the field are working on proprietary systems that are not interoperable with one another. This lack of standardization can create issues such as limited compatibility between different systems and limited access to data.

However, there are efforts underway to establish open standards for neural link technologies. The IEEE Standards Association, for example, has launched a working group to develop a standard for brain-machine interface devices. The aim of this standard is to provide guidelines for designing, testing, and evaluating these devices to ensure that they are safe, effective, and reliable. The standard is being developed with input from experts in academia, industry, and regulatory agencies.

The creation of open standards for neural link technologies could have significant benefits for the field. It could increase interoperability between different systems, making it easier for researchers to collaborate and share data. It could also lead to more rapid innovation and development of new neural link technologies, as companies and researchers could build on existing standards rather than starting from scratch. However, the development of open standards will require collaboration and agreement among a wide range of stakeholders, including researchers, companies, and regulatory agencies.

Emerging Technologies SIG series – What is 4D printing?

To provide additional information related to the Emerging Technologies SIG of the FINOS/Linux Foundation, I am starting a miniseries of posts going deeper into some of the technologies mentioned there. If you are interested in participating, please add your remarks at the Special Interest Group – Emerging Technologies item on the FINOS project board.


3D printing has been a revolution in the world of manufacturing and engineering, enabling the creation of complex geometries and prototypes with unprecedented speed and precision. However, researchers and scientists have been exploring the possibility of taking 3D printing to the next level, and that is 4D printing. In this article, we will explain what 4D printing is, why it is important, and provide some examples of its use cases.

What is 4D Printing?

4D printing is a relatively new manufacturing technology that uses advanced materials and 3D printing techniques to create objects that can change their shape or functionality over time. The fourth dimension refers to time, as the printed object is designed to transform or self-assemble in response to an external trigger such as temperature, humidity, light, or magnetic field. These transformations can be either gradual or sudden, and they allow for the creation of complex structures that are difficult or impossible to achieve with traditional manufacturing methods.

One of the key features of 4D printing is the use of smart materials or shape-memory polymers, which can remember their original shape and recover it when exposed to a specific stimulus. These materials are often combined with 3D printing techniques, such as multi-material printing or 3D bioprinting, to create structures with intricate geometries and functionalities. The resulting objects can be used in a variety of applications, from medicine and robotics to architecture and aerospace.
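
As a playful way to picture shape memory (a toy state machine with invented angles and temperatures, not a physical model), the sketch below shows an element that is printed flat and "recovers" its programmed shape once heated past its trigger temperature.

```python
class ShapeMemoryStrut:
    """Toy model of a shape-memory element: printed flat, it folds
    back to its programmed angle above a trigger temperature."""

    def __init__(self, programmed_angle=90.0, trigger_temp=60.0):
        self.programmed_angle = programmed_angle  # degrees, the "remembered" shape
        self.trigger_temp = trigger_temp          # degC, activation threshold
        self.angle = 0.0                          # printed flat

    def heat_to(self, temp):
        if temp >= self.trigger_temp:
            self.angle = self.programmed_angle    # shape recovery
        return self.angle

strut = ShapeMemoryStrut()
for temp in (20, 40, 60, 80):
    print(f"{temp} degC -> {strut.heat_to(temp):.0f} degrees")
```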

Why is 4D Printing Important?

4D printing has the potential to revolutionize many industries and fields, by enabling the creation of structures that can adapt to their environment and perform multiple functions. Here are some reasons why 4D printing is important:

Greater design flexibility: 4D printing allows for the creation of objects with complex geometries and functions that are difficult or impossible to achieve with traditional manufacturing methods. This opens up new design possibilities for engineers and designers, allowing them to create objects that can adapt to changing conditions or perform multiple functions.

Self-assembly and self-repair: 4D printed objects can self-assemble or self-repair in response to external triggers, reducing the need for manual intervention or maintenance. This can be particularly useful in applications such as infrastructure or aerospace, where access and maintenance are challenging.

Customization and personalization: 4D printing can be used to create customized objects that are tailored to individual needs or preferences. This can be particularly useful in applications such as medicine or wearable technology, where personalized devices can improve patient outcomes or user experience.

Sustainable manufacturing: 4D printing can reduce waste and energy consumption by using smart materials and additive manufacturing techniques that require less material and energy than traditional manufacturing methods.

Use Cases and Examples of 4D Printing

Here are some examples of how 4D printing is being used in different fields:

  • Medicine: 4D printing is being used to create medical implants and devices that can adapt to the body’s changing needs. For example, a 4D printed stent can change its shape in response to blood flow or temperature changes, reducing the risk of complications or blockages. 4D printing is also being used to create bioprinted tissues and organs that can self-assemble and grow into functional structures.
  • Architecture: 4D printing is being used to create structures that can adapt to changing environmental conditions or user needs. For example, a 4D printed building facade can change its shape or transparency in response to sunlight or air quality, improving energy efficiency and user comfort.
  • Robotics: 4D printing is being used to create soft robots that can change their shape or stiffness in response to external stimuli. For example, a 4D printed gripper can adapt to the shape and size of the object it is picking up.

Limitations as of today of 4D printing

Life is not all pink clouds: despite the potential of 4D printing, the technology is still in its early stages of development, and there are several limitations that need to be overcome to realize its full potential. Here are some of the current limitations of 4D printing and how they can be addressed in the future:

  • Material properties: 4D printing requires materials that can change their shape or functionality in response to external stimuli. However, the range of available smart materials is limited, and they can be expensive or difficult to process. To overcome this limitation, researchers are exploring new types of smart materials, such as shape-changing metals and alloys, or using multiple materials in a single print to create composite structures with unique properties.
  • Printing resolution: 4D printing requires high printing resolution to create objects with intricate geometries and functions. However, current 4D printers have limited printing resolution, which can affect the accuracy and reliability of the final product. To address this limitation, researchers are exploring new printing techniques, such as micro-scale 3D printing or multi-photon lithography, which can achieve higher printing resolution.
  • Trigger mechanisms: 4D printing requires an external trigger, such as temperature, humidity, or light, to activate the transformation process. However, the trigger mechanisms can be complex and difficult to control, which can affect the reliability and reproducibility of the printed object. To overcome this limitation, researchers are developing new trigger mechanisms, such as magnetic fields or acoustic waves, which can be more precise and controllable.
  • Scalability: 4D printing is currently limited to small-scale objects due to the complexity of the printing process and the materials used. However, for 4D printing to be widely adopted in industries such as construction or aerospace, it needs to be scalable to larger objects. To address this limitation, researchers are exploring new printing techniques, such as robotic printing or large-scale extrusion, which can achieve higher printing speed and scalability.

Conclusion

In conclusion, 4D printing has the potential to revolutionize many industries by enabling the creation of structures that can adapt to their environment and perform multiple functions. While the technology is still in its early stages, researchers are working to overcome the current limitations of 4D printing, such as material properties, printing resolution, trigger mechanisms, and scalability, to realize its full potential. As the technology advances, we can expect to see more innovative and practical applications of 4D printing in the future.

Emerging Technologies SIG series – What is Digital Twinning?

To provide additional information related to the Emerging Technologies SIG of the FINOS/Linux Foundation, I am starting a miniseries of posts going deeper into some of the technologies mentioned there. If you are interested in participating, please add your remarks at the Special Interest Group – Emerging Technologies item on the FINOS project board.


Digital twinning is a technology that is rapidly gaining popularity in the industrial world. It is a technique in which a digital replica of a physical object is created; this replica is also known as a “twin”. The twin can be used for a variety of purposes such as simulation, analysis, and monitoring. With advancements in the Internet of Things (IoT) and Artificial Intelligence (AI), digital twinning has become a promising tool that enables companies to make better decisions, optimize processes, and improve product quality.

Digital twins can be created for various things such as machines, buildings, cities, and even people. The purpose of creating a digital twin is to create a real-time replica of a physical object that can be monitored, simulated, and analyzed. This allows for more accurate and efficient decision-making processes.

One example of digital twinning is in the manufacturing industry. Digital twins can be used to simulate production processes, analyze machine performance, and predict maintenance needs. By creating a digital twin of a machine, it is possible to monitor its performance, predict potential issues, and optimize its operations. This can lead to a reduction in downtime and an increase in overall efficiency.
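
A digital twin at its simplest is just a continuously synchronized software mirror of an asset plus some derived analytics. The minimal sketch below (machine name, sensor, and threshold all invented for illustration) mirrors vibration readings from a physical machine and derives a maintenance flag from the recent trend.

```python
import statistics

class MachineTwin:
    """Minimal digital twin: mirrors sensor readings from a physical
    machine and derives a simple maintenance prediction from them."""

    def __init__(self, machine_id, vibration_limit=2.5):
        self.machine_id = machine_id
        self.vibration_limit = vibration_limit   # mm/s, invented threshold
        self.readings = []

    def sync(self, vibration_mm_s):
        """Ingest a new reading from the physical asset (e.g. via MQTT)."""
        self.readings.append(vibration_mm_s)

    def needs_maintenance(self, window=10):
        recent = self.readings[-window:]
        return bool(recent) and statistics.mean(recent) > self.vibration_limit

twin = MachineTwin("press-04")
for value in (2.1, 2.3, 2.9, 3.1, 3.4):
    twin.sync(value)
print(twin.needs_maintenance())  # True - the recent vibration trend is too high
```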

Another example is the construction industry. Digital twins can be created for buildings, which can be used for planning, construction, and maintenance. This can help to reduce costs, improve safety, and optimize energy consumption. Digital twins can also be used for smart cities, where sensors and other IoT devices are used to create a digital replica of a city. This can be used to monitor traffic flow, optimize energy consumption, and improve overall city planning.

A notable case study of digital twinning is the use of digital twins in the aerospace industry. NASA has been using digital twins for several years to simulate the performance of spacecraft. By creating a digital twin of a spacecraft, it is possible to predict its behavior in different environments, simulate potential malfunctions, and optimize its design. This has helped NASA to reduce costs, improve safety, and increase the reliability of its spacecraft.

Another example is the use of digital twins in healthcare. Digital twins can be created for patients, which can be used to simulate the effects of different treatments and predict potential health issues. By creating a digital twin of a patient, doctors can make more accurate diagnoses and create more personalized treatment plans.

Challenges

While digital twinning is a powerful tool for businesses, it is not without its limitations. One of the major shortcomings of digital twinning is the need for high-quality data. Without accurate and reliable data, digital twins cannot provide the expected benefits. This can be a challenge for businesses that operate in complex environments or deal with large amounts of data.

Another challenge is the lack of standardization. There are currently no established standards for creating digital twins, which can lead to inconsistencies in the data and models used to create them. This can limit the interoperability of digital twins and make it difficult to share data across different platforms.

To address these limitations, there are plans to improve data quality and standardization. Some companies are investing in machine learning algorithms to improve the accuracy and reliability of data used to create digital twins. They are also exploring ways to standardize the creation and management of digital twins, including developing common data models and formats.

Another solution is to increase collaboration among businesses, academia, and government agencies to develop and share best practices for digital twinning. This can help to ensure that digital twins are created using consistent and reliable data, and that they can be easily integrated with other systems.

In addition, advancements in technologies like 5G networks and edge computing are expected to improve the reliability and speed of data collection and analysis, making it easier to create and manage digital twins.

Standards

Currently, there are no widely accepted open standards for digital twinning. However, there are efforts underway to establish standards and protocols for digital twinning to improve interoperability and facilitate data exchange among different systems.

One such effort is the Industrial Internet Consortium (IIC), a global organization that aims to accelerate the adoption of the Industrial Internet of Things (IIoT) by developing common architectures, frameworks, and protocols. The IIC has developed a reference architecture for digital twinning, which provides guidance on how to design, implement, and manage digital twins in a consistent manner.

Another organization that is working on standardizing digital twinning is the Object Management Group (OMG). The OMG is a not-for-profit organization that develops and maintains standards for distributed computing systems. They have created the Digital Twin Consortium, a collaborative community of organizations that are developing open-source software, frameworks, and standards for digital twinning.

In addition, various industry groups and standards organizations are also working on digital twinning standards. For example, the Institute of Electrical and Electronics Engineers (IEEE) has created a working group to develop standards for the interoperability of digital twins.

While there are currently no widely accepted open standards for digital twinning, the efforts of these organizations and industry groups are a step towards developing common frameworks and protocols for digital twinning. These standards will help improve interoperability and enable more efficient and effective use of digital twins in various industries.

In conclusion, digital twinning has the potential to transform businesses by improving decision-making and optimizing processes. While there are still challenges to be addressed, the industry is actively working on solutions to improve data quality, standardization, and interoperability. As these challenges are addressed, digital twinning is expected to become an even more powerful tool for businesses in a wide range of industries.