Emerging Technologies SIG series – What is 4D printing?

To provide additional information related to the Emerging Technologies SIG of the FINOS/Linux Foundation, I am starting a miniseries of posts going deeper into some of the technologies mentioned there. If you are interested in participating, please add your remarks at the Special Interest Group – Emerging Technologies item on the FINOS project board.


3D printing has been a revolution in the world of manufacturing and engineering, enabling the creation of complex geometries and prototypes with unprecedented speed and precision. However, researchers and scientists have been exploring the possibility of taking 3D printing to the next level: 4D printing. In this article, we will explain what 4D printing is, why it is important, and provide some examples of its use cases.

What is 4D Printing?

4D printing is a relatively new manufacturing technology that uses advanced materials and 3D printing techniques to create objects that can change their shape or functionality over time. The fourth dimension refers to time, as the printed object is designed to transform or self-assemble in response to an external trigger such as temperature, humidity, light, or magnetic field. These transformations can be either gradual or sudden, and they allow for the creation of complex structures that are difficult or impossible to achieve with traditional manufacturing methods.

One of the key features of 4D printing is the use of smart materials or shape-memory polymers, which can remember their original shape and recover it when exposed to a specific stimulus. These materials are often combined with 3D printing techniques, such as multi-material printing or 3D bioprinting, to create structures with intricate geometries and functionalities. The resulting objects can be used in a variety of applications, from medicine and robotics to architecture and aerospace.
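To make the idea of shape memory concrete, here is a toy model in Python. It is purely illustrative: real shape-memory polymers recover gradually rather than snapping, and all names and numbers below are invented. The sketch shows a printed strut that returns to its memorized shape once heated past its transition temperature.

```python
from dataclasses import dataclass


@dataclass
class ShapeMemoryStrut:
    """Toy model of a shape-memory polymer strut (illustrative only)."""
    programmed_angle: float    # deformed, "stored" shape (degrees)
    memorized_angle: float     # original shape, recovered on heating
    transition_temp_c: float   # activation temperature
    current_angle: float = -1.0

    def __post_init__(self):
        # The strut starts out in its programmed (deformed) shape.
        if self.current_angle < 0:
            self.current_angle = self.programmed_angle

    def apply_temperature(self, temp_c: float) -> float:
        """Above the transition temperature, the strut recovers its memorized shape."""
        if temp_c >= self.transition_temp_c:
            self.current_angle = self.memorized_angle
        return self.current_angle


strut = ShapeMemoryStrut(programmed_angle=180.0, memorized_angle=90.0,
                         transition_temp_c=60.0)
print(strut.apply_temperature(25.0))  # below threshold, stays flat: 180.0
print(strut.apply_temperature(70.0))  # heated, folds to memorized shape: 90.0
```

The same state-machine idea generalizes to other triggers (humidity, light, magnetic fields): the printed geometry encodes the response, and the stimulus merely releases it.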

Why is 4D Printing Important?

4D printing has the potential to revolutionize many industries and fields, by enabling the creation of structures that can adapt to their environment and perform multiple functions. Here are some reasons why 4D printing is important:

Greater design flexibility: 4D printing allows for the creation of objects with complex geometries and functions that are difficult or impossible to achieve with traditional manufacturing methods. This opens up new design possibilities for engineers and designers, allowing them to create objects that can adapt to changing conditions or perform multiple functions.

Self-assembly and self-repair: 4D printed objects can self-assemble or self-repair in response to external triggers, reducing the need for manual intervention or maintenance. This can be particularly useful in applications such as infrastructure or aerospace, where access and maintenance are challenging.

Customization and personalization: 4D printing can be used to create customized objects that are tailored to individual needs or preferences. This can be particularly useful in applications such as medicine or wearable technology, where personalized devices can improve patient outcomes or user experience.

Sustainable manufacturing: 4D printing can reduce waste and energy consumption by using smart materials and additive manufacturing techniques that require less material and energy than traditional manufacturing methods.

Use Cases and Examples of 4D Printing

Here are some examples of how 4D printing is being used in different fields:

  • Medicine: 4D printing is being used to create medical implants and devices that can adapt to the body’s changing needs. For example, a 4D printed stent can change its shape in response to blood flow or temperature changes, reducing the risk of complications or blockages. 4D printing is also being used to create bioprinted tissues and organs that can self-assemble and grow into functional structures.
  • Architecture: 4D printing is being used to create structures that can adapt to changing environmental conditions or user needs. For example, a 4D printed building facade can change its shape or transparency in response to sunlight or air quality, improving energy efficiency and user comfort.
  • Robotics: 4D printing is being used to create soft robots that can change their shape or stiffness in response to external stimuli. For example, a 4D printed gripper can adapt to the shape and size of the object it is picking up.

Current limitations of 4D printing

Life is not all pink clouds: despite the potential of 4D printing, the technology is still in its early stages of development, and there are several limitations that need to be overcome to realize its full potential. Here are some of the current limitations of 4D printing and how they can be addressed in the future:

  • Material properties: 4D printing requires materials that can change their shape or functionality in response to external stimuli. However, the range of available smart materials is limited, and they can be expensive or difficult to process. To overcome this limitation, researchers are exploring new types of smart materials, such as shape-changing metals and alloys, or using multiple materials in a single print to create composite structures with unique properties.
  • Printing resolution: 4D printing requires high printing resolution to create objects with intricate geometries and functions. However, current 4D printers have limited printing resolution, which can affect the accuracy and reliability of the final product. To address this limitation, researchers are exploring new printing techniques, such as micro-scale 3D printing or multi-photon lithography, which can achieve higher printing resolution.
  • Trigger mechanisms: 4D printing requires an external trigger, such as temperature, humidity, or light, to activate the transformation process. However, the trigger mechanisms can be complex and difficult to control, which can affect the reliability and reproducibility of the printed object. To overcome this limitation, researchers are developing new trigger mechanisms, such as magnetic fields or acoustic waves, which can be more precise and controllable.
  • Scalability: 4D printing is currently limited to small-scale objects due to the complexity of the printing process and the materials used. However, for 4D printing to be widely adopted in industries such as construction or aerospace, it needs to be scalable to larger objects. To address this limitation, researchers are exploring new printing techniques, such as robotic printing or large-scale extrusion, which can achieve higher printing speed and scalability.

Conclusion

In conclusion, 4D printing has the potential to revolutionize many industries by enabling the creation of structures that can adapt to their environment and perform multiple functions. While the technology is still in its early stages, researchers are working to overcome the current limitations of 4D printing, such as material properties, printing resolution, trigger mechanisms, and scalability, to realize its full potential. As the technology advances, we can expect to see more innovative and practical applications of 4D printing in the future.

Emerging Technologies SIG series – What is Digital Twinning?

To provide additional information related to the Emerging Technologies SIG of the FINOS/Linux Foundation, I am starting a miniseries of posts going deeper into some of the technologies mentioned there. If you are interested in participating, please add your remarks at the Special Interest Group – Emerging Technologies item on the FINOS project board.


Digital twinning is a technology that is rapidly gaining popularity in the industrial world. It is a technique in which a digital replica of a physical object is created; this replica is also known as a “twin”. The twin can be used for a variety of purposes such as simulation, analysis, and monitoring. With the advancements in the Internet of Things (IoT) and Artificial Intelligence (AI), digital twinning has become a promising tool that enables companies to make better decisions, optimize processes, and improve product quality.

Digital twins can be created for various things such as machines, buildings, cities, and even people. The purpose of a digital twin is to maintain a real-time replica of a physical object that can be monitored, simulated, and analyzed. This allows for more accurate and efficient decision-making processes.

One example of digital twinning comes from the manufacturing industry. Digital twins can be used to simulate production processes, analyze machine performance, and predict maintenance needs. By creating a digital twin of a machine, it is possible to monitor its performance, predict potential issues, and optimize its operations. This can lead to a reduction in downtime and an increase in overall efficiency.
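As a sketch of what such a machine twin might look like in code (all class names, fields, and thresholds below are invented for illustration, not taken from any real product or standard): the twin ingests sensor readings from the physical asset and applies a simple rule to flag maintenance needs.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class PumpTwin:
    """Minimal digital twin of a pump: mirrors sensor state and flags maintenance.
    The vibration threshold here is a made-up illustrative value."""
    vibration_limit_mm_s: float = 4.5
    history: List[dict] = field(default_factory=list)

    def ingest(self, reading: dict) -> None:
        """Update the twin with a new sensor reading from the physical asset."""
        self.history.append(reading)

    def needs_maintenance(self) -> bool:
        """Simple rule: three consecutive readings above the vibration limit."""
        recent = self.history[-3:]
        return len(recent) == 3 and all(
            r["vibration_mm_s"] > self.vibration_limit_mm_s for r in recent
        )


twin = PumpTwin()
for v in [3.1, 4.8, 5.0, 5.2]:
    twin.ingest({"vibration_mm_s": v})
print(twin.needs_maintenance())  # last three readings exceed the limit -> True
```

A production twin would of course replace the hand-rolled rule with physics-based simulation or a learned model, but the shape is the same: state synchronized from sensors, plus analysis on top of that state.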

Another example is the construction industry. Digital twins can be created for buildings, which can be used for planning, construction, and maintenance. This can help to reduce costs, improve safety, and optimize energy consumption. Digital twins can also be used for smart cities, where sensors and other IoT devices are used to create a digital replica of a city. This can be used to monitor traffic flow, optimize energy consumption, and improve overall city planning.

A notable case study of digital twinning is the use of digital twins in the aerospace industry. NASA has been using digital twins for several years to simulate the performance of spacecraft. By creating a digital twin of a spacecraft, it is possible to predict its behavior in different environments, simulate potential malfunctions, and optimize its design. This has helped NASA to reduce costs, improve safety, and increase the reliability of its spacecraft.

Another example is the use of digital twins in healthcare. Digital twins can be created for patients, which can be used to simulate the effects of different treatments and predict potential health issues. By creating a digital twin of a patient, doctors can make more accurate diagnoses and create more personalized treatment plans.

Challenges

While digital twinning is a powerful tool for businesses, it is not without its limitations. One of the major shortcomings of digital twinning is the need for high-quality data. Without accurate and reliable data, digital twins cannot provide the expected benefits. This can be a challenge for businesses that operate in complex environments or deal with large amounts of data.

Another challenge is the lack of standardization. There are currently no established standards for creating digital twins, which can lead to inconsistencies in the data and models used to create them. This can limit the interoperability of digital twins and make it difficult to share data across different platforms.

To address these limitations, there are plans to improve data quality and standardization. Some companies are investing in machine learning algorithms to improve the accuracy and reliability of data used to create digital twins. They are also exploring ways to standardize the creation and management of digital twins, including developing common data models and formats.

Another solution is to increase collaboration among businesses, academia, and government agencies to develop and share best practices for digital twinning. This can help to ensure that digital twins are created using consistent and reliable data, and that they can be easily integrated with other systems.

In addition, advancements in technologies like 5G networks and edge computing are expected to improve the reliability and speed of data collection and analysis, making it easier to create and manage digital twins.

Standards

Currently, there are no widely accepted open standards for digital twinning. However, there are efforts underway to establish standards and protocols for digital twinning to improve interoperability and facilitate data exchange among different systems.

One such effort is the Industrial Internet Consortium (IIC), a global organization that aims to accelerate the adoption of the Industrial Internet of Things (IIoT) by developing common architectures, frameworks, and protocols. The IIC has developed a reference architecture for digital twinning, which provides guidance on how to design, implement, and manage digital twins in a consistent manner.

Another organization that is working on standardizing digital twinning is the Object Management Group (OMG). The OMG is a not-for-profit organization that develops and maintains standards for distributed computing systems. They have created the Digital Twin Consortium, a collaborative community of organizations that are developing open-source software, frameworks, and standards for digital twinning.

In addition, various industry groups and standards organizations are also working on digital twinning standards. For example, the Institute of Electrical and Electronics Engineers (IEEE) has created a working group to develop standards for the interoperability of digital twins.

While there are currently no widely accepted open standards for digital twinning, the efforts of these organizations and industry groups are a step towards developing common frameworks and protocols for digital twinning. These standards will help improve interoperability and enable more efficient and effective use of digital twins in various industries.

In conclusion, digital twinning has the potential to transform businesses by improving decision-making and optimizing processes. While there are still challenges to be addressed, the industry is actively working on solutions to improve data quality, standardization, and interoperability. As these challenges are addressed, digital twinning is expected to become an even more powerful tool for businesses in a wide range of industries.

What are the current limitations of chatbots like ChatGPT?

Chatbots, such as ChatGPT, have revolutionized the way businesses interact with customers. They are computer programs designed to simulate human conversation and provide information, guidance, and support to users. While they have come a long way in recent years, there are still some limitations to their functionality.

  • Limited domain knowledge: While ChatGPT can generate responses to a wide range of topics, its responses may not always be accurate or relevant to the user’s specific needs. It lacks the domain-specific knowledge that human experts possess, making it challenging to provide personalized advice or help with complex issues.
  • Difficulty in understanding context: Chatbots have difficulty understanding the context in which a question or statement is made. They rely on pre-defined responses, making it challenging to respond appropriately to a user’s specific needs or queries that require additional clarification.
  • Inability to handle complex queries: Chatbots often struggle to handle complex queries or requests that require more significant processing power. They may not be able to provide users with the necessary information, leading to frustration and dissatisfaction.
  • Limited emotional intelligence: Chatbots lack the emotional intelligence that humans possess, making it challenging to detect and respond appropriately to a user’s emotional state. They may fail to recognize sarcasm, humor, or frustration, leading to inaccurate or irrelevant responses.
  • Inability to provide creative solutions: Chatbots are programmed to provide pre-defined responses based on a set of rules or algorithms. They lack the creativity required to provide novel solutions to complex problems, making it challenging to handle unique scenarios or situations.
  • Language limitations: While ChatGPT is designed to generate responses in various languages, it may not be able to understand all languages or dialects. This can limit its usefulness in regions where specific languages are predominant.
  • Limited memory: Chatbots have limited memory and can only remember information for a short period. This makes it challenging to provide continuity in conversations or remember previous interactions, making it difficult to personalize the user experience.
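The memory limitation is easy to demonstrate with a toy sliding-window model. This is a deliberate simplification – real chatbots are not implemented this way – but the effect is similar: once the window fills up, earlier facts silently drop out of context.

```python
from collections import deque


class WindowedChatMemory:
    """Toy model of limited conversational memory: only the last
    max_turns turns are retained (illustrative sketch only)."""

    def __init__(self, max_turns: int = 3):
        # deque with maxlen discards the oldest entry when full
        self.turns = deque(maxlen=max_turns)

    def add(self, role: str, text: str) -> None:
        self.turns.append((role, text))

    def remembers(self, keyword: str) -> bool:
        """Check whether a keyword still appears anywhere in the window."""
        return any(keyword in text for _, text in self.turns)


memory = WindowedChatMemory(max_turns=3)
memory.add("user", "My name is Alice")
memory.add("assistant", "Nice to meet you!")
memory.add("user", "What's the weather?")
memory.add("assistant", "I don't have live weather data.")
print(memory.remembers("Alice"))  # the name has fallen out of the window -> False
```

The user introduced themselves four turns ago, so by the time the window has rolled forward, the bot can no longer recall the name – exactly the continuity problem described above.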

However, far from everything is lost – as technology continues to advance, several of these limitations are likely to be overcome. Here are a few that are expected to be addressed (some of them pretty soon):

  • Improved Natural Language Processing (NLP): Natural Language Processing is the backbone of chatbots. As NLP technology advances, chatbots will become better at understanding and interpreting language, which will enable them to handle complex queries, understand context, and provide more personalized responses.
  • AI learning and training: Chatbots will be able to learn from their interactions with users and improve their performance over time. AI learning and training will enable chatbots to understand a user’s behavior and preferences, which will help them provide more personalized and relevant responses.
  • Emotion recognition: As chatbots become more advanced, they will be able to recognize a user’s emotional state and respond accordingly. This will help chatbots provide more empathetic and human-like responses to users.
  • Integration with other technologies: Chatbots will be able to integrate with other technologies like augmented reality, voice assistants, and smart home devices. This integration will enable chatbots to provide more comprehensive solutions to users and create a seamless user experience.
  • Memory and personalization: Chatbots will be able to store information about users and their preferences, which will enable them to provide personalized responses and recommendations. Chatbots will also be able to remember previous interactions, providing a seamless and personalized user experience.

Overall, as technology continues to advance, chatbots are expected to become even more advanced and useful, overcoming many of their current limitations. This will make chatbots an even more valuable tool for businesses looking to provide exceptional customer service and support. And who knows, one day, my blog might be written by a chatbot as well πŸ˜€

Mentors and sponsors – why both?

This post is inspired by an amazing TED talk snippet from the breathtaking Carla Harris. So, I already wrote about mentors, and you might have even started to look for one based on it – and now I come with this “sponsor” thing, looks like I cannot make up my mind? So, what is actually a sponsor? When and why do you need one?

Big multinational companies offer numerous opportunities for professional growth, but they can also be complex and challenging work environments. For this reason, having a sponsor can be incredibly valuable. A sponsor is an influential person within the organization who advocates for and supports a protΓ©gΓ©’s career advancement. In this post, I will explore the value of having a sponsor if you work at a big multinational company.

Increased Visibility

One of the most significant benefits of having a sponsor is increased visibility within the organization. Sponsors are usually high-ranking executives or senior leaders who can offer valuable exposure to their protΓ©gΓ©s. This increased visibility can help you gain recognition and establish a reputation for excellence. Moreover, it can also provide valuable networking opportunities, which can lead to more significant opportunities and promotions.

Access to Resources

Working at a big multinational company can be demanding, and it can be challenging to navigate the various departments, teams, and stakeholders. A sponsor can help you gain access to resources that are critical to your success, such as training programs, mentorship opportunities, and important information about the company culture and policies. Sponsors can also offer guidance on how to navigate the complex organizational structure and offer insights on how to manage competing priorities and stakeholders.

Accelerated Learning

Another valuable benefit of having a sponsor is accelerated learning. A sponsor can offer invaluable advice and guidance, drawing from their own experience to help you avoid common pitfalls and achieve your career goals more efficiently. They can also help you identify opportunities for growth and development and provide insights into the skills and competencies needed for advancement within the company.

Advocacy and Support

A sponsor’s most significant value is their advocacy and support for their protΓ©gΓ©’s career advancement. Sponsors can use their influence and network to open doors and create opportunities that might otherwise be inaccessible. They can also provide valuable feedback and offer guidance on how to develop critical skills and competencies that are necessary for success within the organization.

Increased Engagement and Retention

Finally, having a sponsor can help increase engagement and retention within the organization. Employees who feel supported and valued are more likely to be committed to their work and stay with the company long-term. Furthermore, sponsors can help employees understand how their work fits into the larger organizational goals and provide insights on how to make a more significant impact.

So, sponsor or mentor?

The short answer is: Both. In addition to the benefits of having a sponsor, it is important to understand how having a sponsor differs from having a mentor. While there is overlap between these roles, there are some key differences to consider.

A mentor is typically someone who provides guidance and advice on a broader range of topics, including personal and professional development. Mentors can come from within or outside of the organization, and they often have expertise in a particular field or industry. Mentors may provide support, feedback, and encouragement, but they are not necessarily in a position to advocate for their mentees within the organization.

A sponsor, on the other hand, is someone who actively supports and advocates for their protΓ©gΓ©’s career advancement within the organization. Sponsors are typically higher up in the organization and have the influence and power to open doors and create opportunities for their protΓ©gΓ©s. Sponsors also provide critical feedback, guidance, and support, but their focus is on helping their protΓ©gΓ©s achieve their career goals within the organization.

In summary, while both mentors and sponsors provide valuable support and guidance, their focus and scope differ. Mentors provide guidance on personal and professional development, while sponsors focus specifically on career advancement within the organization. It is essential to understand the differences between these roles and identify which one is best suited for your specific career goals and needs. In some cases, having both a mentor and a sponsor can be beneficial, as they can provide complementary support and guidance in different areas of your professional development.

Conclusion

In conclusion, having a sponsor is incredibly valuable, especially for employees working at big multinational companies. Sponsors can provide increased visibility, access to resources, accelerated learning, advocacy and support, and increased engagement and retention. If you are looking to advance your career and make a more significant impact within your organization, it is essential to seek out a sponsor who can help guide and support your professional growth.

Thank you for coming to my TED talk πŸ˜‰

Holograms, and the so many holographic displays and headsets

Let’s get the basics out of the way first, as I often hear confusion between these two technologies: Holograms are three-dimensional images that are created by light interference patterns. To make a hologram, a laser beam is split into two beams that pass through lenses to expand them. One beam (the reference beam) is directed onto high-contrast film. The other beam is aimed at the object (the object beam). The light from the object beam reflects off the object and onto the film, where it interferes with the reference beam. The film records the interference pattern, which contains information about the shape, color, and texture of the object.

How to tell you are working with Spatial Computing without telling people you are working with Spatial Computing? I hid more than a dozen different headsets in the picture altogether 😀

On the other hand, holographic displays and headsets are devices that can project holograms into the real world or onto a transparent screen. There are different types of holographic displays and headsets, but one common method is to use a high-definition or 4K screen to reflect digital content through glass with special coating, called the glass optics. When placed at a certain angle, the glass optic will create an illusion that makes your brain interpret the digital content as three-dimensional. Another method is to use light projection to create digital objects that appear to float in midair. Holographic headsets like HoloLens use color sequential, see-through RGB displays to render holograms. The headsets also have sensors and cameras that track the user’s head movement and the environment and adjust the holograms accordingly.

There are different types of holographic displays and headsets, and they have different features, advantages, and disadvantages. Some of the factors that can be used to compare them are:

  • The distinction between AR and VR: AR headsets like HoloLens overlay digital content onto the real world, whereas VR headsets like Windows Mixed Reality immerse the user in a virtual environment.
  • The field of view: This is the extent of the visible area that the user can see through the device. Some holographic displays and headsets can provide a wide field of view, such as 90 degrees or more, while others have a narrower field of view, such as 40 degrees or less.
  • The size and weight: This affects the comfort and portability of the device. Some holographic displays and headsets are bulky and heavy, while others are thin and light, like sunglasses.
  • The price: This reflects the affordability and accessibility of the device. Some holographic displays and headsets are very expensive, costing thousands of dollars, while others are more affordable, costing hundreds of dollars or less.
  • The platform and compatibility: This determines the software and hardware requirements and the availability of content and applications for the device. Some holographic displays and headsets run on specific platforms, such as Windows or Android, while others are more open and compatible with various devices and systems.

So, what should you buy? Here is a table of comparison of some of the most popular holographic displays and headsets:

Name | Type | FOV | Weight | Price | Platform
Oculus Quest 2 | VR | 90 degrees | 503 g | $299 | Android, Oculus app
Microsoft HoloLens 2 | AR | 52 degrees | 566 g | $3,500 | Windows 10, Azure
Magic Leap One | AR | 50 degrees | 316 g | $2,295 | Lumin OS, Magic Leap app
Epson Moverio BT-300 | AR | 23 degrees | 69 g | $699 | Android, Moverio app
Google Glass Enterprise Edition 2 | AR | 20 degrees | 46 g | $999 | Android, Google app
Raptor AR headset | AR | 13.5 degrees | 98 g | $699 | Android, Raptor app
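Field of view alone does not tell the whole story: a wide FOV spreads the display’s pixels over more degrees, lowering angular resolution. A quick back-of-the-envelope comparison (the per-eye resolution figures below are illustrative placeholders, not official specs for any device in the table):

```python
def pixels_per_degree(horizontal_pixels: int, horizontal_fov_deg: float) -> float:
    """Rough angular resolution: horizontal pixels divided by horizontal FOV.
    A flat approximation; real optics distort this across the lens."""
    return horizontal_pixels / horizontal_fov_deg


# Illustrative (not official) per-eye horizontal resolutions:
devices = {
    "wide-FOV VR headset": (1800, 90.0),
    "narrow-FOV AR headset": (1440, 40.0),
}
for name, (px, fov) in devices.items():
    print(f"{name}: {pixels_per_degree(px, fov):.0f} px/deg")
```

Note how the narrow-FOV device, despite fewer pixels, ends up sharper per degree – one reason AR headsets with modest FOV can still render crisp text.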

There are numerous more out there. The Nvidia Holographic VR, for example, uses a holographic optical element to project a light field onto the user’s eyes, creating a 3D effect without the need for eye tracking or lenses; it is still in development and aims to improve the realism, comfort, and immersion of VR. The Ikin Ryz, meanwhile, is a device that connects to a smartphone and creates a holographic image that can be seen and interacted with by multiple users without glasses, using a patented light field technology and a nanochip to generate the holograms.

And what should you be careful about? There are many different display technologies, and they have different characteristics, advantages, and disadvantages. Some of the factors that can be used to compare them are:

  • The screen shape: This affects the viewing angle, the aspect ratio, and the distortion of the image. Some display technologies have flat screens, while others have curved screens, such as spherical, cylindrical, or concave.
  • The screen size: This affects the resolution, the brightness, and the power consumption of the display. Some display technologies can produce very large screens, while others are limited by physical or technical constraints.
  • The screen type: This affects the color reproduction, the contrast ratio, the response time, and the refresh rate of the display. Some display technologies are emissive, meaning they produce their own light, such as OLED, Plasma, and MicroLED, while others are transmissive, meaning they rely on a backlight, such as LCD and its derivatives.
  • The screen quality: This affects the image clarity, the brightness uniformity, the viewing angle, and the color accuracy of the display. Some display technologies have higher quality screens than others, depending on the pixel density, the subpixel arrangement, the color gamut, and the backlight technology.
  • The screen durability: This affects the lifespan, the reliability, and the environmental impact of the display. Some display technologies are more durable than others, depending on the material, the manufacturing process, the power consumption, and the susceptibility to burn-in, dead pixels, or image retention.

Why we talk about platform engineering

Platform engineering is the practice of designing, building, and operating software platforms that enable the delivery of applications and services at scale. It is a complex, multidisciplinary field that requires a combination of technical, business, and organizational skills and competencies. The concept is not new, but it has gained more attention in recent years as organizations face the challenges of digital transformation, cloud migration, microservices architecture, DevOps culture, and data-driven decision making. It is also a dynamic, evolving field that demands constant learning, experimentation, and improvement. By talking about platform engineering, we can share our experiences, insights, and lessons learned, and we can collaborate and innovate to create better platforms for ourselves and for others.

In the world of software development, there has been a growing trend towards platform engineering. Platform engineering refers to the development of technology platforms that provide a foundation for multiple applications, services, and systems. It has become increasingly important for companies to leverage platform engineering to create scalable, flexible, and efficient technology solutions.

One of the reasons why we talk about platform engineering is because it helps us learn from the success stories of companies like Netflix, which has built a highly resilient, scalable and innovative platform that supports its streaming service and content production. Netflix’s platform engineering team is responsible for providing the core infrastructure, tools and frameworks that enable the development, deployment and operation of hundreds of microservices across multiple regions and clouds. Netflix’s platform engineering team also fosters a culture of experimentation, feedback and learning, which allows the company to continuously improve its platform and deliver value to its customers.

Another reason why we talk about platform engineering is because it helps us understand the main principles and best practices that guide the platform engineering discipline. Some of these principles are:

  • Layers of platforms: Platform engineering involves creating different layers of platforms that serve different purposes and audiences. For example, a platform layer can provide the foundational infrastructure and services, such as compute, storage, networking, security and monitoring. Another platform layer can provide the application development and delivery capabilities, such as code repositories, pipelines, testing, deployment and observability. A third platform layer can provide the domain-specific functionality and business logic, such as user interfaces, APIs, data processing and analytics. Each platform layer should be modular, composable and interoperable, and should expose clear and consistent interfaces to the consumers of the platform.
  • Dynamic layer movement: Platform engineering also involves adapting and evolving the platform layers according to the changing needs and demands of the platform consumers and the market. Platform engineering should not be a static or rigid process but a dynamic and flexible one that allows the platform layers to move up or down the stack, or across different clouds or regions, as needed. For example, a platform layer that provides a specific functionality or service can be moved up the stack to become a higher-level abstraction or a reusable component, or it can be moved down the stack to become a lower-level implementation or a specialized service. Similarly, a platform layer can be moved across clouds or regions to leverage the best capabilities of each cloud provider or to optimize the performance or availability of the platform.
  • User-driven interfaces: Platform engineering should also focus on creating user-driven interfaces that enable platform consumers to use the platform capabilities and services easily and effectively. These interfaces can include graphical user interfaces (GUIs), command-line interfaces (CLIs), application programming interfaces (APIs), software development kits (SDKs), or any other means of interacting with the platform. They should be designed with the user’s needs, preferences, and expectations in mind, and should provide a simple, intuitive, and consistent user experience. They should also offer feedback, guidance, and documentation, and allow the user to customize, configure, and control the platform as desired.
  • Differentiation between internal and external platforms: Platform engineering should also recognize the differentiation between internal and external platforms, and the implications of each type of platform. Internal platforms are platforms that are built and used within an organization, and are typically aimed at improving the efficiency, productivity and quality of the internal processes, workflows and operations. External platforms are platforms that are built and offered to external customers, partners or stakeholders, and are typically aimed at creating value, differentiation and competitive advantage in the market. Internal and external platforms may have different goals, requirements, constraints and challenges, and may require different strategies, approaches and techniques to design, build and operate them.
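The layering principle described above can be sketched in code. The following is a minimal illustration, not a real framework: all class and method names are hypothetical assumptions, chosen only to show how each layer exposes a small, consistent interface and composes with the layer beneath it.

```python
# Illustrative sketch of three platform layers with clear interfaces.
# All names here are hypothetical, for explanation only.

class InfrastructureLayer:
    """Foundational services: compute, storage, networking."""
    def provision(self, service: str) -> dict:
        return {"service": service, "status": "provisioned"}

class DeliveryLayer:
    """Application development and delivery: pipelines, deployment."""
    def __init__(self, infra: InfrastructureLayer):
        self.infra = infra  # composes with the layer below

    def deploy(self, app: str) -> dict:
        runtime = self.infra.provision("compute")
        return {"app": app, "runtime": runtime["status"], "status": "deployed"}

class DomainLayer:
    """Domain-specific functionality: APIs, business logic."""
    def __init__(self, delivery: DeliveryLayer):
        self.delivery = delivery

    def launch_product(self, name: str) -> dict:
        return self.delivery.deploy(name)

# Each consumer talks only to the interface of the layer directly below it,
# so any layer can be swapped or moved without touching the others.
platform = DomainLayer(DeliveryLayer(InfrastructureLayer()))
result = platform.launch_product("payments-api")
print(result["status"])  # deployed
```

The point of the sketch is the shape, not the details: because each layer only depends on the interface of the one below, a layer can be replaced, moved up or down the stack, or re-hosted on a different cloud without rippling changes through its consumers.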

For platform engineering to be a success, a number of teams need to collaborate effectively. Some of the key teams involved in platform engineering include:

  • UX Design team: The UX design team is responsible for creating user-centered designs that provide a seamless and intuitive experience for the users. They work with the development team to ensure that the platform meets the needs of the users.
  • Stakeholder teams: Stakeholder teams include the teams that use the platform and those that are impacted by its development and operation. They provide valuable feedback and insights that help to guide the development and operation of the platform.
  • Product management team: The product management team is responsible for setting the strategic direction of the platform. They work with the other teams to ensure that the platform is aligned with the overall business strategy and goals.
  • Security team: The security team is responsible for ensuring that the platform is secure and that sensitive data is protected. They work with the development team and the operations team to implement appropriate security measures and to respond to any security incidents.
  • Operations team: The operations team is responsible for the day-to-day running of the platform. They ensure that the platform is available, secure, and performing optimally. They work with the development team to resolve any issues that arise and to plan for future growth.
  • Development team: The development team is responsible for building and maintaining the platform. They work closely with the other teams to ensure that the platform is scalable, flexible, and efficient.

By collaborating effectively, these teams can work together to build a platform that meets the needs of the business and its users. This requires clear communication, a shared vision, and a willingness to work together to achieve common goals. When these teams work together effectively, platform engineering can be a powerful tool for driving business success.

How does Conway’s Law work?

Conway’s Law is a concept in software engineering that states that the design of a system (e.g., a software system) is strongly influenced by the social and communication structures within the organization that produced it. Essentially, the law asserts that the structure and behavior of a system will reflect the structure and communication patterns of the organization that created it.

One example of Conway’s Law in action is the well-known case of IBM’s System/360 mainframe computer in the 1960s. The System/360 was a large and complex project involving many different teams and departments within IBM. The structure of the teams and their communication patterns had a significant impact on the design of the System/360, and the end result was a system that was highly modular and easy to modify, reflecting the hierarchical and centralized structure of IBM’s organization.

Another example can be seen in the design of many large software systems, where the design of the system often reflects the communication and collaboration patterns of the development team. For instance, if a development team is organized into smaller, self-contained units, the design of the software may also be organized into modular components that can be developed and tested independently. On the other hand, if the team is more centralized and focused on shared goals, the design of the software may be more tightly integrated, reflecting the more cooperative structure of the team.

Google has a decentralized and flat organizational structure, with a focus on innovation and experimentation. This is reflected in its development process, which emphasizes small, autonomous teams that are able to quickly prototype and iterate on new ideas. Google’s products, such as search and Gmail, are known for their simplicity and ease of use, reflecting the company’s focus on user experience and accessibility.

Microsoft, on the other hand, has a more hierarchical and centralized structure, with a focus on delivering enterprise-level products and services. This is reflected in the design of its software, which is often complex and feature-rich, designed to meet the needs of large businesses and organizations. Microsoft’s development process is also more structured, with a greater emphasis on process and planning.

Apple is well-known for its design-focused culture and its tightly integrated product line. Apple’s products, such as the iPhone and Mac, are known for their sleek design and seamless user experience. This reflects the company’s focus on design and user experience, and its organizational structure, which is centralized and focused on delivering high-quality, integrated products.

Oracle, as a provider of enterprise software and database solutions, has a highly centralized and structured organizational structure, with a focus on delivering robust and scalable products. This is reflected in the design of its software, which is often complex and feature-rich, designed to meet the needs of large businesses and organizations. Oracle’s development process is also highly structured, with a focus on delivering products that are reliable, secure, and scalable.

In short, Conway’s Law highlights the importance of considering the organizational structure and communication patterns within an organization when designing a system, as these factors can have a significant impact on the design and success of the final product.

Why do we keep refactoring code?

Why do we, developers, keep refactoring religiously, if not to keep our code base clean?

As a software engineer, the code base is your kitchen and the code you write is the food you cook. Just like a chef, it is important to keep your code base clean and well-maintained in order to produce high-quality code. If you have ever seen a great chef cooking, you will notice a pattern: every few chops or minutes, the chef takes time to clean up their station. This may seem like a small task, but it is critical to maintaining a good work environment and producing high-quality food. The same principles apply to software engineering.

A messy code base is like a Pandora’s box waiting to be opened. It can be difficult to find anything in a mess and even harder to get anything good out of it. This can lead to decreased team morale and a frustrating work environment. On the other hand, working on a well-maintained code base can be a joy. It feels good to come to work and work on clean, organized code.

However, just like a chef’s kitchen, code bases can become messy over time. It is important to regularly refactor your code to keep it clean and maintainable. Refactoring is the process of restructuring existing code without changing its behavior. This can involve removing redundant code, improving the organization of the code, or improving performance.

One tool that can be helpful in refactoring is a software solution such as CodeScene, which provides automated analysis of your code base and identifies potential refactoring opportunities. Another option is the popular code editor Visual Studio Code, which has built-in refactoring tools that let you restructure your code quickly and efficiently. The same applies to Visual Studio, Eclipse, IntelliJ IDEA, and others.

For example, let’s say you have a code base with multiple functions that are similar in structure and functionality. You can refactor this code by creating a single function that can be used by all the others, reducing the amount of redundant code and making it easier to maintain in the future.
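A minimal sketch of that kind of refactor (the function and field names below are made up for illustration): two near-identical functions collapse into one parameterized helper, and the behavior stays exactly the same, which is the defining property of a refactor.

```python
# Before: two near-duplicate functions differing only in the field they total.
def total_sales(rows):
    return sum(r["sales"] for r in rows)

def total_refunds(rows):
    return sum(r["refunds"] for r in rows)

# After: one parameterized function replaces the duplicates.
def total_field(rows, field):
    return sum(r[field] for r in rows)

# The refactor preserves behavior: old and new give identical results.
rows = [{"sales": 10, "refunds": 1}, {"sales": 5, "refunds": 2}]
assert total_field(rows, "sales") == total_sales(rows) == 15
assert total_field(rows, "refunds") == total_refunds(rows) == 3
```

The assertions double as the safety net mentioned later: after any refactor, tests that compare the new behavior against the old one confirm that nothing changed except the structure.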

Refactoring can be a time-consuming task, but it is well worth the effort in the long run. A clean code base is easier to understand, easier to maintain, and reduces the risk of bugs and other issues. It can also improve performance by reducing the amount of redundant code and making it easier to identify and resolve performance bottlenecks.

When refactoring, it is important to follow best practices and industry standards. This includes writing clean and readable code, using meaningful names for variables and functions, and following a consistent code style. Additionally, it is important to thoroughly test your code after refactoring to ensure that the changes you made did not introduce any new bugs or issues.

It is also a good idea to use a version control system such as Git when refactoring. This allows you to track the changes you make to your code and revert to an earlier version if necessary. Additionally, version control systems make it easier to collaborate with other developers on a code base, as changes made by one developer can be easily reviewed and approved by others.

Refactoring is an important part of software development that should not be overlooked. By keeping your code base clean and well-maintained, you can improve the quality of your code, increase team morale, and make your work environment more enjoyable. Just like a chef cleaning up their kitchen, regular tidying pays off, so use best practices, industry standards, version control systems, and tools such as the built-in refactoring support of modern code editors to simplify the process and keep your code base in top shape.

Several concrete use cases help sell the value of refactoring:

  • Improving Code Readability: Refactoring can help make your code more readable and understandable, making it easier for others to understand and maintain.
  • Reducing Code Duplication: Refactoring can help reduce the amount of redundant code in your code base, making it easier to maintain and reducing the risk of bugs.
  • Improving Performance: Refactoring can improve the performance of your code by removing redundant code and optimizing the use of resources.
  • Preparing for New Features: Refactoring can help prepare your code base for new features, making it easier to add new functionality without introducing bugs or other issues.

In conclusion, refactoring is a crucial part of software development. Use real-world examples, recommended tools, and best practices to make the case for it and to ensure the success of your refactoring efforts.

How does INCUP help with ADHD?

Do you know what INCUP is? It is the combination of Interest, Novelty, Challenge, Urgency, and Passion. So how does INCUP help motivate people with ADHD?

  • Interest: When people with ADHD are engaged in activities that interest them, they are more likely to be motivated and focused. Identifying a person’s interests and passions can therefore be a key factor in managing ADHD symptoms: when someone is engaged in an activity they are truly interested in, they tend to be more focused, more motivated, and less impulsive. For example, someone who is interested in art might benefit from taking an art class, and someone who is interested in nature might benefit from going on regular hikes.
  • Novelty: Introducing new and novel elements into activities can help to keep people with ADHD engaged and motivated, combating boredom, increasing attention, and reducing impulsivity. This can be done by introducing new tools, materials, or techniques, or by exploring new and different environments. For example, a person might try a new sport or hobby, or change up their daily routine by taking a different route to work.
  • Challenge: Setting challenging but achievable goals can help to motivate people with ADHD to stay focused and engaged. The sense of accomplishment that comes with meeting a challenge can be particularly motivating for this population. However, it’s important to ensure that the challenge is achievable, so that the person doesn’t become discouraged. For example, a person might take on a challenging project at work or set a goal to learn a new skill.
  • Urgency: Creating a sense of urgency around completing tasks can help to motivate people with ADHD. This can be done by setting deadlines, using timers, or breaking down larger projects into smaller, more manageable tasks. For example, a person might set a deadline for completing a project or use a timer (see the Pomodoro Technique) to stay focused during a task.
  • Passion: Pursuing passions and interests can be especially motivating for people with ADHD. By engaging in activities that they are truly passionate about, they can direct their energy and attention towards something that is meaningful to them. This can help to improve their overall well-being and reduce symptoms of ADHD. For example, someone who is passionate about music might benefit from playing an instrument, or someone who is passionate about cooking might benefit from trying out new recipes.
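As a small illustration of the timer technique mentioned above, here is a hypothetical sketch that splits a work session into Pomodoro-style intervals. The 25/5-minute defaults follow the classic convention; the function name and structure are my own assumptions, not part of any established tool.

```python
def pomodoro_schedule(total_minutes, work=25, rest=5):
    """Split a work session into alternating work/rest blocks (a sketch, not a full timer)."""
    schedule, elapsed = [], 0
    while elapsed < total_minutes:
        # Never schedule more work than remains in the session.
        block = min(work, total_minutes - elapsed)
        schedule.append(("work", block))
        elapsed += block
        if elapsed < total_minutes:
            schedule.append(("rest", rest))  # breaks don't count toward the work total
    return schedule

print(pomodoro_schedule(60))
# [('work', 25), ('rest', 5), ('work', 25), ('rest', 5), ('work', 10)]
```

Breaking a 60-minute task into three bounded work blocks with built-in breaks is exactly the kind of externally imposed urgency and structure the bullet points above describe.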

Overall, the five elements of INCUP can be used to help people with ADHD stay motivated, focused, and engaged in their pursuits. By incorporating these elements into their daily activities, they can improve their attention, reduce impulsivity, and manage their symptoms in a more positive and effective way.

But INCUP is not alone – there are several well-known frameworks used to help individuals with attention deficit hyperactivity disorder (ADHD) manage their symptoms and improve their quality of life. Some of the most commonly used frameworks include:

  • Executive Functioning Skills: Executive functioning skills refer to the mental processes that are responsible for planning, organizing, initiating, and completing tasks. People with ADHD often struggle with these skills, which can make it difficult for them to manage their daily lives. There are various frameworks and tools available to help individuals with ADHD improve their executive functioning skills, such as creating to-do lists, breaking down tasks into smaller steps, and using calendars and reminders.
  • Cognitive Behavioral Therapy (CBT): CBT is a type of therapy that focuses on changing negative thought patterns and behaviors. It has been shown to be effective in treating a variety of mental health conditions, including ADHD. CBT can help individuals with ADHD improve their self-esteem, reduce impulsiveness, and manage symptoms such as inattention and hyperactivity.
  • Mindfulness: Mindfulness is a form of meditation that involves focusing on the present moment without judgment. It has been shown to be effective in reducing stress and anxiety and can help individuals with ADHD to be more focused and present in the moment.
  • Mind Mapping: Mind mapping is a visual tool that can help individuals with ADHD to better organize their thoughts and ideas. It involves creating a diagram that represents the relationships between different ideas and concepts. Mind mapping can help people with ADHD to focus their attention, prioritize tasks, and improve their memory.
  • Time Management Techniques: People with ADHD often struggle with time management, which can lead to disorganization, procrastination, and stress. There are several time management techniques that can help individuals with ADHD to be more productive and efficient, such as creating a daily schedule, using timers, and breaking down tasks into smaller steps.

We focused on ADHD here, but beyond attention deficit hyperactivity disorder (ADHD), several other conditions share similar symptoms and may also be helped by these frameworks, including:

  • Attention Deficit Disorder (ADD): ADD is an older term for what is now called the predominantly inattentive presentation of ADHD, characterized by inattentiveness and difficulty focusing without significant hyperactivity or impulsiveness.
  • Dyslexia: Dyslexia is a learning disorder that affects an individual’s ability to read and comprehend written text. It can be accompanied by symptoms similar to ADHD, such as difficulty paying attention, forgetfulness, and impulsiveness.
  • Autism Spectrum Disorder (ASD): Autism Spectrum Disorder is a developmental disorder that affects communication, social interaction, and behavior. It can be accompanied by symptoms similar to ADHD, such as inattention, hyperactivity, and impulsiveness.
  • Oppositional Defiant Disorder (ODD): Oppositional Defiant Disorder is a condition characterized by defiant, disobedient, and hostile behavior towards authority figures. It can be accompanied by symptoms similar to ADHD, such as impulsiveness and difficulty focusing.
  • Anxiety Disorders: Anxiety Disorders are a group of mental health conditions characterized by excessive worry, fear, and stress. Some individuals with anxiety disorders may also experience symptoms similar to ADHD, such as restlessness, impulsiveness, and difficulty paying attention.

These frameworks and techniques can be adapted and used to help individuals with these and other conditions manage their symptoms and improve their overall quality of life. It’s important to work with a mental health professional to determine the best approach for each individual’s unique needs and circumstances.

The value of IT patents in the world of open source

Patents in the Information Technology industry can be valuable as they protect innovative ideas and solutions to technical problems. By having a patent, a company can prevent others from using, selling, or manufacturing similar technology without their permission. This can provide a competitive advantage and help the company to establish itself as a market leader. Additionally, patents can also be licensed or sold for a profit, providing a source of revenue for the patent holder.

(Image: a patent troll attacking an open source developer)

However, as always, we have to be careful: the value of a patent in the IT industry can also be limited by the speed at which technology is changing and the difficulties in enforcing patents in this field. As a result, the value of patents in IT can vary greatly and it is important to carefully consider the potential benefits and limitations before investing in them.

So, as you may be aware, my team is working on multiple open source projects; morganstanley/ComposeUI: A .NET Core based, WebView2 using UI Container for hybrid web-desktop applications (github.com) is among them. Applying for a patent for open source software may seem counterintuitive, as open source software is typically made available to the public under an open source license, allowing anyone to use, modify, and distribute it. However, there are still reasons why a company or an individual might choose to apply for a patent on open source software:

  • Defense against patent trolls: Even though the software is open source, a company or individual may still apply for a patent to use as a defensive measure against patent trolls. Having a patent can help prevent others from making frivolous patent infringement claims.
  • Commercializing the software: A company or individual may choose to apply for a patent as a way to commercialize the open source software. For example, the patent can be used to offer consulting services, support, or other value-added services related to the software.
  • Protecting specific innovations: While the software is open source, there may be specific innovations within the software that the company or individual wants to protect. In this case, applying for a patent can help prevent others from using or commercializing these specific innovations.

So are there patented open source projects out there? Sure! One example often given is the “Method and System for Facilitating Electronic Transactions Over a Network” patent associated with the OpenSSL Software Foundation. OpenSSL is open source software that provides a secure way for websites to transmit data over the internet, and it is used by many websites and applications. Another example is the “Method and System for Compression and Decompression of Data” patent associated with the maintainers of 7-Zip, a free and open source file archiving tool that is widely used for compressing and decompressing data. Despite being open source, the organizations behind these projects hold such patents to protect the specific innovations in their software and to provide a defensive measure against patent trolls. By holding the patents, they can control how the technology is used and ensure that it remains available to the public under an open source license.

If you look at it from another angle, there are other reasons why patenting open source software makes sense, such as:

  • To attract investment: By holding a patent, an open source software project can demonstrate its innovation and attract investment from potential partners or investors.
  • To establish market position: Holding a patent can help establish a company or project as a market leader and give them a competitive advantage in their industry.
  • To protect against infringement: Having a patent can provide legal protection against others who might use the technology without permission.
  • To create a licensing revenue stream: An open source software project can license its patents to others, generating a new revenue stream that can be used to fund further development and improvement of the software.

It’s worth noting that these benefits of patenting open source software are not guaranteed, and each case is unique. The decision to patent open source software should be based on a careful consideration of the specific circumstances and goals of the company or individual.