Using AD…what? ADR?

And the ADR for using ADRs is there in ComposeUI/adr-001-use-adrs.md at main · morganstanley/ComposeUI (github.com)


If you are watching the new repo, you might have spotted that the first few PRs are about adding a set of “ADR”s, Architecture Decision Records.


So you could ask: why? You could ask the same question about why a project is using X instead of Y – actually, a new person joining a team is definitely going to ask why you are using Entity Framework and not NHibernate. And believe me, most likely you would hand-wave and say “for historic reasons”, because you would absolutely not remember anymore. And that’s generally very unfortunate.

Documentation. Yes, I’m also a developer, and I also hate writing documentation. Who doesn’t? Actually, I think, above a certain age, when you hear ‘documentation’ you have a particular picture in your mind: books and books of many pages, probably bound together with twine and called project documentation, from back when you were still using the Rational Unified Process or the Microsoft Solutions Framework, like I did. Or the other way around: yes, there are tons of documentation, scattered among different systems, hosted away from the code in a wiki or something – it gets outdated the day you write it, it’s not part of source control, it’s not structured (or rarely is), so why write it at all, again?

So, documentation – why hate it, and can it be done better? I don’t think I ever started to like writing documentation, believe me. But writing Architecture Decision Records, following a certain scheme in them and keeping them in the same git repository as my code – that’s more to my liking. So, just to summarize: an architectural decision is a software design choice. One that addresses a functional or a non-functional requirement (like project structure – or the fact that you write ADRs!) that is architecturally significant – framework choices, the programming languages you choose, the OSes you target, data storage, communication patterns, persistence, validation, logging, telemetry; you name it. Of course, you should not take the term “architecture” too seriously or interpret it too strictly. I mean, look at the not-that-recent update of ThoughtWorks’s radar when it talks about ADRs: Lightweight Architecture Decision Records | Technology Radar | ThoughtWorks – they went from ‘trial’ in 2016, speaking about evolutionary architecture, to a mainstream ‘adopt’ in just a few years. And they also suggest storing them in version control instead of a wiki/website. Let me finish with the sentence from the Radar: “For most projects, we see no reason why you wouldn’t want to use this technique.” So, our http://github.com/MorganStanley/ComposeUI is no different 🙂
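If you haven’t seen one yet: a lightweight ADR is typically just a short, numbered markdown file following (something close to) Michael Nygard’s scheme. Here is a minimal sketch of that common skeleton – not necessarily the exact template the ComposeUI repo uses:

# ADR-NNN: Title of the decision

## Status
Proposed | Accepted | Superseded by ADR-MMM

## Context
What forces are at play – the functional or non-functional requirement, the constraints, the options considered.

## Decision
The choice we made, stated in full sentences (“We will…”).

## Consequences
What becomes easier or harder as a result; a later ADR can supersede this one instead of silently rewriting history.

Kept next to the code and reviewed through the same PR flow, the “why” travels with the “what” – which is exactly the point.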

Ok, so, where is the project?

I got a few people coming back to me after reading the previous post, Going opensource – why?, asking: OK, so where is the project? How come the repo GitHub – morganstanley/ComposeUI is still empty?

I might have not written it clearly enough OR you might have not read the previous post carefully enough (most likely the former), so let me restate: the idea is that the project will be developed in the open, instead of us playing a magician’s role with a rabbit and a hat – we wouldn’t just pull up a finished project without the reader understanding the steps we took to arrive at it; rather, each discussion, each wrong idea, etc. will also be developed and shown out in the open.

I hope this helps those who quickly clicked through the link at the top of the previous post, waiting to see the actual project – yes, that is not exactly the case here 🙂

Going opensource – why?

Small update on Ok, so, where is the project? added as a separate post 🙂


I suppose you haven’t spotted it yet – we just created a new opensource project, GitHub – morganstanley/Compose. It might be worth explaining the why behind that choice.

It is always appropriate to challenge the need for a project to be open source, given the lack of foreknowledge of anyone else having a desire to use the platform versus the other possible options or their own proprietary one. While we have the desire and the capability to support this if it comes to be, it’s not the primary motivation. With a foundational architectural aspiration of a pluggable platform, it’s important for it to be possible to selectively choose components from open source and, if needed, from commercial vendors that make sense given your requirements. There is no reason for a company to own every line of code or to find something available that tries to meet all our needs. Vendors will always yearn to get their foot in the door, and being open source accelerates and simplifies partnerships through increased visibility and collaboration, without the red/yellow tape, proprietary integrations or NDAs. We have already seen this model work successfully with other projects like desktopJS, as ongoing collaborative efforts with vendors to use and extend simply for their own (sales) demos to others – even if it would be naïve to think they aren’t also doing this to make it easier for us to choose their platform. It has proven the usefulness of open source in other ways than just marketing, recruiting or reuse by others. It is also a tool to better work with the industry, to outsource or “buy” features off the shelf that give commercial momentum with faster delivery. Having this project in the open is also useful to drive the roadmap of some companies, by adding use cases that finserv developers need and expect.

Open source also provides us with innovative methods to explore different avenues of staff augmentation – e.g. working in an open manner eliminates the onboarding needs as well as the associated overhead, resulting in immediate participation and contribution.

A tangential benefit of being open source is that it keeps us from binding ourselves unnecessarily to internal tooling, processes and implementations. By using off-the-shelf build infrastructure and GitHub capabilities, we will be able to swiftly adopt and leverage industry-leading DevOps tooling.

So what does open source mean to me? Whether you are operating on the cloud or on premises, you want to spend most of your time solving the business problems in front of you, not re-inventing the wheel of algorithms, APIs, pipelines, deployment and so on. Fortunately, we get to stand on the shoulders of giants by relying on robust, well-maintained open source libraries or specialized vendor software, when appropriate. Most of the popular modern solutions work equally well on the cloud and on internal infrastructure. In our view, .NET developers should be working on problems that are critical to the functioning of the company, and it’s the job of projects like this one to enable that focus by providing the foundational building blocks. We help find and create robust solutions to the generic problems so practitioners can remain focused on the aspects that are specific to their business. By developing battle-tested code, harnessing the brain power of the OSS community and building relationships with selected vendors, we are able to provide best-in-class tools and libraries for all .NET developers. This amounts to less custom code scattered around that becomes technical debt, and results in an easier learning curve for anyone joining a firm – like ours.

Where customizations are required of other OSS projects, we promote contributing those changes back to the project on the firm’s behalf – this is in line with the overall firmwide OSS strategy.

We are doing .NET Core – .NET Core itself is a fairly young technology, and its usage is also at the very beginning of its journey. Our mission, our approach and our thinking will evolve as .NET Core adoption matures and as the subject itself evolves. This will always be a collaborative effort and we do invite input from all of our constituents. This input allows us to be better informed and to better understand the needs of practitioners around the world. My group, my team, will strive to be a resource for .NET Core expertise on desktop, cloud, software, vendor needs, advanced techniques, academic research and related topics. But for this, we need to go open source 🙂

Morgan Stanley at Microsoft //Build’21

After having our logo up last year and being talked about, and being shown as part of the 2014 Build keynote (What kept me busy recently #4), this year we will again be part of //Build – this time my manager, Dov Katz, is going to be interviewed by Ben Walters, and we even got a real aka.ms vanity URL, http://aka.ms/msbuild_morganstanley 🙂 Besides this, it’s expected that even I’ll show up for a short segment speaking about the WebView2 journey as part of MyBuild – Microsoft Edge: State of the platform 🙂 And as above, Dov is part of MyBuild – Morgan Stanley joins Customer Tech Talks to discuss operating at the speed of international finance with WebView2.

Opportunities in the .NET Team at Morgan Stanley

We are on the verge of making huge, sweeping changes – moving to Linux, moving to .NET Core, moving to Azure, moving to the Edge control, moving to open source etc., just to name a few. We are a small, highly skilled team of .NET experts, whose role is to set the technical direction for .NET, collaborate with the development teams to encourage the adoption of best practices, and provide a specialized set of proprietary libraries. The moves above – moving to open source, to the cloud, etc. – are some of the biggest changes we have faced, and the roles below provide the opportunity to help set the direction for the next generation of .NET applications. We are looking for passionate technologists with design skills, strong CS fundamentals and a deep knowledge of .NET, as well as a curiosity about what is happening under the hood – how, why and what are the questions we answer on a daily basis.

We offer:

  • Opportunity to challenge yourself by solving some of the biggest technical challenges in Morgan Stanley.
  • Chance to be at the forefront of Morgan Stanley’s adoption of latest platforms, tools and techniques.
  • Insight into how technology is used in large scale enterprise deployments through collaborations with multiple teams across the firm.
  • Opportunity to look ‘under the hood’ – how, why and what if are questions that we answer on a daily basis.

You will collaborate with your team to:

  • Design and implement the next generation of our proprietary libraries, tools and components to support more modern architectures.
  • Provide direction and define best practices for designing modern applications for all the firm’s developers.
  • Work with application teams to identify and adopt the best solutions for their use cases.
  • Provide technical solutions to adopting new techniques and libraries which interface with existing deployments.
  • Increase our involvement in the Open Source projects that we rely on.

You have:

  • Solid .NET C# experience.
  • Strong fundamental technology skills (OO design, threading).
  • Server side, WPF or Winforms experience.
  • Ability to converse verbally and in writing in English with other .NET developers on complicated technical requirements.
  • An interest in and aptitude for technology.
  • The ability to adapt to a dynamic and multifaceted environment where business and technical skills are intermingled.

You may also have:

  • Knowledge of .NET Core.
  • Low level networking knowledge.
  • Advanced debugging skills.
  • Knowledge of development in sandbox environments.
  • Focus on User Experience.

If you know someone who might be interested please get in touch with peter (.) smulovics (at) morgan stanley .com .

//Build 2018 – day 1 – Keynote part 1

Not for the first time (and hopefully not the last 🙂) I was a happy participant at Microsoft’s annual Build conference – you know, the conference called “Christmas in May for Microsoft developers”. I started attending these conferences a long time ago – to be specific, I was there when .NET was announced, and was there when WPF was christened – yes, they weren’t touted as ‘Build’ back then but as the ‘Professional Developers Conference’, but close enough. Later I was part of ‘TechReady’, the internal Microsoft conference tailored for the ‘field’ part of Microsoft, but nevertheless similar to Build/PDC, and I have been participating/presenting at TechEd as well.

After this short intro, what do I think about the 2018 Build? It’s different and it’s the same. The same energy was there – save for the keynote, but more about that later – many of the same presenters (sometimes about a completely new topic), and the same and new friends to meet. Why is it different? Because it’s no longer the same company, not even the same company compared to a year ago. With some more execs from the ‘old’, ‘Ballmer’ era leaving, the company can now completely move past the old wounds and change trajectory; although they are a little like the Titanic (hopefully not in hitting the iceberg, rather in how hard it is to change direction), so there are areas which are slower to understand that everything has changed.

It started a day earlier for MVPs and RDs – I’m neither of those – but this meant good news for Oren Novotny, my old friend and ex-teammate: not only did his project (RX.NET) get moved to the .NET Foundation (becoming https://github.com/dotnet/reactive), and not only was he appointed to join the Microsoft Regional Director program, but he was also honored to become the Ninja Cat of the Year! Btw, do you know where Ninja Cat comes from? Originally it was featured in an internal PowerPoint deck about Windows 10, and quickly became the logo to celebrate and symbolize the passion and energy of the people behind the code. Unofficially, it represents the spirit of Microsoft employees. Officially, the award, one of the 2018 Windows Developer Awards decided by voting, recognizes the Developer (with a capital D) who best demonstrates the core values of Microsoft and made one or more significant contributions to Windows development in the past year. So, Oren ‘NinjaCat’ Novotny – congratulations! Well done!

So, on day 1, as I was standing in the registration line (one of the first in line), I tried to summarize my expectations for the conference and for the announcements. As Morgan Stanley is heavily involved in the early discussions for many of the products (whether that is the next version of .NET, .NET Core, Azure, Azure Stack, etc.), many of the announcements coming up I was already aware of, or in some cases had already played with. So I was looking forward more to meeting some of the industry leaders, meeting the product groups, meeting the decision makers, and learning about the broader next steps the industry and Microsoft are going to take. It also gave me a thrill – for the first time ever, I wasn’t only going to enter the Build conference – I was also going to visit ‘MRJam’, a ‘semi-secret’ Mixed Reality sub-conference, also by Microsoft, on the same premises. What did I expect from MRJam? As it was the first of its kind, I did not have any special expectations; just being able to see other people doing development for mixed reality headsets and the HoloLens (I’m in the latter camp) and to listen to some sessions seemed adequate – however, it turned out to be something significantly better. More about that later 🙂

Freshly registered, we lined up for the keynote – I have to admit the registration experience was significantly more organized than last time (that was a mess in a small room; now it was a room 5x as big with 3x as many registration stations AND the swag handling (t-shirts only) was done in a separate section), and we could walk past the expo and a newly designed, much bigger, better Channel9 area.

While walking down the fenced corridors, I could not avoid spotting Seth Juarez preparing for his opening Channel9 remarks – not a small feat to cold open a 3-day conference AND prepare for 3 straight days of talking to people – I haven’t seen the Channel9 stage empty during those days 🙂 Btw, I also could not avoid spotting that beards have now become the new norm 🙂

We definitely weren’t alone. I haven’t seen an official tally yet, but based on what I saw in the various sessions, in the lunch room, etc., I guess around 7,000 people – don’t quote me on that, I might be absolutely, crazily, completely off with that.

 

The line of announcements started while we were still standing outside, if you had an open ear toward the Channel9 recording booth: Windows 10 1803 ISOs showed up on the Microsoft Volume License Center’s website (with its whopping 4382 MB to download)! Also, we all knew for sure that we would be Hanselmanned sooner or later, and he did actually show up shortly before the keynote, to many ‘aaah’s and ‘oooh’s 🙂

 


Then the show began! Same as last time, I was sitting next to the Press area, just slightly to the right of it, with Mary Jo and Thurrott sitting just a few feet from me – this had to be a good seat 😛

 

The opening words were new in so many ways. Yes, we have had opening words delivered at Connect 2016 by none other than Stephen Hawking, and many other amazing scientists, but this time they came from Charlotte Yarkoni, about Project Emma, a wearable device we saw as a prototype a year ago. It’s there to help patients who are suffering from Parkinson’s – Emma herself got the disease at age 29, and being someone talented with wonderful ideas (being a designer and a creative director), she was afraid the diagnosis might end her career. With the new Emma watch, she is able to live a full life and can flesh out her wonderful ideas herself.

And here starts the list of endless announcements! Visual Studio 2017 15.7 (actually now we have 15.7.1 due to a last-minute security bugfix, so make sure you grabbed the newer version) with C# 7.3 – enum/delegate constraints, ref reassignment, stackalloc initializers, unpinned fixed buffer indexing, custom fixed, and many more; and with ASP.NET Core publishing to non-container App Service on Linux, CodeLens unit testing, responsive testing icons, step-back debugging for .NET Core, GitHub auth for SourceLink, JavaScript debugging for Edge (this is big!), XF editor IntelliSense (also big!), VSM for UWP (another biggy!), signed NuGet support, and many more. Many of these are thanks to the Developer Advocates – Donovan Brown, Seth Juarez and many more. And Visual Studio for Mac 7.5 (why can they not share versioning, sigh?) with Razor for ASP.NET, TypeScript and JavaScript for web, wifi debugging for iOS, an Android SDK manager for Android, .editorconfig support, and more! Lastly, on this topic, Microsoft brings its Visual Studio App Center lifecycle management tool for iOS, Android, Windows and macOS developers to the GitHub Marketplace.
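To make a couple of those C# 7.3 items above more tangible, here is a tiny sketch of my own (not from the keynote) showing the new System.Enum generic constraint and the stackalloc array initializer:

using System;

static class EnumHelper
{
    // C# 7.3: a generic constraint on System.Enum – no more runtime type checks
    public static string[] Names<T>() where T : struct, Enum
        => Enum.GetNames(typeof(T));
}

static class Demo
{
    static void Main()
    {
        Console.WriteLine(string.Join(", ", EnumHelper.Names<DayOfWeek>()));

        // C# 7.3: stackalloc with an array initializer, assigned straight to a Span<T>
        Span<int> firstPrimes = stackalloc int[] { 2, 3, 5, 7 };
        Console.WriteLine(firstPrimes[2]); // prints 5
    }
}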

The actual opening notes from Satya Nadella made me google/bing quickly – why does Bill Gates speak about Apple stock prices? Yes, I wasn’t following what exactly Warren Buffett said about stock prices, and I wasn’t sure others were following him either – it lost me a little bit there. And if your cold open is not hitting the right vibes, you start to feel a little off… Back to announcement mode: two new mixed reality business applications were announced, Layout and Remote Assist. As the names imply, they cover two common use cases – laying out objects over the real world, and providing real 3D, context-sensitive help when needed using a mixture of devices – I actually had the chance to try out an earlier implementation of the latter by a company called Kazendi, where they showcased their holomeeting product. We got another reason for using Visual Studio Code – the Sort JSON Objects add-in, for sorting both your objects and your settings. Then the announcement of Microsoft’s own content delivery network (CDN) – this time again Microsoft is trying to get into a rather busy field, although with the promise of a rather big number of edge sites this might be more promising than it sounds (50ms on average in 60 countries with 54 edge POPs in 33 countries + 16 cache POPs); although I don’t see this as an imminent threat to Akamai, CloudFront, Cloudflare, Level 3, etc.

We went a little hardware and IoT Edge from here (so many Edges Microsoft has now 😉), so when you were thinking (for a good reason, actually) that the Microsoft Xbox Kinect is dead – here is Kinect for Azure. A hardware solution not dissimilar to the one in the next HoloLens in 2019, with depth sensor resolution bumped up to 1024×1024 from 640×480. As Microsoft gave a Kinect to each developer at a previous Build, many developers who weren’t really into gaming figured out that, with the ‘Kinect for Windows’ SDK, many amazing industrial solutions could be made – this is what led to the fourth reinvention of Kinect into this small device that can fit alongside other IoT solutions. To be honest, I can already see some amazing mixed reality projects forming in my mind. The IoT news didn’t end there: IoT Edge itself got open sourced, along with an AI Developer Kit for cameras from Qualcomm and a Drone SDK from DJI, demonstrated by flying an actual drone in the keynote, recognizing flaws in the pipes on stage. This is enabled by many updated Cognitive Services, some of them part of Azure IoT Edge, letting you train in the cloud but run on the device – enabling blazingly fast decisions while being superbly precise at the same time. While we’re at Azure: did you know about the new Terraform resource provider? It enables you to write an ARM template that creates a Kubernetes cluster on Azure Container Service (AKS), and then, via the Terraform OSS provider, Kubernetes resources such as pods, services and secrets can be created as dependent resources. Also, geo-replication for Azure Container Registry with ACR Build, to enable easier OS and framework patching with Docker. Also – an actually pretty slick way to enable/disable – Azure SignalR with literally half a line of code change (AddSignalR –> AddAzureSignalR), moving SignalR connectivity to the Azure edge using a fully managed service.
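Just to illustrate how small that change really is, here is a minimal ASP.NET Core sketch (assuming the Microsoft.Azure.SignalR package is referenced and a connection string sits in configuration, e.g. under Azure:SignalR:ConnectionString; ChatHub is a hypothetical hub standing in for your own):

using Microsoft.AspNetCore.SignalR;
using Microsoft.Extensions.DependencyInjection;

public class ChatHub : Hub { }

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Self-hosted before:
        //     services.AddSignalR();
        // The "half line" after – connection handling moves to the managed Azure SignalR Service:
        services.AddSignalR().AddAzureSignalR();
    }
}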

This next one is big: Microsoft is changing the revenue mix. App developers are now going to get 85% instead of 70%, and will get 95% if the user is redirected from the developer’s website to the store. Would this be enough to be a gamechanger? Would this result in a landslide? Very hard to tell; looking at this number, it doesn’t feel small to me, but it does feel late. Back to user interfaces – XAML Islands are coming up (enabling the use of the Edge WebView in the earlier tech)! These enable you to host UWP content in your Winforms or WPF applications – who said that Winforms/WPF is dead? Actually, everyone said it.

But if we are already at WPF/Winforms – next to announcing .NET Core 2.1 RC1 (which contains a performance-related PR and a Linux compatibility PR from me!), there is also an important announcement about .NET Core 3.0 – with the ability to run Winforms and WPF on top of .NET Core 3.0, giving them side-by-side ability and probably opening the way to them becoming open source. And actually, Clippy can help you code it. Not kidding: IntelliCode is trying to become the Clippy of development and provide AI-assisted capabilities, through not just better contextual IntelliSense and focused PR reviews, but in the future actually comparing your source code to other source code and trying to point out if you used the wrong variable in a line.

When it came to AI again, Microsoft showed all the breakthroughs we have had in AI in the recent past – object recognition parity on a vision test in 2016, speech recognition test parity in 2017, reading comprehension parity in January 2018, and machine translation parity in March 2018. But where it becomes scary is at things like Project Brainwave, an Intel FPGA solution enabling superior, never-before-seen acceleration for real-time AI while being cost effective and efficient.

 

 

When it comes to AI, one of the topics always brought up is bots & intelligent assistants. We have gotten used to hell freezing over by now – and it looks like the previously announced Cortana-Alexa partnership is actually going to happen, although I found it quite awkward that I explicitly had to sign in/out of them. One of the biggest applauses of the keynote was a rather unexplainable moment – one of the canned responses of Alexa about Cortana. Another very good example of the kind of thinking about AI came up then, as part of a mega Microsoft 365 demo with an ASL interpreter included – we had a sense of what was coming, but it was still groundbreaking how easily the machine was able to transcribe not only what was spoken but also who spoke and what the call-to-action was. We saw integration with HoloLens, integration with LUIS, integration with Surface Hub, etc. – if only my workspace looked like this 😀

 

I’ll continue after the break – we actually had a break, with a little gym exercise included 🙂 😀 😛

 

DevOps 2.0 – Beyond the Buzzword

A few days ago, I participated in the DevOps 2.0 – Beyond the Buzzword session hosted in the Microsoft building. The event was sponsored by Neudesic, Microsoft, and Docker. It differed from the DevOps 1.0 session (which was sponsored by Neudesic and Amazon) in that it focused a little (actually not that much) on Microsoft technologies (like Azure). The presentations contained a lot of interactive forums and Q&A, and were presented by Mike Graff, Kunaal Kapoor, Chad Cook, Eric Stoltze and Chad Thomas, all from Neudesic.

 

The agenda consisted of four major sections; “DevOps in Review”, “Building the Continuous Delivery Culture”, “Automating the Secure Enterprise” and “Modernizing Legacy Apps”.

 

Starting with basic questions – Is your org implementing DevOps, Is your org using the cloud, What did you come to learn today – we quickly set the stage, and the sessions started. There was one axiom I really did not agree with, especially as someone in a team dealing with R&D and participating in Morgan Stanley's Innovation program – “You cannot innovate and standardize at the same time”. I’m happy to challenge this 🙂

 

In the digital age, it will not be the big who eat the small, it will be the fast that eat the slow!

 

The questions and discussions in the session mostly focused on why and how we do continuous delivery and DevOps. We looked at Finance 101 – Net Present Value, NPV = Σ cash inflow_t / (1 + r)^t − cash outflow (where r is the cost of money and t is time) – and discussed how DevOps can help change each of these variables. We also looked into basic Return On Investment (ROI) calculations, to see how DevOps can change them for the better. We discussed how we see opportunity cost (the benefit that could have been realized, but was given up, by choosing another course of action) and the cost of delay (a twist on opportunity cost that combines urgency and value).
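As a back-of-the-envelope illustration of that formula (my own sketch, not from the session) – an upfront cost discounted against a stream of yearly inflows, which is exactly the calculation DevOps tries to tilt by shrinking t and pulling the inflows forward:

using System;
using System.Linq;

static class NpvDemo
{
    // NPV = sum over the years of inflow_t / (1 + r)^t, minus the upfront outflow
    static double Npv(double rate, double upfrontCost, params double[] yearlyInflows)
        => yearlyInflows.Select((cf, year) => cf / Math.Pow(1 + rate, year + 1)).Sum() - upfrontCost;

    static void Main()
    {
        // 100k spent now, 40k of value per year for four years, 10% cost of money
        Console.WriteLine(Npv(0.10, 100_000, 40_000, 40_000, 40_000, 40_000)); // ≈ 26,795
    }
}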

 

By the way, did you know that across industries, 60-90% of required features for a product produce zero or negative value (like non-automatic hygiene)? This even applies to giants like Google or Microsoft, so it is not specific to company size or industry.

 

So, why do we practice continuous delivery and DevOps? The keywords are:

  • Faster Delivery – High Performing teams deliver software 20 times more often with 200% better lead time; this nicely binds into the Continuous Delivery and Deployment agenda
  • Safer Delivery – High Performing teams have 48 times lower MTTR and 3 times lower change failure rate; this is something that binds into the blue/green agenda
  • More Efficient – High performing teams spend half as much time on rework and three times as much time on new work
  • Better Security – High performing teams have a 50% reduction in security related incidents
  • Improved Satisfaction – Teams adopting CD / DevOps doubled their internal net promoter score and had tripled their customer net promoter score
  • Increased Profitability – Organizations practicing CD / DevOps are 26% more profitable than traditional divisions

 

The three ways to achieve the DevOps nirvana according to the discussions are: flow, feedback and experiment.

 

How to work with Flow?

 

  • Make Work Visible
  • Limit Work-in-Process/Progress
  • Reduce Batch Size
  • Reduce Handoffs
  • Identify and Elevate Constraints
  • Eliminate Hardships and Waste

 

The first two items are part of most teams’ Kanban implementations, so I won’t spend extra characters on them. Making problems more bite-sized does help maintain a healthier throughput, and helps iron out the bumps in the road when an item has the wrong T-shirt size assigned. Handoffs – each time you do a handoff, you literally lose track of it and automatically build a queue. Try to limit the number of handoffs through automation, by using commodity components. The last two items are in some ways obvious ones; although sometimes we tend to deliver features faster than we have time to recognize that we are delivering under constraints or that we are creating unnecessary waste.

 

When it comes to Feedback, we can look into ways to:

 

  • Work Safely in a Complex System
  • See Problems as They Occur
  • Swarm and Solve Problems to Build Knowledge
  • Keep Pushing Quality Close to the Source
  • Enable Optimization for Downstream

 

How many times have you felt unsafe in a system or a system’s admin UI you had to interact with, not knowing what the next click or swipe would result in? Whether there is going to be another ‘are you sure?’ question, or whether you have just managed to render your environment unusable? Or, looking at this question from the other side – are you monitoring the right thing? Do you understand what needs to be done when the alert comes? Is it coming through a system that is able to target it correctly? No one wants to be the target of a blast mail to 500+ people with a multi-megabyte attachment, trying to figure out which items are relevant; nor do you want to get notifications tailored to you but not mentioning any of the action items to be taken. And when an issue happens, what should be the steps followed? Yes, in the short term it’s “cheaper” to have a follow-the-sun support model and have the support person deal with it, as surely the support person would “document the steps needed for later reuse” – I never see that latter part happening. Whereas, if available people do swarm, the issue might be solved at a higher realized price when it comes to manpower, but the cross-pollination happening during the solution is priceless. And – it shouldn’t be a big deal to help the downstream system optimize, instead of building another layer/façade/pre-system around it; the latter would just lead to another piece to maintain in the long term.

 

Lastly, when it comes to Experiment, we speak of the followings:

 

  • Enable Continuous Learning
  • Create a Safety Culture
  • Institutionalize Continuous Improvement
  • Transform Local Discoveries to Global Knowledge
  • Inject Resilience into Daily Work
  • Leaders Reinforce Learning Culture

 

We learn as long as we live. With Pluralsight, Lynda, Harvard reviews and more training tools available, there is no real excuse for not taking the proverb to heart. We need to support people taking a leap – looking at Morgan Stanley's Innovation Program for the second time in this one article, this may be one of the best approaches to this problem so far. Continuous improvement shouldn’t stop at writing the software. It should be part of the fabric of the team itself; implementing so-called ‘katas’ to improve the process and to seek improvements in day-to-day work is more than desirable. As teams span multiple locations, it is imperative to host knowledge-share sessions, brown bags, post write-ups, and more. We should aim to record as many of these as possible and make them available tagged, sometimes even transcribed or with minutes written up. And when should people listen to these? Building ‘slack’ time, or the possibility of slack time, into a system, using the Pomodoro Technique and gifting yourself experimenting time for successful Pomodoros, etc. might have a bigger benefit than you think. Lastly, such changes (if changes are needed) are most successful if they are reinforced or started by management, or if you experience them through mentorship – try to seek out the latter if you don’t feel the former.

 

Because, yes, DevOps 2.0 needs more cultural change than technological change – for some of the problems in the DevOps space there are off-the-shelf solutions, but to make them successful, the 20% of technological change needs to be met with the 80% of cultural change needed. You cannot just introduce a DevOps team between your Dev and Ops teams – that would result in another set of handovers (see above), and no real benefits.

 

In a later session, when it came to the question of Security as part of DevOps, we were reminded that:

 

Fewer than 20% of enterprise security architects actively and systematically incorporated information security into their DevOps initiatives (Gartner)

 

As part of looking into the importance of the enterprise DevOps process, you can look into:

 

  • Automating Security
  • Empowering Engineering Teams
  • Maintaining Continuous Assurance
  • Setting Up Operational Hygiene
  • Increasing Engineering Autonomy
  • Increasing Availability of Dev Technologies
  • Ensuring Constant Change is the New Norm
  • Leveraging Wide-ranging Operational Responsibilities of DevOps

 

We have had a long discussion of this, but I think our requirements when it comes to security and the maturity of these solutions might not be aligned to where we are now and where we would like to be in the future.

 

Lastly, as part of Application Modernization Strategies, we went into a discussion of the “6 Rs”:

 

  • Retire – Remove the application or platform completely
  • Replace – Replace the application or platform with a new version or a competing solution
  • Remediate – Invest in extending the lifeline of the application or platform
  • Re-architect – Redesign the application or platform to meet demands
  • Re-platform – Redesign the application to run on different infrastructure
  • Re-build – Rewrite the application to remove technical debt or change the implementation

 

The whole day was full of demos, everything from security kits through Kubernetes orchestration to mobile crash analysis, and useful discussions with industry peers. And the best discussion of the day? “How do you get around SOX and Segregation of Duties when using DevOps?” – You don’t. Instead, you educate the auditors/regulators and show that the relevant controls and the segregation are still there.

finJS.IO 2017 Fall conference

 

disclaimer: Morgan Stanley was invited to the event by OpenFin. 

 

This year I had the pleasure of representing Morgan Stanley at the finJS.IO NYC conference in midtown. We had an awesome crowd, with many friends and former colleagues showing up – who could resist such a lineup of presenters?

 

We had OpenFin's own Mazy Dar providing the opening words and giving a heads up on the sessions: Graham Odds from Scott Logic, Fergus Keenan from Adaptive, Terry Thorsen from ChartIQ, Jason Dennis from FactSet, John David Dalton from Microsoft and Mazy himself to present and amuse us.

 

FactSet

 

First we had FactSet taking the stage, speaking about unbundling, interoperability and data concordance. He showed us what FactSet is and what they provide – with 14 datacenters, 650 3rd-party data sets, 25 of their own content sets, over 300 exchanges, over 130 news sources, etc., the amount of data they have at their fingertips is astonishing. They built their application stack on four pillars – workflows, apps/components, content and technology – with the following bingo board in technology (that being the most relevant for us): HTML5, Node, websockets, Angular, Vue, HTTP/2, Elastic, Hadoop, Spark, C++, C#, Java, Python, Go, memcached, VMware, NSX, Linux, Windows, HPC, CDN. Their focus is on 3 steps – survey, pick, implement – resulting in a process for how to normalize content and how to implement workflows. We have to admit – concordance and interoperability are hard. They are excelling at the former – they spent 30 years building a concordance model on securities, and they do understand the directives and object symbology needed, etc. They are (next to Morgan Stanley) part of the Financial Desktop Connectivity and Collaboration Consortium (FDC3), to enable financial participants to build solutions that link components and data. When it comes to interoperability on the desktop, they depend on OpenFin and Symphony for the local and global messaging bus, and leverage the power of fins:// for live collaboration in context.

 

Next they talked about how they moved from their WPF-only UI to the web, hosting the website built using Angular (later React) in the WPF container; then taking another step and moving their application out of the host container, running as a native web app. However, seeing the limitations of such an approach, they turned back again and started using a thick container to host their application, to enable some of the otherwise unexploited desktop features in their application – for example, this enabled proper out-of-band notifications from their machine learning platform.

 

Adaptive

 

The next presenter was from Adaptive, who spoke mostly about the evolution of trading in the world, starting from 300 BC up till today, touching on the 17th and 18th centuries along the way. He did speak about where the IT budget of financial companies is going – out of the $127bn annual budget, in 2017 we had $2.5bn on advanced analytics with +15% growth, $1.5bn on artificial intelligence with +14% growth, $0.5bn on robotic process automation with +10% growth and $1bn on cloud computing with +19% growth – all significant numbers, and he made us think about the implications as well.

 

ChartIQ

 

Next came ChartIQ, who spoke about their Finsemble application platform, showing frameless browser windows, and showing the analogy we have when comparing it to distributed systems: the windows are the nodes, the communication bus is the network and the microservices are the services. Applications are an "ensemble" effort – components that are visible and services that are hidden. What they came up with is a solution based on OpenFin and their custom architecture. When it comes to synchronizing state, the fact that components can start in various orders needs to be supported by a pub/sub mechanism. When it comes to performance, you have to invest in Chromium's process splintering to avoid unnecessary overhead from Chromium instances, and you have to introduce dependency ordering to avoid startup race condition errors. When it comes to debugging, you need to introduce central management and logging; when it comes to user experience, you cannot just have a loose interface (you need to introduce window management like snapping, docking, etc.). Lastly, when it comes to aesthetics, and having a heterogeneous set of component interfaces, there isn't a better solution than doing the hard work and introducing something along the lines of a 'Font Finance' similar to 'Font Awesome' – a set of web components that is mandated to be used.

 

And then there is the tough tail to be fixed – monitor plug/unplug/resolution changes, orientation changes, event storms & throttling, working around issues with max server connections to enable features like webpack hot reload, supporting hotkeys, supporting CORS, enabling proper timer sync (needed for central logging as well!), supporting fitting to the DOM when it comes to dynamic-height content, avoiding missing mouseout events at the window edges, etc., etc. – the list is nearly endless.

 

OpenFin

 

Next up was Mazy Dar from OpenFin, who talked about the 'bank of the future' and what banks can learn from Dropbox. We did see the achievements and development of mobile applications for banks in the last decade, but where is the similar change for desktop apps? We should get out-of-band notifications, offline-synced information, etc. like we had in the '90s, but with today's technology stack and with integration into the built-in desktop services.

 

Scott Logic

 

Following was Scott Logic, who talked about how to create reactive desktop applications through an example that involved someone doing a two in a refrigerator and immediately regretting this decision. But the presentation was serious, and talked about what we can achieve when trying to do more than just switching between a 'desktop' and a 'mobile' web mode, and it did involve some very eye-catching demos of interaction models that adapted themselves based not just on screen size, but on smaller differences like "how is my container aligned on the screen". The magic behind it is moving a level beyond reactive media queries – and you can do that today, of course with a polyfill ( https://github.com/ausi/cq-prolyfill ) for container queries ( Container Queries ).

 

Microsoft

 

The last session was from Microsoft, focusing on ECMAScript modules ( https://nodejs.org/api/esm.html ) and the current issues with them – whether it comes to debugging, error reporting, builtin variables, Babel, etc., all suffer if you try --experimental-modules today. What John, wearing a BladeRunnerJS t-shirt, showed was how https://www.npmjs.com/package/@std/esm solves all these problems in Node 4+, not only providing support for mixing CommonJS and .mjs, but also being available in the Node REPL (see https://medium.com/web-on-the-edge/es-modules-in-node-today-32cff914e4b for many more details):


$ node

> require("@std/esm")

@std/esm enabled

> import p from "path"

undefined

> p.join("hello", "world")

'hello/world'

 

It also unlocks features like dynamic import, live bindings (yes, actual live bindings across module boundaries), the file URI scheme, etc. It also has unambiguous module support, named exports, top-level main await, gzipped modules, etc. You could ask: what is the performance hit you have to pay? Loading CJS is ~0.28 ms/m, built-in ESM ~0.51 ms/m, the first @std/esm run with no warm cache ~1.56 ms/m, but with warm caches @std/esm runs were ~0.54 ms/m. Which means it is nearly built-in performance, but with a lot more flexibility, better error reporting, path protocol resolution, environment variables, etc.

 

 

To summarize, the whole event – the mingling at the beginning and at the end, as well as the various presentations – was just fantastic 🙂

PowerShellStravaganza

I was strongly looking forward to Techstravaganza NYC – not just because it’s a New York based conference, not just because of the different tracks focusing on different materials (PowerShell, Windows, Exchange and Azure Cloud), but also because our very own Tome was (as always) one of the organizers and presenters. His session (Anarchist’s guide to PowerShell – What your mom told you not to do with PowerShell and how to do it!) not only won the competition for best title, but I think it came close to winning the best content award too.

From the 4 available tracks I went with PowerShell – mostly because that is where my knowledge is lacking the most. From the lineup of strong sessions I picked: “Anarchist’s guide to PowerShell – What your mom told you not to do with PowerShell and how to do it!” by Tome, “Rewriting history: extending and enhancing PowerShell features with fantastic results.” by Kirk Munro, and a panel discussion featuring all of the day’s PowerShell speakers.

But not to run too far ahead – for me the day was kicked off by Kirk’s rewriting history session. Next to learning where PowerShell’s builtin capabilities trail off and when you need to start touching compiled cmdlets (like proper parameter usage, deriving commands, proxy cmdlets, performance!), we should not forget that PowerShell is now cross platform and open source – if someone wants to add one of these extensions, they usually should feel free to do that using GitHub, PRs, etc. To write such a custom cmdlet, you only need your favorite editor (Sapien, VS2017, VSCode – your choice), the Microsoft.PowerShell.*.ReferenceAssemblies for the matching PowerShell version, and some patience 🙂 So with PowerShell 6 just in alpha, being cross platform, what better time to learn writing cmdlets than now?
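For reference, a compiled cmdlet really is that small – a minimal sketch with a made-up Get-Greeting, built in C# against the System.Management.Automation reference assemblies mentioned above:

using System.Management.Automation;

[Cmdlet(VerbsCommon.Get, "Greeting")]
public class GetGreetingCommand : Cmdlet
{
    [Parameter(Position = 0, Mandatory = true)]
    public string Name { get; set; }

    // Called for each pipeline input; WriteObject pushes the result down the pipeline
    protected override void ProcessRecord()
    {
        WriteObject($"Hello, {Name}!");
    }
}

Compile it into a DLL, Import-Module the DLL, and Get-Greeting Build behaves like any other cmdlet – pipeline support and the common parameters included.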

One of the first bits of magic we saw, and one that resonated with me, was something eerily similar to https://github.com/nvbn/thefuck (if you are not aware of that tool, it adds missing sudos, it adds the missing --set-upstream to your git command, tries to find out what command you mistyped, and about 50 more things – see https://github.com/nvbn/thefuck/blob/master/README.md#how-it-works for details). Using a lesser known command-not-found-handling extension point, it enabled nice shorthands to replace “verb -verbs” constructs with “verb-verb” constructs, and much more.

The next item that caught my attention was LanguagePX – a demonstration of ExcelPX by building a DSL that enabled moving from:

Get-Process | Export-Excel -Path "$([Environment]::GetFolderPath('MyDocuments'))\test.xlsx" -Force -Show -AutoFitColumns -IncludePivotTable -IncludePivotChart -PivotRows Name -PivotData WorkingSet64

 

to:

ExcelDocMagic "$([Environment]::GetFolderPath('MyDocuments'))\test.xlsx" {

    Properties {

        Overwrite = $true

        Show = $true

        AutoSize = $true

    }

    PivotData {

        RowPropertyName = 'Name'

        RowAggregateValue = 'WorkingSet64'

        IncludeChart = $true

    }

}

 

Turns out this not only caught my eye – it is on the way to becoming part of PowerShell 6 to support custom DSLs, thanks to an RFC. This opens up the option to achieve something that would have been a crazy idea before, like generating a Visio document in an object-oriented way and having PowerShell fill out the data there, all with proper type safety – I’m sold.

Next to tackling the capability to fix mistakes and provide nice syntactic sugar, and next to providing a way to define DSLs, the 3rd item from Kirk that surprised me was HistoryPX – providing an extended set of history information AND last-output support in PowerShell. Not only does it capture output, exception criteria and command duration, but also the number of returned objects! It makes it so much easier to work with history. Add to this the $__ variable capturing the last output, with configuration for when it should be captured – e.g. I can say I don’t want sort operations to be captured if I don’t want to.

As part of the sponsorship section, I saw Sapien’s IDE for the nth time – but this was actually the first time I spotted that there is support for a splatting refactor in the context menu. Now, how did I live without it??

The next interesting tidbit came from Stephen Owen – before his session I was not that much involved (let me rephrase: I had absolutely no overview nor details) in the Enterprise Mobility Management suite’s integration with VMware AirWatch and PowerShell, but he showed some interesting tidbits on how these components work together: either AirWatch sends a command to EAS to authenticate, and with the established secure trusted relationship AirWatch can send cmdlets to PowerShell in accordance with the email policies (and they get executed) – this does not require a server per se – or it uses the AirWatch Enterprise Integration Service to deliver the PowerShell commands to the server. It was eye-opening to see PowerShell used together with VMware – not two technologies/companies I’d otherwise put in the same sentence.

Then came the session I had been waiting for since the morning: the anarchist’s guide on how not to use PowerShell, with Tome. Starting with the nice ShowUI library, which enables you to *easily* build graphical user interfaces using PowerShell scripts – to the level of using ‘New-Button -Content "Hello World" -Show’ or:

$User = [ordered]@{

   FirstName = "John"

   LastName = "Doe"

   BirthDate = [DateTime]

   UserName = "JDoe"

}

 

Get-Input $User -Show

The next demo was about Azure Functions, PowerShell – with the comma between them removed – small PowerShell snippets you can use in a serverless scenario provided by Azure Functions, to provide stateless microservices. And speaking of microservices, we saw a way of using Apache Zookeeper to spin up PowerShell worker processes on incoming data feeds – think of it like a distributed Zookeeper PowerShell cluster. And this was the best place, I think, for a Treadmill shoutout 🙂

And the thing we (I?) weren’t really prepared for – http://www.leeholmes.com/projects/ps_html5/Invoke-PSHtml5.ps1 – I’ll leave you to try it out. And the list continued – a very interesting demo of using Unity (the 3D game engine) and PowerShell together to display data (some of it on a spinning globe) using PSUnity, and another crazy language experiment allowing you to write native .NET IL in PowerShell using PSILAsm.

The list is endless 🙂