Opportunities in the .NET Team at Morgan Stanley

We are on the verge of making huge, sweeping changes – moving to Linux, moving to .NET Core, moving to Azure, moving to the Edge control, moving to open source, etc., just to name a few. We are a small, highly skilled team of .NET experts whose role is to set the technical direction for .NET, collaborate with the development teams to encourage the adoption of best practices, and provide a specialized set of proprietary libraries. The moves above – to open source, to the cloud, etc. – are some of the biggest changes we have faced, and the roles below provide the opportunity to help set the direction for the next generation of .NET applications. We are looking for passionate technologists with design skills, strong CS fundamentals and a deep knowledge of .NET, as well as a curiosity about what is happening under the hood – how, why and what if are the questions we answer on a daily basis.

We offer:

  • Opportunity to challenge yourself by solving some of the biggest technical challenges in Morgan Stanley.
  • Chance to be at the forefront of Morgan Stanley’s adoption of latest platforms, tools and techniques.
  • Insight into how technology is used in large scale enterprise deployments through collaborations with multiple teams across the firm.
  • Opportunity to look ‘under the hood’ – how, why and what if are questions that we answer on a daily basis.

You will collaborate with your team to:

  • Design and implement the next generation of our proprietary libraries, tools and components to support more modern architectures.
  • Provide direction and define best practices for designing modern applications for all the firm's developers.
  • Work with application teams to identify and adopt the best solutions for their use cases.
  • Provide technical solutions for adopting new techniques and libraries that interface with existing deployments.
  • Increase our involvement in the open source projects that we rely on.

You have:

  • Solid .NET C# experience.
  • Strong fundamental technology skills (OO design, threading).
  • Server side, WPF or Winforms experience.
  • Ability to discuss complicated technical requirements with other .NET developers, verbally and in writing, in English.
  • An interest in and aptitude for technology.
  • Ability to adapt to a dynamic and multifaceted environment where business and technical skills are intermingled.

You may also have:

  • Knowledge of .NET Core.
  • Low level networking knowledge.
  • Advanced debugging skills.
  • Knowledge of development in sandbox environments.
  • Focus on User Experience.

If you know someone who might be interested, please get in touch with peter (.) smulovics (at) morgan stanley .com .

//Build 2018 – day 1 – Keynote part 1

Not for the first time (and hopefully not the last 🙂) I was a happy participant at Microsoft's annual Build conference – you know, the conference called "Christmas in May for Microsoft developers". I started attending these conferences a long time ago – to be specific, I was there when .NET was announced, and there when WPF was christened – back then they weren't touted as 'Build' but as the 'Professional Developers Conference', close enough. Later I was part of 'TechReady', the internal Microsoft conference tailored to the 'field' part of Microsoft but otherwise similar to Build/PDC, and I have participated in and presented at TechEd as well.

After this short intro, what do I think about the 2018 Build? It's different and it's the same. The same energy was there – save for the keynote, but more about that later – many of the same presenters (sometimes on completely new topics), and the same and new friends to meet. Why is it different? Because it's no longer the same company – not even the same company compared to a year ago. With more execs from the 'old', 'Ballmer' era leaving, the company can finally close the old wounds and change trajectory; although it is a little like the Titanic (hopefully not in hitting an iceberg, but in how hard it is to change direction), so there are areas that are slower to understand that everything has changed.

It started a day earlier for MVPs and RDs – I'm neither – but it meant good news for Oren Novotny, my old friend and former teammate: not only was his project (Rx.NET) moved to the .NET Foundation (becoming https://github.com/dotnet/reactive), and not only was he appointed to the Microsoft Regional Director program, but he was also honored as the Ninja Cat of the Year! Btw, do you know where the Ninja Cat comes from? It originally featured in an internal PowerPoint deck about Windows 10, and quickly became the logo celebrating and symbolizing the passion and energy of the people behind the code. Unofficially, it represents the spirit of Microsoft employees. Officially, the award – one of the 2018 Windows Developer Awards, decided by voting – recognizes the Developer (with a capital D) who best demonstrates the core values of Microsoft and made one or more significant contributions to Windows development in the past year. So, Oren 'NinjaCat' Novotny – congratulations! Well done!

So, on day 1, as I was standing in the registration line (one of the first in line), I tried to summarize my expectations for the conference and the announcements. As Morgan Stanley is heavily involved in the early discussions for many of the products (whether that is the next version of .NET, .NET Core, Azure, Azure Stack, etc.), I was already aware of many of the upcoming announcements, and in some cases had already played with them. So I was looking forward more to meeting some of the industry leaders, the product groups and the decision makers, and to learning about the broader next steps the industry and Microsoft are going to take. It also gave me a thrill – for the first time ever, I wasn't only entering the Build conference, I was also to visit 'MRJam', a 'semi-secret' Mixed Reality sub-conference, also by Microsoft, on the same premises. What did I expect from MRJam? It being the first of its kind, I did not have any special expectations; just being able to see other people doing development for mixed reality headsets and the HoloLens (I'm in the latter camp) and to listen to some sessions seemed adequate – however, it turned out to be something significantly better. More about that later 🙂

Freshly registered, we lined up for the keynote – I have to admit the registration experience was significantly more organized than last time (that was a mess in a small room; now it was a room 5x as big with 3x as many registration stations, AND the swag handling (t-shirts only) was done in a separate section), and we could walk past the expo and a newly designed, much bigger, better Channel9 area.

While walking down the fenced corridors, I could not avoid spotting Seth Juarez preparing for his opening Channel9 remarks – no small feat to cold open a 3-day conference AND prepare for 3 straight days of talking to people – I haven't seen the Channel9 stage empty during the days 🙂 Btw, I also could not avoid noticing that beards have become the new norm 🙂

We definitely weren't alone. I haven't seen an official tally yet, but based on what I saw in the various sessions, in the lunch room, etc., I'd guess 7,000 people – don't quote me on that, I might be absolutely, crazily, completely off.

 

The line of announcements started while we were still standing outside – at least if you had an open ear toward the Channel9 recording booth – with Windows 10 1803 ISOs showing up on the Microsoft Volume License Center's website (with a whopping 4382 MB to download)! Also, we all knew for sure that we would be Hanselmanned sooner or later, and he did actually show up shortly before the keynote, to many 'aaah's and 'oooh's 🙂

 


Then the show began! Same as last time, I was sitting next to the Press area, just slightly to the right of it, with Mary Jo Foley and Paul Thurrott sitting just a few feet from me – this has to be a good seat 😛

 

The opening words were new in so many ways. Yes, we have had opening words delivered at Connect 2016 by no other than Stephen Hawking, and by many other amazing scientists, but this time they came from Charlotte Yarkoni, about Project Emma – a wearable device we saw as a prototype a year ago. It is meant to help patients suffering from Parkinson's. Emma herself got the disease at age 29, and being someone talented with wonderful ideas (a designer and a creative director), she was afraid the diagnosis might end her career. With the new Emma watch, she is able to live a full life and can flesh out her wonderful ideas herself.

And here starts the list of endless announcements! Visual Studio 2017 15.7 (actually we now have 15.7.1 due to a last-minute security bugfix, so make sure you grabbed the newer version) with C# 7.3 – enum/delegate constraints, ref reassignment, stackalloc, unpinned fixed buffer indexing, custom fixed, and many more – and with ASP.NET Core publishing to non-container App Service on Linux, CodeLens unit testing, responsive test icons, Step Back debugging for .NET Core, GitHub auth for SourceLink, JavaScript debugging for Edge (this is big!), Xamarin.Forms editor IntelliSense (also big!), VSM for UWP (another biggie!), signed NuGet support, and many more. Many of these are thanks to the Developer Advocates – Donovan Brown, Seth Juarez and many more. And Visual Studio for Mac 7.5 (why can they not share versioning? sigh) with Razor for ASP.NET, TypeScript and JavaScript for the web, WiFi debugging for iOS, an Android SDK manager, .editorconfig support, and more! Lastly, on this topic, Microsoft is bringing its Visual Studio App Center lifecycle management tool for iOS, Android, Windows and macOS developers to the GitHub Marketplace.

The actual opening notes from Satya Nadella made me google/bing quickly – why does Bill Gates speak about Apple stock prices? Yes, I wasn't following what exactly Warren Buffett said about stock prices, and I wasn't sure others were following him either – it lost me a little bit there. And if your cold open doesn't hit the right vibes, you start to feel out of the blues… Back to announcement mode: two new mixed reality business applications were announced, Layout and Remote Assist. As the names imply, they cover two common use cases – laying out physical objects over the real world, and providing real 3D, context-sensitive help when needed using a mixture of devices – I actually had the chance to try out an earlier implementation of the latter by a company called Kazendi, where they showcased their HoloMeeting product. We got another reason to use Visual Studio Code – the Sort JSON Objects add-in, for sorting both your objects and your settings. Also, the announcement of Microsoft's own content delivery network (CDN) – Microsoft is once again trying to get into a rather busy field, although with the promise of a rather big number of edge sites this might be more promising than it sounds (50ms on average in 60 countries, with 54 edge POPs in 33 countries + 16 cache POPs); although I don't see this as an imminent threat to Akamai, CloudFront, Cloudflare, Level 3, etc.

We went a little hardware and IoT Edge from here (so many Edges Microsoft has now 😉), so just when you were thinking (for a good reason, actually) that the Microsoft Xbox Kinect is dead – here is Kinect for Azure. A hardware solution not dissimilar to the one in the next HoloLens in 2019, with the depth sensor resolution bumped up from 640×480 to 1024×1024. As Microsoft gave a Kinect to each developer at a previous Build, many developers who weren't really into gaming figured out that with the 'Kinect for Windows' SDK many amazing industrial solutions could be made – this is what led to the fourth reinvention of Kinect as this small device that can fit alongside other IoT solutions. To be honest, I can already see some amazing mixed reality projects forming in my mind. The IoT news did not end there: IoT Edge itself got open-sourced, along with an AI Developer Kit for cameras from Qualcomm and a drone SDK from DJI, demonstrated by flying an actual drone in the keynote, recognizing flaws in the pipes on stage. This is enabled by many updated cognitive services, some of which are part of Azure IoT Edge, letting you train in the cloud but run on the device – enabling blazingly fast decisions while being superiorly precise at the same time. While we are at Azure – did you know about the new Terraform resource provider? It enables you to write an ARM template that creates a Kubernetes cluster on Azure Container Service (AKS), and then, via the Terraform OSS provider, Kubernetes resources such as pods, services and secrets can be created as dependent resources. Also, geo-replication for Azure Container Registry with ACR Build, to enable easier OS and framework patching with Docker. Also – an actually pretty slick way to enable/disable it – Azure SignalR, with literally half a line of code change (AddSignalR → AddAzureSignalR), moving SignalR connectivity to the Azure edge using a fully managed service.

This next one is big: Microsoft is changing the revenue mix. App developers are now going to get 85% instead of 70%, and 95% if the user is redirected from the developer's website to the store. Would this be enough to be a gamechanger? Would this result in a landslide? Very hard to tell; looking at this number I don't feel it's little, but I do feel it's late. Back to user interfaces – XAML Islands are coming (enabling the use of the Edge WebView in the earlier tech)! These enable you to host UWP content in your Winforms or WPF applications – who said that Winforms/WPF is dead? Actually, everyone said it.

But if we are already at WPF/Winforms – next to announcing .NET Core 2.1 RC1 (which contains a performance-related PR and a Linux compatibility PR from me!), there is also an important announcement about .NET Core 3.0 – the ability to run Winforms and WPF on top of .NET Core 3.0, giving them side-by-side ability and probably opening the way to them becoming open source. And actually, Clippy can help you code it. Not kidding – IntelliCode is trying to become the Clippy of development and provide AI-assisted capabilities: not just better contextual IntelliSense and focused PR reviews, but in the future actually comparing your source code against other source code to point out if you used the wrong variable in a line.

When it came to AI again, Microsoft showed all the breakthroughs we have had in AI in the recent past – object recognition parity on a vision test in 2016, speech recognition test parity in 2017, reading comprehension parity in January 2018, and machine translation parity in March 2018. But where it becomes scary is at things like Project Brainwave, an Intel FPGA solution enabling superior, never-before-seen acceleration for real-time AI while being cost effective and efficient.

 

 

When it comes to AI, one of the topics always brought up is bots & intelligent assistants. We have got used to frozen hells by now – and it looks like the previously announced Cortana–Alexa partnership is actually going to happen, although I found it quite awkward that I explicitly had to sign in and out of each of them. One of the biggest rounds of applause of the keynote came at a rather unexplainable moment – one of Alexa's canned responses about Cortana. Another very good example of this kind of thinking about AI came up then, as part of a mega Microsoft 365 demo with an ASL interpreter included – we had a sense of what was coming, but it was still groundbreaking how easily the machine was able to transcribe not only what was spoken, but also who spoke and what the call-to-action was. We saw integration with HoloLens, integration with LUIS, integration with Surface Hub, etc. – if only my workspace looked like this 😀

 

I'll continue after the break – we actually had a break, with a little gym exercise included 🙂 😀 😛

 

DevOps 2.0 – Beyond the Buzzword

A few days ago I participated in the DevOps 2.0 – Beyond the Buzzword session hosted in the Microsoft building. The event was sponsored by Neudesic, Microsoft and Docker. It differed from the DevOps 1.0 session (which was sponsored by Neudesic and Amazon) in that it focused a little (actually not that much) on Microsoft technologies (like Azure). The presentations contained a lot of interactive forums and Q&A, and were delivered by Mike Graff, Kunaal Kapoor, Chad Cook, Eric Stoltze and Chad Thomas, all from Neudesic.

 

The agenda consisted of four major sections: “DevOps in Review”, “Building the Continuous Delivery Culture”, “Automating the Secure Enterprise” and “Modernizing Legacy Apps”.

 

Starting with basic questions – Is your org implementing DevOps? Is your org using the cloud? What did you come to learn today? – we quickly set the stage, and the sessions started. There was one axiom I really did not agree with, especially as someone on a team dealing with R&D and participating in Morgan Stanley's Innovation program – “You cannot innovate and standardize at the same time”. I'm happy to challenge this 🙂

 

In the digital age, it will not be the big who eat the small, it will be the fast that eat the slow!

 

The questions and discussions in the session mostly focused on why and how we do continuous delivery and DevOps. We looked at Finance 101 – Net Present Value (NPV) = cashflow_in / (1 + r)^t − cashflow_out, where r is the cost of money and t is time – and discussed how DevOps can help change each of these variables. We also looked into basic Return On Investment (ROI) calculations, to see how DevOps can change them for the better. We discussed how we see opportunity costs (the benefit that could have been realized, but was given up, by choosing another course of action) and the cost of delay (a twist on opportunity cost that combines urgency and value).
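To make the discount mechanics concrete, the NPV formula from the session can be sketched in a few lines (the helper below is purely illustrative, assuming a single future inflow and a single upfront outflow):

```javascript
// Illustrative sketch of NPV = cashflow_in / (1 + r)^t - cashflow_out,
// where r is the cost of money and t is the number of periods.
function npv(cashflowIn, cashflowOut, r, t) {
  // Discount the future inflow back to today, then subtract today's outflow.
  return cashflowIn / Math.pow(1 + r, t) - cashflowOut;
}

// E.g. $110 received in one year at a 10% cost of money, against $90 spent today:
console.log(npv(110, 90, 0.10, 1).toFixed(2)); // "10.00"
```

Lowering r (cheaper money), shrinking t (delivering sooner) or reducing the upfront outflow all raise the NPV – which is exactly where the session argued DevOps can move the needle.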

 

By the way, did you know that in the average industry, 60-90% of required features of a product produce zero or negative value (like non-automatic hygiene features)? This even applies to giants like Google or Microsoft, so it is not specific to company size or industry.

 

So, why do we practice continuous delivery and DevOps? The keywords are:

  • Faster Delivery – High performing teams deliver software 20 times more often with 200% better lead time; this ties nicely into the Continuous Delivery and Deployment agenda
  • Safer Delivery – High performing teams have 48 times lower MTTR and 3 times lower change failure rate; this ties into the blue/green agenda
  • More Efficient – High performing teams spend half as much time on rework and three times as much time on new work
  • Better Security – High performing teams have a 50% reduction in security-related incidents
  • Improved Satisfaction – Teams adopting CD / DevOps doubled their internal net promoter score and tripled their customer net promoter score
  • Increased Profitability – Organizations practicing CD / DevOps are 26% more profitable than traditional divisions

 

The three ways to achieve DevOps nirvana, according to the discussions, are: flow, feedback and experiment.

 

How to work with Flow?

 

  • Make Work Visible
  • Limit Work-in-Process/Progress
  • Reduce Batch Size
  • Reduce Handoffs
  • Identify and Elevate Constraints
  • Eliminate Hardships and Waste

 

The first two items are part of most teams' Kanban implementations, so I won't spend extra characters on them. Making problems more bite-sized does help maintain a healthier throughput, and helps iron out the bumps in the road when an item has the wrong T-shirt size assigned. Handoffs – each time you do a handoff, you literally lose track of the item and automatically build a queue; try to limit the number of handoffs through automation and by using commodity components. The last two items are in some ways obvious, although sometimes we tend to deliver features faster than we have time to recognize that we are delivering under constraints or creating unnecessary waste.

 

When it comes to Feedback, we can look into ways to:

 

  • Work Safely in a Complex System
  • See Problems as They Occur
  • Swarm and Solve Problems to Build Knowledge
  • Keep Pushing Quality Close to the Source
  • Enable Optimization for Downstream

 

How many times have you felt unsafe in a system, or in a system's admin UI you had to interact with, not knowing what the next click or swipe would result in? Whether there would be another 'are you sure?' question, or whether you had just rendered your environment unusable? Or, looking at the question from the other side – are you monitoring the right thing? Do you understand what needs to be done when the alert comes? Is it coming through a system that is able to target the right people? No one wants to be the target of a blast email to 500+ people with a multi-megabyte attachment, trying to figure out which items are relevant; nor do you want to get notifications tailored to you that do not mention any of the action items to be taken. And when an issue happens, what should the steps be? Yes, in the short term it's “cheaper” to have a follow-the-sun support model and let the support person deal with it, as surely the support person will “document the steps needed for later reuse” – I never see the latter happening. Whereas if the available people do swarm, the issue might be solved at a higher realized price in manpower, but the cross-pollination happening during the solution is priceless. And it shouldn't be a big deal to help the downstream system optimize itself, instead of building another layer/façade/presystem around it, as the latter would just lead to another piece to maintain in the long term.

 

Lastly, when it comes to Experiment, we speak of the following:

 

  • Enable Continuous Learning
  • Create a Safety Culture
  • Institutionalize Continuous Improvement
  • Transform Local Discoveries to Global Knowledge
  • Inject Resilience into Daily Work
  • Leaders Reinforce Learning Culture

 

We learn as long as we live. With Pluralsight, Lynda, Harvard reviews and more training tools available, there is no real excuse for not taking the proverb to heart. We need to support people taking a leap – looking at Morgan Stanley's Innovation Program for the second time in this article, it may be one of the best approaches to this problem so far. Continuous improvement shouldn't stop at writing the software. It should be part of the fabric of the team itself; implementing so-called 'kata's to improve the process and to seek improvements to day-to-day work is more than desirable. As teams span multiple locations, it is imperative to host knowledge-sharing sessions and brown bags, post write-ups, and more. We should aim to record as many of these as possible and make them available tagged, sometimes even transcribed or with minutes written up. And when should people listen to these? Building 'slack' time (or the possibility of slack time) into the system, using the Pomodoro Technique and gifting yourself experimentation time for successful Pomodoros, etc. might have a bigger benefit than you think. Lastly, such changes (if changes are needed) are most successful if they are reinforced or started by management, or if you experience them through mentorship – try to seek out the latter if you don't feel the former.

 

Because, yes, DevOps 2.0 needs more cultural change than technological change – for some of the problems in the DevOps space there are off-the-shelf solutions, but to make them successful, the 20% of technological change needs to be met with the 80% of cultural change needed. You cannot just introduce a DevOps team between your Dev and Ops teams – that would result in another set of handovers (see above), and no real benefits.

 

In a later session, when it came to the question of security as part of DevOps, one statistic stood out:

 

Fewer than 20% of enterprise security architects actively and systematically incorporated information security into their DevOps initiatives (Gartner)

 

As part of looking into the importance of the enterprise DevOps process, you can look into:

 

  • Automating Security
  • Empowering Engineering Teams
  • Maintaining Continuous Assurance
  • Setting Up Operational Hygiene
  • Increasing Engineering Autonomy
  • Increasing Availability of Dev Technologies
  • Ensuring Constant Change is the New Norm
  • Leveraging Wide-ranging Operational Responsibilities of DevOps

 

We had a long discussion about this, but I think our requirements when it comes to security, and the maturity of these solutions, might not be aligned with where we are now and where we would like to be in the future.

 

Lastly, as part of Application Modernization Strategies, we discussed the “6 Res”:

 

  • Retire – Remove the application or platform completely
  • Replace – Replace the application or platform with a new version or a competing solution
  • Remediate – Invest in extending the lifeline of the application or platform
  • Re-architect – Redesign the application or platform to meet demands
  • Re-platform – Redesign the application to run on different infrastructure
  • Re-build – Rewrite the application to remove technical debt or change the implementation

 

The whole day was full of demos – everything from security kits through Kubernetes orchestration to mobile crash analysis – and useful discussions with industry peers. And the best discussion of the day? “How do you get around SOX and Segregation of Duties when using DevOps?” – You don't. Instead, you educate the auditor/regulator and show that the relevant controls and the segregation are still there.

finJS.IO 2017 Fall conference

 

disclaimer: Morgan Stanley was invited to the event by OpenFin. 

 

This year I had the pleasure of representing Morgan Stanley at the finJS.IO NYC conference in midtown. We had an awesome crowd, with many friends and former colleagues showing up – who could resist such a line-up of presenters?

 

We had OpenFin's own Mazy Dar providing the opening words and giving a heads-up on the sessions: Graham Odds from Scott Logic, Fergus Keenan from Adaptive, Terry Thorsen from ChartIQ, Jason Dennis from FactSet, John-David Dalton from Microsoft, and Mazy himself to present and amuse us.

 

FactSet

 

First we had FactSet take the stage, speaking about unbundling, interoperability and data concordance. He showed us what FactSet is and what they provide – with 14 datacenters, 650 third-party data sets, 25 of their own content sets, over 300 exchanges, over 130 news sources, etc., the amount of data at their fingertips is astonishing. They built their application stack on four pillars – workflows, apps/components, content and technology – with the following bingo board in technology (that being the most relevant for us): HTML5, Node, websockets, Angular, Vue, HTTP/2, Elastic, Hadoop, Spark, C++, C#, Java, Python, Go, memcached, VMware, NSX, Linux, Windows, HPC, CDN. Their focus is on 3 steps – survey, pick, implement – resulting in a process for how to normalize content and how to implement workflow. We have to admit – concordance and interoperability are hard. They excel at the former – they have spent 30 years building a concordance model for securities, and they understand the directives and object symbology needed, etc. They are (next to Morgan Stanley) part of the Financial Desktop Connectivity and Collaboration Consortium (FDC3), formed to enable financial participants to build solutions that link components and data. When it comes to interoperability on the desktop, they depend on OpenFin and Symphony for the local and global messaging bus, and leverage the power of fins:// for live collaboration in context.

 

Next they talked about how they moved from their WPF-only UI to the web, hosting the website – built first with Angular, later React – in the WPF container; then they took another step and moved their application out of the host container to run as a native web app. However, seeing the limitations of such an approach, they turned back again and started using a thick container to host their application, to make some otherwise unexploited desktop features available to it – for example, this enabled proper out-of-band notifications from their machine learning platform.

 

Adaptive

 

The next presenter was from Adaptive, who spoke mostly about the evolution of trading in the world, starting from 300 BC up until today, touching on the 17th and 18th centuries along the way. He spoke about where the IT budgets of financial companies are going – out of the $127bn annual budget, in 2017 we had $2.5bn on advanced analytics with +15% growth, $1.5bn on artificial intelligence with +14% growth, $0.5bn on robotic process automation with +10% growth and $1bn on cloud computing with +19% growth – all significant numbers, and he made us think about the implications as well.

 

ChartIQ

 

Next came ChartIQ, who spoke about their Finsemble application platform, showing frameless browser windows and an analogy to distributed systems: windows are the nodes, the communication bus is the network, and the microservices are the services. Applications are an "ensemble" effort: components that are visible and services that are hidden. What they came up with is a solution based on OpenFin and their custom architecture. When it comes to synchronizing state, the fact that components can start in various orders needs to be supported by a pub/sub mechanism. When it comes to performance, you have to invest in Chromium's process splintering to avoid unnecessary overhead from Chromium instances, and you have to introduce dependency ordering to avoid startup race conditions. When it comes to debugging, you need to introduce central management and logging; when it comes to user experience, you cannot just have a loose interface (you need to introduce window management like snapping, docking, etc.). Lastly, when it comes to aesthetics, with a heterogeneous set of component interfaces there isn't a better solution than doing the hard work and introducing something along the lines of a 'Font Finance' (similar to 'Font Awesome'), a set of web components that is mandated to be used.
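The state-synchronization point above can be illustrated with a minimal pub/sub bus that replays the last published value to late subscribers, so components can start in any order (a hypothetical sketch, not Finsemble's actual API):

```javascript
// Minimal pub/sub bus sketch with last-value replay (illustrative names only).
class Bus {
  constructor() { this.topics = new Map(); }
  topic(name) {
    // Lazily create per-topic state: subscriber list plus the last message seen.
    if (!this.topics.has(name)) this.topics.set(name, { handlers: [], last: undefined });
    return this.topics.get(name);
  }
  subscribe(name, handler) {
    const t = this.topic(name);
    t.handlers.push(handler);
    // Replay the last published value, so late-starting components catch up.
    if (t.last !== undefined) handler(t.last);
  }
  publish(name, message) {
    const t = this.topic(name);
    t.last = message;
    t.handlers.forEach(h => h(message));
  }
}

const bus = new Bus();
bus.publish("symbol", "MSFT");                 // published before anyone listens
bus.subscribe("symbol", s => console.log(s));  // late subscriber still receives "MSFT"
```

Replaying the last value on subscribe is the key trick: a window that spawns late still converges to the current shared state without any extra handshake.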

 

And then there is the tough tail to be fixed – monitor plug/unplug and resolution changes, orientation changes, event storms & throttling, working around limits on max server connections to enable features like webpack hot reload, supporting hotkeys, supporting CORS, enabling proper timer sync (needed for central logging as well!), fitting to the DOM when it comes to dynamic-height content, avoiding missed mouseout events at the window edges, etc., etc. – the list is nearly endless.

 

OpenFin

 

Next up was Mazy Dar from OpenFin, who talked about the 'bank of the future' and what banks can learn from Dropbox. We have seen the achievements and development of mobile applications for banks in the last decade, but where is the similar change for desktop apps? We should get out-of-band notifications, offline-synced information, etc. like we had in the '90s, but with today's technology stack and with integration into the built-in desktop services.

 

Scott Logic

 

Following was Scott Logic, who talked about how to create reactive desktop applications through an example that involved someone doing a two in a refrigerator and immediately regretting the decision. But the presentation was serious, and talked about what we can achieve when trying to do more than just switch between a 'desktop' and a 'mobile' web mode; it involved some very eye-catching demos of interaction models that adapted themselves based not just on screen size, but on smaller differences like “how is my container aligned on the screen”. The magic behind it is moving a level further than reactive media queries, to container queries – and you can do that today, of course with a polyfill ( https://github.com/ausi/cq-prolyfill ).

 

Microsoft

 

The last session was from Microsoft, focusing on ECMAScript modules ( https://nodejs.org/api/esm.html ) and their current issues – debugging, error reporting, builtin variables, babel, etc. all suffer if you try --experimental-modules today. John, wearing a BladeRunnerJS t-shirt, showed how https://www.npmjs.com/package/@std/esm solves these problems in Node 4+, not only by supporting the mixing of CommonJS and .mjs, but also by being available in the Node REPL (see https://medium.com/web-on-the-edge/es-modules-in-node-today-32cff914e4b for many more details):


$ node
> require("@std/esm")
@std/esm enabled
> import p from "path"
undefined
> p.join("hello", "world")
'hello/world'

 

It also unlocked features like dynamic import, live bindings (yes, actual live bindings across module boundaries), the file URI scheme, etc. It also has unambiguous module support, named exports, top-level await in main, gzipped modules, and more. You could ask: what is the performance hit you have to pay? Loading CJS is ~0.28 ms/module, built-in ESM ~0.51 ms/module, @std/esm with no warm cache ~1.56 ms/module, but with warm caches @std/esm runs were ~0.54 ms/module. Which means it is close to built-in performance, but with a lot more flexibility, better error reporting, path protocol resolution, environment variables, etc.
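Dynamic import, for instance, means a module can be pulled in lazily at runtime rather than declared up front. A plain Node sketch (modern Node supports `import()` even from CommonJS code):

```javascript
// Dynamic import: load a module on demand, at an arbitrary point in the code.
// import() returns a promise for the module's namespace object.
async function lazyJoin(a, b) {
  const path = await import("path");   // loaded only when first needed
  return path.join(a, b);
}

lazyJoin("hello", "world").then(p => console.log(p)); // "hello/world" on POSIX
```

Live bindings are the other notable piece: an importer observes updates to an exported `let` variable after the exporting module mutates it, something the CommonJS copy-on-require model cannot express.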

 

 

To summarize: the whole event – the mingling at the beginning and at the end, and the various presentations in between – was just fantastic.

PowerShellStravaganza

I was really looking forward to Techstravaganza NYC – not just because it's a New York-based conference, not just because of the different tracks focusing on different areas (PowerShell, Windows, Exchange and Azure Cloud), but also because our very own Tome was (as always) one of the organizers and presenters. His session (Anarchist's guide to PowerShell – What your mom told you not to do with PowerShell and how to do it!) not only won the competition for best title, but I think came close to winning the best content award too.

From the 4 available tracks I went with PowerShell – mostly because that is where my knowledge is lacking the most. From the lineup of strong sessions I picked: "Anarchist's guide to PowerShell – What your mom told you not to do with PowerShell and how to do it!" by Tome, "Rewriting history: extending and enhancing PowerShell features with fantastic results." by Kirk Munro, and a panel discussion featuring all of the day's PowerShell speakers.

But not to get ahead of myself: for me the day was kicked off by Kirk's rewriting-history session. Next to learning where PowerShell's built-in capabilities trail off and when you need to start writing compiled cmdlets (think proper parameter usage, deriving commands, proxy cmdlets, performance!), we should not forget that PowerShell is now cross-platform and open source – if someone wants to add one of these extensions, they should feel free to do so via GitHub, PRs, etc. To write such a custom cmdlet, you only need your favorite editor (Sapien, VS2017, VSCode – your choice), the Microsoft.PowerShell.*.ReferenceAssemblies for the matching PowerShell version, and some patience 🙂 With PowerShell 6 just in alpha and cross-platform, what better time to learn writing cmdlets than now?
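To give a flavor of what "writing a cmdlet" involves, here is a minimal compiled-cmdlet sketch against System.Management.Automation (the cmdlet name and class below are made up for illustration):

```csharp
// Minimal compiled cmdlet sketch. Reference the
// Microsoft.PowerShell.*.ReferenceAssemblies package (or the
// System.Management.Automation assembly) matching your PowerShell version.
using System.Management.Automation;

[Cmdlet(VerbsCommon.Get, "Greeting")]
public class GetGreetingCommand : Cmdlet
{
    // A positional, mandatory parameter: Get-Greeting -Name World
    [Parameter(Position = 0, Mandatory = true)]
    public string Name { get; set; }

    protected override void ProcessRecord()
    {
        // Emit an object to the pipeline rather than writing to the console
        WriteObject(string.Format("Hello, {0}!", Name));
    }
}
```

Compile it to a DLL, `Import-Module` the DLL, and `Get-Greeting` shows up like any built-in command.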

One of the first pieces of magic we saw – and one that resonated with me – was something eerily similar to https://github.com/nvbn/thefuck (if you are not aware of that tool: it adds missing sudos, adds the missing --set-upstream to your git command, tries to figure out what command you mistyped, and about 50 more – see https://github.com/nvbn/thefuck/blob/master/README.md#how-it-works for details). Using the lesser-known command-not-found extension point, it enabled nice shorthands like replacing "verb -verbs" constructs with "verb-verb" constructs, and much more.
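The extension point itself is real and surprisingly small – the engine exposes a CommandNotFoundAction callback you can hijack. The git-status rewrite below is my own toy example, far simpler than what was demoed:

```powershell
# Hook PowerShell's command-not-found extension point. When an unknown
# command like "git-status" is typed, rewrite it to "git status" instead
# of failing. (Toy example; the conference demos were far more elaborate.)
$ExecutionContext.InvokeCommand.CommandNotFoundAction = {
    param($CommandName, $EventArgs)
    if ($CommandName -match '^git-(\w+)$') {
        $sub = $Matches[1]
        # Supply a scriptblock to run in place of the missing command
        $EventArgs.CommandScriptBlock = [scriptblock]::Create("git $sub")
        $EventArgs.StopSearch = $true
    }
}
```

The same hook (together with PreCommandLookupAction / PostCommandLookupAction) is what makes thefuck-style "guess what I meant" behavior possible without touching the parser.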

The next item that caught my attention was LanguagePX – demonstrated via ExcelPX, a DSL that enables moving from:

Get-Process | Export-Excel -Path "$([Environment]::GetFolderPath('MyDocuments'))\test.xlsx" -Force -Show -AutoFitColumns -IncludePivotTable -IncludePivotChart -PivotRows Name -PivotData WorkingSet64

 

to:

ExcelDocMagic "$([Environment]::GetFolderPath('MyDocuments'))\test.xlsx" {
    Properties {
        Overwrite = $true
        Show = $true
        AutoSize = $true
    }
    PivotData {
        RowPropertyName = 'Name'
        RowAggregateValue = 'WorkingSet64'
        IncludeChart = $true
    }
}

 

Turns out this not only caught my eye – it is on its way to becoming part of PowerShell 6, supporting custom DSLs, thanks to an RFC. This opens up options that would have been crazy ideas before, like generating a Visio document in an object-oriented way and having PowerShell fill in the data, all with proper type safety – I'm sold.

Next to the capability to fix mistakes and provide nice syntactic sugar, and next to providing a way to define DSLs, the third item from Kirk that surprised me was HistoryPX – an extended set of history information AND last-output support in PowerShell. Not only does it capture output, exception criteria and command duration, but also the number of returned objects! It makes working with history so much easier. Add to this the $__ variable capturing the last output, with configuration for when it should be captured – e.g. I can say I don't want sort operations to be captured.

As part of the sponsorship section, I saw Sapien's IDE for the nth time – but this was actually the first time I spotted that there is support for a splatting refactor in the context menu. Now, how did I live without it??

The next interesting tidbit came from Stephen Owen – before his session I was not much involved with (let me rephrase: I had absolutely no overview of, nor details on) the Enterprise Mobility Management suite's integration of VMware AirWatch and PowerShell, but he showed some interesting tidbits on how these components work together: either AirWatch sends a command to EAS to authenticate, and over the established trusted relationship AirWatch can send cmdlets to PowerShell in accordance with the email policies, where they get executed (this does not require a dedicated server per se), or it uses the AirWatch Enterprise Integration Service to deliver the PowerShell commands to the server. It was eye-opening to see PowerShell used together with VMware – not two technologies/companies I'd otherwise put in the same sentence.

Then came the session I had been waiting for since the morning: Tome's anarchist guide on how not to use PowerShell. It started with the nice ShowUI library, which enables you to *easily* build graphical user interfaces from PowerShell scripts – down to the level of 'New-Button -Content "Hello World" -Show', or:

$User = [ordered]@{
   FirstName = "John"
   LastName = "Doe"
   BirthDate = [DateTime]
   UserName = "JDoe"
}

Get-Input $User -Show

The next demo was about Azure Functions and PowerShell – small PowerShell snippets you can use in a serverless scenario provided by Azure Functions to build stateless microservices. And speaking of microservices, we saw a way of using Apache ZooKeeper to spin up PowerShell worker processes on incoming data feeds – think of it as a distributed ZooKeeper-PowerShell cluster. And this was the best place for a Treadmill shoutout 🙂

And then the thing we (I?) weren't really prepared for – http://www.leeholmes.com/projects/ps_html5/Invoke-PSHtml5.ps1 – I'll leave you to try it out. And the list continued: a very interesting demo of using Unity (the 3D game engine) and PowerShell together to display data (some of it on a spinning globe) using PSUnity, and another crazy language experiment allowing you to write native .NET IL in PowerShell using PSILAsm.

The list is endless 🙂

 

 

 

Electron, Win10, UWP

I have been following Microsoft's adventures in HTML5/JavaScript-based desktop UIs for a while – actually, I participated in the journey while working for Microsoft back in 2003 on a POC project codenamed IceFace, which was doing something akin to websockets (using a proprietary ActiveX control doing low-level TCP) and something akin to an HTA application. In recent years, I watched when they announced WinRT with HTML5/JavaScript as one of the programming models; I also watched with awe as they failed to win the world over with WinJS; then UWP kept HTML5/JavaScript as one of the supported technologies. So when I heard last year that they were tackling this area again, I didn't have high hopes.

 

I was wrong. They actually listened to people – yes, they misinterpreted what people wanted the first few (?) times, and lost their expected leadership in the desktop app segment (which was quickly eaten into by HTML5 hybrid applications), but at last they stopped fighting the inevitable.

 

Why do I say this? Next to the bridges Microsoft announced and later open sourced (Centennial for desktop apps, the later-cancelled bridge for Android applications, and the bridges for hosted web applications, iOS applications and Silverlight applications), Microsoft sneaked in an announcement: Electron applications for the Windows Store. Using Centennial technologies (so the real registry and file system are not touched, and the application runs at full speed, yet still in a sandbox) you are able to 'compile' your Electron application into a Windows Store AppX (for either the external application store or an internal one). Moreover, using NodeRT (which can be installed through npm, and does some crazy magic of generating a Node.js native addon's C++ code using C# reflection over WinMD manifest files) you are able to access the same WinRT/UWP APIs as any other native application would – see Showing Native Windows Notifications from Electron Using NodeRT for details.

 

This easily enables applications to interact with the native Windows experience, like setting the lock screen image from JavaScript (TypeScript):

 

const {KnownFolders} = require('windows.storage')
const {LockScreen} = require('windows.system.userprofile')

// e.g. the user's Pictures library as the source folder
const myFolder = KnownFolders.picturesLibrary
myFolder.getFileAsync('image.jpg', (err, file) => {
  LockScreen.setImageFileAsync(file, (err) => { })
})

 

Or popping up a toast using https://github.com/felixrieseberg/electron-windows-notifications :

 

const appId = 'electron-windows-notifications'
const {ToastNotification} = require('electron-windows-notifications')

let notification = new ToastNotification({
    appId: appId,
    template: `<toast><visual><binding template="ToastText01"><text id="1">%s</text></binding></visual></toast>`,
    strings: ['Hi!']
})

notification.on('dismissed', () => console.log('Dismissed!'))
notification.show()

 

So yes, Windows 10 migration might still be far ahead for some applications and firms, but I think we will be prepared. We have yet to see how they will tackle the issues around the Electron security sandbox, but I feel like this time Microsoft might just be naturally doing what was expected of them for a long time, and we can see others stepping in the same direction by getting on the Electron bandwagon.

Sphinx, Pygment, and more

New year, new pet projects. One of them is moving existing documentation into an easier-to-manage, easier-to-use format. I looked around at what is being used internally right now, and saw many tools, from Doxygen to Sandcastle. One of the tools that caught my attention was Sphinx. Being a Python tool, it runs on both Windows and Linux, and looking around a little more I found support for editing in VSCode – I'm in.

 

So, what is sphinx? Sphinx is a tool that makes it easy to create intelligent and beautiful documentation, written by Georg Brandl and licensed under the BSD license. It was originally created for the Python documentation, and it has excellent facilities for the documentation of software projects in a range of languages.

 

So I gave it a try, and I ended up setting up something similar to GitHub Pages – I send a PR to a particular internal repo, and when it gets merged, it kicks off a build which automatically uploads the result as the new content.

 

Locally I set up the following in my tasks.json:

 

{
    "version": "0.1.0",
    "command": "python",
    "isShellCommand": true,
    "args": [ "-m", "sphinx", "-b", "html", "-d", "build/doctrees", ".\\src\\sphinx\\source", ".\\doc" ],
    "showOutput": "always"
}

 

The next thing was to set up my conf.py; I picked a few plugins I thought I'd use:

 

extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.intersphinx',
    'sphinx.ext.viewcode',
    # 'sphinx.ext.autosectionlabel',  # depends on a newer sphinx than I have
    # http://www.sphinx-doc.org/en/1.5.1/ext/autosectionlabel.html
    'sphinx.ext.autosummary',
    # http://www.sphinx-doc.org/en/1.5.1/ext/autosummary.html
    'sphinx.ext.todo'
    # http://www.sphinx-doc.org/en/1.5.1/ext/todo.html
]

 

Then I started playing around with the RST syntax, and very quickly figured out that there is good code-coloring support for Python (no surprise), but even when I set .. code:: csharp it wasn't recognized.

 

So time to write my first sphinx plugin, in _ext/csharplexer.py:

 

def setup(app):
    # register the Pygments C# lexer under the 'csharp' language name
    from pygments.lexers import CSharpLexer
    app.add_lexer('csharp', CSharpLexer())

 

Then I can add 'csharplexer' as an extension.
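Registering the local extension just means making the _ext folder importable and listing the module in conf.py (paths assumed from the layout above):

```python
# conf.py fragment: make the local _ext directory importable,
# then enable the custom plugin alongside the stock extensions
import os
import sys

sys.path.insert(0, os.path.abspath('_ext'))

extensions = [
    'sphinx.ext.autodoc',
    'csharplexer',   # our _ext/csharplexer.py from above
]
```

On the next sphinx-build, any `.. code:: csharp` block goes through the Pygments C# lexer.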

 

Next, I started looking into making the rendering a little nicer – I'll likely have longer listings as part of the docs, and I wanted a syntax for toggles. So, time for the next extension: toggle. Taking the following source:

 

.. container:: toggle

    .. container:: header

        **Example to show how to add unitycontainer**

    .. code-block:: csharp
        :linenos:

        Assert(true);  // OK
        var i = 1;
        Console.WriteLine("Hello World!");

 

I could just add _static/custom.css:

 

.toggle .header {
    display: block;
    clear: both;
    cursor: pointer;
}

.toggle .header:after {
    content: " ▼";
}

.toggle .header.open:after {
    content: " ▲";
}

 

and _templates/page.html:

 

{% extends "!page.html" %}

{% set css_files = css_files + ["_static/custom.css"] %}

{% block footer %}
<script type="text/javascript">
    $(document).ready(function() {
        $(".toggle > *").hide();
        $(".toggle .header").show();
        $(".toggle .header").click(function() {
            $(this).parent().children().not(".header").toggle(400);
            $(this).parent().children(".header").toggleClass("open");
        })
    });
</script>
{% endblock %}

 

Which resulted in something I liked 🙂

 

To be continued – next time I'll try to get viewcode working.

32 bit vs 64, revisited (again)

I posted previously about 32 bit vs 64 bit through the magnifying glass of .NET – the good news is that it's now high time to scrap all those results and revisit the question. The reason: one of the changes between .NET 4.5.2 and 4.6 (and therefore 4.6.1) is the introduction of a new JITter (RyuJIT), which should have each of us carefully revisiting this question.

 

But let's not move that quickly; first, what is the problem we are trying to solve?

 

32 bit vs 64 bit

 

“I’m in .NET, why should I be interested in 32 bit vs 64 bit? Isn’t .NET bit agnostic?”

 

Yes, .NET itself is agnostic; however, some of the libraries and technologies you use might not be. Think about technologies like P/Invoke, COM interop, unsafe code, marshaling, serialization, managed C++, and so on. So yes, if you happen to have 100% type-safe managed code, you can just copy your application from a 32-bit system to a 64-bit system and it will "just run" under the 64-bit CLR. However, you are likely using some of the technologies just mentioned, so you should do your homework and investigate whether your code depends on the bit length. Be aware that, unlike C++, .NET only changes the size of pointers (IntPtr) and not the built-in value types (e.g. int stays the same size). So moving between the 32- and 64-bit worlds results either in no changes, or in a set of changes related to pointers, 3rd-party libraries, marshaling, serialization and more; you can use System.IntPtr.Size to determine the current bitness, and System.Reflection.Module.GetPEKind to query a deployed assembly for platform affinity.
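The two checks mentioned look like this in practice (minimal sketch; the class name is made up):

```csharp
// Sketch: determine the current process bitness and query an assembly's
// platform affinity via Module.GetPEKind.
using System;
using System.Reflection;

class BitnessProbe
{
    static void Main()
    {
        // IntPtr.Size is 4 in a 32-bit process and 8 in a 64-bit process
        Console.WriteLine("Running as {0}-bit", IntPtr.Size * 8);

        // Ask an assembly whether it was compiled AnyCPU, x86-required, etc.
        Module module = typeof(BitnessProbe).Assembly.ManifestModule;
        PortableExecutableKinds peKind;
        ImageFileMachine machine;
        module.GetPEKind(out peKind, out machine);
        Console.WriteLine("PEKind: {0}, Machine: {1}", peKind, machine);
    }
}
```

An AnyCPU assembly reports ILOnly, while a platform-affine one adds Required32Bit or PE32Plus – exactly the signal you need when auditing a deployment for bitness dependencies.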

 

Why 64 bit? Actually, why 32 bit?

 

What does 64 bit allow you to do? Addressing (not necessarily accessing) a bigger chunk of memory. 32-bit applications are inherently (because of the pointers they use) limited to a 2 GB address space by default; 64-bit applications don't have this limitation.

 

So, does that mean I should just specify I want 64 bit and that's it? I'd have more memory and it would be faster? Actually, not necessarily. 64-bit pointers occupy more memory. Processor cache lines get evicted more often. The stack becomes bigger. Your application will likely (mileage may vary) occupy more memory, and there is a chance (mileage will vary) it will perform worse – despite the fact that running 32-bit code involves the WOW64 subsystem, which has its own performance hit.

 

So should you skip updating to 64 bit? You should measure; although, because of what was explained above, that might not be trivial, and you might not want to put effort into it right now.

 

Why is this a topic now?

 

With .NET 4.6 a new JITter was introduced that is a significant rewrite of the existing one (and caused some uproar when, just after the .NET 4.6 release, a problem with a tail-call optimization caused issues). It's actually optimized to bring 64-bit nirvana to the masses by covering more use cases with SIMD and SSE. Yes, I'm going to talk about synthetic microbenchmarks here. Synthetic microbenchmarking is evil and you shouldn't trust any of the results below; rather, test your own code – mileage will vary.

 

There are many use cases, like matrix multiplication, simple floating-point arithmetic and more, where there is a significant speedup – we are talking 4-5x (due to better usage of registers and opcodes, better coalescing of arithmetic instructions, and no-effect code reordering). However, there are other use cases – just calling a static method, or calling a virtual method – that might slow down by the same 4-5x factor.
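In the spirit of "measure yourself", even a deliberately crude Stopwatch harness is enough to compare an x86 and an x64 build of the same loop (a rough-signal sketch only – no warmup control or statistics, so don't read more into it than a first impression):

```csharp
// Crude microbenchmark sketch: compile once as x86 and once as x64,
// run both, and compare. No warmup or statistical rigor by design.
using System;
using System.Diagnostics;

class DotBench
{
    static void Main()
    {
        var a = new double[1000];
        var b = new double[1000];
        for (int i = 0; i < a.Length; i++) { a[i] = i; b[i] = 2 * i; }

        double sum = 0;
        var sw = Stopwatch.StartNew();
        for (int iter = 0; iter < 100000; iter++)
            for (int i = 0; i < a.Length; i++)
                sum += a[i] * b[i];   // the kind of loop RyuJIT can vectorize
        sw.Stop();

        Console.WriteLine("Elapsed: {0} ms (checksum {1})",
                          sw.ElapsedMilliseconds, sum);
    }
}
```

For anything beyond a first impression, a proper harness with warmup and statistical analysis is the way to go.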

 

Conclusion

 

Don't believe any of the results above – please do measure yourself, and feel free to leave a comment below on whether you saw any performance improvement using .NET 4.6.1 and 64 bit over your 32-bit application. Also, if you are going over 2 GB of memory usage – is it possible your application should be restructured not to hold all the data on the client side? Perhaps revisiting the pattern of client-server interaction is timely?

Connect 2016 Keynote – oh boy

Any Application, Any Developer, Any Platform. And I was thinking that last year's keynote was a hard act to summarize. I'll try to avoid dumping a bunch of links here – many other people have done that. Let me still try to summarize how Connect was. It all started on the Tuesday for me, when I had the opportunity to sit down with Jay Schmelzer, Director of Program Management, and Sam Guckenheimer, Product Owner of Visual Studio Strategy, to speak about the upcoming announcements without actually speaking about the upcoming announcements; I'd like to thank Eric Maloney from Microsoft for making that possible. For the big day itself, I'll follow the 'any' trifecta – Any Platform, Any Application, Any Developer – the same way this triplet became the official mantra of the day. The trifecta actually came to life as a physical token as well. But let's not jump too far forward.

Any Developers' (and DevOps', security architects', etc.) lives were made easier by ditching the static KB security-article page and moving to a shiny new https://portal.msrc.microsoft.com/en-us/security-guidance portal. And of course the new Flow Partner Program with 6 new services. Any Platform saw significant changes: Samsung released .NET support for Tizen (TVs and watches were shown); Google joined the .NET Foundation along with Nancy and xUnit (and as it turns out, Google has been a contributor to .NET for a while, and ASP.NET is now a first-class citizen on Google Cloud); and of course, nothing smaller than (wait, I need to check whether hell is doing well in the freezer) the end of one and a half decades of 'cancer': Microsoft <3 Linux Foundation (actually, Microsoft has been a significant contributor to the Linux kernel for a while now). Btw, did you know that the first 25,000 Dev Essentials users get 3 months of Linux Academy free, with the 'MCSA: Linux on Azure' course preloaded? I also suggest starting to browse http://docs.microsoft.com – new documentation on Azure, SQL Server for Linux, Visual Studio 2017 RC (btw, if you are considering moving away from ReSharper in favor of native features, I suggest https://aka.ms/vs2017productivityguide; also, I'm looking forward to NCrunch's next step, as test-as-you-type is now part of the IDE), C++, EntityFramework and more, all editable through GitHub PRs. So go ahead and extend missing documentation instead of just leaving comments on it.

Through the color of the shirts, we saw that the love for Any Developer (red shirts) and Any Platform (black 'devops' shirts) was undivided; still, one of the best burns of the keynote was when Scott Guthrie got the question from Beth Massi: "New shirt?". Any Developer (probably just 'Anyone') could see how the Visual Studio family got extended with two new members – next to Visual Studio Code, Visual Studio Team Services and Visual Studio (for Windows? Original? Vintage version? – Miguel de Icaza), we now have Visual Studio for Mac (based on Xamarin Studio / MonoDevelop, though many pieces have been replaced with actual Visual Studio code) and Visual Studio Mobile Center (your one-stop mission center for your app, integrating all mobile-related ALM and Azure functionality). We saw a demonstration of the Any Platform part by Chris Dias, with a demo covering MongoDB, a Mac, Docker, Azure, NodeJS and .NET – actually, the Linux Foundation membership started to make sense, as this might as well have been an OSS conference and I just missed it in the title somewhere? 🙂 Chris showed not only lightweight interaction but also heavier pieces, like injecting the MongoDB connection string into an environment variable for the NodeJS application to pick up, straight from the Azure Portal UI (of course it would also work using just the REST API and/or DSC application deployment templates).

If you thought there would be no more surprise guests on the stage – 'ohmyyyy' – we were so wrong: Any Platform. GitHub founder and CEO Chris Wanstrath showed up briefly and demonstrated the unprecedented love of Microsoft towards anything open source. You cannot be a CEO without numbers, so he brought some along about commits, committers, projects and so on. Not only did we hear about the Linux Foundation membership, it was also confirmed in person by none other than Jim Zemlin, Executive Director of the Linux Foundation.

The next section was about Mobile, and when it's about Mobile and Microsoft, it's about Xamarin – distinguished engineer Miguel de Icaza and Xamarin CEO Nat Friedman gave a strong first-hand demonstration of the new mobile development capabilities around Visual Studio, around the new Mobile Center and its integration into Azure, and much more. Xamarin is getting a lot of love from developers, and we heard about the trust it gets from the enterprise. It comes as little surprise that Visual Studio 2017 RC (release notes) was announced by Nat – actually a good example of being even more 'mobile first' 🙂 James Montemagno started to raise the heat in the room by showing off the iPhone emulator, the realtime forms preview, live edit, the Xamarin inspector for Android and many more – so much so that he ended up announcing Visual Studio for Mac (see above), with ASP.NET Core support in it. It is included in each and every Visual Studio subscription – even the Community edition. Nat and James got the privilege of demoing Visual Studio Mobile Center – seldom do you see builds, CI, pipelines, etc. kicked off on a conference stage (although I still remember last year, when we were dissecting the new experience of the Visual Studio installer – who would think watching an installer was going to be interesting?).

James Montemagno and Nat Friedman doing build and CI on stage

Growing family of Visual Studio

 

From the devops world, not only did we see an 'overnight' (I'm not sold on that one yet) migration from on-prem to cloud, we also got Mr. Demo himself – big applause for Donovan Brown 🙂

Mr. Demo

Did you know you can love that warm, fuzzy feeling when you deploy to Docker from Visual Studio? And you only need to sing 'Just riiiiiight click' while you are doing it. Actually, I'd need to have Donovan's right click shipped in my production code – can I do that? Even Scott Guthrie needs Donovan's right click, on stage. I have to admit that Donovan is not just an awesome presenter, but again had a few 'this is where you clap' moments, showing off cross-Docker-boundary work by rubbing some devops on it. Continuous delivery rulez! And before you would think the world is only about right click, we welcome the selfies as well: Beth Massi showed off taking a selfie and a live video feed, demoing Cognitive Services and Azure Functions in an elegant and entertaining way.

Beth Massi proving that there is a reason 'selfie' was the word of the year

SQL Server SP1 delivers enterprise features for the masses

Then we had one of the surprise announcements of the day (at least for me): closing the gap with SQL Server SP1, which delivers most if not all of the SQL Server Enterprise features to all editions – yes, even the free Express edition. OLTP and data warehouse, operational analytics, compression and partitioning, PolyBase, encryption, row-level security and masking – they are all there. I really liked the deep dive into SQL Server (me being a server-side guy originally) with Lara Rubbelke, and installing SQL Server for Linux into Docker in literally seconds blew my mind.

#redshirtrocks speaks about SQL Server news

Lara Rubbelke adding indexes to SQL Server for Linux

Not only did basic functionality get reproduced on Linux – querying, data load, sharding, partitions (actually not that basic) – but also all the special indexes that make SQL Server blazingly fast. (Did you know there is a U-SQL extension for VSCode? Of course, next to the mssql extension.) Related to performance – in case you were living under a rock, stackoverflow.com is a Microsoft shop – they actually run the whole site on a hot-warm setup, i.e. 1 SQL Server bearing the load.

From SQL Server to bots – we have seen the Bot Framework before; now it has new support for Slack, Facebook Messenger and Microsoft Teams (surprised?) next to Skype, with the possibility to debug into the bots, have them access internal and external services, and more.

Show what makes things real with Scott Hanselman

Stacey Doerr and Scott Hanselman demoing coded UI testing using Selenium for UWP

I shouldn't miss the set of basic announcements here: Entity Framework Core 1.1, ASP.NET Core 1.1, and .NET Core 1.1 itself (did you know that over 60% of contributions to Core come from the community? My thanks to the community for making it so great and fast!). However, this wasn't the part of Scott's presentation that left me speechless (a rare occasion), and neither was the fact that he prefers zsh when running the shell inside VSCode – it was the demo using Selenium to run automated, coded UI tests against applications written not just in UWP, but also in VB6. Kasey Uhlenhuth also showed up to 'filter those suckers right out' by demoing the new navigation and IntelliSense capabilities of the IDE – ReSharper, your turn 🙂

Maria Naggaga demoing the new .csproj format

So we all heard about the demise of the project.json which we had started to love (who wants GUIDs in the project file?), so I wasn't happy when it was announced a few months back that .NET Core 1.1 would prefer .csproj files again. Through Maria Naggaga's demo I learned this is not as bad as feared! Yes, project.json got removed in favor of .csproj – but this is not my grandfather's csproj! No GUIDs, a clean structure, the ability to use wildcards – it actually feels like using grunt, gulp or similar. While Maria showed amazing performance numbers on a 20-way server, the newest Round 13 TechEmpower data came in – not only did ASP.NET deliver an 859x speed advance, it also delivers unbelievably low latency numbers and ended up in the top 10 with ASP.NET for Linux!

With these words I finish the keynote blog post – watch this space for the next round of posts covering the afternoon sessions and the special #FINTech sessions delivered the day after.