
It’s astounding just how fast modern .NET has become. I’d be curious as to how the .NET (Framework excluded) benchmarks run in a Linux container.


I just did some benchmarks of this!

Linux in general provides the same speed for pure CPU workloads like generating JSON or HTML responses.

Some I/O operations run about 20% better, especially for small files.

One killer for us was that the Microsoft.Data.SqlClient is 7x slower on Linux and 10x slower on Linux with Docker compared to a plain Windows VM!

That has a net 2x slowdown effect for our applications, which completely wipes out the licensing cost benefit when hosting in Azure.

Other database clients have different performance characteristics. Many users have reported that PostgreSQL is consistent across Windows and Linux.


> Microsoft.Data.SqlClient is 7x slower on Linux

It is probably worth reporting your findings and environment here: https://github.com/dotnet/SqlClient

Although I'm not sure how well-maintained SqlClient is with respect to such regressions, as I don't use it myself.

Also make sure to use the latest version of .NET, and note that if you give a container an anemic 256 MB and 1 CPU core, then under high throughput it won't be able to perform as fast as an application that has an entire host to itself.


I’m using the latest everything and it’s still slow as molasses.

This issue was reported years ago by multiple people, and Microsoft has failed to fix it despite at least two attempts.

Basically, only the original C++ clients work with decent efficiency, and the Windows client is just a wrapper around this. The portable “managed”, MARS, and async clients are all buggy (including data corruption) and slow as molasses. This isn’t because of the .NET CLR but because of O(n^2) algorithms in basic packet reassembly steps!

I’ve researched this quite a bit, and a fundamental issue I noticed was that the SQL Client dev team doesn’t test their code for performance with realistic network captures. They replay traces from disk, which is “cheating” because they never see a partial buffer like you would see on an Ethernet network where you get ~1500 bytes per packet instead of 64KB aligned(!) reads from a file.
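A minimal sketch of the failure mode described above (hypothetical code in Python, not SqlClient's actual implementation): reassembling the same payload with repeated immutable-buffer concatenation is O(n^2) in the number of reads, and replaying 64 KB file reads instead of ~1500-byte Ethernet-sized reads makes n small enough to hide it.

```python
def reassemble_quadratic(reads):
    """Naive reassembly: each append re-copies the whole accumulated
    buffer, so total work is O(n^2) in the number of chunks."""
    buf = b""
    for chunk in reads:
        buf = buf + chunk
    return buf

def reassemble_linear(reads):
    """Growable-buffer reassembly: amortized O(1) per append, O(n) total."""
    buf = bytearray()
    for chunk in reads:
        buf += chunk
    return bytes(buf)

def split(payload, chunk_size):
    """Simulate how the same payload arrives: ~1500-byte partial buffers
    from a real network socket vs. 64 KB-aligned reads from a replayed file."""
    return [payload[i:i + chunk_size] for i in range(0, len(payload), chunk_size)]

payload = b"x" * 1_000_000
ethernet_reads = split(payload, 1500)    # ~667 partial buffers
file_reads = split(payload, 64 * 1024)   # only ~16 large buffers

# Both strategies produce identical bytes; only the copy count differs.
assert reassemble_quadratic(ethernet_reads) == reassemble_linear(ethernet_reads) == payload
```

With 64 KB replayed reads the quadratic path does so few appends that it benchmarks fine; with 1500-byte socket reads the copy count explodes, which is why a disk-trace-only test suite would never catch it.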


This is unfortunate. I've been mainly using Postgres, so I've luckily avoided the issues you speak of. I guess this is yet another reason for the "why use Postgres/MariaDB instead" bucket.


> luckily avoided the issues you speak of

That may be a bit of an assumption. I've been perpetually surprised by expectation-versus-reality, especially in the database world where very few people publish comparative benchmarks because of the "DeWitt clause": https://en.wikipedia.org/wiki/David_DeWitt

Additionally, a lot of modern DevOps abstractions are most decidedly not zero cost! Containers, Envoys, Ingress, API Management, etc... all add up rapidly, to the point where most applications can't utilise even 1/10th of one CPU core for a single user. The other 90% of the time is lost to networking overheads.

Similarly, the typical developer's concept of "fast" doesn't align with mine. My notion of "fast" is being able to pump nine billion bits per second through a 10 Gbps Ethernet link. I've had people argue until they're blue in the face that that is unrealistic.
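For what it's worth, the nine-billion-bits figure is below the theoretical ceiling. A back-of-the-envelope calculation, assuming a standard 1500-byte MTU and plain TCP/IPv4 with no options or jumbo frames:

```python
# Maximum TCP goodput on a 10 Gbps Ethernet link with a 1500-byte MTU.

LINK_BPS = 10_000_000_000

MTU = 1500                      # IP packet size
TCP_IP_HEADERS = 20 + 20        # IPv4 + TCP headers, no options
ETH_OVERHEAD = 14 + 4 + 8 + 12  # Ethernet header + FCS + preamble + inter-frame gap

wire_bytes_per_frame = MTU + ETH_OVERHEAD       # 1538 bytes on the wire
goodput_bytes_per_frame = MTU - TCP_IP_HEADERS  # 1460 bytes of application data

max_goodput_bps = LINK_BPS * goodput_bytes_per_frame / wire_bytes_per_frame
print(f"{max_goodput_bps / 1e9:.2f} Gbps")  # ~9.49 Gbps theoretical ceiling
```

So ~9 Gbps of application data through a 10 Gbps link is simply a well-tuned stack running close to line rate, not an unrealistic demand.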


I agree, .NET Core has improved by gigantic leaps and bounds. Which makes it all the more frustrating to me that .NET and Java both had "lost decades" of little to no improvement. Java's was mostly on the language side, since third-party JVMs still saw decent changes, but .NET's covered both the language and the runtime. I think this freeze made (and continues to make) people think the ceiling of both performance and developer ergonomics of these languages is much lower than it actually is.


I certainly agree that Java / JVM had a lost decade (or even more), but not really with C# / .NET. When do you consider that lost decade to have been? C# has had a major release with new language features every 1-3 years, consistently for the past 20+ years.


Lost decade in another sense in the case of C#.

It's sooooo good now. Fast, great DX, LINQ, Entity Framework, and more!

But I still come across a lot of folks that think it's still in the .NET Framework days and bound to Windows or requires paid tooling like Visual Studio.


Those people are all wilfully ignorant at this point.


I know!

I'm working on a large TypeScript codebase right now (Nest.js + Prisma) and it's actually really, really bad.

Primarily because Prisma generates a ton of intermediate models as output from the schema.

On the other hand, in EF you simply work with the domain model and anonymous types that you transform at the boundary.

Nest.js + Prisma ends up being far more complex than .NET web APIs + EF because of this lack of runtime types. Everything feels like a slog.


.NET was always fast. I remember that in the .NET Framework 2.0 days, .NET's JIT was derived from the Microsoft C++ compiler, with some of the more expensive optimizations (like loop hoisting) removed and the general optimization effort pared back.

But if you knew what you were doing, then for certain kinds of math-heavy code, with aggressive use of low-level features (like raw pointers), you could get within 10% of C++, with garden-variety, non-super-optimized code generally running about half as fast as the equivalent C++.

I think this ratio has remained pretty consistent over the years.


I wonder how it compares to (1) Go, (2) the JVM, and (3) native stuff like Rust and C++.

Obviously as with all such benchmarks the skill of the programmer doing the implementing matters a lot. You can write inefficient clunky code in any language.


All modern popular languages are fast, except the most popular one.


JavaScript is hella fast for a dynamically typed language, but that's because we've put insane amounts of effort into making fast JITing VMs for it.


Sure, but "for a dynamically typed language" still means that it's slow amongst all languages.


And Python+Ruby


I would say Go is not in the same category of speed as Rust and C/C++. The level of optimisation done by those compilers is next level. Go also doesn't inline your assembly functions, has less vectorisation in its standard library, and doesn't let you easily add vectorisation with intrinsics.



Java and .NET (and JS or anything that runs under V8 or HotSpot) usually compare favorably to others because they come out of the box with PGO. The outcomes for peak-optimized C++ are very good, but few organizations are capable of actually getting from their C++ build what every .NET user gets for free.


.NET goes as far as having D(ynamic)PGO, which is enabled by default.



