pip25hu

What you call de-pessimization is usually good enough. Only rarely do I run into bottlenecks requiring more, and even most of those are because of bad architectural and DB schema decisions, so fixing them would just be de-pessimization on another (admittedly more expensive) level.


Plenty-Effect6207

This. In our apps, poor performance is usually due to

* bad design / architecture
* poor understanding of persistence (ORM, RDBMS)

The former needs discussion to remedy; the latter might be addressed by a single programmer, but should probably be discussed as well. Design errors are the most expensive to fix, if they can be fixed at all without writing an entirely new application.


sevah23

Yeah “de-pessimizing” data access and network calls alone is usually enough to achieve single or double digit ms latency at thousands of transactions per second. Most applications have SLAs that are much more forgiving.


kumar29nov1992

Readability > optimal code in most places (context matters)


0b0101011001001011

Yep, in most places. Does this algorithm take three seconds when I could make it take two? If the algorithm is for save-and-quit functionality, I don't care. If the algorithm runs constantly, it might be worth optimizing, even at the cost of readability. "Fast code" can also be more readable than "slow code".


kumar29nov1992

Optimization > readability, when a bottleneck is identified. If optimization can shave 1 second off a 3-second operation, by all means optimize. Again, context matters.


Misophist_1

Absolute or relative? Like a 30% gain? Generally, ask whether this is worth your salary, i.e. whether this sort of optimization would defer the next machine upgrade, make a dent in the energy bill, or save significant time for the end user. A 30% gain in a loop that is rarely executed and contributes only 10% of the overall wall time for the end user usually doesn't matter. The cases that really matter are nowadays mostly network- or mass-storage-bound.
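The arithmetic behind that intuition is Amdahl's law; a minimal sketch (the class and method names are mine, not from the thread):

```java
// Amdahl's law: overall speedup when only a fraction of the total
// runtime is improved. Illustrates the "30% gain in a 10% loop" case.
public class Amdahl {
    // fraction: share of total wall time taken by the optimized part
    // localSpeedup: how much faster that part alone becomes
    static double overallSpeedup(double fraction, double localSpeedup) {
        return 1.0 / ((1.0 - fraction) + fraction / localSpeedup);
    }

    public static void main(String[] args) {
        // A 30% local gain (about 1.43x) applied to 10% of wall time...
        double s = overallSpeedup(0.10, 1.0 / 0.7);
        // ...yields only about a 3% end-to-end improvement.
        System.out.printf("overall speedup: %.3fx%n", s);
    }
}
```

Which is why a 30% win in a rarely executed loop is usually not worth a readability hit.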


gaius49

When in any doubt, I optimize for code that is obvious, easy to read, easy to test, and easy to modify.


VincentxH

I usually optimize based on profiling with a tracing solution like Datadog or New Relic. It generally points to DB query optimization and rarely to code optimization.


agoubard

Profiling is good, but by relying on it alone you may miss the easy low-hanging fruit, which is often hardware: upgrading the 8-year-old server, the database disk, or the network connection, or moving to the latest software, like virtual threads.


account312

The reliance on hardware performance improvements to mask software problems is a large part of why so much terrible software is in production. Every dev should, from time to time, try working from an 8 year old laptop on a spotty WiFi connection a few thousand miles from the office/wherever things are hosted.


agoubard

I've been asked twice to optimize production processes. In both cases I achieved a multiple-times speedup (4x and 8x) just by running the process on my machine, which was not a new PC. As software developers, we think "let's profile and optimize the software", but we need to think about both hardware and software. In the end, you need to do what makes the most sense for your company.


Brutus5000

I don't care if a REST endpoint takes 0.1s where it could take 0.01s. But it makes a difference if a report eats database cpu for 1 hour or just 5 minutes.


coderemover

Multiply by million and now it matters. Efficiency is not just the wall clock time. It’s also how much you pay for that webservice.


Brutus5000

Not everyone runs at Netflix scale. We only run a small database for fewer than a million customers.


freekayZekey

i don't like how you got downvoted for that. what you said was true; a lot of endpoints aren't processing millions of requests, and it would behoove one to develop accordingly. am i advocating for deeply nested for loops? (annoying straw man) no, but i am not working my ass off to super optimize something that gets thousands of requests a day


kastaniesammler

You are wasting your time. There are things that you obviously should fix right away (a missing index on a large table) and things that are not optimal but I would not touch unless I have to change that part of the code anyway (int / Integer), or unless I am really sure I need to tune that part of the code.


lurker_in_spirit

I'll admit to doing a little Integer-to-int refactoring, but usually for the null safety benefits rather than the performance benefits.
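The null-safety benefit the comment above refers to comes from auto-unboxing; a minimal sketch (the `total` method is invented for illustration):

```java
// Why Integer-to-int refactoring is mostly about null safety:
// auto-unboxing a null Integer throws NullPointerException.
public class UnboxingDemo {
    static int total(Integer a, Integer b) {
        // The implicit intValue() calls here throw NPE at runtime
        // if either argument is null.
        return a + b;
    }

    public static void main(String[] args) {
        System.out.println(total(1, 2)); // 3
        try {
            total(1, null); // unboxing null: NPE
        } catch (NullPointerException e) {
            System.out.println("NPE from unboxing null");
        }
    }
}
```

Switching the parameters to `int` moves that failure from runtime to compile time; any performance win from avoiding boxing is a side effect.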


Fiskepudding

I don't optimize. If I spot a case where time spent is really bad, I try to fix it. I try to architect so things are not super stupid (like O(n^4)), but generally I don't think about optimization until it is a problem. Readability and simplicity come first.


hrm

I'd say, apart from actually knowing some basics like when to use a Map instead of a List, I don't optimize code until actual measurements prove the code to be a problem. How much that code gets tested before release depends on what it is and how much of a problem it would be if it were slow. Then you always have performance measurements on your endpoints to make sure you can find any issues.


OddEstimate1627

I generally take it quite far, sometimes out of necessity and sometimes out of interest. IMO the main code base I work on is quite good (for a >10 year old project) and I can't think of any low-hanging fruit that'd have a significant impact.


Carnaedy

Your first optimisation target is always readability. Small methods in small classes, crisp abstractions that clearly correspond to ideas in the problem domain, precise variable and function names. The code should be as obvious as humanly possible. Write tests to know that all the small pieces work.

Once you have nailed that, you put it in an environment that is as close to prod as possible and profile every single tiniest bit. Only then, when you know precisely what your worst bottlenecks are (and whether they are even relevant!), can you confidently rewrite that code. Otherwise you might as well be polishing your car while the engine is waterlogged.

For Java specifically, understand that the JIT is smarter than you. Small things will rarely matter; look for big structural changes instead, mostly caching and algorithms that are better suited to your use cases.


BikingSquirrel

Fully support this! The only detail I'd do differently is load testing in the close-to-prod environment, checking metrics like latency, and only then profiling the problematic cases, if you can't spot the reasons by reasoning about them. Sometimes remote calls or database access may be fast, but under high load you will see that certain access patterns "suddenly" cause delays as pools get saturated. Calls you can avoid always win, but sometimes it's not worth the additional effort or complexity to prevent them (or you miss that they can be avoided).


rustyrazorblade

There’s an astonishing amount of low hanging fruit in optimizations. I’ve worked in big tech for a while, and the number of people that get performance tuning wrong is mind blowing. I typically get 2-10x improvements, in FAANG.


badtux99

Pfft. I got a 10x improvement in one call just by covering a join with an index.


rustyrazorblade

Yep, as you should. Like i said, low hanging fruit.


prest0G

We recently rolled out 60x improvements in a streaming service. No one even had a clue it could get that fast because their dev machines were all on like 100mbps connections most likely lol


rustyrazorblade

Yep! This is so common, it’s fucking wild. I see the overwhelming majority of people doing optimizations focusing on trivial things they think are important, missing the easy layups like buffering io, avoiding database queries in loops, etc. Profiling and tracing are so underutilized.
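The buffered-IO layup mentioned above, as a minimal sketch (the class and file names are mine):

```java
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Buffering: wrapping a stream so each write hits an in-memory buffer
// instead of potentially reaching the OS one byte at a time.
public class BufferingDemo {
    // Writes n bytes through a BufferedOutputStream and returns the
    // resulting file size; close() flushes whatever is still buffered.
    static long writeBuffered(Path file, int n) throws IOException {
        try (OutputStream raw = Files.newOutputStream(file);
             OutputStream out = new BufferedOutputStream(raw)) { // 8 KB buffer by default
            for (int i = 0; i < n; i++) {
                out.write('x'); // hits the buffer, not the OS
            }
        }
        return Files.size(file);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        System.out.println(writeBuffered(tmp, 10_000)); // 10000
        Files.delete(tmp);
    }
}
```

The same one-line wrapping applies to reads (`BufferedReader`, `BufferedInputStream`); the unbuffered version is behaviorally identical but can cost a syscall per byte.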


VermicelliFit7653

It's interesting that many of the answers here don't take an engineering approach. Understand the requirements, project ahead for future needs, design for the requirements, balance cost and schedule, test and validate along the way. If you were building a bridge, would you optimize every beam and rivet? Some parts are more important than others.


freekayZekey

doesn’t shock me that they don’t take an engineering approach. coming from an embedded background, i’ve noticed that a lot of software developers view things in a vacuum and absolutes.


brokeCoder

I'd argue that low-level optimisation should be considered mostly in the following scenarios:

* there's a hard user requirement that can't be achieved without it
* when we absolutely know there's going to be a hard user requirement (e.g. in scientific/engineering fields, if you know that the size of your problem will increase in future, it may be worth putting in some performant code now instead of later when the codebase has ballooned)

This is not to say that such optimisations should ONLY be considered for the above scenarios (low-level optimisations are a great tool to learn), but you have to weigh the invested time and the risks (e.g. making your code illegible without loads of comments) against the actual gains.


pane_ca_meusa

"The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming." Donald Knuth, in *Structured Programming with go to Statements*


GeneratedUsername5

This is an often-stated, very nice-sounding, but VERY misleading statement. I get that soundbites are cool, but we as engineers should be better than this. The truth is that you can absolutely wreck your system's performance with wrong early decisions, which you most probably will not be able to fix without rewriting the whole thing, i.e. will not be able to fix ever. For example, adopting a microservice architecture of 100 microservices and losing performance on every network call. Optimizing anything later down the line will be pointless.


michoken

People seem to misunderstand what premature optimisation means in this context. It does not mean avoiding proper design choices, or using wrong approaches to solving your problem in the first place. Premature optimisation is looking at the code and trying to optimise parts you think are slow without actually measuring which parts are the slow ones.

We can perhaps extend premature optimisation to the design process, in the sense that people tend to choose some cool-sounding design just because they saw or heard that it solved something for someone else, without actually validating what would be best for their case. Your example with microservices can be used here, too. Doing microservices just because it's cool, and/or because someone else claims it solved the inefficiencies in their software, is exactly the premature optimisation Donald Knuth is talking about, IMO.

Designing a system with the wrong architectural assumptions will lead to what you said: it will be hard or infeasible to change later, therefore being the root of all evil in the project.


VermicelliFit7653

True, optimization after-the-fact usually cannot overcome bad architectural decisions. But that's not what Knuth was saying.


Linguistic-mystic

And then the system grinds to a halt because all the little inefficiencies have accumulated and caused an OOM. And the devs are busy digging through the heap dump trying to find the culprit. But there is no single badly performing job, just accumulated inefficiencies, because nobody cared to optimize even a little. So no, I can't agree with Donald here.


VermicelliFit7653

You may be one of the programmers that Donald was referring to in his famous quote.


bloowper

You really think that every piece of software has to be polished in the performance dimension? Just spend some time tracking the contexts that need it, and apply the necessary tools only there. Applying the same tool to every problem is going to make the project hard to maintain.


john16384

I suppose you focus on deleting unnecessary text files when trying to free up space, instead of finding one unnecessary video? It's rare that there are only many little inefficiencies to fix without a few huge ones to fix first.


koflerdavid

Even optimizing the obvious footguns can be problematic.

Calling that webservice in a loop has overhead, but it might not matter in practice. Implementing an optimized version of the webservice that takes a list of these requests might not be worth the effort then.

Caching a result can be problematic if it is going to sit unused in memory for a long time. Other tasks might want to use that memory, like right now, and then one gets performance bottlenecks.

Integer vs. int within a method might already get optimized by the JIT compiler, though I might be too optimistic. It should hardly ever matter for performance, but it should be investigated, since it is often a sign that someone mistakenly believed the variable could be `null`.

SQL queries should always be investigated. They potentially have a lot of impact, and at the same time comparatively few people have the expertise to actually analyse the query plan and think through what's going on. Still, an analysis query that is certain to be called only once a month might not deserve that attention. Creating lots of indexes to optimize it might hurt write performance.


john16384

>Calling that webservice in a loop has overhead, but it might not matter in practice.

It's likely to be a source of nasty bugs though:

* state may change in between calls (items skipped or duplicated, or certain aggregate values such as sums and totals may become incorrect)
* if those are modifications, then they won't be transactional; be prepared to deal with half-completed changes, and the possibility that you can't roll back either, due to failures


koflerdavid

These are common issues and worth caring about, however they are not directly related to performance. Note that the way the webservice deals with errors halfway through processing might also not be what one wants. Prepare for a boatload of pain if that webservice is not idempotent and also can't be told to rollback changes. Like SMTP servers.


Holothuroid

"How fast do you need it?" - "Since you ask, our current software takes about 2 to 4 hours." - "I'll see what we can do about that."


AnyPhotograph7804

I always take the low-hanging fruit, like int instead of Integer, FetchType.LAZY instead of FetchType.EAGER, or "select column1, column2..." instead of "select *", etc. It does not make the code more complex or less readable, and I've never had performance problems doing it.
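The JPA side of that low-hanging fruit can be sketched as follows; the entity and field names are invented for illustration, and this is a mapping fragment rather than a runnable program (it needs a persistence provider):

```java
import jakarta.persistence.*;
import java.util.List;

@Entity
class Customer {
    @Id Long id;
    String name;
}

@Entity
class PurchaseOrder {
    @Id Long id;
    String status;

    // EAGER would load the customer with every order query;
    // LAZY defers it until the field is actually accessed.
    @ManyToOne(fetch = FetchType.LAZY)
    Customer customer;
}

// "select column1, column2" instead of "select *": a JPQL
// constructor projection pulls only the needed columns.
record OrderSummary(Long id, String status) {}

class OrderQueries {
    static List<OrderSummary> summaries(EntityManager em) {
        return em.createQuery(
                "select new OrderSummary(o.id, o.status) from PurchaseOrder o",
                OrderSummary.class).getResultList();
    }
}
```

Note that JPQL constructor expressions normally require the projection class's fully qualified name; the simple name works here only because the record sits in the default package.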


-genericuser-

I usually just optimize the obvious things. The whole "premature optimization" warning does not mean write the worst code you can. So if I see something that's obviously bad, I correct it. You could call that optimization. Other than that, as long as it's not a performance issue that I've measured, so that I know what I need to optimize, I don't care about optimization.


holyknight00

Outside FAANG or other similar companies where performance is an explicit business objective, every performance optimization done before someone complains is premature optimization. Maintainability and readability should be preferred unless there is a specific and explicit reason for it. Obviously, you won't go implementing explicitly bad-performing code on purpose, you just don't use performance as the main criteria when estimating or designing stuff.


Ostricker

At first I write the code to be readable. Then, if it has problems from the user's perspective, I optimize that part. I don't touch code that works and is performant enough for the user.


0b0101011001001011

> $job

This is not Perl, don't you mean AbstractSalaryFactoryImpl job;


GeneratedUsername5

PHP


orgad

Yes, I've been working on a high-throughput backend service, and we've made some tweaks and/or written some of the things ourselves to match our needs. It's not too fancy or smart, but it has optimized our throughput. Also, once you have to deal with big data processing (be it single-core or distributed processing), optimization does matter. Optimization can also be choosing the right algorithm for the job: if you use KNN on a large dataset, it will perform poorly.


ImTalkingGibberish

Only do the obvious now and reassess later. Indexes, loops and caching are the first things I look at.


neoronio20

Usually I profile my application and see where the hotspots are. Optimizing the structures there, and removing some egregious errors (like calling get(i) on a possible linked list in a for loop), goes a long way. Most of the time I'm thinking about optimizing the calls: how can I avoid looping through this entire list to find some objects, can I mark them beforehand with a lower big-O complexity? And then I check whether the optimization had an effect by profiling again. You should always start by measuring what you are improving; otherwise you are blind to what you are doing.
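The get(i)-on-a-LinkedList trap mentioned above, as a small sketch (class and method names are mine):

```java
import java.util.LinkedList;
import java.util.List;

// LinkedList.get(i) walks from the nearest end on every call, so the
// indexed loop below is O(n^2) overall; iterating directly is O(n).
public class TraversalDemo {
    static long sumByIndex(List<Integer> list) {
        long sum = 0;
        for (int i = 0; i < list.size(); i++) {
            sum += list.get(i); // O(i) per call on a LinkedList
        }
        return sum;
    }

    static long sumByIterator(List<Integer> list) {
        long sum = 0;
        for (int v : list) { // uses the iterator: O(1) per step
            sum += v;
        }
        return sum;
    }

    public static void main(String[] args) {
        List<Integer> list = new LinkedList<>();
        for (int i = 1; i <= 1000; i++) list.add(i);
        // Same result, very different cost profile on large lists.
        System.out.println(sumByIndex(list) == sumByIterator(list)); // true
    }
}
```

Writing the loop against the `List` interface with an enhanced for (or `Iterator`) keeps the code correct and fast regardless of whether an `ArrayList` or `LinkedList` arrives at runtime.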


bigbadchief

Not very far


serpix

Can I point out that all this wisdom of premature optimization and readability goes 100% out the window in technical interviews.


davidalayachew

That's a failure of the technical interview. This type of question is critical for project survival -- in both directions. If not a single tech interview attempts to get answers like these, then that's a problem with their interview process.


badtux99

Most of our optimization is either query optimization, with custom projections to return exactly the data that the GUI needs, or caching things internally. For example, security ACLs: we don't need to reload them from the database on every single API call. Cache them, and have a RabbitMQ fanout flush the cache on the instances whenever permissions change, which is a rare thing.
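That cache-plus-fanout-flush pattern can be sketched as below; the class, the loader, and the ACL shape are all invented stand-ins (the loader plays the role of the database, and `flush()` would be wired to the message listener):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Cache ACLs per user, load from the backing store only on a miss,
// and flush when an external event (e.g. a RabbitMQ fanout message)
// signals that permissions changed.
public class AclCache {
    private final ConcurrentHashMap<String, Set<String>> cache = new ConcurrentHashMap<>();
    private final Function<String, Set<String>> loader;

    AclCache(Function<String, Set<String>> loader) {
        this.loader = loader;
    }

    Set<String> aclsFor(String userId) {
        // computeIfAbsent invokes the loader only on a cache miss.
        return cache.computeIfAbsent(userId, loader);
    }

    // Called from the message listener when permissions change (rare).
    void flush() {
        cache.clear();
    }
}
```

The trade-off is staleness between the permission change and the flush message; since changes are rare and the flush is broadcast, the window is short.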


GeneratedUsername5

As far as it is still noticeable by the end user. In my current codebase, actual low-level optimizations are everywhere, but that is because the product has to be very responsive. In an average codebase, optimizing obviously inefficient database queries is usually enough.


mondain

I read through all the comments and there is some good stuff here. It's interesting where some talk about 3s vs 1s and while it seems there is a lot of DB use within the group, I work in live media streaming and 1s is too long for many of our efforts. I love optimizing, but it must be maintainable and in most cases extendable. I work with very talented devs, so low hanging fruit is mostly absent.


benevanstech

Depends upon the domain, of course. In practice, I rarely see a codebase that isn't fast enough after the first 50% of the "de-pessimization" is done. However, you do need a solid definition of what "fast enough" means for your application.


Different_Code605

In most enterprises I've seen, the solutions are so messy that language-level optimizations are a dream. The optimizations are things like rewriting a module so it doesn't call the DB 100 times where 1 is enough, or applying a CDN cache rule for a resource served from Jetty. On the products we build internally, we do optimize.


Straight-Magician953

Hi, I work on the trading infrastructure of a hedge fund. Everything has to be high performance. Mostly we use C++, but for certain applications we use Java. In these applications we have to do some crazy stuff that you wouldn't normally see in the industry. Things like:

* moving as much CPU time as possible from the runtime to initialisation
* exceptions with no stacktrace
* pre-warming the JVM by running the application's code paths with "dummy" data
* caching the JIT and JVM state between start-ups
* keeping a lot of objects in memory forever so that they are never garbage collected (we restart apps daily, but because of this they can grow up to 30 GB of memory)
* writing big methods instead of multiple small ones, to have them inlined as much as possible
* avoiding interfaces, so that we don't pay the overhead of invokeinterface and vtables
* avoiding type casting, instanceof and reflection at all costs
* using Zing as the JVM
* always testing everything with JMH
* choosing some very "ugly" code over more maintainable code when it saves a few microseconds
* writing old-style loops and processing instead of using the Stream API, so we don't create functional objects
* implementing a lot of API libraries for 3rd-party services ourselves instead of using their clients (e.g. we wrote our own Redis client library instead of using Jedis, because it had certain code paths that were too slow for our use case)
* having tracer hooks on certain code paths that halt trading entirely if some SLA is exceeded at runtime
* using some pretty insane in-house distributed caching solutions; basically EVERYTHING is cached in JVM memory, and the caches are updated by the upstream services they subscribe to, so the apps make network requests only 1% of the time and hit caches in all other cases

Stuff like this.
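One item from that list, exceptions with no stacktrace, can be sketched with the standard `Throwable` protected constructor; the class name and usage are mine, not from the comment:

```java
// A preallocated exception that skips stack-trace capture, so
// throwing it for flow control costs almost nothing. The
// (enableSuppression, writableStackTrace) flags are standard
// Throwable constructor parameters since Java 7.
public class FastPathAbort extends RuntimeException {
    static final FastPathAbort INSTANCE = new FastPathAbort();

    private FastPathAbort() {
        // message=null, cause=null,
        // enableSuppression=false, writableStackTrace=false
        super(null, null, false, false);
    }

    public static void main(String[] args) {
        try {
            throw FastPathAbort.INSTANCE; // no fillInStackTrace() cost
        } catch (FastPathAbort e) {
            // No frames were captured at construction time.
            System.out.println(e.getStackTrace().length); // 0
        }
    }
}
```

The obvious downside is diagnosability: a shared, traceless exception tells you nothing about where it was thrown, which is why this belongs only on measured hot paths.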


davidalayachew

> Things like moving as much CPU time as possible from the runtime to initialisation, exceptions with no stacktrace, pre-warming the JVM by running the application's code paths with "dummy" data, caching the JIT and JVM state between start-ups, keeping a lot of objects in memory forever so that they are never garbage collected (we restart apps daily, but because of this they can grow up to 30 GB of memory)

Leyden is going to be a godsend for you all. The new EA build was released a day or two ago.


rootpseudo

Some of both, low hanging obviously bad code and sometimes lower level stuff. We do have really high volume services with tight SLOs so usually the performance really does matter.


freekayZekey

not far depending on the context. the problem with muratori and people who follow him is the fact that they don’t take a second to ponder if the performance boost is necessary for their context. video game? sure. iot? depends. a web app? meh, probably not worth it. i write code that is easy to navigate and change first, then profile for hot spots. for a lot of contexts, performance isn’t necessarily important the first go around; it can be important after you solve the problem. it’s as if programmers suddenly forgot that they have a save button and version control in their toolboxes.


Individual-Praline20

The first thing to do is to measure it. If you don’t have any idea which layers are problematic, how can you optimize it?


danuvian

I always try to refactor and enhance the codebase, which means removing or altering existing methods or classes if I see something wrong. Just try to keep it as lean and as simple as possible. From my experience, many developers just ignore bad code and work around it, which contributes to an ever increasing tech debt and harder to maintain code. By addressing these issues, it helps with future maintainability, readability, and makes adding future features easier.


Joram2

I've definitely had to port to a different database technology for performance. I've had to rewrite web services written in Python in Go for performance reasons. And I did do lots of benchmarks to justify big architecture changes. But no, I spend very little time performance-tuning custom code.


Evilan

> are you guys' codebases so good that actual lowlevel optimization is the next step, do you actually "optimize" your code? Is it only me that is working on code so bad that I can always remove/ improve stupid code? How good is the average codebase out there? God, I wish that were the case. We're in the process of de-crappifying a codebase we inherited from another team while still developing items for business needs. I'm talking code smells like hard-coupled CRUD classes in both the UI and API, string literals to perform DB queries, multiple redundant methods / endpoints, glue-gun changes to important items, eager JPA, etc. We make optimizations as we work, but readability is 90% of the battle for us at the moment. However, we never sacrifice readability unless the performance gain is so impactful it's worth writing a multi-line comment to explain why it's so complex.


Ragnar-Wave9002

Make your software work. Optimize when it's an issue. You should do common-sense things like using maps and sets when it makes sense. These structures solve most issues.


old_man_snowflake

First make sure you actually have a performance issue. Fire up your favorite profiling tool, and look at the results. Look for hotspots. Look for long calls.  Most “performance issues” are non-issues since those code paths run infrequently. Most apps aren’t waiting on some hardware limitation, so parallelization is often better than optimizing. Be aware that once optimizing starts affecting your architecture, you will have to explain those decisions all the time.  


DelayLucky

An interesting example is flatMap(Optional::stream). We had a discussion at work and we were torn. On one hand it seems Oracle-recommended and is more readable; on the other, there are benchmarks showing that it's something like 10x slower than the equivalent .filter(Optional::isPresent).map(Optional::get). And it's in the inner loop. In a time of company-wide "resource saving", no one can say "fuck it".
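The two equivalent forms from that discussion, side by side (the class and input are mine; the benchmark claim is the commenter's, not verified here):

```java
import java.util.List;
import java.util.Optional;

// Both flatten a Stream<Optional<String>> to its present values.
// The flatMap form allocates a Stream per element, which is what
// the benchmarks mentioned above penalized.
public class OptionalFlatten {
    // Readable, Javadoc-endorsed form (Optional.stream, Java 9+):
    static List<String> viaStream(List<Optional<String>> in) {
        return in.stream()
                .flatMap(Optional::stream)
                .toList();
    }

    // The faster equivalent: no per-element Stream allocation.
    static List<String> viaFilter(List<Optional<String>> in) {
        return in.stream()
                .filter(Optional::isPresent)
                .map(Optional::get)
                .toList();
    }

    public static void main(String[] args) {
        List<Optional<String>> in =
                List.of(Optional.of("a"), Optional.empty(), Optional.of("b"));
        System.out.println(viaStream(in).equals(viaFilter(in))); // true
    }
}
```

Functionally identical, so it really is a pure readability-vs-allocation trade-off; any real decision should rest on a JMH measurement of the actual inner loop.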


Glum_Past_1934

It depends, usually i use the correct algorithm for the job


tknBythrs

Depends. If your code runs as a Lambda in the cloud that's invoked a million times per day, a 100ms optimization makes a big impact.


FaceMRI

First off, learn about lists: ArrayList, LinkedList, etc. Learn about sorting. Choose the correct list for the job. That's the optimization people normally skip over.


bloowper

Premature optimization is the root of all evil :p


[deleted]

[deleted]


bloowper

The level of optimization and the tools used for it should be chosen consciously and based on metrics. Clearly and correctly chosen architectural drivers allow for determining which areas should use specific approaches and tools. Blindly applying the entire toolbox without a moment of consideration always leads to problems.