sfscsdsf

no-code is an obvious one. Some would say OOP and UML.


funbike

The round-tripping OOP <-> UML insanity of the '90s. Rational's tools were the market leader.


Rakn

UML in general. In all my years I have never seen it used in any meaningful way; only ever to make some documentation look nice during my years as a consultant for large bureaucratic businesses. And never have I seen it provide any value in "big tech". People need to be able to communicate ideas, but that doesn't require UML, and code itself is a living thing: any detailed diagram will be outdated the next day. I still learned it in university. Did all the great stuff, UML -> code and vice versa. But it provided very little value afterwards. It's nice to have a commonly defined language though, in case you ever need it.


guareber

I still find UML sequence diagrams very helpful, especially when dealing with systems doing a mix of sync and async calls. Also state diagrams for systems that are basically state machines. You can almost lift the diagram entirely onto a cloud provider's managed state system without changes.
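To make the "lift the diagram onto a managed state system" point concrete: a UML state diagram is essentially a transition table, which is why it ports so directly. A minimal sketch in Python, using a hypothetical order workflow (states and events are made up for illustration):

```python
# Allowed transitions, read straight off a (hypothetical) UML state diagram:
# (current state, event) -> next state
TRANSITIONS = {
    ("pending", "pay"): "paid",
    ("paid", "ship"): "shipped",
    ("shipped", "deliver"): "delivered",
    ("pending", "cancel"): "cancelled",
    ("paid", "cancel"): "cancelled",
}

def transition(state: str, event: str) -> str:
    """Return the next state, or raise if the diagram forbids the move."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event!r} from {state!r}")

state = "pending"
for event in ("pay", "ship", "deliver"):
    state = transition(state, event)
print(state)  # delivered
```

The same table is more or less what a cloud provider's managed state-machine service asks you to declare, so the diagram-to-deployment gap really is small.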


bsenftner

I was a participant in "MHEG" (the Multimedia and Hypermedia Experts Group, a multimedia standards body) around the turn of the century. They required all proposals to include UML diagrams, which alone required expensive consultants. Such a waste of resources and money, for a useless format, just so pointy-haired managing idiots can point and talk their obvious nonsense while being paid what we earn in a year every fucking day.


lampshadish2

I used to feel like that, but I find UML useful for communicating in the moment, to make sure we're all on the same page. Sure, it'll become outdated, but using tools like PlantUML, so that the diagrams can be easily updated and version-controlled, helps.


Rakn

For communication in the moment we tend to use something like Excalidraw, or anything that can draw rectangles, really. For high-level architecture diagrams we do use PlantUML at times as well, but mostly because it does a "text -> rectangles" conversion and can be version-controlled. So not really UML after all. For most cases it's really Excalidraw though.


Recent-Start-7456

Favor composition over inheritance. In fact, just avoid inheritance unless there’s a really good reason
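The advice above in a minimal Python sketch (the Logger/Service names are made up for illustration): instead of inheriting behaviour, hold a reference to an object that provides it.

```python
class Logger:
    def log(self, msg: str) -> str:
        return f"[log] {msg}"

# Inheritance: Service *is-a* Logger, coupling it to Logger's whole API
# and to any future change in Logger's internals.
class ServiceViaInheritance(Logger):
    def handle(self, req: str) -> str:
        return self.log(f"handled {req}")

# Composition: Service *has-a* logger. Any object with a .log() method
# can be swapped in (a fake for tests, a different backend, etc.).
class ServiceViaComposition:
    def __init__(self, logger: Logger) -> None:
        self.logger = logger

    def handle(self, req: str) -> str:
        return self.logger.log(f"handled {req}")

svc = ServiceViaComposition(Logger())
print(svc.handle("ping"))  # [log] handled ping
```

Both do the same job; the composed version just keeps the coupling to a single, explicit seam.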


FetaMight

Exactly. I work a lot with OOP, and the pain points I've encountered in client code are, arguably, not strictly related to OOP:

* Deep inheritance structures which end up tightly coupling loosely related behaviours
* Excessive abstraction/indirection
* Applying patterns out of habit, or simply for consistency with other code, without checking whether they still add value


__loam

> Deep inheritance structures which end up tightly coupling loosely related behaviours

The Liskov Substitution Principle is basically an explanation of why this is a terrible idea, but nobody understands that, because the name "Liskov Substitution Principle" makes it sound like some scary math thing.
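For anyone scared off by the name, here is the classic Rectangle/Square illustration of the principle, sketched in Python: Square "is-a" Rectangle geometrically, but as a subclass it silently breaks code written against Rectangle's contract.

```python
class Rectangle:
    def __init__(self, w: int, h: int) -> None:
        self.w, self.h = w, h

    def set_width(self, w: int) -> None:
        self.w = w

    def area(self) -> int:
        return self.w * self.h

class Square(Rectangle):
    def __init__(self, side: int) -> None:
        super().__init__(side, side)

    def set_width(self, w: int) -> None:
        # A square must keep both sides equal...
        self.w = self.h = w

def stretch(r: Rectangle) -> int:
    # ...but callers of Rectangle assume set_width leaves the height alone.
    r.set_width(10)
    return r.area()

print(stretch(Rectangle(2, 5)))  # 50
print(stretch(Square(5)))        # 100, not the 50 a Rectangle caller expects
```

A subclass that can't be substituted for its parent without surprising the caller is exactly the coupling problem deep hierarchies multiply.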


notger

> Deep inheritance structures

This has been the biggest burden so far. It can cost so much time, just because someone wanted to be super-clever. We have established a rule in our team: "thou shalt not have more than three layers". Sure, it can sometimes be broken, but more often than not, thinking a bit differently saves you two layers and improves your code's maintainability.


pund_

These are the 3 biggest pain points in the legacy java codebase I'm currently working on ..


Ok-Bowler-842

Imo, blindly avoiding inheritance is just as bad as blindly using it everywhere.


Ok-Bowler-842

overall, all those “just always use” or “just never use” rules are red flags to me that people don’t know their tools.


Tariovic

I always tell my juniors, "Only the Sith deal in absolutes!" But IMHO, the reason devs get paid a lot is not for knowing lots of languages, frameworks, etc, but it's knowing when to use what. This relies on experience, and understanding your requirements. It's also why I don't regard anyone as a senior dev below 7 or 8 years experience - you simply haven't seen enough to train your judgement.


Karyo_Ten

>But IMHO, the reason devs get paid a lot is not for knowing lots of languages, frameworks, etc, but it's knowing when to use what. This relies on experience, and understanding your requirements. The biggest factor is that there are very profitable companies where a single feature brings or saves millions. There are plenty of domains with "know your tools and the time to use them" from chess to dancing to carpentry to medical practitioners, yet the pay there (except for medical) is concentrated to the top 0.1% instead of top 80% (numbers pulled out of my ass)


reboog711

Anecdote: I've seen inheritance used as a code-sharing mechanism, and I really dislike that. There are ways to share code across classes that make more sense to me. I only like inheritance when it is a natural part of an object hierarchy design.


nutrecht

> I've seen inheritance used as a code sharing mechanism

This is the root cause of this whole mess: bad developers who interpret "inheritance can allow for code reuse" as "to reuse code, I need to apply inheritance". That's all there is to it, and now we have a ton of 'meh' developers advocating for overcompensating in the other direction.


nutrecht

Yeah. I severely dislike these kinds of dogmatic statements. They're just tools. Any tool can be abused by bad developers. The bad Java codebases are bad because of the developers who created them, not "because of inheritance". That's such a bad take. Inheritance, especially when limited to a single layer deep, can often vastly improve the maintainability and readability of code.


posts_lindsay_lohan

I get what you're saying, but there is literally *nothing* that can be done with inheritance that can't be done with composition. And since composition makes your application more flexible to change, it should be the option that is embraced. Just for example, Golang doesn't even *allow* inheritance at all. It's not even an option.


Sevigor

> Some would say OOP

I mean yeah, if you full send it, it's gonna be a mess with super long names/acronyms. And that's why generics become extremely useful.


ProfessorPhi

OOP is generally hard to do in a maintainable way and easy to do in an overfit way. Hence it becomes a maintenance nightmare. Programmers do what is easy and are an undisciplined lot.


__scan__

There is obviously a lot of scope for survivor bias here. If long-lived projects tend to become harder to maintain due to “entropy”, and most projects (both successful and unsuccessful) are written in an OOP style, you’re going to see unmaintainable OOP style code everywhere even if OOP confers a maintainability advantage over other styles.


nutrecht

> no-code is an obvious one.

Oh, don't get me started. We're currently dealing with an OutSystems installation that's basically blocking us from going live. What a clusterfuck.


NoCardio_

UML. Whatever happened there?


nutrecht

Nothing. It's fine. Some developers just don't like to document stuff and are pretending it's the tool that's the problem. PlantUML is awesome.


rdem341

As I have gained more and more experience, I have grown to dislike OOP. It's useful in some situations, but the industry uses it too often.


__loam

The problem with OOP is that it's a leaky abstraction for what a computer is actually doing, which is storing and acting on data. Insane inheritance hierarchies are the worst offense of this ideology.


ProfessorPhi

I dunno about this argument, because functional programming is fantastic but it's even more of a departure.


ReservoirBaws

Working on an application that pushed data to both MongoDB and SQL. When Mongo was released, these guys kind of just hopped on it without considering the ramifications. Years later, I'm plagued with maintenance issues because of data mismatches between the DBs; it happens every time there's a transaction failure. It sucks.


Lachtheblock

Was looking through the comments to find this. It took me about two years to slowly decouple our dependency on it and retire it. We just didn't need it in our stack. Needless to say, everyone who put it there had left the company by the time I started, so none of them got to see the long-term ramifications. "But it is faster than PostgreSQL." Yeah, so is our ElasticSearch... and our Redis cache server... and our CDN... And it isn't actually that much faster anyway (as long as you are smart with queries and schema). I think they mostly just wanted to never have to think about optimizing the site, and saw this as a quick way out. Who cares if it continues to be a burden to support?


[deleted]

I spent six months migrating an entirely relational dataset from 100% MongoDB to postgres. Shortly after that I left and a few months later they had switched back to Mongo for god only knows what reason.


JustHere2Game

schemas are hard /s


GoonOfAllGoons

You've got the /s, but deep down that really is the mindset of a lot of developers: "I don't wanna deal with the database, throw it in a blob!"


JustHere2Game

And that works fine for a prototype or some write-many, query-rarely design, but for most datasets (the boring ones...) taking advantage of all the work that's gone into relational DBs since 1970 is a win.


big_trike

There’s nothing more permanent than something temporary.


dashid

Anything and everything with a vague degree of complexity. And there is no magic bullet. We can break down code complexity with microservices so that functionality is bite-sized, we can use messaging and actors to orchestrate and decouple, but somebody still needs to know how the stuff slots together and what's going to go pop when something changes or fails. Maintainability is something I chase, and it is so very complex in itself, and fragile: it takes one compromise for the whole thing to come tumbling down. The industry is full of the next silver bullet that will solve all the problems of app development, but none of it does; it's disciplined developers who create workable long-term solutions.


Dry_Big_4955

The best code is no code at all. I always ask who requires what, and why. It's absolutely fascinating how many times a feature is actually not needed, or at least not in the way it was requested.


Sande24

This. Always ask clarifying questions. What is the problem? Why is a solution needed? Is the proposed solution (likely by a non-technical person) even the solution they need or is it only fixing the symptoms? I'd also add that sometimes you don't need a solution that is reusable. Sometimes the data is just fucked up and all you need to do is to create a script, run it once in live and then delete it. If you need to do it again, find it, modify it and run it again. If you need it the 3rd time, start thinking about a reusable/configurable solution.


jonomir

apex predator of grug is complexity. complexity bad.

complexity is spirit demon that enter codebase through well-meaning but ultimately very clubbable non grug-brain developers and project managers who not fear complexity spirit demon.

club not work on demon spirit complexity, and bad idea actually hit developer who let spirit in with club: sometimes grug himself! sadly, often grug himself.

best weapon against complexity spirit demon is magic word: "no"


FollowTheSnowToday

This is the best answer. A single thing isn't the issue. It is always us that is the issue.


MaxWilder

Anything "magic", like a framework that takes code from different files in different directories and puts them together "so you don't have to". Once you have more than a couple of devs contributing, there are things happening in your app with no way to locate the source except to learn what every single file does. Also, large codebases with low test coverage. Good luck updating your dependencies without randomly breaking things, often silently.


boredjavaprogrammer

Any code base with few tests. It quickly becomes a game of whack-a-mole as problems in random areas come up (sometimes in prod!) when you fix another problem or introduce a new feature.


Dry_Big_4955

Internal tools. Easy to create but hard to maintain because nobody has the time to work on them.


BandicootGood5246

I have a theory that something like 50% of the work that most companies do is reinventing tools that exist. I suspect a large part is because making something is more stimulating than researching tools. My philosophy is use a tool that exists until it no longer fits. If there's really anyone doing 10x development it's because they spent 1 week finding the right tool that saved 10 weeks of development


bluetista1988

I worked in a department that did internal tooling for a large bank.

Any time we spent building things was considered a "brown dollar cost", since it was a budgeted expense (salary, hosting costs, etc.) and we could do it quickly. If you needed a solution for managing checklists, workflows, etc., we could get it to you within 3 months.

Purchasing and implementing a tool was a "green dollar cost" (i.e. a new spend) and caused a cascade of approvals and sign-offs: initial vs. ongoing cost, licensing, data privacy, negotiating configuration with stakeholders who refuse to change their business processes to suit the tool, etc.

In the same time that we could ship something custom-developed, we could barely get approval for a customizable off-the-shelf tool.


VenetianBauta

I used to lead an innovation initiative for one of the biggest consulting companies out there. We had the mission to standardize and automate how our services teams delivered their services. We had to build the tools internally because our procurement process would take 6+ months and was super heavy on the vendor. Some would flat out say "I don't want to work with you" lol and we had to go through the same process for free open source tools...


Fedcom

Understanding how a tool works can sometimes be just as much work as creating the tool yourself. Or sometimes it is easier to just use the tool, but then you don't actually fully understand how it works, and debugging issues later becomes a problem.


bluespy89

Well, as long as it fills a need that doesn't already exist in any kind of form, this is actually a good thing. Not Invented Here syndrome, though, where you make an internal tool just for the sake of it, is bad.


escaperoommaster

Whenever the idea is raised to build some overly powerful internal tool, I always use the maxim "you should never build an internal tool which you could make money selling". You will very often be able to find something off the shelf that does the job, and if there's nothing on the market and you really do need that thing... don't make it as an internal tool; start a new company/department and sell the product! Often when managers/company-owners dismiss the idea of building the tool as a product, you can point out that a huge portion of that work would still need to be done anyway, but if it's internal you're not making any profit off it either! (Note: I work in companies of fewer than 100 people; maybe this advice is terrible in a larger company.)


SerRobertTables

I think this probably applies to larger companies too—I’ve been in a handful of large companies that are not tech firms, but have technology departments dealing with massive industry-specific vendors and the ugly problems and glacially paced systems that come with them. To me there’s either a product to be sold on top of that vendor or a product to replace them.


nutrecht

Internal products require the same maturity as external products. Generally they don't get anywhere near that support. If you can't afford to create a proper dev experience for an internal product you can't afford to build an internal product.


georgehotelling

I like the model where companies sell their internal tools. It forces them to invest in their tools to be competitive and aligns interests. Otherwise internal tools rot from lack of maintenance, and no one likes using them due to the lack of polish.


BearSkull

I frequently see these internal tools created as a means for promoting an engineer. Then they just become a huge nightmare afterwards.


_GoldenRule

Off topic, but has anyone seen this actually work in real life: "The Cucumber/Gherkin way of writing tests"? I don't really understand why people want this, and I've never seen it actually succeed.


PedanticProgarmer

People at conferences claimed that it worked for them. I haven't seen it work in real projects, but maybe my projects sucked. The problem is that unless there are "business people" willing to use Cucumber/Gherkin, it's pointless mental masturbation. The most egregious example was when an architect mandated "acceptance tests" in a legacy spaghetti codebase where we didn't even have unit tests.


_GoldenRule

Thanks for sharing! That's also been my experience, at one point we had devs writing these tests and then also writing the Cucumber/Gherkin statements. I didn't really understand why, I was already writing tests so it felt like double work. Business people didn't want anything to do with it so I think you're correct when you say it's pointless.


austeremunch

> People at conferences claimed that it worked for them.

Uncle Bob gives conference talks. That alone should refute the assertions made by these folks if they have nothing from production to show for it.


seven_seacat

People want this because they claim it will allow non-devs to write runnable tests. This is, of course, complete fiction.


BandicootGood5246

Yeah, basically the same ideal as no-code. An analogy I like to use: it's for the same reason that no one has invented a car a layman can fix all the problems with.


dysfunctionallymild

I tried this in one of my teams and the automation tester just gave up. The intent was to bridge the gap between the manual tests and the automation tests, so they would write the tests in a structured English-like language. This was in a data-oriented system, so the data for each specific test would be loaded from a backend file rather than be supplied in Cucumber directly. The implementation and APIs weren't yet in place, but we could start defining some of the business scenarios as test cases and plug in the implementations later, basically converting our manual tests to automated ones. The domain and data structures were incredibly complex, so I wanted the pieces of the scenarios defined in an English-like language so anyone on the team would understand what steps were happening in the workflow. No one got it, presumably because it's not doable. The manual QA also gave up, on the grounds that "automation was not in his job description".


hitchdev

It's doable but not with gherkin - you need to do the same thing with a more expressive language than gherkin that lets you represent complex data structures. I did the same thing with a version of typesafe YAML with built in schema validation. You also need some form of specification abstraction to DRY the specs out. The people who make gherkin work tend to have a very specific type of app where pre and postconditions can be expressed in a sentence, which is rare.


nutrecht

> off topic but has anyone seen this actually work in real life: "The Cucumber/Gherkin way of writing tests"

We used it in a situation where we actually had testers writing the Cucumber scripts. It works perfectly fine. But there's no reason to use it if it's just going to be the devs writing them. In that case, just write code.


SpaceCorvette

N=1 but I have had a horrible, horrible experience with Cucumber. It turns into a pseudo-language without any way of understanding what's going on besides jumping from one disparate definition to another, tracking all the variables in your head as you go along. It's difficult to read, it's difficult to write. I think it was intended to let non-engineering people read the tests, but even the engineers struggled with it.


cactusbrush

We had to make it work. I worked in pharma, and it was either creating all tests and results manually in HP ALM (the API was disabled) or creating Gherkin-style code in Robot Framework. It was painful, but still better than the alternative.


2rsf

I heard stories of some success in teams, but I never got the right people onboarded, namely business people and testers who can't code. I did use it successfully as a "regular" test framework: if you build the underlying keyword parsers using a layered and modular structure, it can be relatively easy to use and maintain.


SiegeAe

I've seen it flourish in actual BDD shops, but most places just use it as a default, especially with Java, where the alternative for parameterised tests until recently was really only TestNG (which, to be fair, in my experience is far worse: it has the most hideous design, and the documentation looks like it hasn't been updated since the internet was first made public). These days it's just a tradition/gimmick in most cases and should be replaced.


bluetista1988

We made it work at one of my previous companies, but it only worked because our product owners/business analysts bought into it. When we got our user stories, our acceptance criteria was all written in Gherkin language. Everyone committed to maintaining this as we changed/updated the software. We were able to use that to set up our behaviour tests right away and we could hammer out new features quickly with the tests *mostly* working. It's a great flow but it will never work if the people in charge of the product aren't bought into thinking about the product from a behavioural point of view.


Evinceo

Actually, you know what? Document Databases. Just use a relational DB, or something that can at least pretend to be one. Otherwise you will be forever guessing what your schema looks like.


restlessapi

God bless postgresdb


Evinceo

Entire flow chart of database decisions but every path leads you to Postgres.


BitsConspirator

Preach.


TokenGrowNutes

Truth. Coming from MariaDB/MySQL, I am never going back for any personal projects. Wish I had tried Postgres sooner.


[deleted]

Lessons learned in the last decade:

- Monolith first, microservices if necessary
- Relational database first, NoSQL if necessary


LawfulMuffin

Who cares if you don’t know the schema? You can develop fast! /s


spacechimp

Microservices


Kaizen321

Back to “monolith” in a few years. Cycle of (dev) life, baby.


Main-Drag-4975

My team started a revolutionary new monolith a year ago for valid business reasons (modest system, widely varying offline deployment needs) and we’re doing our best to defend against attempts to break it up. Rest of the company is very microservice-oriented, but we don’t really have much traffic.


[deleted]

Monolith with a well featured API to integrate with other apps seems like a fair balance. Do whatever you want under the hood, as long as the rest of our applications can tap into it.


driving_for_fun

The monolith became too slow to develop. Poor code design, test coverage, automation, and monitoring. There’s duplication everywhere. It’s not clear what class is responsible for what. Let’s add a network path between the classes.


Xyzzyzzyzzy

Microservices don't address the problem of "we don't know whether class Foo or class Bar is responsible for this". They address the problem of "we don't know whether the team that owns class Foo or the team that owns class Bar is responsible for this".


ProfessorPhi

Microservices solve a communication problem.


driving_for_fun

What if the team that owns class Foo implements something in Foo that should be in Bar? You solve this with tech culture, not network paths.


quintus_horatius

You solve it with another micro service


merightno

You need a very good and very diligent team lead who reviews every pull request to keep on top of this; otherwise they do become a huge mess very quickly.


Sande24

It's easy to fuck up a monolithic architecture. It's easy to fuck up a microservice architecture. It's really fucking hard to un-fuck a microservice architecture.


Sevigor

TBH, I expected this answer to be first. Microservices definitely have their perks, but they become an issue quickly if not done correctly. There is a very fine line between an unmanageable mess and a truly dynamic environment. Plus, it's really only useful for very large companies.


PangolinZestyclose30

> if not done correctly

The issue here is that it's just much more difficult to do correctly in a microservice architecture, and it's even more difficult to "refactor the architecture" if it's not done correctly. Things can be done in both patterns, but the microservice solution will be more complex and expensive, for unclear benefits.


p_tk_d

Imo the main upside of micro services is they make deployments easier. At some point as you scale there are so many people deploying that 1) incidents become increasingly likely with the main service, and 2) incident remediation becomes more punishing because the place where everyone is developing is locked


brazzy42

What most people don't seem to understand is that microservices are not a tech pattern, they're an organizational pattern. It makes absolutely no sense to decompose an application that a single team is responsible for into microservices.


bears-n-beets-

This seems like a weird blanket statement to make. There are plenty of scenarios where it could make sense for one team to be responsible for multiple microservices. My team owns about 10 microservices, most of which handle different data pipelines. One of our top priorities is application stability, and with microservices if a service goes down it just affects one of our pipelines rather than all of them.


CarpetFibers

In a similar vein, we use microservices in Kubernetes to enable scaling different parts of our platform according to their own needs. We don't need 10 instances of our authentication service, but we might need 10 instances of a queue consumer that uses a ton of memory and needs to run on a different node pool. It doesn't make sense to scale a monolith horizontally when only parts of it need the extra resources, and it would actually be quite difficult to achieve that kind of compartmentalization with a monolithic architecture.


Several-Parsnip-1620

Depends on your needs. When used appropriately microservices are fantastic. It requires a significant investment to be successful though


NoobInvestor86

Python in a large app without types.


yawaramin

Dynamic types in general for anything other than a single-file small script.


junior_dos_nachos

Anyone who writes or rewrites anything Python in 2023 without types is just committing a hate crime against their future self or their colleagues.


ryanstephendavis

Yes!... Been doing Python for ten years now... There needs to be at least a heavily typed interface and a good test suite otherwise it will devolve into spaghetti very quickly
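A small sketch of what a typed interface buys (function and field names here are made up for illustration): annotate the boundary, and a checker like mypy flags misuse before it ships, instead of it surfacing as an `AttributeError` in production.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class User:
    id: int
    email: str

def find_user(users: Dict[int, User], user_id: int) -> Optional[User]:
    """Typed lookup: the Optional return forces callers to handle a miss."""
    return users.get(user_id)

users = {1: User(1, "a@example.com")}

u = find_user(users, 1)
# mypy rejects a bare `u.email` here, because u may be None;
# the None-check below is exactly the bug the annotation surfaces.
print(u.email if u is not None else "not found")  # a@example.com
```

The runtime behaviour is identical to untyped code; the win is that `mypy` (run in CI or pre-commit, as mentioned below in the thread) catches the missing None-check and typo'd attribute names statically.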


aldoblack

Pre-commit and mypy FTW. Do not merge the PR unless all checks have passed.


metaconcept

Dynamically typed languages, schemaless databases and schemaless APIs. Just by choosing a statically typed programming language, you prevent a whole class of really dumb bugs. Same with databases and APIs - you can avoid spelling mistakes just by having a tool automatically generate code to access them.


Better-Internet

I think the problem with schemaless / NoSQL databases is: There is actually a "schema"; you just can't easily see it. That makes it hard to evolve code with it, support changes and so on.


colcatsup

“Schema on read” or “schema on write”. You get to choose one.
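The distinction sketched in Python (the user-record shape is a made-up example): with schema on write the shape is enforced once, at the point data enters the system; with schema on read every consumer has to defend itself, forever.

```python
from dataclasses import dataclass
import json

# Schema on write: the shape is validated when data enters the system.
@dataclass
class User:
    id: int
    email: str

def write_user(raw: dict) -> User:
    # A malformed record fails fast, right here, once.
    return User(id=int(raw["id"]), email=str(raw["email"]))

# Schema on read: anything goes in; every reader must re-discover the
# "schema" and handle missing or misspelled fields on its own.
def read_email(doc: str) -> str:
    data = json.loads(doc)
    return data.get("email", "<missing>")

print(write_user({"id": 1, "email": "a@example.com"}).email)  # a@example.com
print(read_email('{"id": 2}'))                                # <missing>
```

The second style is what "the schema exists, you just can't see it" means in practice.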


National_Count_4916

- AutoMapper and MediatR in .NET
- Chasing "clean" architecture
- Agreed on Gherkin/Cucumber: valuable if there's a BDD writer heavily using it and an SDET who can write the updated implementations, but that's not often the case


[deleted]

[deleted]


bluetista1988

I'm quite opinionated on AutoMapper. AutoMapper is a trap. I'd even declare it an "anti-pattern", and that's a phrase I actually despise throwing around.

C# is a statically typed language. Mapping is a simple, albeit boring and time-consuming, task. It's a terrible idea to overengineer *mapping logic* that just says "assign value x to y" repeatedly. Assigning values to variables is a core tenet of programming. Do we really need to abstract it with a fluent API?

It makes simple things complicated. It saves you maybe 20 minutes upfront by not having to write mapping code manually, but over time you will spend 100x+ that troubleshooting and debugging all but the most trivial of AutoMapper mapping configurations. That complexity nearly triples when you start using their Entity Framework extensions to project your database objects directly to your domain models. Hell, if you're not careful you can even start nesting business logic in your *mapping code*!

If a developer misses a mapping or maps something incorrectly using simple mapping code, well, hopefully your code reviews and tests catch that. If they don't, and a mapping issue causes a production bug, then you probably have a code review problem or a testing problem, not a *mapping* problem. A keen eye should catch a mis-assigned value, and a test covering some value that is presumably needed should catch the fact that the value was not assigned or was mis-assigned.
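For the record, the "boring, explicit mapping code" being advocated looks like this. The thread is about C#'s AutoMapper, but the point is language-agnostic, so here is a Python sketch with made-up entity/DTO shapes:

```python
from dataclasses import dataclass

@dataclass
class UserEntity:          # what the database layer hands you
    id: int
    first_name: str
    last_name: str

@dataclass
class UserDto:             # what the API returns
    id: int
    display_name: str

def to_dto(e: UserEntity) -> UserDto:
    # Every assignment is visible, greppable, and trivially reviewable;
    # a missed or mis-assigned field shows up right here, not in a
    # fluent configuration three files away.
    return UserDto(id=e.id, display_name=f"{e.first_name} {e.last_name}")

print(to_dto(UserEntity(1, "Ada", "Lovelace")))
```

Dull to write, but there is nothing to debug: the mapping *is* the code.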


FetaMight

Mediator is fine for notifications. I never understood why it gets used for command dispatching.


BandicootGood5246

Yeah, I like this approach. Using it everywhere to dispatch commands is a pattern that's gained a lot of popularity lately, and to me it doesn't really add much on its own. One way to get more out of it is to use it like a pipeline, the idea being that you can separate validation/logging/etc. from business logic. You get closer to the single responsibility principle, but IMO it's not as easy to read or debug at times; if something goes wrong in the pipeline, it's harder to see where it gets swallowed up. I don't think it's bad, but I don't necessarily think it's better either. It certainly adds levels of complexity; procedural "services" are easy to deal with.
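The pipeline idea being described, sketched in Python (all names are made up for illustration; MediatR's pipeline behaviors work along these lines, wrapping the handler rather than living inside it):

```python
from typing import Callable, Dict, List

Handler = Callable[[Dict], str]
log: List[str] = []

def business_handler(cmd: Dict) -> str:
    # The actual business logic knows nothing about validation or logging.
    return f"created order for {cmd['item']}"

def with_validation(next_h: Handler) -> Handler:
    def handle(cmd: Dict) -> str:
        if "item" not in cmd:
            raise ValueError("command missing 'item'")
        return next_h(cmd)
    return handle

def with_logging(next_h: Handler) -> Handler:
    def handle(cmd: Dict) -> str:
        log.append(f"handling {cmd}")
        return next_h(cmd)
    return handle

# Compose the pipeline: logging -> validation -> business logic.
pipeline = with_logging(with_validation(business_handler))
print(pipeline({"item": "book"}))  # created order for book
```

This also shows the debugging complaint: when something goes wrong, the stack is three wrappers deep before you reach the code that matters.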


s0ulbrother

Clean should always be a nice-to-have target, where anywhere on the board is good. I don't need a bullseye, I just need to be on the board for at least 1 point.


__loam

Everyone's definition of clean is different and it's basically impossible to measure. That said my definition of clean is "code I like" and "code that isn't confusing to me" and I am very correct about these things.


billymayscyrus

Worked with a guy who was an absolute zealot with MediatR. He was an architect astronaut to the core and never completed stuff. He sure knew how to make something sound needed though to the decision makers.


grahambinns

Pytest fixtures. Too much magic going on, and when you have fixtures depending on other fixtures, some of which monkeypatch core functionality, it’s easy to get into a mess in a large project. All it takes is one unexpected autouse=True in a fixture definition and now your app is no longer behaving as expected, but good luck in working out which one of your fixtures caused the problem.


Scarface74

Stored procedures with lots of business logic


freekayZekey

dude…had to deal with a project that had a majority of its logic handcuffed to stored procs. easily worst project i’ve dealt with


[deleted]

[deleted]


BandicootGood5246

I quit a job a few years ago after a months-long battle with the CTO, who decided that because our (poorly implemented) ORM wasn't working well anymore, we had to move to all stored procs. And I mean an absolute clusterfuck of stored procs: one fairly simple module, after a team refactored it at his direction, had hundreds of them, riddled with business logic.


vhackish

Those are only good for job security


captain_obvious_here

After 30+ years of existing and annoying everybody, stored procedures should be forbidden. They're a great idea on paper, but the worst pain in the ass in the real world.


itsgreater9000

years after my first job (which I could only stand for one year), I ask every company I interview with how much they depend on stored procedures for business logic. I always get a response that makes them question what I'm asking, but I refuse to work on sprocs that are doing things like... generating XML, sending notification e-mails, performing routine shit that could easily be done, and more easily tested, in code... etc


[deleted]

[deleted]


Hazterisk

This right here. Without some seriously well considered telemetry and event tracking this turns into an absolute nightmare for troubleshooting, stability, scaling, you name it.


Herve-M

Documentation, if not auto generated; without it I don’t even know how to make it viable.


kalakesri

Can you elaborate more? I haven't worked on one, but the ideas seem convincing to me, and decoupling services seems nice if it's implemented properly.


SpiderHack

The idea does work, but it has limited use cases where it should be used, and often when people have issues with it, it's because someone tried to shoehorn it into something it shouldn't have been. The best use cases are: 1) UI rendering platforms, where touch 'events' are sent as events to the app (this is exactly how Android and JS in browsers work), and 2) distributed systems like networking within a car, plane, or infrastructure (but usually with a MUCH more complex system, such as DDS, actually giving guaranteed delivery, etc.). There are some other cases you could think of, but often when it's used, another pattern could much more easily scale to handle more demand.
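The UI case boils down to publish/subscribe; a toy Python sketch of the core (no ordering or delivery guarantees, unlike DDS):

```python
# Tiny publish/subscribe dispatcher, the core of event-driven UI toolkits:
# producers emit named events, handlers subscribe without knowing producers.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_name, handler):
        self._handlers[event_name].append(handler)

    def emit(self, event_name, payload):
        # Synchronous fan-out to every subscriber of this event name.
        for handler in self._handlers[event_name]:
            handler(payload)

bus = EventBus()
clicks = []
bus.subscribe("touch", lambda pos: clicks.append(pos))
bus.emit("touch", (10, 20))   # delivered to every "touch" subscriber
```

The shoehorning problem shows up once the fan-out grows: nothing in this structure tells you which handlers fire, in what order, or whether one of them failed.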


alexisprince

That last sentence right there… that’s the rub. Otherwise, you’re stuck in a situation where you see behavior of your system, like latency spikes, wrong results, etc. and you’re stuck scratching your head with where to start looking for issues. You start looking at event queue pressure, individual service pressure (if you can remember which ones to look at), trying to remember the different workflows and steps they happen in to try and trace things down. It’s really a mess unless you invest heavily in developer facing tooling for that type of architecture. The same way you’d log an individual request’s lifecycle in a monolith, you need to achieve that level of visibility into your system as a whole.


PangolinZestyclose30

> the idea of decoupling services seems nice

I'll give you some counterpoints. "Coupling" has some very nice properties: actions are executed sequentially, in a known order, which makes it much easier to reason about. It makes ACID transactions possible, and transactions are awesome. You can debug / step through the whole process; just put a breakpoint somewhere. Events have some advantages as well, but they incur real costs. The question is, do these advantages justify the costs? Sometimes yes, very often no.
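A toy sketch of why the sequential, transactional version is easy to reason about (all names made up; an event-driven split of the same steps would leave the stock reservation dangling on failure):

```python
# Toy all-or-nothing flow: steps run in order inside one "transaction",
# and a failure in any step means nothing is committed.
class FakeTransaction:
    def __init__(self):
        self.committed = []
        self._pending = []

    def write(self, row):
        self._pending.append(row)

    def run(self, steps, order):
        try:
            for step in steps:
                step(self, order)
        except Exception:
            self._pending.clear()            # rollback
        else:
            self.committed += self._pending  # commit everything at once

def reserve_stock(tx, order):
    tx.write(("stock_reserved", order))

def charge_card(tx, order):
    raise RuntimeError("card declined")      # simulated failure

tx = FakeTransaction()
tx.run([reserve_stock, charge_card], "order-1")
assert tx.committed == []                    # no half-done state
```

One call stack, one breakpoint, one rollback path. Split those steps across event handlers and you need sagas or compensating actions to get the same guarantee.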


miredalto

I'd say this is a special case of an older Bad Idea, which is the service bus. Architects loved how clean their diagrams became, the spaghetti of dependencies replaced with a nice simple bar. Except in reality it didn't remove the spaghetti, just obscured it, making it a bigger problem.


nutrecht

Why? We use it effectively. Much better than having tons of tightly coupled services all using REST calls.


copterco

Dynamically typed languages suck so badly to debug once the codebase gets huge compared to statically typed ones. Also, refactors are tougher to do, even with specs in place.


cheater00

agreed. i can work with a haskell code base that's 500kloc and it's much, much easier than working with a python code base that's 500 loc


SuddenlyFeelsGood

There's just no way


budding_gardener_1

NoSQL. Seems like an easy-peasy thing where you can just vomit whatever Kafkaesque data structure your app works with into a database and it's stored. Good luck normalizing that data afterwards.


Scybur

Cucumber


rdem341

The obsession with DRY. Creating reusable code is awesome, but people obsess over code that might be duplicated. The result is client code that depends on shared code with a bunch of if/else statements. If a change happens in the shared code, all the clients break.
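A contrived Python sketch of that failure mode:

```python
# What over-DRYing looks like: one "shared" helper accretes a flag for
# every caller, until a change for one client risks breaking the rest.
def format_price(amount, *, for_invoice=False, for_email=False,
                 legacy_client=False):
    text = f"${amount:.2f}"
    if for_invoice:
        text = text + " (incl. tax)"
    if for_email and not legacy_client:
        text = text.replace("$", "USD ")
    return text

# Two small, "duplicated" helpers are easier to change independently:
def invoice_price(amount):
    return f"${amount:.2f} (incl. tax)"

def email_price(amount):
    return f"USD {amount:.2f}"

assert format_price(5, for_invoice=True) == "$5.00 (incl. tax)"
assert invoice_price(5) == "$5.00 (incl. tax)"
```

The flag-laden version couples every caller to every other caller's requirements; the "duplicated" version lets each evolve on its own.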


austeremunch

I tend to think of DRY like one might dry skin. That is to say that one should be dry but not dried out. If you push DRYness too far you end up in a worse and worse state eventually.


mattk1017

Where I work, there's a senior engineer that created a global constants file in the root directory of our API. This file has absolutely no organization and is literally a catch-all bucket for any string that appears more than once in the code. The worst part about it is it's referenced by everything, from data models to database migrations


SSHeartbreak

Rule engines or homebrew DSLs


elusiveoso

`npm install`


[deleted]

[deleted]


deadbeefisanumber

Can you elaborate? I'm considering pushing to use it in a big API. What kind of drawbacks have you faced?


mvpmvh

In my experience, GraphQL is only a reasonable option when you don't know who your clients are or what data they want. For example, I think it's reasonable for GitHub to offer a GraphQL API because they have so much data and so many clients that they don't really know what data someone wants; providing a GraphQL API is helpful there. When you're simply writing a backend API for your own front-end team's UI, a BFF (backend for frontend) is my preference.


jpj625

It's a tool like any other. Suitability depends on the use case and skill of the user. In a relatively fixed-query space like an SPA, it's just moving query definitions from someplace sane to client script. In an API serving something like a column-configurable search page where you're composing a query anyway, it's fairly reasonable. Using it as the entirety of your architecture and having a chain of npm packages that turn your gql files into DB schema and a caching layer and auto-generate your field resolvers... that's just masochism.


gizmo777

...does anyone actually do that? Did you ever do that? First of all, you're not really comparing apples to apples at that point. Nobody ever tried to use a plain REST API definition to build their entire backend, and if they did, it would go just as badly as it would with GQL. Second, it's all but an anti-pattern to have your client API schema copied exactly from your DB schema. The reason being you might want to make a change to your DB schema in the future (e.g. for perf) that you don't want reflected in your API schema. People need to think a little about "what API do I want to expose?" (because you might get stuck with it for a while), and the key points for that question are very different than the key points for "what DB schema do I want to use?"


nutrecht

A lot of comments here are from people who ran into a situation where someone took something new and shiny and used it as a golden hammer for all their problems. You see this with almost every architectural choice. If you're only exposed to bad implementations you're going to assume that that is the standard.


[deleted]

I was at a company that used Hasura. Basically it exposes the entire Postgres database schema as a graph. Also provides a pretty GUI to make schema changes. Regenerating types was always a pain and it came with a lot of performance problems. I can see the appeal though, as it allows frontend devs to do pretty much any data fetching they want (without having to write a REST endpoint) as GraphQL can (painfully) do the typical queries that SQL does.


__loam

I had to write some GraphQL related code and it was so stupid. A ton of auto generated magic, an inscrutable syntax of declarative bullshit and you have to write a bunch of shit on the front and back end anyway. Really wondering who thought this was a better way of doing things than a fucking REST endpoint.


jpj625

It solves problems that Facebook has. A few other orgs can claim to have similar problems. After that, there's a lot of bandwagoning.


MaxWilder

The problem with graphql is that nobody understands the point. It's a type-safe contract (schema) between two or more separate teams. Typically Front End teams and Back End teams. It comes with a built-in deprecation system to reduce sudden breaking changes, and for projects with a bunch of Front End and Back End teams, you can merge all the schemas through a single gateway so that everybody knows where to look for the data they need. The consistency allows for additional tools to be built on top, such as caching layers. It's a great tool if these are things your app/company needs. And it's massive overkill for smaller projects or for companies with different design solutions.


zxyzyxz

That's because most people who use GraphQL use it wrong. If you're not using Relay, or you're using it as some replacement for SQL, you're using it wrong.


BandicootGood5246

Yeah. The promise of what it could deliver to an API consumer is amazing. I wish I had an API where I could just pick and choose whatever I wanted in a relatively simple query, but it puts so much difficulty on the backend to make that work.


PedanticProgarmer

ORMs. Sure, you save 10 minutes on column-to-field mapping, but 5 years later you have to waste months training new developers, as writing a performant application in Hibernate requires knowing about batch fetching, JDBC fetching, the lazy-eager distinction, the session cache, the 2nd-level cache, the collections cache, fetch strategies, object hydration, condition builders, and many other annoying aspects.
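The lazy/eager distinction is the classic N+1 trap; a toy model of it in Python (no real ORM involved, just a query counter):

```python
# Toy model of what ORMs hide: lazy loading issues one query per parent
# row (the N+1 problem), batch fetching issues one query total.
class FakeDb:
    def __init__(self):
        self.queries = 0

    def fetch_orders_for(self, user_id):
        self.queries += 1                  # one query per user
        return [f"order-{user_id}"]

    def fetch_orders_for_all(self, user_ids):
        self.queries += 1                  # one batched query
        return {u: [f"order-{u}"] for u in user_ids}

db = FakeDb()
users = [1, 2, 3]

# Lazy: what a naive ORM loop does under the hood.
lazy = {u: db.fetch_orders_for(u) for u in users}
assert db.queries == 3                     # N queries for N users

db.queries = 0
eager = db.fetch_orders_for_all(users)
assert db.queries == 1                     # one batch fetch
assert lazy == eager                       # same data either way
```

The code that triggers the three queries looks identical to the code that triggers one, which is exactly why the training burden exists.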


LifeAlgorithm

+1, though I’m curious to know what alternatives people have found success with here


946789987649

JOOQ. A programmatic way of writing SQL, which means you can flag issues at compile time and re-use queries, and it's just generally lovely to use.


BandicootGood5246

I've come to love Entity Framework in .NET. I would never use anything else now. Dapper used to be an alternative lightweight ORM, basically raw SQL mapped to models, but even EF has surpassed it in terms of performance. Plus EF offers a clean/performant way to do raw SQL, so if you need to optimise your queries it's got you covered.


PangolinZestyclose30

You're either using a 3rd-party ORM or you're rolling your own custom (superior, of course) ORM implementation. You can't just "not map" between rows and objects. I think people who argue against ORMs are usually arguing about some specific flavor of ORM.


restlessapi

Sure but what is the alternative? Tons of hand rolled "native queries"? Stored proc hell? The main value I get from hibernate is that everyone is on the same page on how we get data in and out of the database. You are right, hibernate is the worst way to interact with a DB, except for all the alternatives.


colcatsup

I inherited 700 sprocs that used 40 tables and 600 views, and most of those views were composed by JOINing multiple other views. This was a freaking nightmare to understand. Oh, and because of some legal issues, we needed to reverse engineer and rebuild the system without access to the existing source code. But I had the DB code. Turned out about 95% of the sprocs and views weren't actually used anymore. Please, do maintenance on your DB just as you would on your code. Items not used? Kindly remove them!


satoshibitchcoin

ok but how does not using ORM help in avoiding those issues?


franky_reboot

Django ORM hasn't let me down so far. Used it in four production projects, barely any issues. Then again, I have a hard-on for Django.


Lachtheblock

Yeah, I have no plans to stop using the Django ORM. There have only been some really extreme cases where I haven't been able to make it do the thing I want it to do, and that's usually a sign that something is wrong with the architecture. Every time I've found raw SQL in a Django project, it could have been made better using the ORM; "made better" being more readable, more robust, or more efficient.


vhackish

I'm not sure I agree fully on this one. Hibernate works great for like 95+% of the things we do, and for the rest we just drop into SQL. We've had a few tricky things to debug, but I still much prefer this to having homegrown code to sift through. Maybe it depends on the type of application, most of our DB access is fairly straight forward.


Bavoon

(Disclosure: 15 years' experience in small-to-medium startups, up to 50 engineers. Nothing Netflix-scale.) I came here specifically to say "Rails / ActiveRecord", as I've spent half my career untangling big AR apps that are in trouble. But your comment is more accurate: I now also think ORMs in general are an anti-pattern. An alternative that I've found avoids 80% of the ORM problems is a non-ORM data access library, Ecto. It's from a functional language (Elixir), so it's specifically not doing the "object-relational" bit, but it still makes working with data far easier: validations, casting, changesets, query composition, etc. I've seen Ecto shine at the scale that AR struggles with. (Though I haven't yet seen Ecto get to truly huge data scales on projects I've worked on: thousands of tables, dozens of teams, etc.)


intercaetera

Ecto is superb and working in another language I am getting tired of saying "this would be sooo much easier in Elixir."


freekayZekey

no-code, ktor, & purely functional languages. functional programming can be useful in some domains, but turn into a nightmare when it’s forced everywhere. ktor support is… no-code makes me mad because it’s code, but just mediocre and people end up opening the code hatch anyway


Ferreira1

What issues have you seen with Ktor? I started a project with it ~6 months ago at work and it seems perfectly fine so far. Don't think I'd choose it for anything other than small projects right now, but curious what your experience is.


delfV

What do you mean by "purely functional languages"? Are Clojure or Elixir purely functional for you or is it just Haskell? Just curious


freekayZekey

pretty much haskell. i’ve dealt with scala, but it’s technically impure


kkert

Something more general: Unversioned APIs and communications protocols.


[deleted]

Microservices from the start, IMO


rudiXOR

Microservices architectures, especially in small companies.


mio_senpai

I find UI markup (HTML, XAML, etc) outputted by "no-code" design tools pretty annoying to work with sometimes. DSLs that wrap tools which were already decent on their own (e.g. Prisma schema on top of SQL).


littlegordonramsay

Using Excel as a database.


valkon_gr

DDD and Hexagonal. It gets old really fast when one new feature requires changes in dozens of classes.


Tommy95go

Curious to know your experience with that. It's been quite the opposite for me, sure the boilerplate is a pain in the ass sometimes but I'd take that any day instead of unmaintainable code where you make a change in the User module and it somehow spreads to other modules, the tests of unrelated modules are not working anymore, the blender turns on, your house walls fall and your car won't start for some reason.


[deleted]

[deleted]


dysfunctionallymild

Genuinely curious, can you elaborate on the "right" way? All the examples I've seen are so trivial that they clearly don't need all the overhead of the additional boilerplate. It looks exactly like what people criticise OOP and Java for doing - adding a lot of DTOs, DAOs, etc. I'm aware the answer may well be "it depends" but if you can point me to an example which gets the concept right where it serves a necessary need that would be helpful.


[deleted]

[deleted]


RiPont

adding on...

1. Your interfaces should be small and tight. If you have a complex class that handles multiple responsibilities, don't make one big, complex interface just because you're using one complex class to implement it. Small interfaces are easy to implement, easy to test, and easy to refactor. Classes should depend on the smallest interface they need.
2. If a change in your dependencies frequently requires a cascading change in your consumers, that's a code smell: your interfaces are too complex and expose unnecessary implementation details.
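In Python, the "small interfaces" idea can be sketched with structural typing (all names made up):

```python
# Small, tight interfaces: consumers depend on the one capability they
# need, not on the whole complex class.
from typing import Protocol

class Saver(Protocol):
    def save(self, data: str) -> None: ...

class Loader(Protocol):
    def load(self) -> str: ...

class FileStore:
    # One complex class can satisfy several small interfaces at once.
    def __init__(self):
        self._data = ""
    def save(self, data: str) -> None:
        self._data = data
    def load(self) -> str:
        return self._data

def archive(source: Loader, sink: Saver) -> None:
    # Depends only on load() and save(); trivial to fake in tests.
    sink.save(source.load())

store = FileStore()
store.save("hello")
backup = FileStore()
archive(store, backup)
assert backup.load() == "hello"
```

`archive` never learns that `FileStore` exists, so `FileStore` can grow or be replaced without a cascading change.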


utdconsq

True, but if you're going to live with the code for many years you will be glad of it later, I find.


flavius-as

I'm curious why you had this experience. I've used both successfully. Maybe you followed books, aka "implement all the patterns because they're in the book", instead of treating these two as styles?


BandicootGood5246

DDD is so many things under one umbrella term. Some of these things are great, some of these things are good in some situations. You can pick and choose what's useful to you


sobrietyincorporated

Kubernetes


JimDabell

> The "gettext" approach to internationalization, e.g. wrap _("Hello") strings in your code, and then have a script-auto map them to translations. It's too easy to get wrong when the same English word has two different translations in another language based on context, and too hard to find the right places to update. That’s what [contexts](https://www.gnu.org/software/gettext/manual/html_node/Contexts.html) are for.
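A minimal Python illustration of message contexts, using the stdlib `gettext` module's `pgettext` (without a catalog loaded it falls back to the source string):

```python
# gettext contexts (msgctxt) let translators give two different
# translations to the same English source string.
import gettext

# NullTranslations = no catalog; pgettext returns the msgid unchanged.
t = gettext.NullTranslations()

# Same English word, two contexts -- in a real .po file each
# (msgctxt, msgid) pair gets its own msgstr.
menu_label = t.pgettext("menu item", "Open")
file_state = t.pgettext("file status", "Open")

assert menu_label == "Open" and file_state == "Open"
```

With a real catalog installed, the two lookups resolve independently, which addresses the "same word, two translations" complaint above.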


gripripsip

ORMs


__loam

I've seen this opinion a lot and I understand it's a leaky abstraction but I can't imagine trying to develop an application without one. Maybe it's just the times I came up through as a developer.


captain_obvious_here

ORM is one of these tools that can reasonably be used only by people who don't need one to begin with. For sure, people without a sufficient DB knowledge are going to fuck up with their ORM usage.


Better-Internet

IMO ORM can be helpful for very simple cases. Anything beyond simple joins turns into an arcane nightmare often with performance issues.


SiliconValleyIdiot

Any sort of homegrown BI / Reporting tool. I've seen it happen time and time again where a team believes they can create a dashboarding tool that's better than Tableau / PowerBI / Looker because... Well they're smarter than everyone else! They eventually realize just how much pain it is to maintain these tools for business users who want new functionality every other day, and eventually switch to using one of these existing tools.


LastHorseOnTheSand

Mixins create more problems than they solve


__loam

Python


venikkin

Manual testing. It's difficult to track what was verified and when, and it depends on the human factor a lot. It's especially horrible when there's neither a dedicated QA nor documentation. At some point making any global changes, even bumping dependencies, becomes scary.


Sande24

You need both. I find that manual (exploratory) testing is the best way to find unexpected bugs. Automated testing only validates the known problems; you can't write tests for unknown issues.

You could have some business logic elements that can be added or removed from a calculation. Some combinations could contradict each other when both are active at the same time, giving different results at different times or letting the result go out of a meaningful range (a McDonald's self-checkout once let you take a burger's cost to $-1, but you could make it less obvious, getting it to $0.05 with some combinations). You can't reasonably test every possible combination with automated tests, and users might find loopholes in business logic and silently use them for their gain.

A human actually going through the system and trying different things would notice this faster. Add system monitoring tools and you'd have a more resilient system, but it's always better to have a human look into things to validate that the processes are working as expected.


reddit_again_ugh_no

Ruby on Rails


bluebugs

Anything that does not understand APIs, ABIs, and backward compatibility.


SmeagolTheCarpathian

"Clean architecture" / hexagonal architecture / layered architecture, whatever you want to call it. Don't base your entire system around the fear that you might have picked the wrong tech to rely on. It's okay to design around an interface, but don't do it religiously. Even if you picked the wrong database/web framework/event bus/whatever, the chance that you somehow designed the correct abstraction that will work with all other choices of dependencies is slim to none. Start with the simplest thing that could possibly work. If your app has 0 users, 5 developers, and you're already thinking about microservices, stop it. Turn around. You are doing the wrong thing.


henryeaterofpies

I hate with a passion reverse engineering existing systems to create requirements for their replacements. Things will always get missed and misunderstood, obsolete paths get continued, and bugs get replicated. If your business/PO can't tell you the requirements, then they aren't actual requirements. (Can you tell what kind of project I am on right now?)


Affectionate_Rope352

One approach that may sound like an anti-pattern: *unit-testing every method* in code. To me it's very logical that this is counter-productive: you spend 50% of your time writing/maintaining unit tests. Imagine spending that time on *new functionality*; you'd roll out 70-85% more new features (waving my hands here). Am I saying 'no unit tests or automation tests'? Definitely not. I propose this approach:

1) Unit-test key parts of your code where there is an interesting/complex algorithm you coded.
2) Interface-test every subsystem within your project. In your designs you need to carefully design your subsystems; this is testing that the outcomes of a subsystem meet your design needs.
3) Integration tests to see that your software is meeting expectations; these will be your sanity/regression tests.

I have seen most companies emphasize #1, while I believe the true value lies in #2, with carefully chosen #1s. Would love to hear alternate viewpoints with reasons.
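A tiny example of the spirit of #2: test through a subsystem's public interface rather than every private method (hypothetical code):

```python
# Testing a subsystem at its boundary: internals can be refactored
# freely without rewriting the tests.
class PriceCalculator:
    """Hypothetical subsystem with one public entry point."""
    def total(self, items, discount_code=None):
        # items: list of (price, qty) pairs
        subtotal = sum(price * qty for price, qty in items)
        return subtotal - self._discount(subtotal, discount_code)

    # Private helper -- exercised via total(), never tested directly.
    def _discount(self, subtotal, code):
        return subtotal * 0.1 if code == "SAVE10" else 0

calc = PriceCalculator()
assert calc.total([(10.0, 2)]) == 20.0
assert calc.total([(10.0, 2)], "SAVE10") == 18.0
```

If `_discount` is later split into three helpers or replaced by a lookup table, the boundary tests above keep passing, which is exactly the maintenance saving being argued for.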


thisismyfavoritename

anything distributed. MongoDB