N00bPancakes: Why did they even ask then?
SortOfTested: The day I became "the captain now" was the day fat ORMs went away at our company.
It's a problem ignored by many developers. When you need to improve performance, usually all you can do is add indexes to make the generated queries faster.
netikras: @Gxost Aaand when the indexes become too slow and you don't see a sane way to tickle the DB into changing its exec plan... the end?
Or perhaps there IS a way to ask Postgres to change its exec plan? I like how Oracle OEM does it: its analyzer can suggest a few execution plans for a DBA to choose from. Not sure how that's done in Postgres.
IntrusionCM: My experience with MS is pretty low.
Mostly because MS is a red flag to me.
But I've seen some LINQ stuff on devRant...
And it is like it always is: Yes, to a certain point it's useful.
But: Raw SQL must be an option. This is not for debate.
You can test SQL. You can seed a database, even fuzz it and do tests.
And till now I've never seen an ORM that makes it impossible to run a raw SQL query.
Compiler checking sounds great.
Except it is not. The compiler can check whether it's valid SQL and maybe run some static checks.
But only a human can see the complexity of, e.g., different inputs for search queries, reason about the resulting row counts, and put all that together to write an optimized SQL query.
From experience, it's sometimes impossible to get an ORM to produce the SQL you need unless you force-choke its inner guts, which makes version upgrades a PITA.
Raw SQL should always be an option.
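In EF Core, for instance, that escape hatch is FromSqlRaw / FromSqlInterpolated; a minimal sketch, assuming a Foos DbSet and a minBar parameter (both names invented here):

```csharp
// FromSqlInterpolated turns the interpolated value into a SQL parameter,
// and the result can still be composed with LINQ before it executes.
var foos = context.Foos
    .FromSqlInterpolated($"SELECT * FROM foos WHERE bar > {minBar}")
    .OrderBy(f => f.Bar)
    .ToList();
```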
Entity Framework can get you pretty far, but you need to understand what queries it's coming up with and why. So you can't rely on it as an abstraction over all that nasty SQL, or relational databases in general.
But here's the problem with just using sql: The sql you execute is often determined at runtime. So *you* don't write sql. You write a program that generates sql at runtime. And that is all kinds of scary. I'm not talking about just shoving a parameter into a where clause. I'm talking about dynamically determining group bys and havings and wheres and fucking froms and mother fucking joins.
I've seen things, man. I've worked on long lived projects that have sent invalid queries to their database. So just go fix the sql, right? Wrong. The query isn't in the source code. There's no one place where the query is. This generator function calls generator functions which call generator functions.
So just use stored procedures, right? Wrong. That doesn't magic away the need for dynamic sql. You're just gonna end up using sql to generate sql.
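For a sense of why that's scary, a hypothetical hand-rolled generator (all names invented) might look like:

```csharp
// Each branch contributes a fragment; the final SQL never appears
// anywhere in the source, only the machinery that assembles it.
var clauses = new List<string>();
var args = new Dictionary<string, object>();

if (filter.MinPrice is not null)
{
    clauses.Add("price >= @minPrice");
    args["@minPrice"] = filter.MinPrice;
}
if (filter.CategoryId is not null)
{
    clauses.Add("category_id = @categoryId");
    args["@categoryId"] = filter.CategoryId;
}

var sql = "SELECT * FROM products"
    + (clauses.Count > 0 ? " WHERE " + string.Join(" AND ", clauses) : "");
```

And that's the friendly two-filter version; real ones nest and recurse.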
So yeah, maybe it's not the worst thing to use a library to generate sql rather than try to roll your own. Ya know, some kind of object-relational mapper you might call it.
At the end of the day, there is no perfect solution for connecting your application logic to your relational database. If you ever say "just use X" you're wrong. I don't care what X is.
@Lor-inc The complete list of column names shouldn't come from somewhere obscure. It should be a rather obvious array somewhere very close to the generator function for the specific operator for the specific table.
Table names shouldn't even be selectable, as in a relational database tables with overlapping schemas are merely a coincidence.
C0D4: When performance starts to matter, it doesn't matter how much you fine-tune it; at some point you're going to end up removing it and going raw.
This can be said for almost anything.
If you get any more complex than that, you need to either get an ORM or create views or generated tables or whatever your DBMS calls them.
@Lor-inc Have you ever seen an advanced search screen? How do you think all those optional filters and sorts work?
Or maybe a generated report with all kinds of configuration options?
All of what I mentioned would be a massive fuck up for simple crud. But simple crud isn't what I was talking about.
@M3m35terJ05h I didn't consider the kind of advanced search that's basically a GUI over SQL, I assumed we were talking about things you do with an ORM, i.e. complex queries behind simple actions. I don't think LINQ can even be used to dynamically specify columns, as you do that with C# and the SQL is generated at compile time.
@Lor-inc Entity Framework won't generate a query until it needs to. It'll convert your lambda expressions into *something* at compile time, but it doesn't know what SQL it needs to generate until it's time to generate it.
You can do something like:
IQueryable&lt;Foo&gt; query = context.Foos;
var value = request.BarFilter;
query = query.Where(foo => foo.Bar > value);
return query.ToList(); // database call here
@Lor-inc What you can do is request the whole entity of the "main" table you're querying and then dynamically tell it which of its relationships to include.
Like if Bar has a one-to-many relationship with Foo, you're querying Foo, and the user might sometimes request data from Bar, you can say:
IQueryable&lt;Foo&gt; query = context.Foos;
query = query.Include(f => f.Bar);
and that's a join that only happens sometimes. Entity Framework will go and modify the SELECT to include all the columns from Bar. You'll get a sequence of Foos, and their Bar will be null or an object depending on whether you ran that Include. You're probably getting more columns back than you need, but that's probably not the worst thing in the world for something like a report or search.
Of course you can narrow the select to specific columns but you do have to know what those columns are at compile time.
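A narrowed projection might look like this sketch (FooDto and the column names are hypothetical):

```csharp
// Only the projected columns end up in the generated SELECT;
// EF translates the navigation access into the join it needs.
var rows = query
    .Select(f => new FooDto
    {
        Id = f.Id,
        BarLabel = f.Bar.Label
    })
    .ToList();
```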
@Lor-inc I haven't dealt with really huge databases, but on my generated test database the SQL produced by LINQ performed well. With a million records and a lot of related entities it worked as fast as with fewer than a hundred records. Sure, I needed to add indexes, but then it worked fine. On MS SQL Server Express. Tried the same setup on MS SQL Server Enterprise and it worked faster, even without indexes.
If a company uses EF or Hibernate, the DBAs/devs with access should periodically evaluate the queries being sent as a sanity check. Issues with buffered round-tripping and deletes, and unintentional N+1 queries, are well known.
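In EF Core 5+, one cheap way to make that audit possible is to log the generated SQL from the context itself; a sketch (assumes Microsoft.Extensions.Logging for LogLevel):

```csharp
// Dumps every generated query as it's sent to the database.
protected override void OnConfiguring(DbContextOptionsBuilder options)
    => options
        .LogTo(Console.WriteLine, LogLevel.Information)
        .EnableSensitiveDataLogging(); // dev only: puts parameter values in logs
```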
@Lor-inc AutoMapper's ProjectTo lets you fetch only the columns required for the mapping. Since LINQ works with IQueryable&lt;T&gt;, it's possible to add any conditions and includes dynamically, and with custom extension methods for LINQ such a query can look pretty good.
Grouping and lazy loading are still broken, however. There's more than 1 n+1 issue.
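A ProjectTo sketch (FooDto and the mapping configuration are assumed to exist; ProjectTo lives in AutoMapper.QueryableExtensions):

```csharp
// The mapping profile decides which columns appear in the SELECT;
// the Where is still composed dynamically before the query executes.
var dtos = context.Foos
    .Where(f => f.Bar > value)
    .ProjectTo<FooDto>(mapper.ConfigurationProvider)
    .ToList();
```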
I'm not crazy about using AutoMapper for this use case. It has a fair amount of overhead, so using it in place of a select statement, or Dapper, leaves a lot of perf on the table for boilerplate that can be generated by IDEs.