Re: MS SQL Server vs Oracle, MySQL or MongoDB

  • From: Hans Forbrich <fuzzy.graybeard@xxxxxxxxx>
  • To: Jeremiah Peschka <jeremiah@xxxxxxxxxxxxx>
  • Date: Mon, 01 Sep 2014 12:07:59 -0600

Thanks for that update.  I admit my SQL Server information is a decade old.

The SQL Server expert who advised me back then insisted that SQL Server does not need bind variables or an equivalent, and that the best way to handle things is to build the SQL statement each time by concatenating statement segments, because the engine re-parses every statement anyway.
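
The difference between the two approaches can be sketched in a few lines. This is a hedged illustration only: it uses Python's stdlib sqlite3 as a stand-in engine (not SQL Server or Oracle), and the table and column names are invented for the example. The point is purely textual: concatenation produces a distinct SQL string per value, while a bind variable keeps one statement text that the engine can parse once and reuse.

```python
import sqlite3

# Stand-in engine and toy table (names are hypothetical, for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO emp (id, name) VALUES (?, ?)",
                 [(i, f"name{i}") for i in range(100)])

# Concatenation: every lookup is a textually distinct statement, so the
# engine sees (and must parse) 100 different SQL strings.
concat_texts = {f"SELECT name FROM emp WHERE id = {i}" for i in range(100)}

# Bind variable: one statement text, parsed once, executed with
# different values each time.
bound_text = "SELECT name FROM emp WHERE id = ?"
for i in range(100):
    conn.execute(bound_text, (i,))

print(len(concat_texts))  # 100 distinct texts under concatenation, vs 1
```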

/Hans

On 01/09/2014 7:20 AM, Jeremiah Peschka wrote:

As an aside, SQL Server uses a similar parsing concept to Oracle. Queries are compiled as infrequently as possible and cached until something triggers a plan recompile or until the plan is forced out of memory. The SQL Server optimizer does a good enough job, and employs reasonably good algorithms, to ensure it doesn't spend too long compiling a single statement. The overarching idea is that it's better to run a lot of "good enough" plans than a few perfect plans. It's becoming rarer, with newer optimizer versions, to see compilation time-outs, but they do still happen.
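
The compile-and-cache behavior described above can be modeled as a toy sketch. This is not SQL Server internals, just a minimal illustration of the idea: plans are keyed by statement text, compiled only on a cache miss, and thrown away when something (a stats update, a schema change) triggers a recompile. All names here are invented.

```python
compile_count = 0
plan_cache = {}

def get_plan(sql_text):
    """Return a cached 'plan', compiling only on a cache miss."""
    global compile_count
    if sql_text not in plan_cache:
        compile_count += 1                  # compilation happens here
        plan_cache[sql_text] = f"plan for: {sql_text}"
    return plan_cache[sql_text]

def recompile(sql_text):
    """Simulate a recompile trigger (stats change, schema change, ...)."""
    plan_cache.pop(sql_text, None)

stmt = "SELECT * FROM emp WHERE id = ?"
for _ in range(1000):
    get_plan(stmt)        # compiled once, reused 999 times

recompile(stmt)           # e.g. statistics were updated
get_plan(stmt)            # compiled a second time

print(compile_count)  # 2
```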

---
sent from a tiny portion of the hive mind...
in this case, a phone

On Aug 30, 2014 6:59 PM, "Hans Forbrich" <fuzzy.graybeard@xxxxxxxxx <mailto:fuzzy.graybeard@xxxxxxxxx>> wrote:

    "but irrelevant here."

    You disassociate the transaction engine, and therefore the way the
    RDBMS is designed to behave, from the individual statement.  In my
    opinion, that is a fundamental flaw in the discussion.

    One basic assumption in Oracle is that a sequence of statements
    will be performed many times (transactional) and therefore it is
    important to parse ONCE and reuse the result repeatedly, at
    minimal cost for each successive execution.
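
The parse-once-execute-many pattern can be shown concretely with Python's stdlib sqlite3, used here only as a stand-in engine (table name invented). sqlite3 keeps a per-connection cache of prepared statements (the cached_statements connection parameter), so repeating the same parameterized text means the statement is prepared once and then cheaply re-executed, which is the amortization being described.

```python
import sqlite3

# Same statement text on every call -> the prepared statement is reused
# from sqlite3's per-connection statement cache rather than re-parsed.
conn = sqlite3.connect(":memory:", cached_statements=32)
conn.execute("CREATE TABLE t (n INTEGER)")

stmt = "INSERT INTO t (n) VALUES (?)"
for i in range(10_000):
    conn.execute(stmt, (i,))   # parse cost paid once, amortized 10,000 ways

total = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(total)  # 10000
```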

    Therefore, your comparison of SQL Server ('good enough parse' each
    time) vs Oracle (optimal parse as infrequently as possible) is
    irrelevant if the developer actually understands that.  If they do
    not, and they incur a new parse for each occurrence, then it is a
    sign that the developer has not bothered to learn the engine and
    is, deliberately or inadvertently, sabotaging the way the engine
    can perform.  And THAT is based on the transactional engine.

    Another assumption, in reporting/BI, is that the cost (time) of
    parsing is irrelevant when looking at the overall operational
    duration.  Therefore, spending additional effort during the
    parsing, to get an optimal plan for the technology involved
    (different plans for Exadata vs home-grown hardware), is desirable.
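
A back-of-envelope calculation makes the point. The numbers below are purely illustrative (not measurements from any system): even a generous half-second of extra optimizer effort disappears against a report that runs for half an hour.

```python
# Illustrative numbers only: extra optimizer effort vs a long BI query.
extra_parse_s = 0.5          # 500 ms of additional optimization time
report_runtime_s = 30 * 60   # a 30-minute report
overhead = extra_parse_s / report_runtime_s
print(f"{overhead:.4%}")  # 0.0278%
```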

    /Hans

    On 30/08/2014 1:22 PM, Yong Huang wrote:
    Hans,

    Any pointer would be appreciated. Note that I'm talking about
    parsing overhead in Oracle vs. SQL Server. I don't see the
    connection to transactions or resource management in general,
    which would be an interesting topic but irrelevant here.

    Yong Huang

    --------- original message ---------

    On 29/08/2014 9:21 AM, Yong Huang (Redacted sender
    yong321@xxxxxxxxx <mailto:yong321@xxxxxxxxx> for DMARC) wrote:
    > Reading Jonathan's blogs on SQL Server reminds me of Laimutis's old
    > question: Why is SQL statement parsing a big issue to Oracle but
    > not to SQL Server?
    >
    > //www.freelists.org/post/oracle-l/Any-reason-not-to-have-logic-in-the-db,17
    >
    > That's a great question and I'd love to hear some comments.

    That question has been around for several decades and has created
    some very amusing and heated flame wars, especially in CDOS.

    But it does tie to 'what is a transaction' and how the different
    engines handle transactions, as well as 'what is resource
    management' and how the different engines manage resources.

