Efficient searching

Tags: question, database, algorithms, performance, announcement
mjackson11 wrote (#1):
We have very large tables of price data that need to be searched efficiently. The table consists of:

  • id - bigint (primary key with clustered index)
  • iCurve - bigint - links to a small table defining price curves
  • dtFwdDate - forward date
  • dtEffDate - effective date
  • fValue - price

The usual search is:

SELECT fValue FROM table WHERE iCurve = 1 AND dtFwdDate = '01/01/2010' AND dtEffDate = '11/11/2009'

I created an index that keys on iCurve, dtFwdDate, dtEffDate and includes fValue. It works okay, but the index size is now larger than the data. We also have to update this table daily with very large numbers of data points (100,000+).

Questions:

  • What is the best way to index this table for fast searches?
  • How can we keep the index size small?
  • How can we optimize this so insertions are not painful? In terms of performance, I would rather have the performance problems on the insert side.

thx
Mark Jackson
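A minimal sketch of the schema and covering index described above, assuming SQL Server. The table name PriceData and the DATE/FLOAT column types are assumptions; only the column names and the index shape come from the post.

```sql
-- Hypothetical table matching the columns described in the post.
CREATE TABLE PriceData (
    id        BIGINT IDENTITY(1,1) NOT NULL,  -- surrogate key
    iCurve    BIGINT NOT NULL,                -- links to the small price-curve table
    dtFwdDate DATE   NOT NULL,                -- forward date (type assumed)
    dtEffDate DATE   NOT NULL,                -- effective date (type assumed)
    fValue    FLOAT  NOT NULL,                -- price (type assumed)
    CONSTRAINT PK_PriceData PRIMARY KEY CLUSTERED (id)
);

-- Covering index for the usual search: equality seek on (iCurve, dtFwdDate, dtEffDate),
-- with fValue in the leaf pages so the lookup never touches the base table.
CREATE NONCLUSTERED INDEX IX_PriceData_Curve_Fwd_Eff
    ON PriceData (iCurve, dtFwdDate, dtEffDate)
    INCLUDE (fValue);

-- The usual search (date literals written unambiguously).
SELECT fValue
FROM PriceData
WHERE iCurve = 1
  AND dtFwdDate = '2010-01-01'
  AND dtEffDate = '2009-11-11';
```

Because the index repeats iCurve, both dates, and fValue for every row, it holds nearly the same data as the table itself, so a covering index comparable in size to the table is expected here rather than a sign that something is wrong (as a later reply also notes).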

Covean wrote (#2):
If you are using SQL Server, I would say try the SQL Profiler together with the optimization tool (the Database Engine Tuning Advisor) and let it make a proposal. But the problem will be that you cannot create a small index and still get good performance out of it. One last idea is to remove the clustered index from the primary key, make it nonclustered, and put the clustered index on iCurve instead, but only if iCurve has many different values. I read this tip a few days ago in a SQL Server 2008 book; it is only efficient if you do not often query by id and if the new clustered column does not repeat the same value too often. A rough sketch of that change follows below.

      Greetings Covean
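A rough sketch of the clustered-index swap described above, written against the hypothetical PriceData table from the earlier sketch. The iCurve-only clustered key follows the suggestion; whether it pays off depends on how many distinct iCurve values there are and how rarely rows are looked up by id.

```sql
-- Test this on a copy first: both steps rewrite the table and rebuild
-- every nonclustered index.

-- 1. Drop the clustered primary key and re-add it as nonclustered,
--    so id is still enforced as unique but no longer orders the table.
ALTER TABLE PriceData DROP CONSTRAINT PK_PriceData;
ALTER TABLE PriceData
    ADD CONSTRAINT PK_PriceData PRIMARY KEY NONCLUSTERED (id);

-- 2. Cluster the table on iCurve instead (non-unique; SQL Server adds a
--    hidden uniquifier). The key could also be extended to the two date
--    columns so the physical order matches the usual search exactly.
CREATE CLUSTERED INDEX CIX_PriceData_Curve
    ON PriceData (iCurve);
```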

mjackson11 wrote (#3):
This is on MS SQL Server, BTW.

Mycroft Holmes wrote (#4):
Tuning a database is an art and there is no single silver bullet; you can only try a number of things and see whether they are worth the changes. For a start, the size of your indexes should not concern you; on a heavily indexed table this is not unusual. If your insert is a batch job (with 100k records it should be), it may be worth dropping the indexes for the insert and recreating them afterwards. This is a DRASTIC measure and its cost should be tested on another server. Other things to consider:

  • the change to the clustered index suggested above
  • moving the indexes to another physical drive
  • partitioning your data (think archiving, but you can still query across partitions)

These are but some of your options. I assume you have used the query profiler to assess the cost of your queries.
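One way to apply the "drop the indexes for the insert" idea without actually dropping them: on SQL Server 2005 and later a nonclustered index can be disabled before the bulk load and rebuilt afterwards. This is a sketch against the hypothetical PriceData table and index name from the earlier sketches; as noted above, test the cost on another server first.

```sql
-- Disable the covering index so the 100,000+ daily rows insert without
-- per-row index maintenance. (Do NOT disable the clustered index, or the
-- table becomes inaccessible until it is rebuilt.)
ALTER INDEX IX_PriceData_Curve_Fwd_Eff ON PriceData DISABLE;

-- ... run the daily bulk insert here (BULK INSERT, bcp, SSIS, etc.) ...

-- Rebuild the index once the load finishes; this step pays the cost,
-- so schedule it inside the nightly batch window.
ALTER INDEX IX_PriceData_Curve_Fwd_Eff ON PriceData REBUILD;
```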
