Slow query when using @variable in Where clause [modified]

Wendelius

    Maysam Mahfouzi wrote:

    after I disabled the index which only was on Date column, both queries ran very fast

    Yeah, that's one of those annoying features of the SQL Server optimizer. It isn't very good at index selection.

    Maysam Mahfouzi wrote:

    But when I set @start to 100,000, the query become a long-running one

    Yes, that was predictable since the further you go, the more rows have to be read before you're "in the right position". And I suspect that this was the way both query versions behaved?

    Maysam Mahfouzi wrote:

    In the real-world running database though, the Date value is greater for greater ID columns for sure.

    Is that something we can rely on? If it is, it would be possible to change the query to be based on id (however, in that case you should regenerate the test environment data to correspond to the actual data). Another option: can you change the parameters? Instead of defining the start point as the number of previous rows, is it OK to use some other mechanism? (...thinking about the solution...)


Meysam Mahfouzi (#19)

    Mika Wendelius wrote:

    And I suspect that this was the way both query versions behaved?

    It only happened for @c = 1

    Mika Wendelius wrote:

    Is that something we can rely on?

    Probably, but I think it would be better to write a query independent of Date values. Frankly, I don't think my query is all that complicated, and everybody may now and then find themselves dealing with such queries. It's really sad that SQL Server is not able to handle it. Thanks for trying to help me, Mika; please let me know if you come up with any solution.


Wendelius (#20)

      What about the parameter question I asked? I was thinking about a solution where you define the starting point as a float. The initial starting point is 0, and you pass how many rows you want per page (for example 10). The next time you call the query, you specify the starting point as the float found on the last row of the page you got, plus again how many rows you want. In action, this means you first create a new column:

      ALTER TABLE News
      ADD StartPoint AS ((Culture * 100000000) + CONVERT(float, Date))
      PERSISTED

      Then you index it:

      CREATE NONCLUSTERED INDEX X_StartPoint ON News (StartPoint)

      Now the query would look like this:

      DECLARE @Start float, @Count INT
      SET @Start = 0
      SET @Count = 5
      DECLARE @c TINYINT
      SET @c = 1;
      SELECT TOP(@Count)
      id, title, description, StartPoint
      FROM News a
      WHERE Culture = @c
      AND StartPoint > @Start
      AND StartPoint < ((@c + 1) * 100000000)
      ORDER BY StartPoint

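      As a hedged usage sketch, the next page would then be fetched by feeding back the StartPoint of the last row returned; the literal value below is purely illustrative, not real data:

      DECLARE @Start float, @Count INT, @c TINYINT
      -- @Start is the StartPoint of the last row of the previous page
      -- (100039000.25 is only an illustration)
      SET @Start = 100039000.25
      SET @Count = 5
      SET @c = 1
      SELECT TOP(@Count)
      id, title, description, StartPoint
      FROM News
      WHERE Culture = @c
      AND StartPoint > @Start
      AND StartPoint < ((@c + 1) * 100000000)
      ORDER BY StartPoint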

Meysam Mahfouzi

        Rob Philpott wrote:

        Have you tried inserting @c into a single line temporary table and joining on that?

        Dear Rob, I used the following query but it didn't make any difference:

        DECLARE @Start INT, @Count INT

        SET @Start = 10
        SET @Count = 5

        DECLARE @c TINYINT
        SET @c = 0;

        WITH paging AS (
        SELECT id, title, [description], ROW_NUMBER() OVER (ORDER BY Date DESC) rownum
        FROM news
        JOIN (SELECT @C AS Cult) AS CC ON news.Culture = CC.Cult
        --WHERE Culture = @c
        )
        SELECT * FROM paging WHERE rownum BETWEEN @Start AND (@Start + @Count - 1)


Rob Philpott (#21)

        Right, might be barking up the wrong tree then. How about using a proper temporary table, such as:

        declare @c tinyint
        set @c = 0

        create table #c (Culture tinyint)

        insert into #c select @c

        select id, title, [description] from news join #c on news.Culture = #c.Culture

        drop table #c

        Regards, Rob Philpott.


Meysam Mahfouzi (#22)

          Thanks Mika for the solution, but I cannot make the column persisted because of the following error: Computed column 'StartPoint' in table 'News' cannot be persisted because the column is non-deterministic.


Wendelius (#23)

            Sorry, at least one typo: the Date column must be in square brackets. I successfully ran a test like this; is it similar to your structure?

            CREATE TABLE News (
            [Date] datetime,
            [Id] int,
            [Culture] tinyint
            )

            ALTER TABLE News
            ADD StartPoint AS ((Culture * 100000000) + CONVERT(float, [Date]))
            PERSISTED


Meysam Mahfouzi (#24)

              Mika Wendelius wrote:

              CREATE TABLE News ( [Date] datetime, [Id] int, [Culture] tinyint ) -- ALTER TABLE News ADD StartPoint AS ((Culture * 100000000) + CONVERT(float, [Date])) PERSISTED

              I can execute this query too, but when I want to alter the current News table, it tells me that the column is non-deterministic. :confused: Apart from that, I didn't get the concept of multiplying Culture by 100000000. Why didn't you write something just like this?

              ADD StartPoint AS CONVERT(float, [Date])

              And yet another question: what's the difference between sorting on a float column rather than a date column?


Wendelius (#25)

                Maysam Mahfouzi wrote:

                when I want to alter the current News table, it tells me that the columns is non-deterministic

                Could you post the script for the News table?

                Maysam Mahfouzi wrote:

                I didn't get the concept of multiplying Culture by 100000000

                The idea is to create a single column which orders the Culture and Date columns together. To combine those columns into a single column with unique values, I converted the date to float so that I can add the culture to it. However, since cultures are 1, 2, 3..., I cannot add them to the date directly, since that would change the "date portion" of the float, so I decided to multiply the culture by a number big enough that it won't get mixed with the date. For example, if I added a new record to the News table right now with culture 1, the result would be 100039839.592728. After that, these numbers can be used as "pointers" to rows, to define a specific starting point for the query. This eliminates the need to read the previous records. So what I'm after is that if you have selected news page by page, the next query can always start from the exact point where the last record came from.
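                To see the arithmetic concretely, here is a small illustrative query (assuming the same 100000000 multiplier; CONVERT(float, GETDATE()) yields the days since 1900-01-01, with the time of day as the fraction):

                SELECT (1 * 100000000) + CONVERT(float, GETDATE()) AS StartPoint
                -- culture 1 contributes 100000000 and the date roughly 39839.59
                -- (days since 1900-01-01), giving a value like 100039839.592728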


Meysam Mahfouzi (#26)

                  So I suppose you are suggesting an alternative way to replace the ROW_NUMBER() function, so that we do not need to calculate the row number from scratch each time, am I right? If we use:

                  ROW_NUMBER() OVER (ORDER BY news.[Date] DESC)

                  each time, all rows have to be sorted based on Date and then row numbers assigned to them. But with your approach, we do not need to sort the Date column at all. If I'm right, can we conclude that ROW_NUMBER() OVER (ORDER BY news.[Date] DESC) is not a good option for tables containing millions of rows? Here is my table script:

                  CREATE TABLE [News](
                  [ID] [bigint] IDENTITY(1,1) NOT NULL,
                  [Url] [varchar](300) NULL,
                  [Title] [nvarchar](200) NULL,
                  [Description] [nvarchar](3000) NULL,
                  [Date] [datetime] NULL,
                  [Rank] [float] NULL,
                  [ViewCount] [int] NULL,
                  [Vote] [smallint] NULL,
                  [Culture] [tinyint] NULL,
                  CONSTRAINT [PK_NewsItem_1] PRIMARY KEY CLUSTERED
                  (
                  [ID] ASC
                  )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                  ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
                  ) ON [PRIMARY]


Wendelius (#27)

                    Yes, your interpretation was correct. That's what I tried to explain (but with lots of useless words in between :)). It seems that SQL Server 2005 won't create this column while 2008 will. Too bad. However, I think we could do this with the datetime column also:

                    DECLARE @Start datetime, @Count INT
                    SET @Start = GETDATE()
                    SET @Count = 5
                    DECLARE @c TINYINT
                    SET @c = 1;
                    SELECT TOP(@Count)
                    id, title, description, Date
                    FROM News a
                    WHERE Culture = @c
                    AND Date < @Start
                    ORDER BY Date DESC

                    The idea is the same. Set the starting point to the current datetime, fetch a page, and when you want the next page, set the starting point to the last date in the previous page (from the last row). The Date column must be unique in this version, so two different news items must differ by at least 1 ms in date. You can try it by testing different values for the @Start variable.
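                    A quick sketch of fetching the second page under this scheme; the literal date below is hypothetical, and in practice it would be the Date of the last row of page one:

                    DECLARE @Start datetime, @Count INT, @c TINYINT
                    -- illustrative value: the last Date seen on page one
                    SET @Start = '2009-01-28 10:15:00.000'
                    SET @Count = 5
                    SET @c = 1
                    SELECT TOP(@Count)
                    id, title, description, Date
                    FROM News a
                    WHERE Culture = @c
                    AND Date < @Start
                    ORDER BY Date DESC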


Meysam Mahfouzi (#28)

                      But I guess there is a problem with this approach: when we show pagination buttons at the bottom of the page, how can the user go to page 7 from page 1 without knowing the last date of page 6?


Wendelius (#29)

                        Okay, if it's possible to skip pages, then this won't work. I started to think about some kind of pre-pagination, but I think that won't work either... Hmm, are the news deleted often? Could it be acceptable that a time-consuming operation is executed when a record is deleted? What I'm thinking is that if we had an ordinal number on the rows (1, 2, 3, 4...) and we knew the maximum at every moment, then page 6 (with 10 news per page) would be from (maximum - (5 * 10)) to (maximum - (6 * 10)) + 1. Now if we create a unique index on the ordinal column, we get the maximum very fast and the query will also be very fast. The downside is that if a record is deleted, the ordinals must be rearranged from the deleted position up to the maximum. However, this would be one simple operation (although it may take a while). Do you see problems with that solution?
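                        A minimal sketch of that idea, assuming a hypothetical unique, indexed Ordinal column on News and newest-first pages:

                        DECLARE @Page INT, @PageSize INT, @Max INT
                        SET @Page = 6
                        SET @PageSize = 10
                        -- the maximum is cheap to find thanks to the unique index on Ordinal
                        SELECT @Max = MAX(Ordinal) FROM News
                        SELECT id, title, [description]
                        FROM News
                        WHERE Ordinal BETWEEN @Max - (@Page * @PageSize) + 1
                        AND @Max - ((@Page - 1) * @PageSize)
                        ORDER BY Ordinal DESC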


Meysam Mahfouzi (#30)

                          Mika Wendelius wrote:

                          Do you see problems in that solution?

                          I don't see any problem with it, dude; it seems like something that would work :) But the question is: is this just MY problem with pagination, or is everybody else dealing with this problem for large numbers of rows? Have you ever had the same problem with paging? You know, my conclusion is that maybe pagination was never meant to be used for large numbers of rows. If a user wants to browse news, it's very unlikely that he/she wants to go to page 3000, right? We can provide the user with a filter on the news date; then he/she can first filter by date and then browse page by page.


Wendelius (#31)

                            Maysam Mahfouzi wrote:

                            Is this just MY problem with pagination, or everybody else is dealing with such problem for large number of rows

                            I think I'm not the best person to answer that. The applications I mostly design act a little differently: although they handle large amounts of rows, I try to keep the result sets small and avoid paging altogether. I think there are probably several different solutions, and the one I suggested may be one of them. Another approach could be paging per date, as you mentioned. One technical modification could be to use partitioning (but that requires Enterprise Edition), etc.

                            Maysam Mahfouzi wrote:

                            You know my conclusion is that, maybe, pagination has never been meant to be used for large number of rows

                            That's my understanding too. It really doesn't make sense if you're on page 212347 :) Also, one thing we haven't spoken about is caching. The pages change quite seldom, so you could cache them (or, say, the first 100 pages) and serve them from the cache. In that case you just need to know when news is added in order to refresh the cache. That can be done using polling or, better yet, using SqlDependency. Still, if a cache is used and you go beyond the cached data, the search mechanism should be efficient. So try the last suggestion and let's see what it does. Also try utilizing a cache; it greatly helps you maintain the overall performance of the system.


Meysam Mahfouzi (#32)

                              I'm sorry to bother you with this long conversation, but I'd like to continue it until I feel I have no more questions :) I'm thinking of a problem with the last approach. It's true that it does not make sense to be on page 243546, but think of a weblog: I can see paging for blog posts, and I want to go to the last page! So there has to be some way for me to get to the last page (to see the first blog post), which may be page 23453! Also, the problem with your last solution is that it will not work if we want to paginate a result set drawn from more than one table, right?

                              Mika Wendelius wrote:

                              One technical modification could be to use partitioning (but that requires Enterprise Edition) etc.

                              How can partitioning help? Can you show me a starting point? Think of a site like FriendFeed: on this site I can see the recent activities of a user. These activities have probably been collected from different tables in the database. In this scenario, what happens if I want to see the very first activities of a user? (I have to go to the last page of activities.)


Wendelius (#33)

                                Maysam Mahfouzi wrote:

                                I'm sorry to bother you with this long conversation

                                You're not bothering me at all :)

                                Maysam Mahfouzi wrote:

                                The problem with your last solution is that it will not work if we want to give paging to a result set from more than one table

                                That's correct since the numbers are precalculated per table.

                                Maysam Mahfouzi wrote:

                                How can partitioning help

                                Since partitioning divides the rows into smaller sets, it improves performance and makes parallel querying easier for the DBMS. For a quick look at partitioning, see: Partitioned Table and Index Concepts.

                                Maysam Mahfouzi wrote:

                                These activities have been probably collected from different tables in database

                                This is the same problem as multiple tables in a query. I don't immediately have a good idea of how to do it using SQL, but again, a technical tool could be indexed views (again an Enterprise Edition feature). It could be something like this: you do the join for all rows and include row numbering in the result set, and then the view is indexed (this partly solves multi-table queries and ordering).
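                                As a rough sketch only, over hypothetical dbo.Users and dbo.Activities tables (note that SQL Server indexed views require SCHEMABINDING and do not allow window functions such as ROW_NUMBER(), so the row numbering itself would still need a precalculated column or the outer query):

                                CREATE VIEW dbo.UserActivity
                                WITH SCHEMABINDING
                                AS
                                SELECT u.UserId, u.UserName, a.ActivityId, a.ActivityDate
                                FROM dbo.Users u
                                JOIN dbo.Activities a ON a.UserId = u.UserId
                                GO
                                -- the unique clustered index is what materializes the view
                                CREATE UNIQUE CLUSTERED INDEX IX_UserActivity
                                ON dbo.UserActivity (UserId, ActivityDate, ActivityId)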


Meysam Mahfouzi (#34)

                                  Dear Mika, you have not been on CodeProject for a long time; hope you are fine. I have asked a question here. Could you please take a look at it?
