PIEBALDconsult wrote:
If you're going to teach a man to fish, do it right. Sure, in a pinch you can use a shotgun, but you shouldn't be teaching newbies to fish with a shotgun.
Now that really depends upon how hungry they are!
piticcotoc wrote:
Ok so you don't know. Anyone else?
I believe he was trying to tell you that the answer to your question is useless information. Even if testing showed it behaving one way, you could not rely on it behaving the same across different database engines, or even across different versions of the same engine. It is not defined anywhere that I know of.
PIEBALDconsult wrote:
Won't that rescan the table/index for each id?
Yes. You could load up a temp table (Emp_id, date) and then join the update to the temp table. It would lower the lookups by a fifth at the cost of increasing the code complexity. For something that will run once, I think the clarity of the correlated subquery outweighs any benefit on time. Of course, if we are talking gigabyte tables, I take it all back.
It would look something like this:
UPDATE TableName tmp
SET start_date = (SELECT start_date
                  FROM TableName
                  WHERE Emp_id = tmp.Emp_id
                    AND Ben_Code = 1)
WHERE Ben_Code <> 1
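The temp-table variant mentioned above can be sketched like this in Python with sqlite3 (table name and sample data are invented for illustration; the idea carries over to any engine that supports temp tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Sample table: each employee has one Ben_Code = 1 row carrying the start_date.
cur.execute("CREATE TABLE TableName (Emp_id INTEGER, Ben_Code INTEGER, start_date TEXT)")
cur.executemany(
    "INSERT INTO TableName VALUES (?, ?, ?)",
    [(1, 1, "2020-01-01"), (1, 2, None), (1, 3, None),
     (2, 1, "2021-06-15"), (2, 2, None)],
)

# Load the (Emp_id, start_date) pairs once into a temp table ...
cur.execute(
    "CREATE TEMP TABLE tmp_dates AS "
    "SELECT Emp_id, start_date FROM TableName WHERE Ben_Code = 1"
)

# ... then drive the update from the small temp table instead of
# re-running the correlated subquery against the full base table.
cur.execute(
    "UPDATE TableName SET start_date = "
    "(SELECT start_date FROM tmp_dates WHERE Emp_id = TableName.Emp_id) "
    "WHERE Ben_Code <> 1"
)
conn.commit()

rows = cur.execute(
    "SELECT Emp_id, Ben_Code, start_date FROM TableName "
    "ORDER BY Emp_id, Ben_Code"
).fetchall()
for r in rows:
    print(r)
```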
I would think the error is happening after the query is returned. Test the query in SQL alone and see if you get the error you describe.
Yes - if your values are constants.
It may not be as useless as it first appears. I have used this method to hold global variables that are used by more than one application.
Make sure the Primary Key can only contain 1 value using a check constraint:
ALTER TABLE [dbo].[MyTable] WITH CHECK ADD CONSTRAINT [CK_OneRow] CHECK (([Id]=(1)))
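Here is the same one-row-table idea demonstrated in Python with sqlite3 (the column `SomeGlobalSetting` is invented for illustration); any attempt to insert a second row is rejected by the check constraint:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One-row "globals" table: the CHECK constraint pins the key to 1,
# so a second row can never be inserted.
cur.execute(
    "CREATE TABLE MyTable ("
    "  Id INTEGER PRIMARY KEY CHECK (Id = 1),"
    "  SomeGlobalSetting TEXT"
    ")"
)
cur.execute("INSERT INTO MyTable VALUES (1, 'initial')")  # succeeds

rejected = False
try:
    cur.execute("INSERT INTO MyTable VALUES (2, 'second row')")
except sqlite3.IntegrityError:
    rejected = True  # CHECK constraint fires

print("second insert rejected:", rejected)
```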
I work in a closed environment and have no applications in third party hands so changing the db password is not an issue. Not sure how I would handle an application at other sites. Even encrypting a connection string would require a hardcoded key somewhere along the line.
The only downside is added development complexity. You need to manage rights in the application, which requires developing objects to manage users and their rights.
I agree with David. I usually create users/rights as structures in the database and use one SQL user/password to access the database. I break this rule when security is all-or-nothing: you either have full access or you have none. In that case I won't go to the trouble of creating user/rights structures and just use standard group security on the database.
Seems to me that if the security crew has to switch the on/off at the same times each day, you can easily use the time to decide whether it is an in/out punch. In fact, why is there a switch anyway? We reconcile via a pre-determined schedule for each employee. They should be at the clock at the beginning and end of each shift. If there are missing punches that correspond with these times, a supervisor has to manually reconcile the work day for the erred employee. We gave up on trying to identify a punch as in/out at the time of punch a long time ago.
I have to agree with Marcus. The correlated subquery is redundant. I can't imagine any structure and/or data that would make this subquery useful. tb2.CustomerID = tb1.CustomerID will always return both columns from the same row. If CustomerID is duplicated within the table, then it will throw an error. It also seems strange that AccountName is not dependent upon AccountId as their names imply. Are you sure your structure is designed correctly?
Remove [#Chas] from the entire query. Change your where statements to:
chassisdb.dbo.tblopenjit.chassisnbr LIKE @Chassis_Search
WorkPlacementorder.salesorderno LIKE @Chassis_Search
salesorderno = @Chassis_Search
I question the last where clause - why isn't it also a LIKE? Seems like it would fail when wildcards are used.
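The difference is easy to demonstrate in Python with sqlite3 (table name and sample values are invented): `=` treats the wildcard character literally, while `LIKE` interprets it as a pattern.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tblopenjit (chassisnbr TEXT)")
cur.executemany("INSERT INTO tblopenjit VALUES (?)",
                [("CH-1001",), ("CH-1002",), ("ZZ-9",)])

search = "CH-%"  # user-supplied pattern containing a wildcard

# '=' compares the literal string "CH-%" and matches nothing ...
eq_rows = cur.execute(
    "SELECT chassisnbr FROM tblopenjit WHERE chassisnbr = ? "
    "ORDER BY chassisnbr", (search,)).fetchall()

# ... while LIKE expands the wildcard.
like_rows = cur.execute(
    "SELECT chassisnbr FROM tblopenjit WHERE chassisnbr LIKE ? "
    "ORDER BY chassisnbr", (search,)).fetchall()

print(eq_rows, like_rows)
```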
Not really knowing the structure of your keys - something like this should work.
SELECT
s.PI,
s.JobID,
s.FormID,
s.ShiftID,
s.StartEvent,
s.SW,
s.SG,
e.EndEvent,
e.EG,
e.EW
FROM
[BaseTable] s
INNER JOIN
[BaseTable] e
ON (s.PI = e.PI AND
s.JobId = e.JobId AND
s.FormId = e.FormId AND
s.ShiftId = e.ShiftId AND
e.EndEvent = (SELECT
MAX(EndEvent)
FROM
[BaseTable]
WHERE
s.PI = PI AND
s.JobId = JobId AND
s.FormId = FormId AND
s.ShiftId = ShiftId))
WHERE
s.StartEvent = (SELECT
MIN(StartEvent)
FROM
[BaseTable]
WHERE
s.PI = PI AND
s.JobId = JobId AND
s.FormId = FormId AND
s.ShiftId = ShiftId)
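A runnable reduction of the same self-join pattern, in Python with sqlite3 (one key column, JobId, stands in for the four-column key above; table and sample data are invented): `s` supplies the row with the MIN StartEvent per key and `e` the row with the MAX EndEvent, giving one combined row per key.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE BaseTable "
            "(JobId INTEGER, StartEvent INTEGER, EndEvent INTEGER, SW TEXT, EW TEXT)")
cur.executemany("INSERT INTO BaseTable VALUES (?,?,?,?,?)", [
    (1, 10, 15, "first", "a"),
    (1, 20, 25, "mid",   "b"),
    (1, 30, 35, "last",  "c"),
    (2,  5,  9, "only",  "d"),
])

# Self-join BaseTable to itself: s = earliest start row, e = latest end row.
rows = cur.execute("""
    SELECT s.JobId, s.StartEvent, s.SW, e.EndEvent, e.EW
    FROM BaseTable s
    INNER JOIN BaseTable e
      ON s.JobId = e.JobId
     AND e.EndEvent = (SELECT MAX(EndEvent) FROM BaseTable WHERE JobId = s.JobId)
    WHERE s.StartEvent = (SELECT MIN(StartEvent) FROM BaseTable WHERE JobId = s.JobId)
    ORDER BY s.JobId
""").fetchall()
for r in rows:
    print(r)
```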
I would strip your query down to the INNER JOINs and execute into a temp table. Optimize this if necessary. Then I would apply the LEFT JOIN lookups to the temp table. I have found that complex queries can sometimes be sped up by a huge factor when broken up logically. In this case, you have the engine joining and executing numerous correlated subqueries (in the form of LEFT JOINS) at the same time.
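The staged approach can be sketched in Python with sqlite3 (all table names and data are hypothetical stand-ins for the real structure): materialize the INNER JOIN core into a temp table, then apply the LEFT JOIN lookups to that smaller result.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical tables standing in for the real structure.
cur.execute("CREATE TABLE orders (order_id INTEGER, cust_id INTEGER)")
cur.execute("CREATE TABLE customers (cust_id INTEGER, name TEXT)")
cur.execute("CREATE TABLE notes (order_id INTEGER, note TEXT)")
cur.executemany("INSERT INTO orders VALUES (?,?)", [(1, 10), (2, 20)])
cur.executemany("INSERT INTO customers VALUES (?,?)", [(10, "Ann"), (20, "Bob")])
cur.execute("INSERT INTO notes VALUES (1, 'rush')")

# Step 1: materialize the INNER JOIN core into a temp table.
cur.execute("""
    CREATE TEMP TABLE core AS
    SELECT o.order_id, c.name
    FROM orders o
    INNER JOIN customers c ON c.cust_id = o.cust_id
""")

# Step 2: apply the LEFT JOIN lookups to the already-joined temp table.
rows = cur.execute("""
    SELECT core.order_id, core.name, n.note
    FROM core
    LEFT JOIN notes n ON n.order_id = core.order_id
    ORDER BY core.order_id
""").fetchall()
for r in rows:
    print(r)
```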
Your query will only work if you have 2 warehouses. What happens if there are 3 or more? Try:
SELECT
a.warehouse,
a.product,
a.standard_material
FROM
scheme_stockm AS a
WHERE
EXISTS (SELECT *
FROM scheme_stockm
WHERE product = a.product AND
standard_material <> a.standard_material)
ORDER BY
a.product,
a.warehouse
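The same EXISTS query run in Python with sqlite3 (sample data invented) shows that it reports every warehouse row of a product whose standard_material disagrees anywhere, regardless of how many warehouses there are:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE scheme_stockm "
            "(warehouse TEXT, product TEXT, standard_material REAL)")
cur.executemany("INSERT INTO scheme_stockm VALUES (?,?,?)", [
    ("W1", "widget", 1.50),
    ("W2", "widget", 1.75),  # disagrees -> all three widget rows reported
    ("W3", "widget", 1.50),
    ("W1", "gadget", 2.00),
    ("W2", "gadget", 2.00),  # all agree -> not reported
])

rows = cur.execute("""
    SELECT a.warehouse, a.product, a.standard_material
    FROM scheme_stockm AS a
    WHERE EXISTS (SELECT *
                  FROM scheme_stockm
                  WHERE product = a.product
                    AND standard_material <> a.standard_material)
    ORDER BY a.product, a.warehouse
""").fetchall()
for r in rows:
    print(r)
```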
Google 'Resume Parser'. No need to re-invent the wheel.
Limiting the loop will slow your performance. There really isn't a clean way of doing it without resorting to pre-import queries or using a cursor, and neither choice will result in faster performance. Proper indexes on the conditions will improve performance given the query you listed. If you are moving lots of data into an empty table and you can fully control its integrity, then remove all constraints (except identities) and indexes from the new table, load the data, and put them back on afterwards. This can greatly speed up a large insert. You can also take a look at BULK INSERT if your SQL Server version allows it.
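The drop-load-rebuild pattern looks like this in Python with sqlite3 (table and index names are invented; in SQL Server you would use DROP INDEX / CREATE INDEX the same way around the bulk load):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE target (id INTEGER, val TEXT)")
cur.execute("CREATE INDEX ix_target_val ON target (val)")

# Drop the index before the bulk load ...
cur.execute("DROP INDEX ix_target_val")

# ... load in bulk (executemany instead of row-by-row round trips) ...
cur.executemany("INSERT INTO target VALUES (?, ?)",
                ((i, f"row{i}") for i in range(10000)))

# ... then rebuild the index once, over the full data set.
cur.execute("CREATE INDEX ix_target_val ON target (val)")
conn.commit()

count = cur.execute("SELECT COUNT(*) FROM target").fetchone()[0]
print(count)
```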
Something like this should work:
INSERT INTO [NewProductTable]
(
[NewProductName],
[NewCol1],
[NewCol2]
)
SELECT
REPLACE([ProductName],'_',' '),
[Col1],
[Col2]
FROM
[Product]
WHERE
(REPLACE([ProductName],'_',' ') = ProductName) OR
(REPLACE([ProductName],'_',' ') NOT IN (SELECT [ProductName] FROM [Product]))
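The filter logic can be checked in Python with sqlite3 (sample product names invented): rows whose cleaned name would collide with an existing name are skipped, everything else is copied with underscores replaced.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE Product (ProductName TEXT, Col1 TEXT, Col2 TEXT)")
cur.execute("CREATE TABLE NewProductTable "
            "(NewProductName TEXT, NewCol1 TEXT, NewCol2 TEXT)")
cur.executemany("INSERT INTO Product VALUES (?,?,?)", [
    ("plain name",     "a", "b"),  # no underscores -> copied as-is
    ("has_underscore", "c", "d"),  # cleaned name is unique -> copied cleaned
    ("plain_name",     "e", "f"),  # cleans to 'plain name', which exists -> skipped
])

cur.execute("""
    INSERT INTO NewProductTable (NewProductName, NewCol1, NewCol2)
    SELECT REPLACE(ProductName, '_', ' '), Col1, Col2
    FROM Product
    WHERE REPLACE(ProductName, '_', ' ') = ProductName
       OR REPLACE(ProductName, '_', ' ') NOT IN (SELECT ProductName FROM Product)
""")

rows = cur.execute(
    "SELECT * FROM NewProductTable ORDER BY NewProductName").fetchall()
for r in rows:
    print(r)
```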
Be aware that any child records (think foreign keys) based on the skipped rows will be orphaned in the new structure.