Question:
I need help with the basic concepts of querying with SQL.
dm
2007-06-19 08:40:34 UTC
I taught myself SQL strictly for querying purposes two years ago. At this point I have a very good understanding of how to get the information I want and can effectively use joins, case, decode, unions, views, etc.

However, I was never formally trained on how to optimize my queries to execute as efficiently as possible, and can't really seem to find anything on the net in layman's terms. So I was hoping for any pointers on speeding up queries. At the very least, could you answer this question?

When querying against more than one table using joins, does the order of the tables matter at all? For example, if I am joining table "A" (60 columns by 1 million rows) to table "B" (10 columns by 50 rows), does it matter if I write:

select * from a, b where a.index = b.index

compared to

select * from b, a where a.index = b.index

Does placing the large table first or second have any impact on execution speed?
Five answers:
Pfo
2007-06-19 10:29:10 UTC
Here's a clue: let's say you have a table with 1 million records. One particular field in that table, which is usually searched on, can only have 1 of 5 different values. Index that field, and the database can jump straight to the matching entries when you query on it. If the values are evenly distributed, it should only have to read about 1/5 of that table, as compared to scanning the entire thing.
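One way to see this in practice is to compare the query plan before and after adding the index. This is a minimal sketch in SQLite (through Python's sqlite3 module); the `orders` table, its columns, and the five status values are made-up examples, not from the original post.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
# 1000 rows, cycling through 5 possible values of the searched-on field.
conn.executemany(
    "INSERT INTO orders (status) VALUES (?)",
    [(s,) for s in ["new", "paid", "shipped", "closed", "canceled"] * 200],
)

# Without an index, the planner has to scan the whole table.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE status = 'paid'"
).fetchall()
print(plan[0][3])  # e.g. "SCAN orders"

# After indexing the frequently searched column, the planner can seek
# directly to the matching entries.
conn.execute("CREATE INDEX idx_orders_status ON orders (status)")
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE status = 'paid'"
).fetchall()
print(plan[0][3])  # e.g. "SEARCH orders USING INDEX idx_orders_status (status=?)"
```

The exact wording of the plan output varies between SQLite versions, but the shift from a full scan to an index search is the point.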



This website has a lot of information about optimizing SQL Server; it is not a simple topic, and those with programming backgrounds will be able to understand it better:



http://www.sql-server-performance.com
Grandpa Tom
2007-06-19 11:11:28 UTC
The primary key is the unique record identifier; on its own it won't speed up queries that filter on other fields. Make indexes on two or three fields that group your records logically. City, Zip, State, Age are classics, but it always depends on the available fields and the type of data. Then use WHERE filters to limit the number of records the query will need to hash through.

...where Age >= 65 and Zip like '95%'
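The index-plus-filter idea above can be sketched in SQLite (via Python's sqlite3). The table name, the composite index on (zip, age), and the generated data are all hypothetical; only the WHERE clause comes from the answer.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE people (name TEXT, city TEXT, state TEXT, zip TEXT, age INTEGER)"
)
# 1000 made-up rows: zips all start with '958', ages cycle over 20-89.
conn.executemany(
    "INSERT INTO people VALUES (?, ?, ?, ?, ?)",
    [
        ("p%d" % i, "Sacramento", "CA", "958%02d" % (i % 100), 20 + i % 70)
        for i in range(1000)
    ],
)

# Index the fields the WHERE clause filters on, so the planner can narrow
# the candidate rows instead of hashing through the whole table.
conn.execute("CREATE INDEX idx_people_zip_age ON people (zip, age)")

rows = conn.execute(
    "SELECT name FROM people WHERE age >= 65 AND zip LIKE '95%'"
).fetchall()
print(len(rows))
```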



Of course, even if a field is a good index candidate, there is no reason to index it if it isn't something that will help limit the number of records in the queries you need to use. 'State' is a great index, but of no value if you don't analyze or use the data on a state-by-state basis.



Also, fixed-length fields are much faster to search than variable-length fields. That's a design-time consideration, so you're probably stuck with whatever you have in that regard. But the era of running into storage space limitations is pretty much over, and new tables should be created with fixed-length fields when possible.



Save queries that will be used multiple times as stored procedures and reuse them.
Jestep
2007-06-19 09:42:43 UTC
One thing you can do is avoid searching with *. Request the specific columns you need instead of using *.



Also, if you have additional columns that you commonly search on, other than a primary key, you can put an index on those columns, which will speed up processing. Indexes do slow down the database overall when data is inserted into a table with multiple indexes, so use indexes only when you know you need them.



The order of the tables in your example shouldn't make a difference.
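A quick way to check the join-order question from the original post, sketched in SQLite via Python's sqlite3. The tables "a" and "b" mirror the example (a large table joined to a small one); the join column is named idx here because INDEX is a reserved word in most SQL dialects, and the row counts are scaled down for the sketch.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (idx INTEGER, payload TEXT)")
conn.execute("CREATE TABLE b (idx INTEGER, label TEXT)")
# Large table: 1000 rows whose join key cycles over 0-49.
conn.executemany(
    "INSERT INTO a VALUES (?, ?)", [(i % 50, "row%d" % i) for i in range(1000)]
)
# Small table: 50 rows, one per join key.
conn.executemany(
    "INSERT INTO b VALUES (?, ?)", [(i, "group%d" % i) for i in range(50)]
)

# The cost-based planner decides which table to read first, so both
# orderings return the same result set.
r1 = conn.execute(
    "SELECT a.payload, b.label FROM a, b "
    "WHERE a.idx = b.idx ORDER BY a.payload, b.label"
).fetchall()
r2 = conn.execute(
    "SELECT a.payload, b.label FROM b, a "
    "WHERE a.idx = b.idx ORDER BY a.payload, b.label"
).fetchall()
print(r1 == r2, len(r1))
```

Note that `SELECT *` would put the columns in a different left-to-right order depending on which table is listed first, which is one more reason to name the columns explicitly.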
Sirena609
2007-06-19 10:33:17 UTC
Primary keys are just the unique identifier for each record in your table, and I don't think the order you put the tables in matters. Try adding some filters to your queries so there isn't as much data to sort through.
Chris S
2007-06-19 08:44:51 UTC
No, it won't matter.



However, searching is sped up if each table has a primary key defined.
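In SQLite, for example, a lookup on a primary key shows up in the query plan as a direct seek rather than a full-table scan. A minimal sketch (the `customers` table and its data are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?)", [(i, "c%d" % i) for i in range(500)]
)

# A lookup on the primary key is a keyed search, not a scan of all rows.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT name FROM customers WHERE id = 42"
).fetchall()
print(plan[0][3])  # e.g. "SEARCH customers USING INTEGER PRIMARY KEY (rowid=?)"
```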


This content was originally posted on Y! Answers, a Q&A website that shut down in 2021.