I have a BO report which is running very slowly.
One of the tables is going for a full table scan. I am trying to figure out a way to make the query hit an index on that table instead of scanning the whole table.
There is no unique key on this table; I checked, and there are cases where duplicate rows exist in it.
This is the BO-generated query:
SELECT
    module.module_id,
    activity_level.activity_level_value,
    module_staff.activity_type
FROM
    module,
    activity_level,
    module_staff,
    module_staff_activity
WHERE
    ( module_staff.module_id = module.module_id )
    AND ( module_staff_activity.module_staff_id = module_staff.module_staff_id )
    AND ( activity_level.activity_level_id = module_staff_activity.activity_level_id )
    AND ( module_staff.activity_type IS NOT NULL )
The module_staff_activity table has around 4 million rows. There is no primary key or unique key on this table. We have normal (non-unique) indexes on module_staff_activity.module_staff_id and on one more column, module_staff_activity.work_date, which is not used in the query.
Is there a way to write the query so that it hits the index and thereby runs faster?
Other thoughts?
It’s going to be tough to coax a query to use an index that doesn’t exist. The first step is to work with your DBA to create the indexes, be sure statistics are run to help the optimizer, and so forth. Database tuning is more of an art than a science, but a general rule is that the fields used in joins and conditions benefit from an index. With that in place, hopefully the optimizer will make good choices. If not, the concept of “table hints” can be used to “force” the use of preferred indexes. Try a search for “table hints” (without quotes) to review previous discussions on the topic.
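For example, based on the indexes you describe, the join column module_staff_activity.activity_level_id has no index at all. A minimal sketch of what your DBA might create, plus an index hint to force its use if the optimizer still prefers the scan (the index name here is made up, and the hint syntax assumes Oracle):

-- Hypothetical index on the unindexed join column; the name is made up.
CREATE INDEX msa_activity_level_idx
    ON module_staff_activity (activity_level_id);

-- If the optimizer still table-scans, an Oracle INDEX hint can force the index
-- (reference the table the same way it appears in the FROM clause):
SELECT /*+ INDEX(module_staff_activity msa_activity_level_idx) */
    module.module_id,
    activity_level.activity_level_value,
    module_staff.activity_type
FROM
    module,
    activity_level,
    module_staff,
    module_staff_activity
WHERE
    ( module_staff.module_id = module.module_id )
    AND ( module_staff_activity.module_staff_id = module_staff.module_staff_id )
    AND ( activity_level.activity_level_id = module_staff_activity.activity_level_id )
    AND ( module_staff.activity_type IS NOT NULL );

Note that since BO generates the SQL, hints usually have to go in at the universe level rather than the report; creating the missing index is the cleaner first step.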
I can help only if your DB is Oracle…
Try gathering statistics for the tables involved in the select statement, then look at the explain plan for that select; it should give you an understanding of how the select is processed.
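On Oracle, a minimal sketch of those two steps might look like this (the schema name is a placeholder, and EXEC is SQL*Plus shorthand):

-- Gather fresh optimizer statistics on the big table (repeat for the others);
-- 'MYSCHEMA' is a placeholder for the owning schema.
EXEC DBMS_STATS.GATHER_TABLE_STATS('MYSCHEMA', 'MODULE_STAFF_ACTIVITY', cascade => TRUE);

-- Then ask the optimizer how it plans to run the problem query:
EXPLAIN PLAN FOR
SELECT module.module_id, activity_level.activity_level_value, module_staff.activity_type
FROM module, activity_level, module_staff, module_staff_activity
WHERE module_staff.module_id = module.module_id
  AND module_staff_activity.module_staff_id = module_staff.module_staff_id
  AND activity_level.activity_level_id = module_staff_activity.activity_level_id
  AND module_staff.activity_type IS NOT NULL;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);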
If the explain plan doesn’t help, enable a trace, parse it, and that should explain everything…
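If you go the trace route on Oracle, a sketch along these lines produces a raw trace file you can format with tkprof (the event level and the trace-file name/location depend on your setup):

-- Extended SQL trace for the current session (Oracle 10046 event;
-- level 12 records both wait events and bind variable values):
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

-- ... run the slow report query here ...

ALTER SESSION SET EVENTS '10046 trace name context off';

-- Then format the trace file from the server's trace directory
-- (the file name below is a placeholder):
--   tkprof ora_XXXX.trc report.txt sys=no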