A job taking 10 minutes is very little information to work with. You need to be far more specific, for instance identifying whether the whole 10 minutes is consumed by a single DF or script, or whether the time is spread evenly across the job. Then you could post the details of that specific part of the job and look for a solution there. Just as an example, a badly designed lookup can ruin an entire DF.
Besides, if you have a hardware bottleneck (network, disk), parallelism can actually make the performance problem worse.
I agree with Andres. DOP and run time are just two pieces of a very complicated puzzle. Are other things running at the same time? Which piece of the job takes the 10 minutes? What do the CPU, memory, network and disk stats look like on the DS and DB servers while it runs? A quick way to sample those stats is sketched below.
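If you don't already have monitoring on those boxes, even a crude sampler run during the job window will tell you whether you are CPU-, memory-, disk- or network-bound. Here is a minimal sketch using Python's third-party psutil package (my own assumption as tooling; sar, iostat or perfmon would do the same job):

```python
# Crude resource sampler: run on the DS and DB servers while the job executes.
# Requires the third-party psutil package (pip install psutil).
import time
import psutil

def sample(interval_s=5, samples=120):
    psutil.cpu_percent(interval=None)          # prime the CPU counter
    disk_prev = psutil.disk_io_counters()
    net_prev = psutil.net_io_counters()
    for _ in range(samples):
        time.sleep(interval_s)
        cpu = psutil.cpu_percent(interval=None)        # % CPU since last call
        mem = psutil.virtual_memory().percent          # % RAM in use
        disk = psutil.disk_io_counters()
        net = psutil.net_io_counters()
        disk_mb = (disk.read_bytes + disk.write_bytes
                   - disk_prev.read_bytes - disk_prev.write_bytes) / 1e6
        net_mb = (net.bytes_sent + net.bytes_recv
                  - net_prev.bytes_sent - net_prev.bytes_recv) / 1e6
        print(f"cpu={cpu:5.1f}%  mem={mem:5.1f}%  "
              f"disk={disk_mb / interval_s:7.1f} MB/s  net={net_mb / interval_s:7.1f} MB/s")
        disk_prev, net_prev = disk, net

if __name__ == "__main__":
    sample()
```

If disk or network throughput is already saturated while CPU sits low, raising DOP will only add contention, which ties in with the earlier point about hardware bottlenecks.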
By setting DOP = 6 you do not FORCE it; you are only specifying an upper limit. Whether and how DOP is applied also depends on the data structures and on whether DS can do an effective round robin. If you have no primary keys and there really is no way to partition the data, you will find that DS often ignores the DOP setting and just processes everything as is, because it needs some reliable way of splitting the data and merging it again.
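Just to make that split-and-merge requirement concrete, here is a toy Python sketch of the principle (my own illustration, not how DS implements DOP internally): rows are dealt out round-robin to parallel streams, and the partial results can only be stitched back together because each row carries a reliable key.

```python
# Toy illustration of round-robin partitioning and key-based merging.
# Without a stable key, the merge step has nothing to anchor on.
from itertools import cycle

def round_robin_split(rows, dop):
    """Deal rows out to `dop` partitions, one at a time."""
    partitions = [[] for _ in range(dop)]
    for part, row in zip(cycle(partitions), rows):
        part.append(row)
    return partitions

def merge(partitions, key):
    """Recombine the partial results into one ordered set using the key."""
    merged = [row for part in partitions for row in part]
    return sorted(merged, key=key)

rows = [{"id": i, "val": i * 10} for i in range(1, 11)]
parts = round_robin_split(rows, dop=6)            # at most 6 parallel streams
result = merge(parts, key=lambda r: r["id"])      # only works with a reliable key
print(len(parts), [r["id"] for r in result])
```

In the DF itself the "key" is typically the table's primary key or an explicit partition column; if neither exists, the engine falls back to a single stream exactly as described above.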
In my experience, either the Dataflow needs to be redesigned or the database is simply not very effective at generating the result set quickly. I tune from both directions, and I rarely find a process that I can't improve.