The more I work in the testing domain, the more I learn how difficult it is to answer this question. On the one hand, we need to be very diligent and specific about the things we are testing. We have to be ready to answer what we tested and how, and to collect that information for retrospective review. On the other hand, many test professionals believe that the best way to find a defect is to get into something like a trance, a "free flying" through the functionality. Like a dog sniffing out its prey, the tester searches for defects by following gut instincts. This is far too difficult to organize in the form of a checklist or guidelines. You can hardly explain what it takes to ride a bicycle or jump with a parachute to a person new to those activities. All you will get is an "I have no idea what you are talking about" response. The same goes for testing.
So what is the best way for you, you may ask? The answer cannot be universal, because each team is unique. The way the team is used to working, the qualities of its members, and the peculiarities of the task all matter. I personally believe that a mix of both approaches works best for most organizations. But what kind of a mix should it be? One team may try to script everything, including tests executed only once (run and thrown away), as in the medical industry. Another may not bother writing such tests down at all, allowing its testers some time for non-scripted testing free of any bureaucracy. What mix works best for you is a matter of trial and error. Try different approaches and see what effect they have. If something turns out to be a waste of time, stop doing it.
Unlike test design based on product requirements, "free flying" testing is an in-context activity, so it takes less effort to generate ideas. When you do not have the application in front of you to click buttons and try different combinations, it is difficult to come up with ideas. Modeling software behavior in your head is a resource-consuming task: your mind simply does not have enough free processor time left for creativity.
In general, scripted testing adds organization and clarity to the testing process. It makes testing visible and controllable, which helps in meeting deadlines. With no tests scripted, you can hardly say how long a test run will take, because you can only guess how many tests you are going to execute.
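To make the contrast concrete, a scripted test is simply a check written down in advance and traceable to a requirement, so the number of tests (and the schedule) is known before execution starts. A minimal sketch in Python; the `login` function, the requirement numbers, and the expected behavior are all hypothetical, for illustration only:

```python
def login(username, password):
    # Hypothetical stand-in for the system under test,
    # included only so the example runs on its own.
    return username == "alice" and password == "secret"

def test_req_101_valid_credentials_grant_access():
    # Requirement 101: a registered user with the correct password can log in.
    assert login("alice", "secret") is True

def test_req_102_wrong_password_denies_access():
    # Requirement 102: a wrong password must be rejected.
    assert login("alice", "wrong") is False

if __name__ == "__main__":
    # Because every case is scripted, the run is repeatable and countable:
    # two tests, every time, with a predictable duration.
    test_req_101_valid_credentials_grant_access()
    test_req_102_wrong_password_denies_access()
    print("all scripted tests passed")
```

A "free flying" session has no such script: the tester improvises, which is exactly why its duration and coverage are hard to estimate in advance.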
So, the best way to handle this at the level of the testing schedule is to generate scripted tests against the requirements (without worrying too much about very specific ways of using the software at this point) and to allow a day or two (depending on how big the functionality is) for "free flying" testing. These days are best scheduled at the end of the testing stage: the scripted testing that precedes them will help remove blocking issues, and it will also give your testers time to get familiar with the new functionality.
It is all in your hands. Only you can say what is best for your specific situation. Good luck!