Architectural Game Changer?
Ever since I first heard about it, it has seemed to me that one of the most obvious applications for a tablet computer, and the iPad in particular, would be multitouch CAD.
I am sure it is being worked on now by various companies, but here's a quick sketch of how the UI might work, along with a few examples of basic gestures that might translate intuitively from point-and-click and hotkeys to multitouch.
The applications would be numerous: for architects in the studio or on site, engineers, designers, clients and contractors, its usefulness can hardly be overstated. And that's just the beginning. The critical factors that make this the first viable tablet computer for CAD applications include the fact that:
1. No stylus is needed (just something else to lose, especially on site or travelling around),
2. Multitouch support allows far more gestures to be interpreted than a single-touch interface would (which starts to approach the number of commands required for a meaningful CAD experience), and
3. A viable screen size, with enough screen real estate that the tool menus don't crowd out the CAD information.

What would also help is if pressure gestures were interpreted differently from tap gestures. You would then have a two-tier touch relationship that can intuitively separate your wish to access a command from your wish to manipulate the vector data, based upon pressure, the sensitivity of which you could customise just as you already customise mouse settings like double-click speed.

So, for example, you could rearrange the tool icons wherever you wanted on the screen, rendered at 50% transparency, so you could see what they are without their interfering with the vector data beneath/above them. To select a tool icon, you might push down on it slightly, like dipping into a paint pot. Or you could hold down a specific button, like a 'shift' key, with the other hand to activate the tool icons, which you would then pick with your right hand (I'm assuming a right-handed bias here).
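As a rough illustration of the two-tier idea, here is a minimal sketch (hypothetical names and a made-up threshold value; real hardware APIs would differ) of how a single pressure reading could route a touch either to the tool layer or to the vector data, with the threshold exposed as a user setting:

```python
# Hypothetical sketch: classify a touch as a data-manipulation touch or a
# tool-activation "dip into the paint pot" press, based on a user-tunable
# pressure threshold (analogous to tuning double-click speed for a mouse).
from dataclasses import dataclass

@dataclass
class Touch:
    x: float
    y: float
    pressure: float  # normalised: 0.0 = lightest contact, 1.0 = firmest press

def classify_touch(touch: Touch, press_threshold: float = 0.6) -> str:
    """Return 'tool' for a firm press, 'data' for a lighter touch.
    press_threshold is the user-customisable sensitivity setting."""
    return "tool" if touch.pressure >= press_threshold else "data"

# The same spot on screen does two jobs depending on pressure:
print(classify_touch(Touch(120, 80, 0.3)))  # data  (light touch edits geometry)
print(classify_touch(Touch(120, 80, 0.8)))  # tool  (firm press picks the icon)
```

The point of the sketch is that one continuous input (pressure) cleanly splits the interface into two layers without any extra buttons or modes.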
A couple of other examples. A finger on the screen would select an object tentatively, which you could then drag around; a tap by a second adjacent finger within a specified area would confirm the selection, avoiding accidental selection of objects. You could cycle through different snaps with a tap on a button with the other hand. Alternatively, a finger on the screen followed by a slight push downwards (if the selection tool was active) could open a selection box, which you could drag to the opposite corner and release to activate. Or, if you were already within a command with multiple options, it could throw up a 50% transparent box with, say, 3 options you could access comfortably with the adjacent finger, as shown above. This is based on the great work that Bonnier and BERG have done with Mag+, but the difference here is that you wouldn't need to lift your finger from the surface to activate it, so the workflow isn't interrupted.
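The tentative-select / second-finger-confirm gesture could be sketched like this (a toy model with an assumed confirm radius; the "specified area" would be a tunable setting in practice):

```python
# Hypothetical sketch: a first touch selects tentatively, and a second tap
# only confirms the selection if it lands within a specified radius of the
# first finger -- stray touches elsewhere on the screen are ignored.
import math

CONFIRM_RADIUS = 60.0  # px; the "specified area" around the first finger (assumed value)

def confirm_selection(first: tuple, second: tuple) -> bool:
    """first, second: (x, y) touch points. True if the second tap is
    adjacent enough to the first finger to confirm the tentative pick."""
    dx, dy = second[0] - first[0], second[1] - first[1]
    return math.hypot(dx, dy) <= CONFIRM_RADIUS

print(confirm_selection((100, 100), (140, 100)))  # True  (adjacent finger)
print(confirm_selection((100, 100), (400, 300)))  # False (stray touch, no confirm)
```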
But with multitouch it gets better, of course: you could select commands with one hand while the other manipulates the objects you want to move, copy, trim, etc. ... at the same time. And this is why this way of inputting CAD data beats any other method hands down... as it were.

Zooming via pinch (familiar to anyone who has used an iPhone, iPod touch or iPad) could be combined with a three-finger drag (which would pan in the direction of the swipe) to yield a manoeuvre that zooms and pans at the same time, so the very way you think about manipulating objects starts to be rewritten and becomes far easier and quicker. Combi-commands, if you will. Add pressure support on top, and you can start to see how your work rate could increase dramatically.

Obvious and global commands like opening files and undo/redo would be really easy to find and use, and the size of certain toolbars could perhaps be based upon frequency of use. To show how all this might work, here's the workflow of a basic copy command as an example:
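A combi-command is, mechanically, just one update that applies both transforms at once. Here is a minimal sketch (hypothetical view representation) of a single zoom-and-pan step driven by simultaneous pinch and three-finger-drag input:

```python
# Hypothetical sketch of a "combi-command": the pinch scale factor and the
# three-finger pan offset are applied to the view in one simultaneous step,
# instead of as two separate zoom and pan operations.
def combi_zoom_pan(view: dict, pinch_scale: float, pan_dx: float, pan_dy: float) -> dict:
    """view: dict with 'scale', 'x', 'y'. Returns the view after zooming by
    pinch_scale and panning by (pan_dx, pan_dy) at the same time."""
    return {
        "scale": view["scale"] * pinch_scale,
        "x": view["x"] + pan_dx,
        "y": view["y"] + pan_dy,
    }

view = {"scale": 1.0, "x": 0.0, "y": 0.0}
# One gesture: pinch out to 2x while dragging left and down.
view = combi_zoom_pan(view, pinch_scale=2.0, pan_dx=-50.0, pan_dy=30.0)
print(view)  # {'scale': 2.0, 'x': -50.0, 'y': 30.0}
```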
1. Select the copy tool or the object to be copied: single finger tentative, cycle through the relevant snaps, second-finger tap confirms.
(2. At the same time you can Pan and zoom until you are happy with the window you are seeing. You could do this with the other hand.)
3. Select the base point for the copy command. Again, single finger tentative, cycle through the relevant snaps if applicable, second finger tap confirms.
(4. Pan and zoom again if necessary.)
5. Specify the end point of the copy command: tentative, then confirm.

No wires, it can be done anywhere, and I think it would take very little time for it to become second nature. I could go on ad nauseam, from how each command would work to optimising layouts for single-hand use (you're using the other one to hold it) or two-hand use. Of course, all of this is predicated on these gestures not leading, over long periods, to RSI or carpal tunnel syndrome, which using a mouse for long stretches without a break can sometimes induce.
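The copy workflow above can be sketched as a small state machine (hypothetical class and state names): each tentative-then-confirm pair advances the command one step, and the optional pan/zoom in steps 2 and 4 happens freely without disturbing the command's state.

```python
# Hypothetical sketch of the copy command as a state machine. Pan/zoom is
# handled elsewhere (by the other hand) and never touches this state.
class CopyCommand:
    STATES = ["pick_object", "pick_base_point", "pick_end_point", "done"]

    def __init__(self):
        self.state = "pick_object"
        self.picks = []

    def confirm(self, pick):
        """A second-finger tap confirms the current tentative pick and
        advances the command to the next step."""
        if self.state == "done":
            return
        self.picks.append(pick)
        self.state = self.STATES[self.STATES.index(self.state) + 1]

    def result(self):
        """Once done: the object, and the copy offset (end minus base)."""
        obj, base, end = self.picks
        return obj, (end[0] - base[0], end[1] - base[1])

cmd = CopyCommand()
cmd.confirm("wall_A")    # step 1: object tentatively picked, then confirmed
cmd.confirm((0, 0))      # step 3: base point (after optional pan/zoom in step 2)
cmd.confirm((100, 50))   # step 5: end point
print(cmd.state, cmd.result())  # done ('wall_A', (100, 50))
```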
Plenty of research remains to be done, but I fully expect it to become a reality soon... what an opportunity... I'd be very interested to hear if anyone has had any experience of this so far.