Properly set up, Vertica can connect to HCatalog or read HDFS files directly. This does require some DBA work, though.
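For illustration only, the DBA-side setup might look roughly like the sketch below. The schema name, Hive database and HDFS path are made up, and the exact statements and options depend on your Vertica and Hadoop versions, so treat this as a pointer to the documentation rather than a recipe:

-- Assumes the HCatalog connector is installed (a DBA task).
-- 'hcat' and 'default' are placeholder names.
CREATE HCATALOG SCHEMA hcat WITH HCATALOG_SCHEMA='default';
SELECT * FROM hcat.some_hive_table LIMIT 10;

-- Or, with HDFS access configured, read files straight from a
-- hypothetical warehouse path:
COPY schema.table FROM 'hdfs:///user/hive/warehouse/wherever/*' DELIMITER E'\t' DIRECT;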
If you want an easy way to get data from Hive to Vertica, you can use the COPY statement with the LOCAL STDIN modifier and pipe the output of Hive into the input of Vertica. Once you add a dd in the middle to prevent the stream from stopping after a while, this works perfectly. I am not entirely sure why dd is needed, but I suppose it buffers the data and makes the magic happen.
hive -e "select whatever FROM wherever" | \
  dd bs=1M | \
  /opt/vertica/bin/vsql -U $V_USERNAME -w $V_PASSWORD -h $HOST $DB -c \
    "COPY schema.table FROM LOCAL STDIN DELIMITER E'\t' NULL 'NULL' DIRECT"
Of course, the previous statement needs to be amended to use your own user, password and database.
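For example, a minimal wrapper script might look like this; the connection values are placeholders to replace with your own:

#!/bin/bash
# Placeholder connection settings; replace with your own.
V_USERNAME=dbadmin
V_PASSWORD=secret
HOST=vertica-node-01
DB=mydb

# Stream the Hive result set straight into a Vertica COPY,
# with dd as a buffer in the middle.
hive -e "select whatever FROM wherever" | \
  dd bs=1M | \
  /opt/vertica/bin/vsql -U $V_USERNAME -w $V_PASSWORD -h $HOST $DB -c \
    "COPY schema.table FROM LOCAL STDIN DELIMITER E'\t' NULL 'NULL' DIRECT"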
The performance is quite good with this approach, although I cannot give a solid benchmark, as in our case the Hive statement was not trivial.
One thing to really pay attention to is where you run this statement. You can run it from anywhere, as long as Hive and Vertica are both reachable, but be aware that the data will flow from Hive to that server and then on to Vertica. Running the statement on a Vertica node or on your Hive server will reduce network traffic and might speed things up.
This post is based on my answer to a question on Stack Overflow.
This would not work if your table were terabytes large, would it?
I think it would work, as the data does not all have to be in memory. It is kind of all or nothing, though, so moving a terabyte table in one go might not be the best option.
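One way to avoid the all-or-nothing behaviour is to move the table in slices, for instance one day at a time if the Hive table is partitioned by date. A sketch, where dt is a hypothetical partition column to adjust to your own schema:

#!/bin/bash
# Copy one day per iteration, so a failure only loses that slice
# and the run can be resumed from the last successful day.
for day in 2014-01-01 2014-01-02 2014-01-03; do
  hive -e "select whatever FROM wherever WHERE dt='$day'" | \
    dd bs=1M | \
    /opt/vertica/bin/vsql -U $V_USERNAME -w $V_PASSWORD -h $HOST $DB -c \
      "COPY schema.table FROM LOCAL STDIN DELIMITER E'\t' NULL 'NULL' DIRECT"
done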
Have you ever encountered the case where the map runs in Hive but the reduce does not? The script you provided runs, but it does not copy anything over to Vertica.