As the other answer said, std::vector is a good fit here, since it can grow on the fly. I will say, though, that while std::vector is convenient for reading data from files, vectors can be slower to loop over in computationally heavy environments such as clusters and supercomputers. A plain array is faster at a low level, although with appropriate optimizations the two end up at nearly the same level. So what would I do? Write a function that reads the data into a vector, allocate an array with new int[vector size], copy the data from the vector into the array, and let the vector be destroyed at the end of the function. That said, understanding std::vector is an important part of C++: vectors are extremely general purpose and are actually fun to use much of the time.
EDIT:
I would have to go digging around to find it somewhere, since I did that back in 200?. Things may have changed since then, but I still tend to use dynamically allocated arrays rather than vectors in my computational fluid dynamics codes. There is actually nothing wrong with std::vector; I'm just saying that I enjoy using pointers and plain arrays. That may come from writing mostly procedural/functional code vs. object-oriented code. That being said, I will try to mock up a quick benchmarking code and post a link to it within the next couple of days during off-time.
Here is a quick example that I just did. If you see anything that is vastly wrong, let me know. I used subscript indexing in each section and iterated 10000 times. This is a very simple case, but it still demonstrates that arrays are slightly faster. That is not something most people should be concerned with, but on the computational science end all the time counts, because realistic problem sizes make these numbers look practically small. My last colleague's CFD mesh had around 3.0 million nodes and was time-evolved for over 50000 time steps, which isn't much for unsteady problems. Here are the results:
http://pastebin.com/aZRrW6JV
Remember that the user time is the important one. And don't forget that -O3 optimizations do not always give good results (as in, they can produce wrong answers) for computational science methods.