If you're just starting university, then all you need to know at the moment is high school algebra. Pay particular attention to power laws, logarithms and base conversion, because these are things that come up a lot in computer science. The other stuff, such as boolean logic and set theory, will probably be taught in your computer science courses sooner or later. Geometry and calculus are generally not necessary unless you're doing graphics or scientific computing, and you won't encounter such topics for a while yet.
Here's one example of math being used in computer science: consider the mergesort algorithm. Given a list of numbers, it works as follows:
- If the list is fewer than 2 items long, pass it back up.
- If the list is exactly 2 items long, and the first item is larger than the second, switch the two with each other, then pass the list back up.
- If the list is more than 2 items long, split it at the middle (if the length is odd, the extra item goes to one half or the other) to get two smaller lists. Perform the entire algorithm on each of these smaller lists. Then scan through the two sorted lists from beginning to end, always taking the smaller of the two "next" items and adding it to a third list (and moving forward along the list you took it from). When the third list is full, pass it back up. (There's a rough code sketch of this below.)
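If you'd like to see that in code, here's a rough Python sketch of the idea (the function name and the use of list slicing are just my choices for illustration, not the only way to write it):

```python
def mergesort(items):
    # Fewer than 2 items: already sorted, pass it back up.
    if len(items) < 2:
        return list(items)

    # Exactly 2 items: swap them if they are out of order, then pass back up.
    if len(items) == 2:
        return [items[1], items[0]] if items[0] > items[1] else list(items)

    # More than 2 items: split at the middle and run the whole algorithm on
    # each half (the extra item of an odd-length list lands in the right half).
    mid = len(items) // 2
    left = mergesort(items[:mid])
    right = mergesort(items[mid:])

    # Merge: repeatedly take the smaller of the two "next" items.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    # One half ran out; whatever remains in the other half is already in order.
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged


print(mergesort([5, 2, 9, 1, 7, 3]))  # prints [1, 2, 3, 5, 7, 9]
```

Note that slicing copies the list each time, which is fine for showing the idea; a more careful implementation would merge in place or reuse buffers.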
Assume that each individual step, such as 'move forward one space along a list', 'compare two numbers', 'swap two numbers', or 'copy a number from one list to another', takes constant time. What is the running time of the algorithm? Imagine splitting the run of the algorithm into 'levels', where the first level has a single list, the second has roughly two lists (whose sizes add up to the size of the original list), the third has roughly four (which also add up to the size of the original), and so on. On each level, the total amount of work is linear in the size of the original list. Because the list size is halved at each lower level until it reaches a small constant (2 or less), the number of levels needed to get there is the logarithm (to the base 2) of the size of the original list. So the algorithm's running time is proportional to N*log2(N), where N is the size of the original list (whatever that happens to be when you run the algorithm).
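To put rough numbers on that: sorting a list of 1,024 items takes about log2(1024) = 10 levels, with roughly 1,024 units of work per level, so on the order of 10 * 1,024 ≈ 10,000 basic steps. A method whose work grows like N², by contrast, would need on the order of 1,024² ≈ 1,000,000 steps for the same list. That's the kind of difference logarithms let you see.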
Don't worry if that all flew over your head. In a few years it will make sense. :)