Do research on genetic algorithms, algorithm design, machine learning, Turing completeness, probabilistic programming, and Bayesian logic.
But first, please read something about strong vs. weak AI.
There are only a limited number of ways you can simulate intelligence with such limited resources.
You can collect and parse data, and evaluate information based on what you collect.
You can record certain actions and their frequencies, and crunch numbers based on what you record, along the lines of the sketch below.
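To make that concrete, here is a minimal sketch of the frequency-counting idea in Python; the action names and data are invented purely for illustration.

```python
from collections import Counter

# Hypothetical log of observed actions (made-up names for illustration).
observed_actions = ["greet", "ask_weather", "greet", "ask_time", "greet"]

counts = Counter(observed_actions)

def most_likely_action():
    """Predict the next action as the one seen most often so far."""
    action, _ = counts.most_common(1)[0]
    return action

def relative_frequency(action):
    """Crunch the numbers: how often has this action occurred?"""
    return counts[action] / sum(counts.values())

print(most_likely_action())         # greet
print(relative_frequency("greet"))  # 0.6
```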
You can even have your algorithms optimise and alter themselves over time.
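To illustrate that self-optimising idea, here is a stripped-down evolutionary loop in Python (closer to simple hill climbing than a full genetic algorithm with a population and crossover); the fitness function and target value are invented for the example.

```python
import random

def fitness(x):
    # Hypothetical objective: we pretend the "best" value is 42.
    return -abs(x - 42)

def evolve(generations=1000, mutation_scale=1.0):
    """Repeatedly mutate a parameter and keep mutations that score better."""
    best = random.uniform(-100, 100)  # random starting guess
    for _ in range(generations):
        candidate = best + random.gauss(0, mutation_scale)  # mutate
        if fitness(candidate) > fitness(best):               # select
            best = candidate
    return best

print(evolve())  # should land close to 42
```

The point is only the mechanism: the program changes its own parameters over time based on how well they perform, without anyone spelling out the answer in advance.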
But you're using a computer.
You are only doing math; until someone successfully describes intelligence in terms of mathematics, you will not be able to write an actual learning computer. That's not to say you can't write a really convincing weak AI, but it's important to realize what you're trying to do before you go for it.
On a tangent, I personally think the farther away we are from this, the better: an actually intelligent computer raises important moral questions about the nature of the thing we have created. Is an intelligent computer not just as worthy of life as we are?
Interesting questions, if you like philosophy.
Best of luck.