Most programming languages distinguish between two kinds of numbers: integers and floating-point numbers. An integer is a whole number with no fractional component; integers can be positive, negative, or zero. Floating-point numbers (floats for short) can represent values with a fractional part written after a decimal point, as in 0.56, 199.99, and 3.14159. So 1, 34523, -3, 0, and -9999999 are integers, but 223.45 and -0.56 are floats.
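
To see the distinction in practice, here is a minimal sketch in Python (chosen only for illustration, since the passage doesn't name a language). The built-in `type()` function reports which kind of number a literal produces:

```python
# Illustrative example in Python; the language choice is an assumption,
# not something specified by the text above.

count = -3          # an integer: a whole number, no fractional component
price = 199.99      # a float: includes a fractional part after the decimal point

print(type(count))  # <class 'int'>
print(type(price))  # <class 'float'>

# Note that a number written with a decimal point is a float even when
# its fractional part is zero:
print(type(2.0))    # <class 'float'>
```

As the last line shows, the presence of a decimal point in the literal, not the value itself, is what makes a number a float in most languages.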