https://code.google.com/codejam/contest/2418487/dashboard

This is only possible because of Python’s arbitrary-precision arithmetic, which is a serious gap in the C++ Standard Library. The solution follows AP (arithmetic progression) series summation; after sleeping through the actual competition, it took me a few days to come up with this :D.

import sys
from decimal import Decimal, getcontext, ROUND_HALF_UP

getcontext().rounding = ROUND_HALF_UP
getcontext().prec = 100  # enough digits for the largest inputs

def revens(r):
    # closed-form series sum used when r is even
    if r == 0:
        return 0
    return (2 * (r // 2 - 1) + 1) * ((r // 2 - 1) + 1)

def rodds(r):
    # closed-form series sum used when r is odd
    if r == 1:
        return 0
    return (r // 2) * (2 * (r // 2) + 1)

def proc(r, t):
    # invert the quadratic series sum with a high-precision sqrt
    if r % 2 == 0:
        ff = (Decimal(-3) + Decimal(1 + 8 * (revens(r) + t)).sqrt()) / Decimal(4)
        rr = ff - Decimal(r // 2) + Decimal(1)
    else:
        ff = (Decimal(-1) + Decimal(1 + 8 * (rodds(r) + t)).sqrt()) / Decimal(4)
        rr = ff - Decimal(r // 2)
    return rr

sys.stdin.readline()  # skip the test-case count
case = 1
for line in sys.stdin.readlines():
    arr = line.split()
    print("Case #%d: %d" % (case, int(proc(int(arr[0]), int(arr[1])))))
    case = case + 1
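The underlying idea can be checked independently. Painting k rings starting at inner radius r costs k(2r + 2k − 1) units of paint (ring i alone costs 2r + 4i − 3), so the answer is the largest integer below the positive root of a quadratic. A minimal sketch with my own helper names (not the contest code above), comparing the closed form against direct subtraction:

```python
from decimal import Decimal, getcontext

def rings_closed_form(r, t):
    # k rings need k*(2r + 2k - 1) units, so solve
    # 2k^2 + (2r - 1)k - t <= 0 for the largest integer k.
    getcontext().prec = 100
    disc = Decimal((2 * r - 1) ** 2 + 8 * t).sqrt()
    k = int((disc - (2 * r - 1)) / 4)
    # guard against the sqrt rounding past the true root either way
    while k * (2 * r + 2 * k - 1) > t:
        k -= 1
    while (k + 1) * (2 * r + 2 * (k + 1) - 1) <= t:
        k += 1
    return k

def rings_brute_force(r, t):
    # ring i (outer radius r + 2i - 1) costs 2r + 4i - 3 units
    k, i = 0, 1
    while t >= 2 * r + 4 * i - 3:
        t -= 2 * r + 4 * i - 3
        i += 1
        k += 1
    return k
```

Both functions agree on small inputs; the closed form just replaces the loop with one square root.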


Can you really say it’s O(1) when you need to change the precision to n depending on the size of the input, and use a sqrt function that isn’t O(1)? Answer: NO.

True enough, I’ve assumed sqrt() to be O(1), which is comparatively correct because the other operations dominate. I do not change the precision while running; it’s changed only once. All other operations are O(1) in the same way you consider a*b or a/b to be O(1): it’s not about how the operation is actually implemented, but about its relative effect on the whole function.

“I do not change the precision while running; it’s changed only once”

With that argument, any algorithm running on a finite set of data is O(1)…

It’s like saying “I know I only need to check every number up to 10^100, so it’s O(1) !”

If you had bigger numbers, you’d need to increase the precision, which would increase the time to compute sqrt. So your algorithm is *not* in O(1).
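To make the disagreement concrete: `Decimal.sqrt()` is correctly rounded to the context precision, so it must produce one significant digit per unit of precision, and its cost grows as the precision grows. A small illustration (using `localcontext` so the global context is untouched):

```python
from decimal import Decimal, localcontext

# sqrt() fills the full context precision with digits, so raising the
# precision raises the amount of work it has to do per call.
for prec in (50, 500, 5000):
    with localcontext() as ctx:
        ctx.prec = prec
        root = Decimal(2).sqrt()
        print(prec, len(root.as_tuple().digits))
```

For an irrational input like sqrt(2), the digit count printed matches the precision setting each time.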

Again, it depends on how you look at it; you don’t actually say a 32-bit float’s sqrt() is faster than a 64-bit double’s 😛 Either way, this is faster than the given solution.

(I modified the given solution to suit Python 3.)

[madura@trex-j ~]$ time python3 g.py /dev/null
real 0m1.945s
user 0m1.887s
sys 0m0.020s

[madura@trex-j ~]$ time python3 ~/kdev/bullseye/bullseye.py /dev/null
real 0m1.152s
user 0m0.743s
sys 0m0.027s