1) Convert the UTF-8 into UTF-32
2) Normalize them to NFKC (compatibility decomposition, then canonical recomposition).
3) Do a 1:1 lookup of the UTF-32 code points to CP1250 codes, throwing away anything not in CP1250.
Step two is the killer, and is expected to be lossy. (Conveniently, anything you would lose doesn't appear in CP1250).
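The three steps above map almost directly onto Python's standard library; a minimal sketch (the function name is mine):

```python
import unicodedata


def utf8_to_cp1250(data: bytes) -> bytes:
    # Step 1: decode UTF-8 into Unicode code points (a Python str).
    text = data.decode("utf-8")
    # Step 2: NFKC normalization -- compatibility decomposition
    # followed by canonical recomposition. This is the lossy step.
    text = unicodedata.normalize("NFKC", text)
    # Step 3: map to CP1250, silently dropping anything that has
    # no CP1250 equivalent.
    return text.encode("cp1250", errors="ignore")
```

For example, the ligature "ﬁ" (U+FB01) survives because NFKC rewrites it to "fi", while a character like "☕" (U+2615) is simply dropped at step 3.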
What is wrong with using a well-known and universally-implemented library to do this task? It seems like you are making life harder than it needs to be.