Wasn't quite sure if I should have posted this in here
http://evilzone.org/random/downloading-very-large-files/ , where the idea of this started. But I thought I have a ready enough script to post it here as a new topic.
If you haven't read the earlier topic: the idea of this program is to download a file from a server (it uses curl for that) in split parts and then merge them back into the original file.
I made this quite quickly and I haven't done bash scripting in a while, so there might be some minor bugs and weird pieces of code left. Also, if someone sees something strange in my mathematical solutions, I can confess that I suck at math, end of story.
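Before the script itself, here is the whole idea in miniature. I use dd on a local file instead of curl here so you can try it without hitting a server; the skip/count offsets are the same byte offsets you would hand to curl's --range option (the file name and sizes are just made up for the demo):

```shell
#!/bin/bash
# Local simulation of split-and-merge: carve a file into three byte
# ranges with dd, then cat them back together and compare.
head -c 100000 /dev/urandom > original.bin   # throwaway 100 kB test file
dd if=original.bin of=sim.part1 bs=1 skip=0     count=40000 2>/dev/null  # bytes 0..39999
dd if=original.bin of=sim.part2 bs=1 skip=40000 count=40000 2>/dev/null  # bytes 40000..79999
dd if=original.bin of=sim.part3 bs=1 skip=80000 count=20000 2>/dev/null  # the remainder
cat sim.part1 sim.part2 sim.part3 > merged.bin
cmp original.bin merged.bin && echo "merge OK"
```

With curl the equivalent parts would be --range 0-39999, --range 40000-79999 and --range 80000- ; note that curl's ranges are inclusive on both ends, which is exactly where off-by-one mistakes like to hide.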
Okay, here are some usage examples:
./scurl.sh <url>
Just give the URL as an argument like you would when using curl; the program checks the size of the file and asks what size parts you want to make. The last split file of course gets whatever is left at the end.
./scurl.sh -m <give_wanted_file.name>
So this is the action which merges the split files back into the original one (or at least it is supposed to). I have checked a few md5sums and they have matched.
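If you want to repeat that check yourself, this is all it takes (the file names here are just placeholders):

```shell
#!/bin/bash
# Split a small file by hand, merge it back, and compare md5sums.
printf 'hello\n' > original.txt
head -c 3 original.txt  > file.part1    # first 3 bytes: "hel"
tail -c +4 original.txt > file.part2    # byte 4 onwards: "lo\n"
cat file.part1 file.part2 > rebuilt.txt
md5sum original.txt rebuilt.txt         # the two hashes should match
```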
./scurl.sh <url> -p
This action gives you the chance to choose which part of the split file you want to download. It also asks what size of parts you want to make, and roughly counts how many pieces the file will be split into at that size. The last part/file once again gets whatever remains at the end, depending on the size of the other split parts.
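To make the maths concrete, here is the part arithmetic on its own for a hypothetical 10 MB file split into 3 MB parts. This is my restatement of the logic (each part spanning bytes j through j+psz-1, inclusive), not a verbatim excerpt of the script:

```shell
#!/bin/bash
# Worked example of the split arithmetic: 10 MB file, 3 MB parts.
mb_sz=10                              # file size in MB (hypothetical)
part_sz=3                             # requested part size in MB
psz=$(( part_sz * 1048576 ))          # part size in bytes
count=$(( mb_sz / part_sz - 1 ))      # full-size parts before the tail
i=1
j=0
while [ $count -gt 0 ]
do
    echo "part$i: bytes $j-$(( j + psz - 1 ))"   # inclusive curl range
    j=$(( j + psz ))
    count=$(( count - 1 ))
    i=$(( i + 1 ))
done
echo "part$i: bytes $j- (whatever is left)"
```

This prints:

part1: bytes 0-3145727
part2: bytes 3145728-6291455
part3: bytes 6291456- (whatever is left)

so the 10 MB file becomes two 3 MB parts and one 4 MB tail.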
But here is the actual code:
#!/bin/bash
split_dl()
{
clear
check_url=$( curl -sI "$URL" | grep -i Accept-Ranges )
if [[ $check_url = *"none"* ]]
then
    while [ "$ans" != "y" ] && [ "$ans" != "n" ]
    do
        clear
        echo "Server doesn't seem to support byte ranging"
        echo "Continue anyway(y/n):"
        read ans
    done
    if [ "$ans" = "n" ]
    then
        exit
    fi
elif [[ $check_url != *"bytes"* ]]
then
    clear
    echo "'bytes' not found in 'Accept-ranges' header."
    echo "If error occurs try downloading from different server"
    sleep 2
fi
byte_sz=$( curl -sI "$URL" | grep -i Content-Length | awk '{print $2}' )
mod_byte_sz=$(tr -d '\r' <<< "$byte_sz")
if [[ $mod_byte_sz -eq 0 ]]
then
    clear
    echo "Couldn't fetch size of the file"
    echo "Give exact file size(MB):"
    read alter_sz
    mod_byte_sz=$(( $alter_sz * 1048576))
fi
mb_sz=$(( $mod_byte_sz / 1048576 ))
clear
echo "Size of the file is: $mb_sz MB"
echo "Give the part size (MB) you want to split into:"
read part_sz
count=$(( $mb_sz / $part_sz - 1 ))
psz=$(( $part_sz * 1048576 ))
i=1
j=0
while [ $count -gt 0 ]
do
    curl --range $j-$(( $j + $psz - 1 )) -o file.part$i "$URL"   # ranges are inclusive: j .. j+psz-1
    count=$(( $count - 1 ))
    i=$(( $i + 1 ))
    j=$(( $j + $psz ))
done
curl --range $j- -o file.part$i "$URL"
}
part_dl()
{
clear
check_url=$( curl -sI "$URL" | grep -i Accept-Ranges )
if [[ $check_url = *"none"* ]]
then
    while [ "$ans" != "y" ] && [ "$ans" != "n" ]
    do
        clear
        echo "Server doesn't seem to support byte ranging"
        echo "Continue anyway(y/n):"
        read ans
    done
    if [ "$ans" = "n" ]
    then
        exit
    fi
elif [[ $check_url != *"bytes"* ]]
then
    clear
    echo "'bytes' not found in 'Accept-ranges' header."
    echo "If error occurs try downloading from different server"
    sleep 2
fi
byte_sz=$( curl -sI "$URL" | grep -i Content-Length | awk '{print $2}' )
mod_byte_sz=$(tr -d '\r' <<< "$byte_sz")
if [[ $mod_byte_sz -eq 0 ]]
then
    clear
    echo "Couldn't fetch size of the file"
    echo "Give exact file size(MB):"
    read alter_sz
    mod_byte_sz=$(( $alter_sz * 1048576))
fi
mb_sz=$(( $mod_byte_sz / 1048576 ))
clear
echo "Size of the file is: $mb_sz MB"
echo "Give the part size (MB) you want to split into:"
read part_sz
count=$(( $mb_sz / $part_sz ))
clear
echo "File will be split into $count parts"
echo "Select which part to download: "
read part
count=$(( $mb_sz / $part_sz - 1 ))
psz=$(( $part_sz * 1048576 ))
i=1
j=0
while [ $count -gt 0 ]
do
    if [ "$i" = "$part" ]
    then
        curl --range $j-$(( $j + $psz - 1 )) -o file.part$i "$URL"   # ranges are inclusive: j .. j+psz-1
    fi
    count=$(( $count - 1 ))
    i=$(( $i + 1 ))
    j=$(( $j + $psz ))
done
if [ "$i" = "$part" ]
then
    curl --range $j- -o file.part$i "$URL"
fi
}
help_menu()
{
echo "Usage1: scurl.sh <url> [optional -p]"
echo "Usage2: scurl.sh <-m> <output_file.name>"
echo "-m, merge downloaded parts"
echo "-p, choose one part to download"
}
merge()
{
echo "Merging $f_name"
# file.part* expands in lexical order (file.part10 before file.part2),
# so sort the parts numerically before concatenating
cat $(ls file.part* | sort -t t -k2 -n) > "$f_name"
}
clear
if [ $# -eq 0 ]
then
    help_menu
fi
if [ $# -eq 1 ]
then
    URL=$1
    split_dl   
fi
if [ $# -gt 1 ]
then
    if [ "$1" = "-m" ]
    then
        f_name=$2
        merge
    elif [ "$2" = "-p" ]
    then
        URL=$1
        part_dl
    else
        help_menu
    fi
fi 
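One gotcha to be aware of when merging: a plain file.part* glob expands in lexical order, so once you have ten or more parts, file.part10 sorts before file.part2 and a naive cat comes out scrambled. Here is a quick demonstration, plus the numeric sort that avoids it (sort -t t -k2 -n splits each name at the 't' in "part" and sorts the trailing numbers numerically):

```shell
#!/bin/bash
# Show why plain file.part* can merge parts in the wrong order.
mkdir -p glob_demo && cd glob_demo
for n in 1 2 3 4 5 6 7 8 9 10 11
do
    printf '%d\n' "$n" > "file.part$n"
done
cat file.part* > naive.txt                              # lexical glob order
cat $(ls file.part* | sort -t t -k2 -n) > sorted.txt    # numeric order: 1,2,...,11
head -3 naive.txt       # typically prints 1, 10, 11
head -3 sorted.txt      # prints 1, 2, 3
```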
Like I said, I made this really quickly and haven't glanced over the code again and again (like I usually do).
But IMO the concept of this program is quite interesting, so that's why I'm already sharing it, even if my solutions in the code are far from flawless.