Tuesday, July 16, 2013

PyManga -- a Python-based GUI to download manga

I often used to see people sitting in front of big screens, clicking the mouse to check their favourite manga. I had plenty of time, so I decided to write this script.
It is a very rough program with lots of bugs in it. What do you expect in one day?
How to run this program
1. Download the program from SourceForge.
2. Extract it: you will get one shortcut and one zip file. Extract the zip file and run manga.exe.
3. A popup will appear; follow the instructions in it.

Usage:
Ex 1: naruto 100
Ex 2: bleach 245
**** Spell the name correctly or the program will not run.
Format: <manga name> <chapter number>



Enter the query and click OK, then wait a while for the download (as soon as the black window, aka the command prompt, disappears, the download is complete). Check the Manga Downloader folder: you will find a new directory named after the manga and chapter, with the downloaded images inside.
Or you can go to mangareader.net, copy the link at the top of a chapter and paste it. Make sure you are on the first page of the chapter (the first image should be on that page).
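For example, a query like one piece 100 is turned into a chapter URL and an output folder name roughly like this (a small sketch of what the search() and site1() functions in the script below do):

query = "one piece 100"
parts = query.lower().split()                 # ['one', 'piece', '100']
url = 'http://www.mangareader.net/' + '-'.join(parts[:-1]) + '/' + parts[-1]
folder = '-'.join(parts[:-1]) + '_' + parts[-1]
print url        # http://www.mangareader.net/one-piece/100
print folder     # one-piece_100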
If you already have Python installed, install the easygui and beautifulsoup4 modules with pip (pip install easygui beautifulsoup4), then copy the script below and run it.



import os
import urllib2
import urllib
from urlparse import urlparse
from bs4 import BeautifulSoup
import easygui as eg


eg.msgbox("Search/Enter proper url"+'\n'+"Ex:naruto 100 or bleach 400"+'\n'+" www.mangareader.net/naruto/100",title="Manga Downloader", ok_button="ok")
q=eg.enterbox(msg='Search or Enter the Link.',title='Manga Downloader')
print q

def user_input(q):
    # a pasted link contains 'www'; anything else is treated as a search query
    if 'www' in q:
        site1(q)
    else:
        search(q)

def search(query):
    """Build the chapter URL for a '<manga name> <chapter>' query.
    Ex: 1. bleach 544
        2. naruto 100"""

    # split on whitespace so extra spaces in the query do not break the URL
    s = query.lower().strip().split()
        
    link='http://www.mangareader.net/'+s[0]+'/'+s[1]
    if len(s) > 2:
        link='http://www.mangareader.net/'+'-'.join(s[0:-1])+'/'+s[-1]

    site1(link)


def site1(link):
    # str.strip() removes characters, not a prefix/suffix, so trim them explicitly
    if link.startswith('http://'):
        link = link[len('http://'):]
    if link.endswith('.html'):
        link = link[:-len('.html')]
    if (link.count('/')>2):
        print 'link.count:',link.count('/')

        two=link.find('/',20)
        three=link.find('/',two+1)
        #link=http://www.mangareader.net/440-45521-1/watashi-ni-xx-shinasai/chapter-8.html
        link='http://www.mangareader.net/'+link[two+1:three]+'/'+link[three+9:]
        print link

    if 'http://' not in link:
        
        link1="http://"+link
    else:
        link1=link
    
    try:
        html = urllib2.urlopen(link1).read()
    except urllib2.HTTPError:
        # bail out instead of falling through to an undefined 'html'
        print('Enter proper url')
        return
    
    soup = BeautifulSoup(html)
    link_image=soup.img['src']
    link_next=soup.img.parent['href']
    # create the output folder named after the manga and chapter
    link_properties = urlparse(link)
    start=link_properties.path.find('/')
    end=link_properties.path.find('/',start+1)
    folder_name=link_properties.path[start+1:end]+'_'+link_properties.path[end+1:]
    if not os.path.exists(folder_name):
        os.makedirs(folder_name)
    # done creating the directory
        
    
    
    # the 'next page' href is relative, so make it absolute
    if not link_next.startswith('http'):
        link_next = 'http://www.mangareader.net' + link_next
    i = 0
    # keep downloading while the image URL still belongs to this chapter;
    # the first page of the next chapter breaks the loop
    while ('/' + link_properties.path[end+1:] + '/') in link_image:
        # save the current page's image
        f = open(folder_name + '/' + str(i+1) + '.jpg', 'wb')
        f.write(urllib.urlopen(link_image).read())
        f.close()
        # move on to the next page and pick up its image and 'next' link
        html = urllib2.urlopen(link_next).read()
        soup = BeautifulSoup(html)
        link_image = soup.img['src']
        link_next = soup.img.parent['href']
        link_next = 'http://www.mangareader.net' + link_next
        print link_next, link_image
        i = i + 1

#search("one piece 100")
user_input(q)
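A note for anyone copying the script: it is written for Python 2 (urllib2 and print statements). On Python 3 the core fetch-and-parse step would look roughly like the sketch below; it assumes the same mangareader.net page layout, where the first <img> on the page is the scan and its parent <a> points to the next page, and the helper name is just for illustration.

from urllib.request import urlopen
from bs4 import BeautifulSoup

def first_image_and_next_link(page_url):
    # fetch one chapter page, return the scan image URL and the 'next page' href
    html = urlopen(page_url).read()
    soup = BeautifulSoup(html, 'html.parser')
    return soup.img['src'], soup.img.parent['href']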




3 comments:

  1. Hi, Thank you for this very useful tool!

    But it has some problems (I think they have something to do with low internet speed): most of the time when I download a chapter, it does not download all of the pages/images in it...

    For instance, if a chapter has, let's say, 20 pages, in many cases it downloads fewer than that (on average it takes me 3 retries to get them all) and the tool closes without finishing the job.
    It does download whole images (there are no half-finished images) before it closes, just not all of them.
    This is a problem, but what makes it an even bigger hassle is that the tool doesn't check for already existing images, so it constantly re-downloads them, making the whole process slower (and increasing the probability of a crash, since it starts the download of all the pages from the beginning).
    Also, the instruction popup that appears first when the tool starts and has to be dismissed with "OK" becomes tedious after a few runs; those instructions could be part of the second GUI dialog where you enter your query, making the first popup unnecessary.

    This kind of tool would also be very useful as a Firefox add-on.

    Love God and accept Jesus for your salvation! God bless U

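For anyone running into the re-download problem described in the comment above: a small check inside the download loop of site1() would skip pages that are already on disk. A rough sketch (the save_image helper name is just for illustration; folder_name, i and link_image are the variables used in the loop):

import os
import urllib

def save_image(folder_name, index, link_image):
    # write the image only if it is not already on disk, so a retry
    # does not re-download pages that were fetched earlier
    filename = folder_name + '/' + str(index) + '.jpg'
    if not os.path.exists(filename):
        f = open(filename, 'wb')
        f.write(urllib.urlopen(link_image).read())
        f.close()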