Compare commits
28 Commits
| Author | SHA1 | Date |
|---|---|---|
|  | f3f02adc98 |  |
|  | 0812438b9d |  |
|  | 26d9f7bd20 |  |
|  | 2c95633fcd |  |
|  | 07e532f59c |  |
|  | 882edb6c69 |  |
|  | 93f02c625a |  |
|  | e95ed1a8ed |  |
|  | ba5927a20d |  |
|  | 297a9ddc66 |  |
|  | 4f34a9a196 |  |
|  | 529dd3f160 |  |
|  | 4163d5ccf4 |  |
|  | 867ac35b45 |  |
|  | 427137b0fe |  |
|  | ac9cdb1e98 |  |
|  | 2bedd75005 |  |
|  | 8b632e309f |  |
|  | bc968f8eca |  |
|  | 00ac669f76 |  |
|  | 694dfafd39 |  |
|  | a7856f5c32 |  |
|  | 38eabe7612 |  |
|  | 9162698f89 |  |
|  | 506d97d5f0 |  |
|  | a76ba56cd8 |  |
|  | 8e73edc012 |  |
|  | c386ac6e6d |  |
5  .gitignore  vendored
```diff
@@ -1,9 +1,6 @@
 # Byte-compiled / optimized / DLL files
 __pycache__/
-*.py[cod]
+*.pyc
-
-# C extensions
-*.so
 
 # Distribution / packaging
 .Python
```
```diff
@@ -1,29 +0,0 @@
-From Apprentice Alf's Blog
-
-Adobe Adept ePub and PDF, .epub, .pdf
-
-The wonderful I♥CABBAGES has produced scripts that will remove the DRM from ePubs and PDFs encrypted with Adobe's DRM. Installing these scripts is a little more complex than the Mobipocket and eReader decryption tools, as they require installation of the PyCrypto package on Windows boxes. For Mac OS X and Linux boxes, these scripts use the already-installed OpenSSL libcrypto, so there are no additional requirements for these platforms.
-
-For more info, see the author's blog:
-http://i-u2665-cabbages.blogspot.com/2009_02_01_archive.html
-
-There are two scripts:
-
-The first is called ineptkey_v5.pyw. Simply double-click to launch it and it will create a key file that is needed later to actually remove the DRM. This script need only be run once unless you change your ADE account information.
-
-The second is called ineptepub_v5.pyw. Simply double-click to launch it. It will ask for your previously generated key file and the path to the book you want to remove the DRM from.
-
-
-Both of these scripts are GUI Python programs. Python 2.X (32 bit) is already installed in Mac OS X. We recommend ActiveState's ActivePython Version 2.X (32 bit) for Windows users.
-
-The latest version of ineptpdf to use is version 8.4.42, which improves support for some PDF files.
-
-ineptpdf version 8.4.42 can be found here:
-
-http://pastebin.com/kuKMXXsC
-
-It is not included in the tools archive.
-
-If that link is down, please check out the following website for some of the latest releases of these tools:
-
-http://ainept.freewebspace.com/
```
38  Calibre_Plugins/Ignobleepub ReadMe.txt  Normal file
@@ -0,0 +1,38 @@
```
Ignoble Epub DeDRM - ignobleepub_v01.6_plugin.zip

All credit given to I♥Cabbages for the original standalone scripts.
I had the much easier job of converting them to a calibre plugin.

This plugin is meant to decrypt Barnes & Noble Epubs that are protected with Adobe's Adept encryption. It is meant to function without having to install any dependencies... other than having calibre installed, of course. It will still work if you have Python and PyCrypto already installed, but they aren't necessary.


Installation:

Go to calibre's Preferences page. Do **NOT** select "Get plugins to enhance calibre" as this is reserved for "official" calibre plugins; instead select "Change calibre behavior". Under "Advanced" click on the Plugins button. Use the "Load plugin from file" button to select the plugin's zip file (ignobleepub_vXX_plugin.zip) and click the 'Add' button. You're done.

Please note: calibre does not provide any immediate feedback to indicate that adding the plugin was a success. You can always click on the File-Type plugins to see if the plugin was added.


Configuration:

1) The easiest way to configure the plugin is to enter your name (Barnes & Noble account name) and credit card number (the one used to purchase the books) into the plugin's customization window. It's the same info you would enter into the ignoblekeygen script. Highlight the plugin (Ignoble Epub DeDRM) and click the "Customize Plugin" button on calibre's Preferences->Plugins page. Enter the name and credit card number separated by a comma: Your Name,1234123412341234

If you've purchased books with more than one credit card, separate that other info with a colon: Your Name,1234123412341234:Other Name,2345234523452345

** NOTE ** The above method is your only option if you don't have/can't run the original I♥Cabbages scripts on your particular machine. Your credit card number will be on display in calibre's Plugin configuration page when using the above method. If other people have access to your computer, you may want to use the second configuration method below.


2) If you already have keyfiles generated with I♥Cabbages' ignoblekeygen.pyw script, you can put those keyfiles into calibre's configuration directory. The easiest way to find the correct directory is to go to calibre's Preferences page... click on the 'Miscellaneous' button (looks like a gear), and then click the 'Open calibre configuration directory' button. Paste your keyfiles in there. Just make sure that they have different names and are saved with the '.b64' extension (like the ignoblekeygen script produces). This directory isn't touched when upgrading calibre, so it's quite safe to leave them there.

All keyfiles from method 2 and all data entered from method 1 will be used to attempt to decrypt a book. You can use method 1 or method 2, or a combination of both.


Troubleshooting:

If you find that it's not working for you (imported epubs still have DRM), you can save a lot of time and trouble by trying to add the epub to calibre with the command line tools. This will print out a lot of helpful debugging info that can be copied into any online help requests. I'm going to ask you to do it first, anyway, so you might as well get used to it. ;)

Open a command prompt (terminal) and change to the directory where the ebook you're trying to import resides. Then type the command "calibredb add your_ebook.epub". Don't type the quotes and obviously change the 'your_ebook.epub' to whatever the filename of your book is. Copy the resulting output and paste it into any online help request you make.

** Note: the Mac version of calibre doesn't install the command line tools by default. If you go to the 'Preferences' page and click on the miscellaneous button, you'll see the option to install the command line tools.
```
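The comma/colon customization format described above ("Name,Number" pairs joined by colons) can be sketched as a small parser. This is an illustrative example only; `parse_customization` is not part of the actual plugin, and the names and numbers below are the placeholder values from the ReadMe.

```python
def parse_customization(raw):
    """Split a 'Name,Number:Name,Number' customization string
    into (name, number) pairs, skipping empty chunks."""
    pairs = []
    for chunk in raw.split(':'):
        chunk = chunk.strip()
        if not chunk:
            continue
        # everything before the first comma is the name,
        # everything after it is the credit card number
        name, _, number = chunk.partition(',')
        pairs.append((name.strip(), number.strip()))
    return pairs
```

Parsing leniently (stripping whitespace, ignoring empty chunks) keeps a stray trailing colon from breaking decryption attempts.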
```diff
@@ -1,36 +1,39 @@
-Inept Epub DeDRM - ineptepub_vXX_plugin.zip
+Inept Epub DeDRM - ineptepub_v01.7_plugin.zip
+Requires Calibre version 0.6.44 or higher.
 
-All credit given to I <3 Cabbages for the original standalone scripts.
+All credit given to I♥Cabbages for the original standalone scripts.
 I had the much easier job of converting them to a Calibre plugin.
 
 This plugin is meant to decrypt Adobe Digital Edition Epubs that are protected with Adobe's Adept encryption. It is meant to function without having to install any dependencies... other than having Calibre installed, of course. It will still work if you have Python and PyCrypto already installed, but they aren't necessary.
 
 
 Installation:
 
-Go to Calibre's Preferences page... click on the Plugins button. Use the file dialog button to select the plugin's zip file (ineptepub_vXX_plugin.zip) and click the 'Add' button. You're done.
+Go to Calibre's Preferences page. Do **NOT** select "Get plugins to enhance calibre" as this is reserved for "official" calibre plugins; instead select "Change calibre behavior". Under "Advanced" click on the Plugins button. Use the "Load plugin from file" button to select the plugin's zip file (ineptepub_vXX_plugin.zip) and click the 'Add' button. You're done.
+
+Please note: Calibre does not provide any immediate feedback to indicate that adding the plugin was a success. You can always click on the File-Type plugins to see if the plugin was added.
 
 
 Configuration:
 
 When first run, the plugin will attempt to find your Adobe Digital Editions installation (on Windows and Mac OS's). If successful, it will create an 'adeptkey.der' file and save it in Calibre's configuration directory. It will use that file on subsequent runs. If there are already '*.der' files in the directory, the plugin won't attempt to find the Adobe Digital Editions installation.
 
 So if you have Adobe Digital Editions installed on the same machine as Calibre... you are ready to go. If not... keep reading.
 
-If you already have keyfiles generated with I <3 Cabbages' ineptkey.pyw script, you can put those keyfiles in Calibre's configuration directory. The easiest way to find the correct directory is to go to Calibre's Preferences page... click on the 'Miscellaneous' button (looks like a gear), and then click the 'Open Calibre configuration directory' button. Paste your keyfiles in there. Just make sure that they have different names and are saved with the '.der' extension (like the ineptkey script produces). This directory isn't touched when upgrading Calibre, so it's quite safe to leave them there.
+If you already have keyfiles generated with I♥Cabbages' ineptkey.pyw script, you can put those keyfiles in Calibre's configuration directory. The easiest way to find the correct directory is to go to Calibre's Preferences page... click on the 'Miscellaneous' button (looks like a gear), and then click the 'Open Calibre configuration directory' button. Paste your keyfiles in there. Just make sure that they have different names and are saved with the '.der' extension (like the ineptkey script produces). This directory isn't touched when upgrading Calibre, so it's quite safe to leave them there.
 
 Since there is no Linux version of Adobe Digital Editions, Linux users will have to obtain a keyfile through other methods and put the file in Calibre's configuration directory.
 
 All keyfiles with a '.der' extension found in Calibre's configuration directory will be used to attempt to decrypt a book.
 
 
 ** NOTE ** There is no plugin customization data for the Inept Epub DeDRM plugin.
 
 
 Troubleshooting:
 
 If you find that it's not working for you (imported epubs still have DRM), you can save a lot of time and trouble by trying to add the epub to Calibre with the command line tools. This will print out a lot of helpful debugging info that can be copied into any online help requests. I'm going to ask you to do it first, anyway, so you might as well get used to it. ;)
 
 Open a command prompt (terminal) and change to the directory where the ebook you're trying to import resides. Then type the command "calibredb add your_ebook.epub". Don't type the quotes and obviously change the 'your_ebook.epub' to whatever the filename of your book is. Copy the resulting output and paste it into any online help request you make.
 
 ** Note: the Mac version of Calibre doesn't install the command line tools by default. If you go to the 'Preferences' page and click on the miscellaneous button, you'll see the option to install the command line tools.
```
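The troubleshooting step above (running `calibredb add` and capturing its output) can be scripted. This is a hedged sketch, not part of the plugin: `build_add_command` and `add_to_calibre` are illustrative names, and it assumes `calibredb` is on your PATH.

```python
import subprocess

def build_add_command(ebook_path):
    # equivalent to typing: calibredb add your_ebook.epub
    return ['calibredb', 'add', ebook_path]

def add_to_calibre(ebook_path):
    # runs calibredb and returns its combined output,
    # ready to paste into an online help request
    result = subprocess.run(build_add_command(ebook_path),
                            capture_output=True, text=True)
    return result.stdout + result.stderr
```

Capturing both stdout and stderr matters here, since the useful DRM debugging messages may go to either stream.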
39  Calibre_Plugins/Ineptpdf ReadMe.txt  Normal file
@@ -0,0 +1,39 @@
```
Inept PDF Plugin - ineptpdf_v01.5_plugin.zip

All credit given to I♥Cabbages for the original standalone scripts.
I had the much easier job of converting them to a Calibre plugin.

This plugin is meant to decrypt Adobe Digital Edition PDFs that are protected with Adobe's Adept encryption. It is meant to function without having to install any dependencies... other than having Calibre installed, of course. It will still work if you have Python, PyCrypto and/or OpenSSL already installed, but they aren't necessary.


Installation:

Go to Calibre's Preferences page. Do **NOT** select "Get plugins to enhance calibre" as this is reserved for "official" plugins; instead select "Change calibre behavior". Under "Advanced" click on the Plugins button. Use the "Load plugin from file" button to select the plugin's zip file (ineptpdf_vXX_plugin.zip) and click the 'Add' button. You're done.

Please note: Calibre does not provide any immediate feedback to indicate that adding the plugin was a success. You can always click on the File-Type plugins to see if the plugin was added.


Configuration:

When first run, the plugin will attempt to find your Adobe Digital Editions installation (on Windows and Mac OS's). If successful, it will create an 'adeptkey.der' file and save it in Calibre's configuration directory. It will use that file on subsequent runs. If there are already '*.der' files in the directory, the plugin won't attempt to find the Adobe Digital Editions installation.

So if you have Adobe Digital Editions installed on the same machine as Calibre... you are ready to go. If not... keep reading.

If you already have keyfiles generated with I♥Cabbages' ineptkey.pyw script, you can put those keyfiles in Calibre's configuration directory. The easiest way to find the correct directory is to go to Calibre's Preferences page... click on the 'Miscellaneous' button (looks like a gear), and then click the 'Open Calibre configuration directory' button. Paste your keyfiles in there. Just make sure that they have different names and are saved with the '.der' extension (like the ineptkey script produces). This directory isn't touched when upgrading Calibre, so it's quite safe to leave them there.

Since there is no Linux version of Adobe Digital Editions, Linux users will have to obtain a keyfile through other methods and put the file in Calibre's configuration directory.

All keyfiles with a '.der' extension found in Calibre's configuration directory will be used to attempt to decrypt a book.

** NOTE ** There is no plugin customization data for the Inept PDF plugin.


Troubleshooting:

If you find that it's not working for you (imported PDFs still have DRM), you can save a lot of time and trouble by trying to add the PDF to Calibre with the command line tools. This will print out a lot of helpful debugging info that can be copied into any online help requests. I'm going to ask you to do it first, anyway, so you might as well get used to it. ;)

Open a command prompt (terminal) and change to the directory where the ebook you're trying to import resides. Then type the command "calibredb add your_ebook.pdf". Don't type the quotes and obviously change the 'your_ebook.pdf' to whatever the filename of your book is. Copy the resulting output and paste it into any online help request you make.

** Note: the Mac version of Calibre doesn't install the command line tools by default. If you go to the 'Preferences' page and click on the miscellaneous button, you'll see the option to install the command line tools.
```
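The "all keyfiles with a '.der' extension found in the configuration directory" behavior described above can be sketched as a directory scan. This is an illustrative example, not the plugin's actual code; `find_keyfiles` is a hypothetical helper name.

```python
import os

def find_keyfiles(config_dir):
    """Return sorted paths of every '.der' keyfile in config_dir
    (case-insensitive extension match, as a defensive assumption)."""
    return sorted(
        os.path.join(config_dir, name)
        for name in os.listdir(config_dir)
        if name.lower().endswith('.der')
    )
```

Each returned keyfile would then be tried in turn until one successfully decrypts the book.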
37  Calibre_Plugins/K4MobiDeDRM ReadMe.txt  Normal file
@@ -0,0 +1,37 @@
```
K4MobiDeDRM_v04.4_plugin.zip

Credit given to The Dark Reverser for the original standalone script. Credit also to the many people who have updated and expanded that script since then.

Plugin for K4PC, K4Mac, eInk Kindles and Mobipocket.

This plugin supersedes the MobiDeDRM, K4DeDRM, K4PCDeDRM and K4X plugins. If you install this plugin, those plugins can be safely removed.

This plugin is meant to remove the DRM from .prc, .mobi, .azw, .azw1, .azw3, .azw4 and .tpz ebooks. Calibre can then convert them to whatever format you desire. It is meant to function without having to install any dependencies except for Calibre being on your same machine and in the same account as your "Kindle for PC" or "Kindle for Mac" application if you are going to remove the DRM from those types of books.


Installation:

Go to Calibre's Preferences page. Do **NOT** select "Get Plugins to enhance calibre" as this is reserved for "official" calibre plugins; instead select "Change calibre behavior". Under "Advanced" click on the Plugins button. Click on the "Load plugin from file" button at the bottom of the screen. Use the file dialog button to select the plugin's zip file (K4MobiDeDRM_vXX_plugin.zip) and click the "Add" (or it may say "Open") button. Then click on the "Yes" button in the warning dialog that appears. A confirmation dialog appears that says the plugin has been installed.


Configuration:

Highlight the plugin (K4MobiDeDRM under the "File type plugins" category) and click the "Customize Plugin" button on Calibre's Preferences->Plugins page. If you have an eInk Kindle, enter the 16-digit serial number (these typically begin "B0..."). If you have more than one eInk Kindle, you can enter multiple serial numbers separated by commas (no spaces). If you have Mobipocket books, enter your 10-digit PID. If you have more than one PID, separate them with commas (no spaces).

This configuration step is not needed if you only want to decode "Kindle for PC" or "Kindle for Mac" books.


Linux Systems Only:

If you install Kindle for PC in Wine, the plugin should be able to decode files from that Kindle for PC installation under Wine. You might need to enter a Wine Prefix if it's not already set in your Environment variables.


Troubleshooting:

If you find that it's not working for you, you can save a lot of time and trouble by trying to add the DRMed ebook to Calibre with the command line tools. This will print out a lot of helpful debugging info that can be copied into any online help requests. I'm going to ask you to do it first, anyway, so you might as well get used to it. ;)

Open a command prompt (terminal) and change to the directory where the ebook you're trying to import resides. Then type the command "calibredb add your_ebook_file". Don't type the quotes and obviously change the 'your_ebook_file' to whatever the filename of your book is (including any file name extension like .azw). Copy the resulting output and paste it into any online help request you make.

** Note: the Mac version of Calibre doesn't install the command line tools by default. If you go to the 'Preferences' page and click on the miscellaneous button, you'll see the option to install the command line tools.
```
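The validity rules the configuration section states (eInk Kindle serials are 16 characters beginning with "B"; Mobipocket PIDs are 8 or 10 characters) can be sketched as simple checks. The function names are illustrative, not the plugin's API, and the serials below are made-up examples.

```python
def is_kindle_serial(s):
    # eInk Kindle serials: 16 characters, typically beginning "B0..."
    s = s.strip()
    return len(s) == 16 and s.startswith('B')

def is_mobipocket_pid(p):
    # Mobipocket PIDs: 8 or 10 characters
    return len(p.strip()) in (8, 10)
```

Applying these checks to each comma-separated entry lets bad entries be reported individually instead of silently failing every decryption attempt.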
243  Calibre_Plugins/K4MobiDeDRM_plugin/__init__.py  Normal file
@@ -0,0 +1,243 @@
|
|||||||
|
#!/usr/bin/env python
|
||||||
|
|
||||||
|
from __future__ import with_statement
|
||||||
|
|
||||||
|
from calibre.customize import FileTypePlugin
|
||||||
|
from calibre.gui2 import is_ok_to_use_qt
|
||||||
|
from calibre.utils.config import config_dir
|
||||||
|
from calibre.constants import iswindows, isosx
|
||||||
|
# from calibre.ptempfile import PersistentTemporaryDirectory
|
||||||
|
|
||||||
|
|
||||||
|
import sys
|
||||||
|
import os
|
||||||
|
import re
|
||||||
|
from zipfile import ZipFile
|
||||||
|
|
||||||
|
class K4DeDRM(FileTypePlugin):
|
||||||
|
name = 'Kindle and Mobipocket DeDRM' # Name of the plugin
|
||||||
|
description = 'Removes DRM from eInk Kindle, Kindle 4 Mac and Kindle 4 PC ebooks, and from Mobipocket ebooks. Provided by the work of many including DiapDealer, SomeUpdates, IHeartCabbages, CMBDTC, Skindle, DarkReverser, mdlnx, ApprenticeAlf, etc.'
|
||||||
|
supported_platforms = ['osx', 'windows', 'linux'] # Platforms this plugin will run on
|
||||||
|
author = 'DiapDealer, SomeUpdates, mdlnx, Apprentice Alf' # The author of this plugin
|
||||||
|
version = (0, 4, 4) # The version number of this plugin
|
||||||
|
file_types = set(['prc','mobi','azw','azw1','azw3','azw4','tpz']) # The file types that this plugin will be applied to
|
||||||
|
on_import = True # Run this plugin during the import
|
||||||
|
priority = 520 # run this plugin before earlier versions
|
||||||
|
minimum_calibre_version = (0, 7, 55)
|
||||||
|
|
||||||
|
def initialize(self):
|
||||||
|
"""
|
||||||
|
Dynamic modules can't be imported/loaded from a zipfile... so this routine
|
||||||
|
runs whenever the plugin gets initialized. This will extract the appropriate
|
||||||
|
library for the target OS and copy it to the 'alfcrypto' subdirectory of
|
||||||
|
calibre's configuration directory. That 'alfcrypto' directory is then
|
||||||
|
inserted into the syspath (as the very first entry) in the run function
|
||||||
|
so the CDLL stuff will work in the alfcrypto.py script.
|
||||||
|
"""
|
||||||
|
if iswindows:
|
||||||
|
names = ['alfcrypto.dll','alfcrypto64.dll']
|
||||||
|
elif isosx:
|
||||||
|
names = ['libalfcrypto.dylib']
|
||||||
|
else:
|
||||||
|
names = ['libalfcrypto32.so','libalfcrypto64.so','alfcrypto.py','alfcrypto.dll','alfcrypto64.dll','getk4pcpids.py','mobidedrm.py','kgenpids.py','k4pcutils.py','topazextract.py']
|
||||||
|
lib_dict = self.load_resources(names)
|
||||||
|
self.alfdir = os.path.join(config_dir, 'alfcrypto')
|
||||||
|
if not os.path.exists(self.alfdir):
|
||||||
|
os.mkdir(self.alfdir)
|
||||||
|
for entry, data in lib_dict.items():
|
||||||
|
file_path = os.path.join(self.alfdir, entry)
|
||||||
|
with open(file_path,'wb') as f:
|
||||||
|
f.write(data)
|
||||||
|
|
||||||
|
def run(self, path_to_ebook):
|
||||||
|
# add the alfcrypto directory to sys.path so alfcrypto.py
|
||||||
|
# will be able to locate the custom lib(s) for CDLL import.
|
||||||
|
sys.path.insert(0, self.alfdir)
|
||||||
|
# Had to move these imports here so the custom libs can be
|
||||||
|
# extracted to the appropriate places beforehand these routines
|
||||||
|
# look for them.
|
||||||
|
from calibre_plugins.k4mobidedrm import kgenpids
|
||||||
|
from calibre_plugins.k4mobidedrm import topazextract
|
||||||
|
from calibre_plugins.k4mobidedrm import mobidedrm
|
||||||
|
|
||||||
|
plug_ver = '.'.join(str(self.version).strip('()').replace(' ', '').split(','))
|
||||||
|
k4 = True
|
||||||
|
pids = []
|
||||||
|
serials = []
|
||||||
|
kInfoFiles = []
|
||||||
|
self.config()
|
||||||
|
|
||||||
|
# Get supplied list of PIDs to try from plugin customization.
|
||||||
|
pidstringlistt = self.pids_string.split(',')
|
||||||
|
for pid in pidstringlistt:
|
||||||
|
pid = str(pid).strip()
|
||||||
|
if len(pid) == 10 or len(pid) == 8:
|
||||||
|
pids.append(pid)
|
||||||
|
else:
|
||||||
|
if len(pid) > 0:
|
||||||
|
print "'%s' is not a valid Mobipocket PID." % pid
|
||||||
|
|
||||||
|
# For linux, get PIDs by calling the right routines under WINE
|
||||||
|
if sys.platform.startswith('linux'):
|
||||||
|
k4 = False
|
||||||
|
pids.extend(self.WINEgetPIDs(path_to_ebook))
|
||||||
|
|
||||||
|
# Get supplied list of Kindle serial numbers to try from plugin customization.
|
||||||
|
serialstringlistt = self.serials_string.split(',')
|
||||||
|
for serial in serialstringlistt:
|
||||||
|
serial = str(serial).strip()
|
||||||
|
if len(serial) == 16 and serial[0] == 'B':
|
||||||
|
serials.append(serial)
|
||||||
|
else:
|
||||||
|
if len(serial) > 0:
|
||||||
|
print "'%s' is not a valid Kindle serial number." % serial
|
||||||
|
|
||||||
|
# Load any kindle info files (*.info) included Calibre's config directory.
|
||||||
|
try:
|
||||||
|
print 'K4MobiDeDRM v%s: Calibre configuration directory = %s' % (plug_ver, config_dir)
|
||||||
|
files = os.listdir(config_dir)
|
||||||
|
filefilter = re.compile("\.info$|\.kinf$", re.IGNORECASE)
|
||||||
|
files = filter(filefilter.search, files)
|
||||||
|
if files:
|
||||||
|
for filename in files:
|
||||||
|
fpath = os.path.join(config_dir, filename)
|
||||||
|
kInfoFiles.append(fpath)
|
||||||
|
print 'K4MobiDeDRM v%s: Kindle info/kinf file %s found in config folder.' % (plug_ver, filename)
|
||||||
|
except IOError:
|
||||||
|
print 'K4MobiDeDRM v%s: Error reading kindle info/kinf files from config directory.' % plug_ver
|
||||||
|
pass
|
||||||
|
|
||||||
|
mobi = True
|
||||||
|
magic3 = file(path_to_ebook,'rb').read(3)
|
||||||
|
if magic3 == 'TPZ':
|
||||||
|
mobi = False
|
||||||
|
|
||||||
|
bookname = os.path.splitext(os.path.basename(path_to_ebook))[0]
|
||||||
|
|
||||||
|
if mobi:
|
||||||
|
mb = mobidedrm.MobiBook(path_to_ebook)
|
||||||
|
else:
|
||||||
|
mb = topazextract.TopazBook(path_to_ebook)
|
||||||
|
|
||||||
|
title = mb.getBookTitle()
|
||||||
|
md1, md2 = mb.getPIDMetaInfo()
|
||||||
|
pidlst = kgenpids.getPidList(md1, md2, k4, pids, serials, kInfoFiles)
|
||||||
|
|
||||||
|
try:
|
||||||
|
mb.processBook(pidlst)
|
||||||
|
|
||||||
|
except mobidedrm.DrmException, e:
|
||||||
|
#if you reached here then no luck raise and exception
|
||||||
|
if is_ok_to_use_qt():
|
||||||
|
from PyQt4.Qt import QMessageBox
|
||||||
|
d = QMessageBox(QMessageBox.Warning, "K4MobiDeDRM v%s Plugin" % plug_ver, "Error: " + str(e) + "... %s\n" % path_to_ebook)
|
||||||
|
d.show()
|
||||||
|
d.raise_()
|
||||||
|
d.exec_()
|
||||||
|
raise Exception("K4MobiDeDRM plugin v%s Error: %s" % (plug_ver, str(e)))
|
||||||
|
except topazextract.TpzDRMError, e:
|
||||||
|
#if you reached here then no luck raise and exception
|
||||||
|
if is_ok_to_use_qt():
|
||||||
|
from PyQt4.Qt import QMessageBox
|
||||||
|
d = QMessageBox(QMessageBox.Warning, "K4MobiDeDRM v%s Plugin" % plug_ver, "Error: " + str(e) + "... %s\n" % path_to_ebook)
|
||||||
|
d.show()
|
||||||
|
d.raise_()
|
||||||
|
d.exec_()
|
||||||
|
raise Exception("K4MobiDeDRM plugin v%s Error: %s" % (plug_ver, str(e)))
|
||||||
|
|
||||||
|
print "Success!"
|
||||||
|
if mobi:
|
||||||
|
if mb.getPrintReplica():
|
||||||
|
of = self.temporary_file(bookname+'.azw4')
|
||||||
|
print 'K4MobiDeDRM v%s: Print Replica format detected.' % plug_ver
|
||||||
|
elif mb.getMobiVersion() >= 8:
|
||||||
|
print 'K4MobiDeDRM v%s: Stand-alone KF8 format detected.' % plug_ver
|
||||||
|
of = self.temporary_file(bookname+'.azw3')
|
||||||
|
else:
|
||||||
|
                of = self.temporary_file(bookname+'.mobi')
                mb.getMobiFile(of.name)
            else:
                of = self.temporary_file(bookname+'.htmlz')
                mb.getHTMLZip(of.name)
        mb.cleanup()
        return of.name

    def WINEgetPIDs(self, infile):
        import subprocess
        from subprocess import Popen, PIPE, STDOUT
        import subasyncio
        from subasyncio import Process

        print " Getting PIDs from WINE"

        outfile = os.path.join(self.alfdir, 'winepids.txt')

        cmdline = 'wine python.exe ' \
            + '"'+self.alfdir + '/getk4pcpids.py"' \
            + ' "' + infile + '"' \
            + ' "' + outfile + '"'

        env = os.environ

        print "My wine_prefix from tweaks is ", self.wine_prefix

        if ("WINEPREFIX" in env):
            print "Using WINEPREFIX from the environment: ", env["WINEPREFIX"]
        elif (self.wine_prefix is not None):
            env['WINEPREFIX'] = self.wine_prefix
            print "Using WINEPREFIX from tweaks: ", self.wine_prefix
        else:
            print "No wine prefix used"

        print cmdline

        cmdline = cmdline.encode(sys.getfilesystemencoding())
        p2 = Process(cmdline, shell=True, bufsize=1, stdin=None, stdout=sys.stdout, stderr=STDOUT, close_fds=False)
        result = p2.wait("wait")
        print "Conversion returned ", result

        WINEpids = []
        customvalues = open(outfile, 'r').readline().split(',')
        for customvalue in customvalues:
            customvalue = str(customvalue)
            customvalue = customvalue.strip()
            if len(customvalue) == 10 or len(customvalue) == 8:
                WINEpids.append(customvalue)
            else:
                print "'%s' is not a valid PID." % customvalue
        return WINEpids

    def is_customizable(self):
        # return true to allow customization via the Plugin->Preferences.
        return True

    def config_widget(self):
        # It is important to put this import statement here rather than at the
        # top of the module as importing the config class will also cause the
        # GUI libraries to be loaded, which we do not want when using calibre
        # from the command line
        from calibre_plugins.k4mobidedrm.config import ConfigWidget
        return ConfigWidget()

    def config(self):
        from calibre_plugins.k4mobidedrm.config import prefs

        self.pids_string = prefs['pids']
        self.serials_string = prefs['serials']
        self.wine_prefix = prefs['WINEPREFIX']

    def save_settings(self, config_widget):
        '''
        Save the settings specified by the user with config_widget.
        '''
        config_widget.save_settings()
        self.config()

    def load_resources(self, names):
        ans = {}
        with ZipFile(self.plugin_path, 'r') as zf:
            for candidate in zf.namelist():
                if candidate in names:
                    ans[candidate] = zf.read(candidate)
        return ans
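The PID parsing in `WINEgetPIDs` keeps only 8- or 10-character tokens from a comma-separated line. That filtering can be exercised in isolation with a minimal sketch (the function name and sample data below are mine, for illustration only):

```python
# Minimal sketch of the PID filtering done by WINEgetPIDs above:
# split on commas, strip whitespace, keep tokens of length 8 or 10.
def filter_pids(line):
    pids = []
    for token in line.split(','):
        token = token.strip()
        if len(token) in (8, 10):
            pids.append(token)
    return pids

print(filter_pids("A1B2C3D4E5, short, 12345678"))  # ['A1B2C3D4E5', '12345678']
```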
568
Calibre_Plugins/K4MobiDeDRM_plugin/aescbc.py
Normal file
@@ -0,0 +1,568 @@
#! /usr/bin/env python

"""
    Routines for doing AES CBC in one file

    Modified by some_updates to extract
    and combine only those parts needed for AES CBC
    into one simple to add python file

    Original Version
    Copyright (c) 2002 by Paul A. Lambert
    Under:
    CryptoPy Artistic License Version 1.0
    See the wonderful pure python package cryptopy-1.2.5
    and read its LICENSE.txt for complete license details.
"""

class CryptoError(Exception):
    """ Base class for crypto exceptions """
    def __init__(self,errorMessage='Error!'):
        self.message = errorMessage
    def __str__(self):
        return self.message

class InitCryptoError(CryptoError):
    """ Crypto errors during algorithm initialization """
class BadKeySizeError(InitCryptoError):
    """ Bad key size error """
class EncryptError(CryptoError):
    """ Error in encryption processing """
class DecryptError(CryptoError):
    """ Error in decryption processing """
class DecryptNotBlockAlignedError(DecryptError):
    """ Error in decryption processing """

def xorS(a,b):
    """ XOR two strings of equal length """
    assert len(a)==len(b)
    x = []
    for i in range(len(a)):
        x.append( chr(ord(a[i])^ord(b[i])))
    return ''.join(x)

def xor(a,b):
    """ XOR two strings, truncating to the shorter length """
    x = []
    for i in range(min(len(a),len(b))):
        x.append( chr(ord(a[i])^ord(b[i])))
    return ''.join(x)

"""
    Base 'BlockCipher' and Pad classes for cipher instances.
    BlockCipher supports automatic padding and type conversion. The BlockCipher
    class was written to make the actual algorithm code more readable and
    not for performance.
"""

class BlockCipher:
    """ Block ciphers """
    def __init__(self):
        self.reset()

    def reset(self):
        self.resetEncrypt()
        self.resetDecrypt()
    def resetEncrypt(self):
        self.encryptBlockCount = 0
        self.bytesToEncrypt = ''
    def resetDecrypt(self):
        self.decryptBlockCount = 0
        self.bytesToDecrypt = ''

    def encrypt(self, plainText, more = None):
        """ Encrypt a string and return a binary string """
        self.bytesToEncrypt += plainText  # append plainText to any bytes from prior encrypt
        numBlocks, numExtraBytes = divmod(len(self.bytesToEncrypt), self.blockSize)
        cipherText = ''
        for i in range(numBlocks):
            bStart = i*self.blockSize
            ctBlock = self.encryptBlock(self.bytesToEncrypt[bStart:bStart+self.blockSize])
            self.encryptBlockCount += 1
            cipherText += ctBlock
        if numExtraBytes > 0:        # save any bytes that are not block aligned
            self.bytesToEncrypt = self.bytesToEncrypt[-numExtraBytes:]
        else:
            self.bytesToEncrypt = ''

        if more == None:   # no more data expected from caller
            finalBytes = self.padding.addPad(self.bytesToEncrypt,self.blockSize)
            if len(finalBytes) > 0:
                ctBlock = self.encryptBlock(finalBytes)
                self.encryptBlockCount += 1
                cipherText += ctBlock
            self.resetEncrypt()
        return cipherText

    def decrypt(self, cipherText, more = None):
        """ Decrypt a string and return a string """
        self.bytesToDecrypt += cipherText  # append to any bytes from prior decrypt

        numBlocks, numExtraBytes = divmod(len(self.bytesToDecrypt), self.blockSize)
        if more == None:  # no more calls to decrypt, should have all the data
            if numExtraBytes != 0:
                raise DecryptNotBlockAlignedError, 'Data not block aligned on decrypt'

        # hold back some bytes in case last decrypt has zero len
        if (more != None) and (numExtraBytes == 0) and (numBlocks >0) :
            numBlocks -= 1
            numExtraBytes = self.blockSize

        plainText = ''
        for i in range(numBlocks):
            bStart = i*self.blockSize
            ptBlock = self.decryptBlock(self.bytesToDecrypt[bStart : bStart+self.blockSize])
            self.decryptBlockCount += 1
            plainText += ptBlock

        if numExtraBytes > 0:        # save any bytes that are not block aligned
            self.bytesToDecrypt = self.bytesToDecrypt[-numExtraBytes:]  # decrypt buffers its own stream (was bytesToEncrypt)
        else:
            self.bytesToDecrypt = ''

        if more == None:         # last decrypt, remove padding
            plainText = self.padding.removePad(plainText, self.blockSize)
            self.resetDecrypt()
        return plainText


class Pad:
    def __init__(self):
        pass  # eventually could put in calculation of min and max size extension

class padWithPadLen(Pad):
    """ Pad a binary string with the length of the padding """

    def addPad(self, extraBytes, blockSize):
        """ Add padding to a binary string to make it an even multiple
            of the block size """
        blocks, numExtraBytes = divmod(len(extraBytes), blockSize)
        padLength = blockSize - numExtraBytes
        return extraBytes + padLength*chr(padLength)

    def removePad(self, paddedBinaryString, blockSize):
        """ Remove padding from a binary string """
        if not(0<len(paddedBinaryString)):
            raise DecryptNotBlockAlignedError, 'Expected More Data'
        return paddedBinaryString[:-ord(paddedBinaryString[-1])]

class noPadding(Pad):
    """ No padding. Use this to get ECB behavior from encrypt/decrypt """

    def addPad(self, extraBytes, blockSize):
        """ Add no padding """
        return extraBytes

    def removePad(self, paddedBinaryString, blockSize):
        """ Remove no padding """
        return paddedBinaryString
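The `padWithPadLen` scheme above always appends between 1 and blockSize bytes, each byte holding the pad length itself, so the last byte of a padded message says how much to strip. A self-contained sketch of the same scheme over Python 3 `bytes` (function names are mine, not from the file):

```python
# Pad-with-pad-length, as in padWithPadLen above: append 1..block_size
# bytes, each equal to the pad length, so the last byte tells removePad
# how many bytes to drop.
def add_pad(data, block_size):
    pad_len = block_size - (len(data) % block_size)
    return data + bytes([pad_len]) * pad_len

def remove_pad(padded):
    return padded[:-padded[-1]]

padded = add_pad(b"hello", 16)
print(len(padded))         # 16, a whole block
print(remove_pad(padded))  # b'hello'
```

Note that, exactly like `addPad`, a message already a multiple of the block size gains a full extra block of padding, which keeps `removePad` unambiguous.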
"""
    Rijndael encryption algorithm
    This byte oriented implementation is intended to closely
    match FIPS specification for readability. It is not implemented
    for performance.
"""

class Rijndael(BlockCipher):
    """ Rijndael encryption algorithm """
    def __init__(self, key = None, padding = padWithPadLen(), keySize=16, blockSize=16 ):
        self.name       = 'RIJNDAEL'
        self.keySize    = keySize
        self.strength   = keySize*8
        self.blockSize  = blockSize  # blockSize is in bytes
        self.padding    = padding    # change default to noPadding() to get normal ECB behavior

        assert( keySize%4==0 and NrTable[4].has_key(keySize/4)),'key size must be 16,20,24,28 or 32 bytes'
        assert( blockSize%4==0 and NrTable.has_key(blockSize/4)), 'block size must be 16,20,24,28 or 32 bytes'

        self.Nb = self.blockSize/4          # Nb is number of columns of 32 bit words
        self.Nk = keySize/4                 # Nk is the key length in 32-bit words
        self.Nr = NrTable[self.Nb][self.Nk] # The number of rounds (Nr) is a function of
                                            # the block (Nb) and key (Nk) sizes.
        if key != None:
            self.setKey(key)

    def setKey(self, key):
        """ Set a key and generate the expanded key """
        assert( len(key) == (self.Nk*4) ), 'Key length must be same as keySize parameter'
        self.__expandedKey = keyExpansion(self, key)
        self.reset()  # BlockCipher.reset()

    def encryptBlock(self, plainTextBlock):
        """ Encrypt a block, plainTextBlock must be an array of bytes [Nb by 4] """
        self.state = self._toBlock(plainTextBlock)
        AddRoundKey(self, self.__expandedKey[0:self.Nb])
        for round in range(1,self.Nr):  # for round = 1 step 1 to Nr
            SubBytes(self)
            ShiftRows(self)
            MixColumns(self)
            AddRoundKey(self, self.__expandedKey[round*self.Nb:(round+1)*self.Nb])
        SubBytes(self)
        ShiftRows(self)
        AddRoundKey(self, self.__expandedKey[self.Nr*self.Nb:(self.Nr+1)*self.Nb])
        return self._toBString(self.state)

    def decryptBlock(self, encryptedBlock):
        """ decrypt a block (array of bytes) """
        self.state = self._toBlock(encryptedBlock)
        AddRoundKey(self, self.__expandedKey[self.Nr*self.Nb:(self.Nr+1)*self.Nb])
        for round in range(self.Nr-1,0,-1):
            InvShiftRows(self)
            InvSubBytes(self)
            AddRoundKey(self, self.__expandedKey[round*self.Nb:(round+1)*self.Nb])
            InvMixColumns(self)
        InvShiftRows(self)
        InvSubBytes(self)
        AddRoundKey(self, self.__expandedKey[0:self.Nb])
        return self._toBString(self.state)

    def _toBlock(self, bs):
        """ Convert binary string to array of bytes, state[col][row]"""
        assert ( len(bs) == 4*self.Nb ), 'Rijndael blocks must be of size blockSize'
        return [[ord(bs[4*i]),ord(bs[4*i+1]),ord(bs[4*i+2]),ord(bs[4*i+3])] for i in range(self.Nb)]

    def _toBString(self, block):
        """ Convert block (array of bytes) to binary string """
        l = []
        for col in block:
            for rowElement in col:
                l.append(chr(rowElement))
        return ''.join(l)
#-------------------------------------
""" Number of rounds Nr = NrTable[Nb][Nk]

        Nb  Nk=4   Nk=5   Nk=6   Nk=7   Nk=8
        -------------------------------------   """
NrTable = {4: {4:10,  5:11,  6:12,  7:13,  8:14},
           5: {4:11,  5:11,  6:12,  7:13,  8:14},
           6: {4:12,  5:12,  6:12,  7:13,  8:14},
           7: {4:13,  5:13,  6:13,  7:13,  8:14},
           8: {4:14,  5:14,  6:14,  7:14,  8:14}}
#-------------------------------------
def keyExpansion(algInstance, keyString):
    """ Expand a string of size keySize into a larger array """
    Nk, Nb, Nr = algInstance.Nk, algInstance.Nb, algInstance.Nr  # for readability
    key = [ord(byte) for byte in keyString]  # convert string to list
    w = [[key[4*i],key[4*i+1],key[4*i+2],key[4*i+3]] for i in range(Nk)]
    for i in range(Nk,Nb*(Nr+1)):
        temp = w[i-1]  # a four byte column
        if (i%Nk) == 0 :
            temp = temp[1:]+[temp[0]]  # RotWord(temp)
            temp = [ Sbox[byte] for byte in temp ]
            temp[0] ^= Rcon[i/Nk]
        elif Nk > 6 and i%Nk == 4 :
            temp = [ Sbox[byte] for byte in temp ]  # SubWord(temp)
        w.append( [ w[i-Nk][byte]^temp[byte] for byte in range(4) ] )
    return w

Rcon = (0,0x01,0x02,0x04,0x08,0x10,0x20,0x40,0x80,0x1b,0x36, # note extra '0' !!!
        0x6c,0xd8,0xab,0x4d,0x9a,0x2f,0x5e,0xbc,0x63,0xc6,
        0x97,0x35,0x6a,0xd4,0xb3,0x7d,0xfa,0xef,0xc5,0x91)

#-------------------------------------
def AddRoundKey(algInstance, keyBlock):
    """ XOR the algorithm state with a block of key material """
    for column in range(algInstance.Nb):
        for row in range(4):
            algInstance.state[column][row] ^= keyBlock[column][row]
#-------------------------------------

def SubBytes(algInstance):
    for column in range(algInstance.Nb):
        for row in range(4):
            algInstance.state[column][row] = Sbox[algInstance.state[column][row]]

def InvSubBytes(algInstance):
    for column in range(algInstance.Nb):
        for row in range(4):
            algInstance.state[column][row] = InvSbox[algInstance.state[column][row]]

Sbox = (0x63,0x7c,0x77,0x7b,0xf2,0x6b,0x6f,0xc5,
        0x30,0x01,0x67,0x2b,0xfe,0xd7,0xab,0x76,
        0xca,0x82,0xc9,0x7d,0xfa,0x59,0x47,0xf0,
        0xad,0xd4,0xa2,0xaf,0x9c,0xa4,0x72,0xc0,
        0xb7,0xfd,0x93,0x26,0x36,0x3f,0xf7,0xcc,
        0x34,0xa5,0xe5,0xf1,0x71,0xd8,0x31,0x15,
        0x04,0xc7,0x23,0xc3,0x18,0x96,0x05,0x9a,
        0x07,0x12,0x80,0xe2,0xeb,0x27,0xb2,0x75,
        0x09,0x83,0x2c,0x1a,0x1b,0x6e,0x5a,0xa0,
        0x52,0x3b,0xd6,0xb3,0x29,0xe3,0x2f,0x84,
        0x53,0xd1,0x00,0xed,0x20,0xfc,0xb1,0x5b,
        0x6a,0xcb,0xbe,0x39,0x4a,0x4c,0x58,0xcf,
        0xd0,0xef,0xaa,0xfb,0x43,0x4d,0x33,0x85,
        0x45,0xf9,0x02,0x7f,0x50,0x3c,0x9f,0xa8,
        0x51,0xa3,0x40,0x8f,0x92,0x9d,0x38,0xf5,
        0xbc,0xb6,0xda,0x21,0x10,0xff,0xf3,0xd2,
        0xcd,0x0c,0x13,0xec,0x5f,0x97,0x44,0x17,
        0xc4,0xa7,0x7e,0x3d,0x64,0x5d,0x19,0x73,
        0x60,0x81,0x4f,0xdc,0x22,0x2a,0x90,0x88,
        0x46,0xee,0xb8,0x14,0xde,0x5e,0x0b,0xdb,
        0xe0,0x32,0x3a,0x0a,0x49,0x06,0x24,0x5c,
        0xc2,0xd3,0xac,0x62,0x91,0x95,0xe4,0x79,
        0xe7,0xc8,0x37,0x6d,0x8d,0xd5,0x4e,0xa9,
        0x6c,0x56,0xf4,0xea,0x65,0x7a,0xae,0x08,
        0xba,0x78,0x25,0x2e,0x1c,0xa6,0xb4,0xc6,
        0xe8,0xdd,0x74,0x1f,0x4b,0xbd,0x8b,0x8a,
        0x70,0x3e,0xb5,0x66,0x48,0x03,0xf6,0x0e,
        0x61,0x35,0x57,0xb9,0x86,0xc1,0x1d,0x9e,
        0xe1,0xf8,0x98,0x11,0x69,0xd9,0x8e,0x94,
        0x9b,0x1e,0x87,0xe9,0xce,0x55,0x28,0xdf,
        0x8c,0xa1,0x89,0x0d,0xbf,0xe6,0x42,0x68,
        0x41,0x99,0x2d,0x0f,0xb0,0x54,0xbb,0x16)

InvSbox = (0x52,0x09,0x6a,0xd5,0x30,0x36,0xa5,0x38,
           0xbf,0x40,0xa3,0x9e,0x81,0xf3,0xd7,0xfb,
           0x7c,0xe3,0x39,0x82,0x9b,0x2f,0xff,0x87,
           0x34,0x8e,0x43,0x44,0xc4,0xde,0xe9,0xcb,
           0x54,0x7b,0x94,0x32,0xa6,0xc2,0x23,0x3d,
           0xee,0x4c,0x95,0x0b,0x42,0xfa,0xc3,0x4e,
           0x08,0x2e,0xa1,0x66,0x28,0xd9,0x24,0xb2,
           0x76,0x5b,0xa2,0x49,0x6d,0x8b,0xd1,0x25,
           0x72,0xf8,0xf6,0x64,0x86,0x68,0x98,0x16,
           0xd4,0xa4,0x5c,0xcc,0x5d,0x65,0xb6,0x92,
           0x6c,0x70,0x48,0x50,0xfd,0xed,0xb9,0xda,
           0x5e,0x15,0x46,0x57,0xa7,0x8d,0x9d,0x84,
           0x90,0xd8,0xab,0x00,0x8c,0xbc,0xd3,0x0a,
           0xf7,0xe4,0x58,0x05,0xb8,0xb3,0x45,0x06,
           0xd0,0x2c,0x1e,0x8f,0xca,0x3f,0x0f,0x02,
           0xc1,0xaf,0xbd,0x03,0x01,0x13,0x8a,0x6b,
           0x3a,0x91,0x11,0x41,0x4f,0x67,0xdc,0xea,
           0x97,0xf2,0xcf,0xce,0xf0,0xb4,0xe6,0x73,
           0x96,0xac,0x74,0x22,0xe7,0xad,0x35,0x85,
           0xe2,0xf9,0x37,0xe8,0x1c,0x75,0xdf,0x6e,
           0x47,0xf1,0x1a,0x71,0x1d,0x29,0xc5,0x89,
           0x6f,0xb7,0x62,0x0e,0xaa,0x18,0xbe,0x1b,
           0xfc,0x56,0x3e,0x4b,0xc6,0xd2,0x79,0x20,
           0x9a,0xdb,0xc0,0xfe,0x78,0xcd,0x5a,0xf4,
           0x1f,0xdd,0xa8,0x33,0x88,0x07,0xc7,0x31,
           0xb1,0x12,0x10,0x59,0x27,0x80,0xec,0x5f,
           0x60,0x51,0x7f,0xa9,0x19,0xb5,0x4a,0x0d,
           0x2d,0xe5,0x7a,0x9f,0x93,0xc9,0x9c,0xef,
           0xa0,0xe0,0x3b,0x4d,0xae,0x2a,0xf5,0xb0,
           0xc8,0xeb,0xbb,0x3c,0x83,0x53,0x99,0x61,
           0x17,0x2b,0x04,0x7e,0xba,0x77,0xd6,0x26,
           0xe1,0x69,0x14,0x63,0x55,0x21,0x0c,0x7d)

#-------------------------------------
""" For each block size (Nb), the ShiftRow operation shifts row i
    by the amount Ci.  Note that row 0 is not shifted.
           Nb      C1  C2  C3
         -------------------  """
shiftOffset = { 4 : ( 0, 1, 2, 3),
                5 : ( 0, 1, 2, 3),
                6 : ( 0, 1, 2, 3),
                7 : ( 0, 1, 2, 4),
                8 : ( 0, 1, 3, 4) }
def ShiftRows(algInstance):
    tmp = [0]*algInstance.Nb  # list of size Nb
    for r in range(1,4):      # row 0 remains unchanged and can be skipped
        for c in range(algInstance.Nb):
            tmp[c] = algInstance.state[(c+shiftOffset[algInstance.Nb][r]) % algInstance.Nb][r]
        for c in range(algInstance.Nb):
            algInstance.state[c][r] = tmp[c]
def InvShiftRows(algInstance):
    tmp = [0]*algInstance.Nb  # list of size Nb
    for r in range(1,4):      # row 0 remains unchanged and can be skipped
        for c in range(algInstance.Nb):
            tmp[c] = algInstance.state[(c+algInstance.Nb-shiftOffset[algInstance.Nb][r]) % algInstance.Nb][r]
        for c in range(algInstance.Nb):
            algInstance.state[c][r] = tmp[c]
#-------------------------------------
def MixColumns(a):
    Sprime = [0,0,0,0]
    for j in range(a.Nb):  # for each column
        Sprime[0] = mul(2,a.state[j][0])^mul(3,a.state[j][1])^mul(1,a.state[j][2])^mul(1,a.state[j][3])
        Sprime[1] = mul(1,a.state[j][0])^mul(2,a.state[j][1])^mul(3,a.state[j][2])^mul(1,a.state[j][3])
        Sprime[2] = mul(1,a.state[j][0])^mul(1,a.state[j][1])^mul(2,a.state[j][2])^mul(3,a.state[j][3])
        Sprime[3] = mul(3,a.state[j][0])^mul(1,a.state[j][1])^mul(1,a.state[j][2])^mul(2,a.state[j][3])
        for i in range(4):
            a.state[j][i] = Sprime[i]

def InvMixColumns(a):
    """ Mix the four bytes of every column in a linear way
        This is the opposite operation of MixColumns """
    Sprime = [0,0,0,0]
    for j in range(a.Nb):  # for each column
        Sprime[0] = mul(0x0E,a.state[j][0])^mul(0x0B,a.state[j][1])^mul(0x0D,a.state[j][2])^mul(0x09,a.state[j][3])
        Sprime[1] = mul(0x09,a.state[j][0])^mul(0x0E,a.state[j][1])^mul(0x0B,a.state[j][2])^mul(0x0D,a.state[j][3])
        Sprime[2] = mul(0x0D,a.state[j][0])^mul(0x09,a.state[j][1])^mul(0x0E,a.state[j][2])^mul(0x0B,a.state[j][3])
        Sprime[3] = mul(0x0B,a.state[j][0])^mul(0x0D,a.state[j][1])^mul(0x09,a.state[j][2])^mul(0x0E,a.state[j][3])
        for i in range(4):
            a.state[j][i] = Sprime[i]

#-------------------------------------
def mul(a, b):
    """ Multiply two elements of GF(2^m)
        needed for MixColumns and InvMixColumns """
    if (a !=0 and b!=0):
        return Alogtable[(Logtable[a] + Logtable[b])%255]
    else:
        return 0

Logtable = (  0,   0,  25,   1,  50,   2,  26, 198,  75, 199,  27, 104,  51, 238, 223,   3,
            100,   4, 224,  14,  52, 141, 129, 239,  76, 113,   8, 200, 248, 105,  28, 193,
            125, 194,  29, 181, 249, 185,  39, 106,  77, 228, 166, 114, 154, 201,   9, 120,
            101,  47, 138,   5,  33,  15, 225,  36,  18, 240, 130,  69,  53, 147, 218, 142,
            150, 143, 219, 189,  54, 208, 206, 148,  19,  92, 210, 241,  64,  70, 131,  56,
            102, 221, 253,  48, 191,   6, 139,  98, 179,  37, 226, 152,  34, 136, 145,  16,
            126, 110,  72, 195, 163, 182,  30,  66,  58, 107,  40,  84, 250, 133,  61, 186,
             43, 121,  10,  21, 155, 159,  94, 202,  78, 212, 172, 229, 243, 115, 167,  87,
            175,  88, 168,  80, 244, 234, 214, 116,  79, 174, 233, 213, 231, 230, 173, 232,
             44, 215, 117, 122, 235,  22,  11, 245,  89, 203,  95, 176, 156, 169,  81, 160,
            127,  12, 246, 111,  23, 196,  73, 236, 216,  67,  31,  45, 164, 118, 123, 183,
            204, 187,  62,  90, 251,  96, 177, 134,  59,  82, 161, 108, 170,  85,  41, 157,
            151, 178, 135, 144,  97, 190, 220, 252, 188, 149, 207, 205,  55,  63,  91, 209,
             83,  57, 132,  60,  65, 162, 109,  71,  20,  42, 158,  93,  86, 242, 211, 171,
             68,  17, 146, 217,  35,  32,  46, 137, 180, 124, 184,  38, 119, 153, 227, 165,
            103,  74, 237, 222, 197,  49, 254,  24,  13,  99, 140, 128, 192, 247, 112,   7)

Alogtable= (  1,   3,   5,  15,  17,  51,  85, 255,  26,  46, 114, 150, 161, 248,  19,  53,
             95, 225,  56,  72, 216, 115, 149, 164, 247,   2,   6,  10,  30,  34, 102, 170,
            229,  52,  92, 228,  55,  89, 235,  38, 106, 190, 217, 112, 144, 171, 230,  49,
             83, 245,   4,  12,  20,  60,  68, 204,  79, 209, 104, 184, 211, 110, 178, 205,
             76, 212, 103, 169, 224,  59,  77, 215,  98, 166, 241,   8,  24,  40, 120, 136,
            131, 158, 185, 208, 107, 189, 220, 127, 129, 152, 179, 206,  73, 219, 118, 154,
            181, 196,  87, 249,  16,  48,  80, 240,  11,  29,  39, 105, 187, 214,  97, 163,
            254,  25,  43, 125, 135, 146, 173, 236,  47, 113, 147, 174, 233,  32,  96, 160,
            251,  22,  58,  78, 210, 109, 183, 194,  93, 231,  50,  86, 250,  21,  63,  65,
            195,  94, 226,  61,  71, 201,  64, 192,  91, 237,  44, 116, 156, 191, 218, 117,
            159, 186, 213, 100, 172, 239,  42, 126, 130, 157, 188, 223, 122, 142, 137, 128,
            155, 182, 193,  88, 232,  35, 101, 175, 234,  37, 111, 177, 200,  67, 197,  84,
            252,  31,  33,  99, 165, 244,   7,   9,  27,  45, 119, 153, 176, 203,  70, 202,
             69, 207,  74, 222, 121, 139, 134, 145, 168, 227,  62,  66, 198,  81, 243,  14,
             18,  54,  90, 238,  41, 123, 141, 140, 143, 138, 133, 148, 167, 242,  13,  23,
             57,  75, 221, 124, 132, 151, 162, 253,  28,  36, 108, 180, 199,  82, 246,   1)
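The `mul` routine above multiplies in GF(2^8) via the log/antilog tables. The same products can be computed directly with shift-and-reduce ("peasant") multiplication against the AES polynomial x^8 + x^4 + x^3 + x + 1, which makes a handy cross-check (a standalone sketch; `gmul` is my name, not from the file):

```python
# GF(2^8) multiplication by shift-and-reduce against the AES modulus
# 0x11b; this computes the same products as the Logtable/Alogtable
# lookup in mul() above, without the tables.
def gmul(a, b):
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:  # reduce modulo x^8 + x^4 + x^3 + x + 1
            a ^= 0x11b
    return result

print(hex(gmul(0x57, 0x83)))  # 0xc1, the worked example from FIPS 197
```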
"""
    AES Encryption Algorithm
    The AES algorithm is just the Rijndael algorithm restricted to the default
    blockSize of 128 bits.
"""

class AES(Rijndael):
    """ The AES algorithm is the Rijndael block cipher restricted to block
        sizes of 128 bits and key sizes of 128, 192 or 256 bits
    """
    def __init__(self, key = None, padding = padWithPadLen(), keySize=16):
        """ Initialize AES, keySize is in bytes """
        if not (keySize == 16 or keySize == 24 or keySize == 32) :
            raise BadKeySizeError, 'Illegal AES key size, must be 16, 24, or 32 bytes'

        Rijndael.__init__( self, key, padding=padding, keySize=keySize, blockSize=16 )

        self.name = 'AES'


"""
    CBC mode of encryption for block ciphers.
    This algorithm mode wraps any BlockCipher to make a
    Cipher Block Chaining mode.
"""
from random import Random  # should change to crypto.random!!!


class CBC(BlockCipher):
    """ The CBC class wraps block ciphers to make cipher block chaining (CBC) mode
        algorithms.  The initialization vector (IV) is automatic if set to None.  Padding
        is also automatic based on the Pad class used to initialize the algorithm
    """
    def __init__(self, blockCipherInstance, padding = padWithPadLen()):
        """ CBC algorithms are created by initializing with a BlockCipher instance """
        self.baseCipher = blockCipherInstance
        self.name       = self.baseCipher.name + '_CBC'
        self.blockSize  = self.baseCipher.blockSize
        self.keySize    = self.baseCipher.keySize
        self.padding    = padding
        self.baseCipher.padding = noPadding()  # baseCipher should NOT pad!!
        self.r          = Random()             # for IV generation, currently uses
                                               # mediocre standard distro version  <----------------
        import time
        newSeed = time.ctime()+str(self.r)     # seed with instance location
        self.r.seed(newSeed)                   # to make unique
        self.reset()

    def setKey(self, key):
        self.baseCipher.setKey(key)

    # Overload to reset both CBC state and the wrapped baseCipher
    def resetEncrypt(self):
        BlockCipher.resetEncrypt(self)  # reset CBC encrypt state (super class)
        self.baseCipher.resetEncrypt()  # reset base cipher encrypt state

    def resetDecrypt(self):
        BlockCipher.resetDecrypt(self)  # reset CBC state (super class)
        self.baseCipher.resetDecrypt()  # reset base cipher decrypt state

    def encrypt(self, plainText, iv=None, more=None):
        """ CBC encryption - overloads baseCipher to allow optional explicit IV
            when iv=None, iv is auto generated!
        """
        if self.encryptBlockCount == 0:
            self.iv = iv
        else:
            assert(iv==None), 'IV used only on first call to encrypt'

        return BlockCipher.encrypt(self,plainText, more=more)

    def decrypt(self, cipherText, iv=None, more=None):
        """ CBC decryption - overloads baseCipher to allow optional explicit IV
            when iv=None, iv is auto generated!
        """
        if self.decryptBlockCount == 0:
            self.iv = iv
        else:
            assert(iv==None), 'IV used only on first call to decrypt'

        return BlockCipher.decrypt(self, cipherText, more=more)

    def encryptBlock(self, plainTextBlock):
        """ CBC block encryption, IV is set with 'encrypt' """
        auto_IV = ''
        if self.encryptBlockCount == 0:
            if self.iv == None:
                # generate IV and use
                self.iv = ''.join([chr(self.r.randrange(256)) for i in range(self.blockSize)])
                self.prior_encr_CT_block = self.iv
                auto_IV = self.prior_encr_CT_block  # prepend IV if it's automatic
            else:                                   # application provided IV
                assert(len(self.iv) == self.blockSize ),'IV must be same length as block'
                self.prior_encr_CT_block = self.iv
        """ encrypt the prior CT XORed with the PT """
        ct = self.baseCipher.encryptBlock( xor(self.prior_encr_CT_block, plainTextBlock) )
        self.prior_encr_CT_block = ct
        return auto_IV+ct

    def decryptBlock(self, encryptedBlock):
        """ Decrypt a single block """

        if self.decryptBlockCount == 0:  # first call, process IV
            if self.iv == None:          # auto decrypt IV?
                self.prior_CT_block = encryptedBlock
                return ''
            else:
                assert(len(self.iv)==self.blockSize),"Bad IV size on CBC decryption"
                self.prior_CT_block = self.iv

        dct = self.baseCipher.decryptBlock(encryptedBlock)
        """ XOR the prior decrypted CT with the prior CT """
        dct_XOR_priorCT = xor( self.prior_CT_block, dct )

        self.prior_CT_block = encryptedBlock

        return dct_XOR_priorCT


"""
    AES_CBC Encryption Algorithm
"""

class AES_CBC(CBC):
    """ AES encryption in CBC feedback mode """
    def __init__(self, key=None, padding=padWithPadLen(), keySize=16):
        CBC.__init__( self, AES(key, noPadding(), keySize), padding)
        self.name = 'AES_CBC'
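The CBC wrapper above chains blocks as c_i = E(p_i XOR c_{i-1}), with the IV standing in for c_0. A toy illustration of that chaining rule (the "block cipher" here is a stand-in XOR with a fixed key byte, not AES, and one-byte blocks keep it tiny; all names are mine):

```python
# CBC chaining with a stand-in one-byte "block cipher" (XOR with a key),
# purely to show the c_i = E(p_i ^ c_{i-1}) recurrence the CBC class uses.
KEY = 0x5A

def E(b):  # toy encrypt: XOR with key
    return bytes([b[0] ^ KEY])

def D(b):  # toy decrypt (XOR is its own inverse)
    return bytes([b[0] ^ KEY])

def cbc_encrypt(plain, iv):
    prev, out = iv, b""
    for i in range(len(plain)):
        ct = E(bytes([plain[i] ^ prev[0]]))  # chain previous ciphertext in
        out += ct
        prev = ct
    return out

def cbc_decrypt(cipher, iv):
    prev, out = iv, b""
    for i in range(len(cipher)):
        block = cipher[i:i+1]
        out += bytes([D(block)[0] ^ prev[0]])  # undo the chaining XOR
        prev = block
    return out

iv = b"\x24"
ct = cbc_encrypt(b"abc", iv)
print(cbc_decrypt(ct, iv))  # b'abc'
```

Even with this trivial cipher, identical plaintext blocks encrypt to different ciphertext blocks, which is the point of the chaining.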
BIN
Calibre_Plugins/K4MobiDeDRM_plugin/alfcrypto.dll
Normal file
Binary file not shown.
290
Calibre_Plugins/K4MobiDeDRM_plugin/alfcrypto.py
Normal file
@@ -0,0 +1,290 @@
#! /usr/bin/env python

import sys, os
import hmac
from struct import pack
import hashlib


# interface to needed routines in libalfcrypto
def _load_libalfcrypto():
    import ctypes
    from ctypes import CDLL, byref, POINTER, c_void_p, c_char_p, c_int, c_long, \
        Structure, c_ulong, create_string_buffer, addressof, string_at, cast, sizeof

    pointer_size = ctypes.sizeof(ctypes.c_voidp)
    name_of_lib = None
    if sys.platform.startswith('darwin'):
        name_of_lib = 'libalfcrypto.dylib'
    elif sys.platform.startswith('win'):
        if pointer_size == 4:
            name_of_lib = 'alfcrypto.dll'
        else:
            name_of_lib = 'alfcrypto64.dll'
    else:
        if pointer_size == 4:
            name_of_lib = 'libalfcrypto32.so'
        else:
            name_of_lib = 'libalfcrypto64.so'

    libalfcrypto = sys.path[0] + os.sep + name_of_lib

    if not os.path.isfile(libalfcrypto):
        raise Exception('libalfcrypto not found')

    libalfcrypto = CDLL(libalfcrypto)

    c_char_pp = POINTER(c_char_p)
    c_int_p = POINTER(c_int)


    def F(restype, name, argtypes):
        func = getattr(libalfcrypto, name)
        func.restype = restype
        func.argtypes = argtypes
        return func
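The `F` helper above is the standard ctypes idiom: look up a symbol on the loaded library, then pin its `restype` and `argtypes` so calls marshal correctly. The same pattern applied to the C runtime's `strlen` (a standalone sketch; libalfcrypto itself is not needed, and the extra `lib` parameter is my variation):

```python
import ctypes
import ctypes.util

# Same binding pattern as F() above, applied to the C library's strlen.
# On POSIX, CDLL(None) exposes the process's libc symbols as a fallback.
libc = ctypes.CDLL(ctypes.util.find_library('c') or None)

def F(lib, restype, name, argtypes):
    func = getattr(lib, name)
    func.restype = restype
    func.argtypes = argtypes
    return func

strlen = F(libc, ctypes.c_size_t, 'strlen', [ctypes.c_char_p])
print(strlen(b"alfcrypto"))  # 9
```

Pinning `argtypes` matters: without it, ctypes would happily accept a wrong argument type and corrupt the call.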
||||||
|
# aes cbc decryption
|
||||||
|
#
|
||||||
|
# struct aes_key_st {
|
||||||
|
# unsigned long rd_key[4 *(AES_MAXNR + 1)];
|
||||||
|
# int rounds;
|
||||||
|
# };
|
||||||
|
#
|
||||||
|
# typedef struct aes_key_st AES_KEY;
|
||||||
|
#
|
||||||
|
# int AES_set_decrypt_key(const unsigned char *userKey, const int bits, AES_KEY *key);
|
||||||
|
#
|
||||||
|
#
|
||||||
|
# void AES_cbc_encrypt(const unsigned char *in, unsigned char *out,
|
||||||
|
# const unsigned long length, const AES_KEY *key,
|
||||||
|
# unsigned char *ivec, const int enc);
|
||||||
|
|
||||||
|
AES_MAXNR = 14
|
||||||
|
|
||||||
|
class AES_KEY(Structure):
|
||||||
|
_fields_ = [('rd_key', c_long * (4 * (AES_MAXNR + 1))), ('rounds', c_int)]
|
||||||
|
|
||||||
|
AES_KEY_p = POINTER(AES_KEY)
|
||||||
|
AES_cbc_encrypt = F(None, 'AES_cbc_encrypt',[c_char_p, c_char_p, c_ulong, AES_KEY_p, c_char_p, c_int])
|
||||||
|
AES_set_decrypt_key = F(c_int, 'AES_set_decrypt_key',[c_char_p, c_int, AES_KEY_p])
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
# Pukall 1 Cipher
|
||||||
|
# unsigned char *PC1(const unsigned char *key, unsigned int klen, const unsigned char *src,
|
||||||
|
# unsigned char *dest, unsigned int len, int decryption);
|
||||||
|
|
||||||
|
PC1 = F(c_char_p, 'PC1', [c_char_p, c_ulong, c_char_p, c_char_p, c_ulong, c_ulong])
|
||||||
|
|
||||||
|
# Topaz Encryption
|
||||||
|
# typedef struct _TpzCtx {
|
||||||
|
# unsigned int v[2];
|
||||||
|
# } TpzCtx;
|
||||||
|
#
|
||||||
|
# void topazCryptoInit(TpzCtx *ctx, const unsigned char *key, int klen);
|
||||||
|
# void topazCryptoDecrypt(const TpzCtx *ctx, const unsigned char *in, unsigned char *out, int len);
|
||||||
|
|
||||||
|
class TPZ_CTX(Structure):
|
||||||
|
_fields_ = [('v', c_long * 2)]
|
||||||
|
|
||||||
|
TPZ_CTX_p = POINTER(TPZ_CTX)
|
||||||
|
topazCryptoInit = F(None, 'topazCryptoInit', [TPZ_CTX_p, c_char_p, c_ulong])
|
||||||
|
topazCryptoDecrypt = F(None, 'topazCryptoDecrypt', [TPZ_CTX_p, c_char_p, c_char_p, c_ulong])
|
||||||
|
|
||||||
|
|
||||||
|
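The F() helper above is the standard ctypes binding idiom: fetch the symbol from the loaded library, then pin its return and argument types so that calls are marshalled correctly. A minimal sketch of the same pattern, using the C math library as a stand-in for libalfcrypto (the libm fallback name is an assumption about the host system):

```python
from ctypes import CDLL, c_double
from ctypes.util import find_library

# Stand-in for libalfcrypto: bind a symbol from the C math library instead.
libm = CDLL(find_library('m') or 'libm.so.6')

def F(lib, restype, name, argtypes):
    # same shape as the loader's F() helper, with the library made explicit
    func = getattr(lib, name)
    func.restype = restype
    func.argtypes = argtypes
    return func

sqrt = F(libm, c_double, 'sqrt', [c_double])
print(sqrt(9.0))  # 3.0
```

Without the `restype`/`argtypes` pins, ctypes would default to int returns and guess argument conversions, which silently corrupts double and pointer arguments.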
    class AES_CBC(object):
        def __init__(self):
            self._blocksize = 0
            self._keyctx = None
            self._iv = 0

        def set_decrypt_key(self, userkey, iv):
            self._blocksize = len(userkey)
            if (self._blocksize != 16) and (self._blocksize != 24) and (self._blocksize != 32):
                raise Exception('AES CBC improper key used')
            keyctx = self._keyctx = AES_KEY()
            self._iv = iv
            rv = AES_set_decrypt_key(userkey, len(userkey) * 8, keyctx)
            if rv < 0:
                raise Exception('Failed to initialize AES CBC key')

        def decrypt(self, data):
            out = create_string_buffer(len(data))
            mutable_iv = create_string_buffer(self._iv, len(self._iv))
            rv = AES_cbc_encrypt(data, out, len(data), self._keyctx, mutable_iv, 0)
            if rv == 0:
                raise Exception('AES CBC decryption failed')
            return out.raw

    class Pukall_Cipher(object):
        def __init__(self):
            self.key = None

        def PC1(self, key, src, decryption=True):
            self.key = key
            out = create_string_buffer(len(src))
            de = 0
            if decryption:
                de = 1
            rv = PC1(key, len(key), src, out, len(src), de)
            return out.raw

    class Topaz_Cipher(object):
        def __init__(self):
            self._ctx = None

        def ctx_init(self, key):
            tpz_ctx = self._ctx = TPZ_CTX()
            topazCryptoInit(tpz_ctx, key, len(key))
            return tpz_ctx

        def decrypt(self, data, ctx=None):
            if ctx is None:
                ctx = self._ctx
            out = create_string_buffer(len(data))
            topazCryptoDecrypt(ctx, data, out, len(data))
            return out.raw

    print "Using Library AlfCrypto DLL/DYLIB/SO"
    return (AES_CBC, Pukall_Cipher, Topaz_Cipher)
def _load_python_alfcrypto():

    import aescbc

    class Pukall_Cipher(object):
        def __init__(self):
            self.key = None

        def PC1(self, key, src, decryption=True):
            sum1 = 0
            sum2 = 0
            keyXorVal = 0
            if len(key) != 16:
                print "Bad key length!"
                return None
            wkey = []
            for i in xrange(8):
                wkey.append(ord(key[i*2])<<8 | ord(key[i*2+1]))
            dst = ""
            for i in xrange(len(src)):
                temp1 = 0
                byteXorVal = 0
                for j in xrange(8):
                    temp1 ^= wkey[j]
                    sum2 = (sum2+j)*20021 + sum1
                    sum1 = (temp1*346)&0xFFFF
                    sum2 = (sum2+sum1)&0xFFFF
                    temp1 = (temp1*20021+1)&0xFFFF
                    byteXorVal ^= temp1 ^ sum2
                curByte = ord(src[i])
                if not decryption:
                    keyXorVal = curByte * 257
                curByte = ((curByte ^ (byteXorVal >> 8)) ^ byteXorVal) & 0xFF
                if decryption:
                    keyXorVal = curByte * 257
                for j in xrange(8):
                    wkey[j] ^= keyXorVal
                dst += chr(curByte)
            return dst
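PC1 XORs each byte with a keystream value and then updates the key schedule from the plaintext byte in both directions (when encrypting, before the XOR; when decrypting, after), so encryption and decryption round-trip with the same routine. A Python 3 port of the pure-Python PC1 above (the `pc1` name is ours), handy for sanity-checking that property:

```python
def pc1(key, src, decryption=True):
    # Python 3 port of the PC1 routine: bytes in, bytes out
    if len(key) != 16:
        raise ValueError('PC1 needs a 16-byte key')
    wkey = [(key[i * 2] << 8) | key[i * 2 + 1] for i in range(8)]
    sum1 = sum2 = key_xor = 0
    out = bytearray()
    for cur in src:
        temp1 = 0
        byte_xor = 0
        for j in range(8):
            temp1 ^= wkey[j]
            sum2 = (sum2 + j) * 20021 + sum1
            sum1 = (temp1 * 346) & 0xFFFF
            sum2 = (sum2 + sum1) & 0xFFFF
            temp1 = (temp1 * 20021 + 1) & 0xFFFF
            byte_xor ^= temp1 ^ sum2
        if not decryption:
            key_xor = cur * 257      # encrypting: schedule update uses the plaintext byte
        cur = (cur ^ (byte_xor >> 8) ^ byte_xor) & 0xFF
        if decryption:
            key_xor = cur * 257      # decrypting: the transformed byte is the plaintext
        for j in range(8):
            wkey[j] ^= key_xor
        out.append(cur)
    return bytes(out)

key = bytes(range(16))
ct = pc1(key, b'A secret message', decryption=False)
assert pc1(key, ct, decryption=True) == b'A secret message'
```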
    class Topaz_Cipher(object):
        def __init__(self):
            self._ctx = None

        def ctx_init(self, key):
            ctx1 = 0x0CAFFE19E
            for keyChar in key:
                keyByte = ord(keyChar)
                ctx2 = ctx1
                ctx1 = ((((ctx1 >>2) * (ctx1 >>7))&0xFFFFFFFF) ^ (keyByte * keyByte * 0x0F902007)& 0xFFFFFFFF )
            self._ctx = [ctx1, ctx2]
            return [ctx1, ctx2]

        def decrypt(self, data, ctx=None):
            if ctx is None:
                ctx = self._ctx
            ctx1 = ctx[0]
            ctx2 = ctx[1]
            plainText = ""
            for dataChar in data:
                dataByte = ord(dataChar)
                m = (dataByte ^ ((ctx1 >> 3) &0xFF) ^ ((ctx2<<3) & 0xFF)) &0xFF
                ctx2 = ctx1
                ctx1 = (((ctx1 >> 2) * (ctx1 >> 7)) &0xFFFFFFFF) ^((m * m * 0x0F902007) &0xFFFFFFFF)
                plainText += chr(m)
            return plainText
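The Topaz keystream state is driven by the running plaintext: the decryptor recovers byte m and then feeds m back into the state update. The source only ships a decryptor; the matching encryptor below is our inference (feed the input byte, which is the plaintext, into the same state update), written as a Python 3 sketch:

```python
MASK32 = 0xFFFFFFFF

def topaz_init(key):
    ctx1, ctx2 = 0x0CAFFE19E, 0
    for kb in key:
        ctx2 = ctx1
        ctx1 = (((ctx1 >> 2) * (ctx1 >> 7)) & MASK32) ^ ((kb * kb * 0x0F902007) & MASK32)
    return [ctx1, ctx2]

def _step(ctx1, m):
    # shared state update, driven by the plaintext byte m
    return (((ctx1 >> 2) * (ctx1 >> 7)) & MASK32) ^ ((m * m * 0x0F902007) & MASK32)

def topaz_decrypt(data, ctx):
    ctx1, ctx2 = ctx
    out = bytearray()
    for db in data:
        m = (db ^ ((ctx1 >> 3) & 0xFF) ^ ((ctx2 << 3) & 0xFF)) & 0xFF
        ctx1, ctx2 = _step(ctx1, m), ctx1
        out.append(m)
    return bytes(out)

def topaz_encrypt(data, ctx):
    # inferred companion, not in the source: same keystream, state fed by input
    ctx1, ctx2 = ctx
    out = bytearray()
    for m in data:
        c = (m ^ ((ctx1 >> 3) & 0xFF) ^ ((ctx2 << 3) & 0xFF)) & 0xFF
        ctx1, ctx2 = _step(ctx1, m), ctx1
        out.append(c)
    return bytes(out)

msg = b'payload record'
ct = topaz_encrypt(msg, topaz_init(b'bookkey0'))
assert topaz_decrypt(ct, topaz_init(b'bookkey0')) == msg
```

Because both directions update the state from the plaintext byte, the two state sequences stay in lockstep and the round-trip holds.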
    class AES_CBC(object):
        def __init__(self):
            self._key = None
            self._iv = None
            self.aes = None

        def set_decrypt_key(self, userkey, iv):
            self._key = userkey
            self._iv = iv
            self.aes = aescbc.AES_CBC(userkey, aescbc.noPadding(), len(userkey))

        def decrypt(self, data):
            iv = self._iv
            cleartext = self.aes.decrypt(iv + data)
            return cleartext

    return (AES_CBC, Pukall_Cipher, Topaz_Cipher)
def _load_crypto():
    AES_CBC = Pukall_Cipher = Topaz_Cipher = None
    cryptolist = (_load_libalfcrypto, _load_python_alfcrypto)
    for loader in cryptolist:
        try:
            AES_CBC, Pukall_Cipher, Topaz_Cipher = loader()
            break
        except (ImportError, Exception):
            pass
    return AES_CBC, Pukall_Cipher, Topaz_Cipher

AES_CBC, Pukall_Cipher, Topaz_Cipher = _load_crypto()
class KeyIVGen(object):
    # this only exists in openssl so we will use pure python implementation instead
    # PKCS5_PBKDF2_HMAC_SHA1 = F(c_int, 'PKCS5_PBKDF2_HMAC_SHA1',
    #     [c_char_p, c_ulong, c_char_p, c_ulong, c_ulong, c_ulong, c_char_p])
    def pbkdf2(self, passwd, salt, iter, keylen):

        def xorstr( a, b ):
            if len(a) != len(b):
                raise Exception("xorstr(): lengths differ")
            return ''.join((chr(ord(x)^ord(y)) for x, y in zip(a, b)))

        def prf( h, data ):
            hm = h.copy()
            hm.update( data )
            return hm.digest()

        def pbkdf2_F( h, salt, itercount, blocknum ):
            U = prf( h, salt + pack('>i',blocknum ) )
            T = U
            for i in range(2, itercount+1):
                U = prf( h, U )
                T = xorstr( T, U )
            return T

        sha = hashlib.sha1
        digest_size = sha().digest_size
        # l - number of output blocks to produce
        l = keylen / digest_size
        if keylen % digest_size != 0:
            l += 1
        h = hmac.new( passwd, None, sha )
        T = ""
        for i in range(1, l+1):
            T += pbkdf2_F( h, salt, iter, i )
        return T[0: keylen]
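The pure-Python pbkdf2 above is standard PBKDF2-HMAC-SHA1 (RFC 2898), so it can be checked directly against hashlib.pbkdf2_hmac. A Python 3 rewrite for that comparison, doing the xor-of-digests on integers instead of the xorstr helper:

```python
import hashlib
import hmac
from struct import pack

def pbkdf2_sha1(passwd, salt, iters, keylen):
    h = hmac.new(passwd, None, hashlib.sha1)

    def prf(data):
        hm = h.copy()
        hm.update(data)
        return hm.digest()

    def block(i):
        u = prf(salt + pack('>i', i))      # big-endian block index
        t = int.from_bytes(u, 'big')
        for _ in range(iters - 1):
            u = prf(u)
            t ^= int.from_bytes(u, 'big')  # running XOR of every iteration
        return t.to_bytes(20, 'big')       # SHA-1 digest size

    nblocks = -(-keylen // 20)             # ceiling division
    return b''.join(block(i) for i in range(1, nblocks + 1))[:keylen]

expected = hashlib.pbkdf2_hmac('sha1', b'passwd', b'salt', 1000, 32)
print(pbkdf2_sha1(b'passwd', b'salt', 1000, 32) == expected)  # True
```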
BIN  Calibre_Plugins/K4MobiDeDRM_plugin/alfcrypto64.dll (new file, binary file not shown)
BIN  Calibre_Plugins/K4MobiDeDRM_plugin/alfcrypto_src.zip (new file, binary file not shown)
@@ -1,9 +1,8 @@
 #! /usr/bin/python
-# For use in Topaz Scripts version 2.6
 
 """
 
-Comprehensive Mazama Book DRM with Topaz Cryptography V2.0
+Comprehensive Mazama Book DRM with Topaz Cryptography V2.2
 
 -----BEGIN PUBLIC KEY-----
 MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDdBHJ4CNc6DNFCw4MRCw4SWAK6
@@ -13,22 +12,11 @@ y2/pHuYme7U1TsgSjwIDAQAB
 -----END PUBLIC KEY-----
 
 """
 
 from __future__ import with_statement
 
-class Unbuffered:
-    def __init__(self, stream):
-        self.stream = stream
-    def write(self, data):
-        self.stream.write(data)
-        self.stream.flush()
-    def __getattr__(self, attr):
-        return getattr(self.stream, attr)
-
-import sys
-sys.stdout=Unbuffered(sys.stdout)
-
 import csv
+import sys
 import os
 import getopt
 import zlib
@@ -73,10 +61,10 @@ charMap4 = "ABCDEFGHIJKLMNPQRSTUVWXYZ123456789"
 
 class CMBDTCError(Exception):
     pass
 
 class CMBDTCFatal(Exception):
     pass
 
 #
 # Stolen stuff
 #
@@ -185,18 +173,18 @@ def encode(data, map):
         result += map[Q]
         result += map[R]
     return result
 
 #
 # Hash the bytes in data and then encode the digest with the characters in map
 #
 
 def encodeHash(data,map):
     return encode(MD5(data),map)
 
 #
 # Decode the string in data with the characters in map. Returns the decoded bytes
 #
 
 def decode(data,map):
     result = ""
     for i in range (0,len(data),2):
@@ -205,14 +193,14 @@ def decode(data,map):
         value = (((high * 0x40) ^ 0x80) & 0xFF) + low
         result += pack("B",value)
     return result
 
 #
 # Locate and open the Kindle.info file (Hopefully in the way it is done in the Kindle application)
 #
 
 def openKindleInfo():
     regkey = winreg.OpenKey(winreg.HKEY_CURRENT_USER, "Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\Shell Folders\\")
     path = winreg.QueryValueEx(regkey, 'Local AppData')[0]
     return open(path+'\\Amazon\\Kindle For PC\\{AMAwzsaPaaZAzmZzZQzgZCAkZ3AjA_AY}\\kindle.info','r')
 
 #
@@ -225,7 +213,7 @@ def parseKindleInfo():
     infoReader.read(1)
     data = infoReader.read()
     items = data.split('{')
 
     for item in items:
         splito = item.split(':')
         DB[splito[0]] =splito[1]
@@ -234,20 +222,20 @@ def parseKindleInfo():
 #
 # Find if the original string for a hashed/encoded string is known. If so return the original string othwise return an empty string. (Totally not optimal)
 #
 
 def findNameForHash(hash):
     names = ["kindle.account.tokens","kindle.cookie.item","eulaVersionAccepted","login_date","kindle.token.item","login","kindle.key.item","kindle.name.info","kindle.device.info", "MazamaRandomNumber"]
     result = ""
     for name in names:
         if hash == encodeHash(name, charMap2):
             result = name
             break
     return name
 
 #
 # Print all the records from the kindle.info file (option -i)
 #
 
 def printKindleInfo():
     for record in kindleDatabase:
         name = findNameForHash(record)
@@ -266,14 +254,14 @@ def getKindleInfoValueForHash(hashedKey):
     global kindleDatabase
     encryptedValue = decode(kindleDatabase[hashedKey],charMap2)
     return CryptUnprotectData(encryptedValue,"")
 
 #
 # Get a record from the Kindle.info file for the string in "key" (plaintext). Return the decoded and decrypted record
 #
 
 def getKindleInfoValueForKey(key):
     return getKindleInfoValueForHash(encodeHash(key,charMap2))
 
 #
 # Get a 7 bit encoded number from the book file
 #
@@ -281,86 +269,86 @@ def getKindleInfoValueForKey(key):
 def bookReadEncodedNumber():
     flag = False
     data = ord(bookFile.read(1))
 
     if data == 0xFF:
         flag = True
         data = ord(bookFile.read(1))
 
     if data >= 0x80:
         datax = (data & 0x7F)
         while data >= 0x80 :
             data = ord(bookFile.read(1))
             datax = (datax <<7) + (data & 0x7F)
         data = datax
 
     if flag:
         data = -data
     return data
 
 #
 # Encode a number in 7 bit format
 #
 
 def encodeNumber(number):
     result = ""
     negative = False
     flag = 0
 
     if number < 0 :
         number = -number + 1
         negative = True
 
     while True:
         byte = number & 0x7F
         number = number >> 7
         byte += flag
         result += chr(byte)
         flag = 0x80
         if number == 0 :
             if (byte == 0xFF and negative == False) :
                 result += chr(0x80)
             break
 
     if negative:
         result += chr(0xFF)
 
     return result[::-1]
 
 #
 # Get a length prefixed string from the file
 #
 
 def bookReadString():
     stringLength = bookReadEncodedNumber()
     return unpack(str(stringLength)+"s",bookFile.read(stringLength))[0]
 
 #
 # Returns a length prefixed string
 #
 
 def lengthPrefixString(data):
     return encodeNumber(len(data))+data
 
 
 #
-# Read and return the data of one header record at the current book file position [[offset,decompressedLength,compressedLength],...]
+# Read and return the data of one header record at the current book file position [[offset,compressedLength,decompressedLength],...]
 #
 
 def bookReadHeaderRecordData():
     nbValues = bookReadEncodedNumber()
     values = []
     for i in range (0,nbValues):
         values.append([bookReadEncodedNumber(),bookReadEncodedNumber(),bookReadEncodedNumber()])
     return values
 
 #
-# Read and parse one header record at the current book file position and return the associated data [[offset,decompressedLength,compressedLength],...]
+# Read and parse one header record at the current book file position and return the associated data [[offset,compressedLength,decompressedLength],...]
 #
 
 def parseTopazHeaderRecord():
     if ord(bookFile.read(1)) != 0x63:
         raise CMBDTCFatal("Parse Error : Invalid Header")
 
     tag = bookReadString()
     record = bookReadHeaderRecordData()
     return [tag,record]
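bookReadEncodedNumber and encodeNumber implement a big-endian base-128 varint: continuation bytes carry the 0x80 bit, a leading 0xFF byte flags a negative value, and the extra 0x80 prefix keeps a 0xFF first byte from being read as that flag. A Python 3 sketch of both directions, working over bytes instead of the book file. Note that, as written above, the encoder maps -n to magnitude n+1 while the reader returns the bare negated magnitude, so only non-negative values round-trip exactly:

```python
def encode_number(number):
    result = bytearray()
    negative = number < 0
    if negative:
        number = -number + 1  # mirrors the original encoder
    flag = 0
    while True:
        byte = (number & 0x7F) + flag
        number >>= 7
        result.append(byte)
        flag = 0x80
        if number == 0:
            if byte == 0xFF and not negative:
                result.append(0x80)  # keep a leading 0xFF from looking like the sign flag
            break
    if negative:
        result.append(0xFF)
    return bytes(reversed(result))

def decode_number(data):
    it = iter(data)
    b = next(it)
    negative = (b == 0xFF)
    if negative:
        b = next(it)
    value = b & 0x7F
    while b >= 0x80:
        b = next(it)
        value = (value << 7) | (b & 0x7F)
    return -value if negative else value

for n in (0, 1, 127, 128, 255, 0x3FFF, 20021):
    assert decode_number(encode_number(n)) == n
```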
@@ -368,70 +356,63 @@ def parseTopazHeaderRecord():
 #
 # Parse the header of a Topaz file, get all the header records and the offset for the payload
 #
 
 def parseTopazHeader():
     global bookHeaderRecords
     global bookPayloadOffset
     magic = unpack("4s",bookFile.read(4))[0]
 
     if magic != 'TPZ0':
         raise CMBDTCFatal("Parse Error : Invalid Header, not a Topaz file")
 
     nbRecords = bookReadEncodedNumber()
     bookHeaderRecords = {}
 
     for i in range (0,nbRecords):
         result = parseTopazHeaderRecord()
-        print result[0], result[1]
         bookHeaderRecords[result[0]] = result[1]
 
     if ord(bookFile.read(1)) != 0x64 :
         raise CMBDTCFatal("Parse Error : Invalid Header")
 
     bookPayloadOffset = bookFile.tell()
 
 #
 # Get a record in the book payload, given its name and index. If necessary the record is decrypted. The record is not decompressed
-# Correction, the record is correctly decompressed too
 #
 
 def getBookPayloadRecord(name, index):
     encrypted = False
-    compressed = False
 
     try:
         recordOffset = bookHeaderRecords[name][index][0]
     except:
         raise CMBDTCFatal("Parse Error : Invalid Record, record not found")
 
     bookFile.seek(bookPayloadOffset + recordOffset)
 
     tag = bookReadString()
     if tag != name :
         raise CMBDTCFatal("Parse Error : Invalid Record, record name doesn't match")
 
     recordIndex = bookReadEncodedNumber()
 
     if recordIndex < 0 :
         encrypted = True
         recordIndex = -recordIndex -1
 
     if recordIndex != index :
         raise CMBDTCFatal("Parse Error : Invalid Record, index doesn't match")
 
-    if (bookHeaderRecords[name][index][2] > 0):
-        compressed = True
+    if bookHeaderRecords[name][index][2] != 0 :
         record = bookFile.read(bookHeaderRecords[name][index][2])
     else:
         record = bookFile.read(bookHeaderRecords[name][index][1])
 
     if encrypted:
         ctx = topazCryptoInit(bookKey)
         record = topazCryptoDecrypt(record,ctx)
 
-    if compressed:
-        record = zlib.decompress(record)
-
     return record
 
 #
@@ -446,13 +427,13 @@ def extractBookPayloadRecord(name, index, filename):
         record = getBookPayloadRecord(name,index)
     except:
         print("Could not find record")
 
-#    if compressed:
-#        try:
-#            record = zlib.decompress(record)
-#        except:
-#            raise CMBDTCFatal("Could not decompress record")
+    if compressed:
+        try:
+            record = zlib.decompress(record)
+        except:
+            raise CMBDTCFatal("Could not decompress record")
 
     if filename != "":
         try:
             file = open(filename,"wb")
@@ -462,14 +443,14 @@ def extractBookPayloadRecord(name, index, filename):
             raise CMBDTCFatal("Could not write to destination file")
     else:
         print(record)
 
 #
 # return next record [key,value] from the book metadata from the current book position
 #
 
 def readMetadataRecord():
     return [bookReadString(),bookReadString()]
 
 #
 # Parse the metadata record from the book payload and return a list of [key,values]
 #
@@ -483,10 +464,10 @@ def parseMetadata():
     tag = bookReadString()
     if tag != "metadata" :
         raise CMBDTCFatal("Parse Error : Record Names Don't Match")
 
     flags = ord(bookFile.read(1))
     nbRecords = ord(bookFile.read(1))
 
     for i in range (0,nbRecords) :
         record =readMetadataRecord()
         bookMetadata[record[0]] = record[1]
@@ -494,22 +475,22 @@ def parseMetadata():
 #
 # Returns two bit at offset from a bit field
 #
 
 def getTwoBitsFromBitField(bitField,offset):
     byteNumber = offset // 4
     bitPosition = 6 - 2*(offset % 4)
 
     return ord(bitField[byteNumber]) >> bitPosition & 3
 
 #
 # Returns the six bits at offset from a bit field
 #
 
 def getSixBitsFromBitField(bitField,offset):
     offset *= 3
     value = (getTwoBitsFromBitField(bitField,offset) <<4) + (getTwoBitsFromBitField(bitField,offset+1) << 2) +getTwoBitsFromBitField(bitField,offset+2)
     return value
 
 #
 # 8 bits to six bits encoding from hash to generate PID string
 #
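getSixBitsFromBitField just concatenates three consecutive 2-bit groups, with group offset*3 supplying the high bits, so each 6-bit value indexes the 34-character charMap4-style alphabet. A Python 3 sketch with a worked value:

```python
def get_two_bits(bitfield, offset):
    # 2-bit group number `offset`, packed 4 groups per byte, MSB first
    return (bitfield[offset // 4] >> (6 - 2 * (offset % 4))) & 3

def get_six_bits(bitfield, offset):
    offset *= 3
    return (get_two_bits(bitfield, offset) << 4) \
         + (get_two_bits(bitfield, offset + 1) << 2) \
         + get_two_bits(bitfield, offset + 2)

# 0xDA = 0b11011010, so the first six bits are 0b110110 = 54
print(get_six_bits(b'\xda\x00', 0))  # 54
```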
@@ -520,37 +501,37 @@ def encodePID(hash):
     for position in range (0,8):
         PID += charMap3[getSixBitsFromBitField(hash,position)]
     return PID
 
 #
 # Context initialisation for the Topaz Crypto
 #
 
 def topazCryptoInit(key):
     ctx1 = 0x0CAFFE19E
 
     for keyChar in key:
         keyByte = ord(keyChar)
         ctx2 = ctx1
         ctx1 = ((((ctx1 >>2) * (ctx1 >>7))&0xFFFFFFFF) ^ (keyByte * keyByte * 0x0F902007)& 0xFFFFFFFF )
     return [ctx1,ctx2]
 
 #
 # decrypt data with the context prepared by topazCryptoInit()
 #
 
 def topazCryptoDecrypt(data, ctx):
     ctx1 = ctx[0]
     ctx2 = ctx[1]
 
     plainText = ""
 
     for dataChar in data:
         dataByte = ord(dataChar)
         m = (dataByte ^ ((ctx1 >> 3) &0xFF) ^ ((ctx2<<3) & 0xFF)) &0xFF
         ctx2 = ctx1
         ctx1 = (((ctx1 >> 2) * (ctx1 >> 7)) &0xFFFFFFFF) ^((m * m * 0x0F902007) &0xFFFFFFFF)
         plainText += chr(m)
 
     return plainText
 
 #
@@ -568,20 +549,20 @@ def decryptRecord(data,PID):
 def decryptDkeyRecord(data,PID):
     record = decryptRecord(data,PID)
     fields = unpack("3sB8sB8s3s",record)
 
     if fields[0] != "PID" or fields[5] != "pid" :
         raise CMBDTCError("Didn't find PID magic numbers in record")
     elif fields[1] != 8 or fields[3] != 8 :
         raise CMBDTCError("Record didn't contain correct length fields")
     elif fields[2] != PID :
         raise CMBDTCError("Record didn't contain PID")
 
     return fields[4]
 
 #
 # Decrypt all the book's dkey records (contain the book PID)
 #
 
 def decryptDkeyRecords(data,PID):
     nbKeyRecords = ord(data[0])
     records = []
@@ -594,13 +575,13 @@ def decryptDkeyRecords(data,PID):
         except CMBDTCError:
             pass
         data = data[1+length:]
 
     return records
 
 #
 # Encryption table used to generate the device PID
 #
 
 def generatePidEncryptionTable() :
     table = []
     for counter1 in range (0,0x100):
@@ -613,18 +594,18 @@ def generatePidEncryptionTable() :
                 value = value ^ 0xEDB88320
             table.append(value)
     return table
 
 #
 # Seed value used to generate the device PID
 #
 
 def generatePidSeed(table,dsn) :
     value = 0
     for counter in range (0,4) :
         index = (ord(dsn[counter]) ^ value) &0xFF
         value = (value >> 8) ^ table[index]
     return value
 
 #
 # Generate the device PID
 #
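0xEDB88320 is the reflected CRC-32 polynomial, so generatePidEncryptionTable builds the standard CRC-32 lookup table and generatePidSeed performs a table-driven CRC update with a zero initial value and no final inversion. The inner bit loop falls between the hunks shown; the sketch below assumes it is the usual shift-and-conditional-XOR step, and checks the seed against zlib by cancelling zlib's pre/post inversion:

```python
import zlib

def generate_pid_table():
    # assumed reconstruction of generatePidEncryptionTable's elided inner loop
    table = []
    for n in range(0x100):
        value = n
        for _ in range(8):
            if value & 1:
                value = (value >> 1) ^ 0xEDB88320
            else:
                value >>= 1
        table.append(value)
    return table

def generate_pid_seed(table, dsn):
    # table-driven CRC-32 update, initial value 0, no final XOR
    value = 0
    for counter in range(4):
        index = (dsn[counter] ^ value) & 0xFF
        value = (value >> 8) ^ table[index]
    return value

table = generate_pid_table()
print(hex(table[1]))  # 0x77073096, the standard CRC-32 table entry
seed = generate_pid_seed(table, b'1234')
# zlib.crc32 pre- and post-inverts its running value; cancel that to compare
assert seed == (zlib.crc32(b'1234', 0xFFFFFFFF) ^ 0xFFFFFFFF)
```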
@@ -634,68 +615,92 @@ def generateDevicePID(table,dsn,nbRoll):
     pidAscii = ""
     pid = [(seed >>24) &0xFF,(seed >> 16) &0xff,(seed >> 8) &0xFF,(seed) & 0xFF,(seed>>24) & 0xFF,(seed >> 16) &0xff,(seed >> 8) &0xFF,(seed) & 0xFF]
     index = 0

     for counter in range (0,nbRoll):
         pid[index] = pid[index] ^ ord(dsn[counter])
         index = (index+1) %8

     for counter in range (0,8):
         index = ((((pid[counter] >>5) & 3) ^ pid[counter]) & 0x1f) + (pid[counter] >> 7)
         pidAscii += charMap4[index]
     return pidAscii

 #
 # Create decrypted book payload
 #

 def createDecryptedPayload(payload):
-    for headerRecord in bookHeaderRecords:
-        name = headerRecord
-        if name != "dkey" :
-            ext = '.dat'
-            if name == 'img' : ext = '.jpg'
-            if name == 'color' : ext = '.jpg'
-            for index in range (0,len(bookHeaderRecords[name])) :
-                fnum = "%04d" % index
-                fname = name + fnum + ext
-                destdir = payload
-                if name == 'img':
-                    destdir = os.path.join(payload,'img')
-                if name == 'color':
-                    destdir = os.path.join(payload,'color_img')
-                if name == 'page':
-                    destdir = os.path.join(payload,'page')
-                if name == 'glyphs':
-                    destdir = os.path.join(payload,'glyphs')
-                outputFile = os.path.join(destdir,fname)
-                file(outputFile, 'wb').write(getBookPayloadRecord(name, index))
+    # store data to be able to create the header later
+    headerData= []
+    currentOffset = 0
+
+    # Add social DRM to decrypted files
+    try:
+        data = getKindleInfoValueForKey("kindle.name.info")+":"+ getKindleInfoValueForKey("login")
+        if payload!= None:
+            payload.write(lengthPrefixString("sdrm"))
+            payload.write(encodeNumber(0))
+            payload.write(data)
+        else:
+            currentOffset += len(lengthPrefixString("sdrm"))
+            currentOffset += len(encodeNumber(0))
+            currentOffset += len(data)
+    except:
+        pass
+
+    for headerRecord in bookHeaderRecords:
+        name = headerRecord
+        newRecord = []
+        if name != "dkey" :
+            for index in range (0,len(bookHeaderRecords[name])) :
+                offset = currentOffset
+                if payload != None:
+                    # write tag
+                    payload.write(lengthPrefixString(name))
+                    # write data
+                    payload.write(encodeNumber(index))
+                    payload.write(getBookPayloadRecord(name, index))
+                else :
+                    currentOffset += len(lengthPrefixString(name))
+                    currentOffset += len(encodeNumber(index))
+                    currentOffset += len(getBookPayloadRecord(name, index))
+                newRecord.append([offset,bookHeaderRecords[name][index][1],bookHeaderRecords[name][index][2]])
+            headerData.append([name,newRecord])
+    return headerData

 #
 # Create decrypted book
 #

-def createDecryptedBook(outdir):
-    if not os.path.exists(outdir):
-        os.makedirs(outdir)
-    destdir = os.path.join(outdir,'img')
-    if not os.path.exists(destdir):
-        os.makedirs(destdir)
-    destdir = os.path.join(outdir,'color_img')
-    if not os.path.exists(destdir):
-        os.makedirs(destdir)
-    destdir = os.path.join(outdir,'page')
-    if not os.path.exists(destdir):
-        os.makedirs(destdir)
-    destdir = os.path.join(outdir,'glyphs')
-    if not os.path.exists(destdir):
-        os.makedirs(destdir)
-    createDecryptedPayload(outdir)
+def createDecryptedBook(outputFile):
+    outputFile = open(outputFile,"wb")
+    # Write the payload in a temporary file
+    headerData = createDecryptedPayload(None)
+    outputFile.write("TPZ0")
+    outputFile.write(encodeNumber(len(headerData)))
+    for header in headerData :
+        outputFile.write(chr(0x63))
+        outputFile.write(lengthPrefixString(header[0]))
+        outputFile.write(encodeNumber(len(header[1])))
+        for numbers in header[1] :
+            outputFile.write(encodeNumber(numbers[0]))
+            outputFile.write(encodeNumber(numbers[1]))
+            outputFile.write(encodeNumber(numbers[2]))
+    outputFile.write(chr(0x64))
+    createDecryptedPayload(outputFile)
+    outputFile.close()

 #
 # Set the command to execute by the programm according to cmdLine parameters
@@ -704,26 +709,27 @@ def createDecryptedBook(outdir):
 def setCommand(name) :
     global command
     if command != "" :
         raise CMBDTCFatal("Invalid command line parameters")
     else :
         command = name

 #
 # Program usage
 #

 def usage():
     print("\nUsage:")
-    print("\ncmbtc_dump.py [options] bookFileName\n")
+    print("\nCMBDTC.py [options] bookFileName\n")
     print("-p Adds a PID to the list of PIDs that are tried to decrypt the book key (can be used several times)")
-    print("-d Dumps the unencrypted book as files to outdir")
-    print("-o Output directory to save book files to")
+    print("-d Saves a decrypted copy of the book")
+    print("-r Prints or writes to disk a record indicated in the form name:index (e.g \"img:0\")")
+    print("-o Output file name to write records and decrypted books")
     print("-v Verbose (can be used several times)")
     print("-i Prints kindle.info database")

 #
 # Main
 #

 def main(argv=sys.argv):
     global kindleDatabase
@@ -731,30 +737,30 @@ def main(argv=sys.argv):
     global bookKey
     global bookFile
     global command

     progname = os.path.basename(argv[0])

     verbose = 0
     recordName = ""
     recordIndex = 0
-    outdir = ""
+    outputFile = ""
     PIDs = []
     kindleDatabase = None
     command = ""

     try:
-        opts, args = getopt.getopt(sys.argv[1:], "vi:o:p:d")
+        opts, args = getopt.getopt(sys.argv[1:], "vdir:o:p:")
     except getopt.GetoptError, err:
         # print help information and exit:
         print str(err) # will print something like "option -a not recognized"
         usage()
         sys.exit(2)

     if len(opts) == 0 and len(args) == 0 :
         usage()
         sys.exit(2)

     for o, a in opts:
         if o == "-v":
             verbose+=1
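The optstring change from "vi:o:p:d" to "vdir:o:p:" is easy to misread: in getopt, a colon after a letter means that option takes an argument, so -i loses its argument (it is now a bare flag, matching its "Prints kindle.info database" role) and the new -r gains one (the name:index record selector). A quick sketch of how that optstring parses (the sample argument list is made up):

```python
import getopt

# "vdir:o:p:": -v, -d, -i are flags; -r, -o, -p each expect an argument.
opts, args = getopt.getopt(
    ["-v", "-r", "img:0", "-o", "out.dat", "book.azw1"], "vdir:o:p:")
```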
@@ -763,124 +769,130 @@ def main(argv=sys.argv):
         if o =="-o":
             if a == None :
                 raise CMBDTCFatal("Invalid parameter for -o")
-            outdir = a
+            outputFile = a
+        if o =="-r":
+            setCommand("printRecord")
+            try:
+                recordName,recordIndex = a.split(':')
+            except:
+                raise CMBDTCFatal("Invalid parameter for -r")
         if o =="-p":
             PIDs.append(a)
         if o =="-d":
             setCommand("doit")

     if command == "" :
         raise CMBDTCFatal("No action supplied on command line")

     #
     # Read the encrypted database
     #

     try:
         kindleDatabase = parseKindleInfo()
-    except Exception as message:
+    except Exception, message:
         if verbose>0:
             print(message)

     if kindleDatabase != None :
         if command == "printInfo" :
             printKindleInfo()

         #
         # Compute the DSN
         #

         # Get the Mazama Random number
         MazamaRandomNumber = getKindleInfoValueForKey("MazamaRandomNumber")

         # Get the HDD serial
         encodedSystemVolumeSerialNumber = encodeHash(str(GetVolumeSerialNumber(GetSystemDirectory().split('\\')[0] + '\\')),charMap1)

         # Get the current user name
         encodedUsername = encodeHash(GetUserName(),charMap1)

         # concat, hash and encode
         DSN = encode(SHA1(MazamaRandomNumber+encodedSystemVolumeSerialNumber+encodedUsername),charMap1)

         if verbose >1:
             print("DSN: " + DSN)

         #
         # Compute the device PID
         #

         table = generatePidEncryptionTable()
         devicePID = generateDevicePID(table,DSN,4)
         PIDs.append(devicePID)

         if verbose > 0:
             print("Device PID: " + devicePID)

     #
     # Open book and parse metadata
     #

     if len(args) == 1:

         bookFile = openBook(args[0])
         parseTopazHeader()
         parseMetadata()

         #
         # Compute book PID
         #

         # Get the account token

         if kindleDatabase != None:
             kindleAccountToken = getKindleInfoValueForKey("kindle.account.tokens")

             if verbose >1:
                 print("Account Token: " + kindleAccountToken)

             keysRecord = bookMetadata["keys"]
             keysRecordRecord = bookMetadata[keysRecord]

             pidHash = SHA1(DSN+kindleAccountToken+keysRecord+keysRecordRecord)

             bookPID = encodePID(pidHash)
             PIDs.append(bookPID)

             if verbose > 0:
                 print ("Book PID: " + bookPID )

         #
         # Decrypt book key
         #

         dkey = getBookPayloadRecord('dkey', 0)

         bookKeys = []
         for PID in PIDs :
             bookKeys+=decryptDkeyRecords(dkey,PID)

         if len(bookKeys) == 0 :
             if verbose > 0 :
                 print ("Book key could not be found. Maybe this book is not registered with this device.")
-            return 1
         else :
             bookKey = bookKeys[0]
             if verbose > 0:
                 print("Book key: " + bookKey.encode('hex'))

         if command == "printRecord" :
             extractBookPayloadRecord(recordName,int(recordIndex),outputFile)
             if outputFile != "" and verbose>0 :
                 print("Wrote record to file: "+outputFile)
         elif command == "doit" :
-            if outdir != "" :
-                createDecryptedBook(outdir)
+            if outputFile!="" :
+                createDecryptedBook(outputFile)
                 if verbose >0 :
                     print ("Decrypted book saved. Don't pirate!")
             elif verbose > 0:
-                print("Output directory name was not supplied.")
-                return 1
+                print("Output file name was not supplied.")

     return 0

 if __name__ == '__main__':
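The per-book PID in the hunk above is derived by hashing the concatenation DSN + kindle.account.tokens + keysRecord + keysRecordRecord with SHA-1 and then mapping the digest through encodePID, which is not shown in this diff. A sketch of just the hashing step (Python 3; the input values are placeholders, not real Kindle data):

```python
import hashlib

def book_pid_hash(dsn, account_token, keys_record, keys_record_record):
    # SHA1(DSN + kindle.account.tokens + keysRecord + keysRecordRecord);
    # the script then feeds this digest to encodePID (not shown in the diff).
    return hashlib.sha1(
        dsn + account_token + keys_record + keys_record_record).digest()

digest = book_pid_hash(b"DSN", b"token", b"keys", b"value")
```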
 Calibre_Plugins/K4MobiDeDRM_plugin/config.py  (new file, 59 lines)
@@ -0,0 +1,59 @@
+from PyQt4.Qt import QWidget, QVBoxLayout, QLabel, QLineEdit
+
+from calibre.utils.config import JSONConfig
+
+# This is where all preferences for this plugin will be stored
+# You should always prefix your config file name with plugins/,
+# so as to ensure you dont accidentally clobber a calibre config file
+prefs = JSONConfig('plugins/K4MobiDeDRM')
+
+# Set defaults
+prefs.defaults['pids'] = ""
+prefs.defaults['serials'] = ""
+prefs.defaults['WINEPREFIX'] = None
+
+
+class ConfigWidget(QWidget):
+
+    def __init__(self):
+        QWidget.__init__(self)
+        self.l = QVBoxLayout()
+        self.setLayout(self.l)
+
+        self.serialLabel = QLabel('Kindle Serial numbers (separate with commas, no spaces)')
+        self.l.addWidget(self.serialLabel)
+
+        self.serials = QLineEdit(self)
+        self.serials.setText(prefs['serials'])
+        self.l.addWidget(self.serials)
+        self.serialLabel.setBuddy(self.serials)
+
+        self.pidLabel = QLabel('Mobipocket PIDs (separate with commas, no spaces)')
+        self.l.addWidget(self.pidLabel)
+
+        self.pids = QLineEdit(self)
+        self.pids.setText(prefs['pids'])
+        self.l.addWidget(self.pids)
+        self.pidLabel.setBuddy(self.serials)
+
+        self.wpLabel = QLabel('For Linux only: WINEPREFIX (enter absolute path)')
+        self.l.addWidget(self.wpLabel)
+
+        self.wineprefix = QLineEdit(self)
+        wineprefix = prefs['WINEPREFIX']
+        if wineprefix is not None:
+            self.wineprefix.setText(wineprefix)
+        else:
+            self.wineprefix.setText('')
+
+        self.l.addWidget(self.wineprefix)
+        self.wpLabel.setBuddy(self.wineprefix)
+
+    def save_settings(self):
+        prefs['pids'] = str(self.pids.text())
+        prefs['serials'] = str(self.serials.text())
+        winepref=str(self.wineprefix.text())
+        if winepref.strip() != '':
+            prefs['WINEPREFIX'] = winepref
+        else:
+            prefs['WINEPREFIX'] = None
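The new config.py stores PIDs and Kindle serials as flat comma-separated strings ("separate with commas, no spaces"), so whatever consumes `prefs['pids']` and `prefs['serials']` presumably splits on commas; that consumer is not part of this diff. A hedged sketch of such a split, with the empty default handled (helper name mine):

```python
def split_pref(pref_string):
    # "" -> [], "B001,B002" -> ["B001", "B002"]; drops empty items so a
    # trailing comma or the empty default does not produce ghost entries.
    return [item for item in pref_string.split(",") if item]
```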
@@ -20,8 +20,10 @@ import getopt
 from struct import pack
 from struct import unpack

+class TpzDRMError(Exception):
+    pass

 # Get a 7 bit encoded number from string. The most
 # significant byte comes first and has the high bit (8th) set

 def readEncodedNumber(file):
@@ -30,57 +32,57 @@ def readEncodedNumber(file):
     if (len(c) == 0):
         return None
     data = ord(c)

     if data == 0xFF:
         flag = True
         c = file.read(1)
         if (len(c) == 0):
             return None
         data = ord(c)

     if data >= 0x80:
         datax = (data & 0x7F)
         while data >= 0x80 :
             c = file.read(1)
             if (len(c) == 0):
                 return None
             data = ord(c)
             datax = (datax <<7) + (data & 0x7F)
         data = datax

     if flag:
         data = -data
     return data


 # returns a binary string that encodes a number into 7 bits
 # most significant byte first which has the high bit set

 def encodeNumber(number):
     result = ""
     negative = False
     flag = 0

     if number < 0 :
         number = -number + 1
         negative = True

     while True:
         byte = number & 0x7F
         number = number >> 7
         byte += flag
         result += chr(byte)
         flag = 0x80
         if number == 0 :
             if (byte == 0xFF and negative == False) :
                 result += chr(0x80)
             break

     if negative:
         result += chr(0xFF)

     return result[::-1]


 # create / read a length prefixed string from the file
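readEncodedNumber and encodeNumber above implement a 7-bit big-endian varint: continuation bytes carry the 0x80 flag, a leading 0xFF marks a negative value, and a positive number whose most significant byte would be 0xFF gets a 0x80 pad byte so it cannot be mistaken for that marker. A Python 3 re-rendering (the originals are Python 2 string code) that can be round-tripped; tested for non-negative values only, since the original's `-number + 1` on encode versus plain negation on decode makes negative round-trips asymmetric:

```python
import io

def encode_number(number):
    # 7-bit big-endian varint; only the leading bytes carry the 0x80 flag.
    out = bytearray()
    negative = number < 0
    if negative:
        number = -number + 1   # as in the original (note the +1)
    flag = 0
    while True:
        byte = (number & 0x7F) + flag
        number >>= 7
        out.append(byte)
        flag = 0x80
        if number == 0:
            if byte == 0xFF and not negative:
                out.append(0x80)  # pad: avoid colliding with the 0xFF marker
            break
    if negative:
        out.append(0xFF)          # becomes the leading byte after reversal
    return bytes(reversed(out))

def read_encoded_number(f):
    c = f.read(1)
    if not c:
        return None
    data = c[0]
    negative = False
    if data == 0xFF:              # negative marker
        negative = True
        data = f.read(1)[0]
    if data >= 0x80:
        value = data & 0x7F
        while data >= 0x80:
            data = f.read(1)[0]
            value = (value << 7) + (data & 0x7F)
        data = value
    return -data if negative else data
```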
@@ -95,9 +97,9 @@ def readString(file):
     sv = file.read(stringLength)
     if (len(sv) != stringLength):
         return ""
     return unpack(str(stringLength)+"s",sv)[0]


 # convert a binary string generated by encodeNumber (7 bit encoded number)
 # to the value you would find inside the page*.dat files to be processed

@@ -138,7 +140,8 @@ class Dictionary(object):
             return self.stable[self.pos]
         else:
             print "Error - %d outside of string table limits" % val
-            sys.exit(-1)
+            raise TpzDRMError('outside of string table limits')
+            # sys.exit(-1)

     def getSize(self):
         return self.size
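Replacing sys.exit(-1) with `raise TpzDRMError(...)` matters for the new calibre-plugin use of this module: a host application can catch an exception and report the failure, whereas sys.exit would tear down the whole process. A simplified sketch of the pattern (the real bounds check lives inside Dictionary.lookup; this standalone function is mine):

```python
class TpzDRMError(Exception):
    pass

def lookup(stable, val):
    # Mirrors Dictionary.lookup's bounds check, raising instead of exiting
    # so a caller (e.g. a GUI plugin) can recover.
    if 0 <= val < len(stable):
        return stable[val]
    raise TpzDRMError("outside of string table limits")

try:
    lookup(["a", "b"], 5)
    caught = False
except TpzDRMError:
    caught = True
```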
@@ -211,6 +214,7 @@ class PageParser(object):
         'links.title'          : (1, 'text', 0, 0),
         'links.href'           : (1, 'text', 0, 0),
         'links.type'           : (1, 'text', 0, 0),
+        'links.id'             : (1, 'number', 0, 0),

         'paraCont'             : (0, 'number', 1, 1),
         'paraCont.rootID'      : (1, 'number', 0, 0),

@@ -235,6 +239,8 @@ class PageParser(object):

         'group'                : (1, 'snippets', 1, 0),
         'group.type'           : (1, 'scalar_text', 0, 0),
+        'group._tag'           : (1, 'scalar_text', 0, 0),
+        'group.orientation'    : (1, 'scalar_text', 0, 0),

         'region'               : (1, 'snippets', 1, 0),
         'region.type'          : (1, 'scalar_text', 0, 0),

@@ -242,6 +248,7 @@ class PageParser(object):
         'region.y'             : (1, 'scalar_number', 0, 0),
         'region.h'             : (1, 'scalar_number', 0, 0),
         'region.w'             : (1, 'scalar_number', 0, 0),
+        'region.orientation'   : (1, 'scalar_text', 0, 0),

         'empty_text_region'    : (1, 'snippets', 1, 0),

@@ -257,6 +264,13 @@ class PageParser(object):
         'paragraph.class'      : (1, 'scalar_text', 0, 0),
         'paragraph.firstWord'  : (1, 'scalar_number', 0, 0),
         'paragraph.lastWord'   : (1, 'scalar_number', 0, 0),
+        'paragraph.lastWord'   : (1, 'scalar_number', 0, 0),
+        'paragraph.gridSize'         : (1, 'scalar_number', 0, 0),
+        'paragraph.gridBottomCenter' : (1, 'scalar_number', 0, 0),
+        'paragraph.gridTopCenter'    : (1, 'scalar_number', 0, 0),
+        'paragraph.gridBeginCenter'  : (1, 'scalar_number', 0, 0),
+        'paragraph.gridEndCenter'    : (1, 'scalar_number', 0, 0),

         'word_semantic'        : (1, 'snippets', 1, 1),
         'word_semantic.type'   : (1, 'scalar_text', 0, 0),

@@ -271,11 +285,21 @@ class PageParser(object):

         '_span'                : (1, 'snippets', 1, 0),
         '_span.firstWord'      : (1, 'scalar_number', 0, 0),
-        '-span.lastWord'       : (1, 'scalar_number', 0, 0),
+        '_span.lastWord'       : (1, 'scalar_number', 0, 0),
+        '_span.gridSize'         : (1, 'scalar_number', 0, 0),
+        '_span.gridBottomCenter' : (1, 'scalar_number', 0, 0),
+        '_span.gridTopCenter'    : (1, 'scalar_number', 0, 0),
+        '_span.gridBeginCenter'  : (1, 'scalar_number', 0, 0),
+        '_span.gridEndCenter'    : (1, 'scalar_number', 0, 0),

         'span'                 : (1, 'snippets', 1, 0),
         'span.firstWord'       : (1, 'scalar_number', 0, 0),
         'span.lastWord'        : (1, 'scalar_number', 0, 0),
+        'span.gridSize'          : (1, 'scalar_number', 0, 0),
+        'span.gridBottomCenter'  : (1, 'scalar_number', 0, 0),
+        'span.gridTopCenter'     : (1, 'scalar_number', 0, 0),
+        'span.gridBeginCenter'   : (1, 'scalar_number', 0, 0),
+        'span.gridEndCenter'     : (1, 'scalar_number', 0, 0),

         'extratokens'          : (1, 'snippets', 1, 0),
         'extratokens.type'     : (1, 'scalar_text', 0, 0),
@@ -361,14 +385,14 @@ class PageParser(object):
         for j in xrange(i+1, cnt) :
             result += '.' + self.tagpath[j]
         return result


     # list of absolute command byte values values that indicate
     # various types of loop meachanisms typically used to generate vectors

     cmd_list = (0x76, 0x76)

     # peek at and return 1 byte that is ahead by i bytes
     def peek(self, aheadi):
         c = self.fo.read(aheadi)
         if (len(c) == 0):

@@ -401,7 +425,7 @@ class PageParser(object):
         return result


     # process the next tag token, recursively handling subtags,
     # arguments, and commands
     def procToken(self, token):

@@ -423,7 +447,7 @@ class PageParser(object):

         if known_token :

             # handle subtags if present
             subtagres = []
             if (splcase == 1):
                 # this type of tag uses of escape marker 0x74 indicate subtag count

@@ -432,7 +456,7 @@ class PageParser(object):
                 subtags = 1
                 num_args = 0

             if (subtags == 1):
                 ntags = readEncodedNumber(self.fo)
                 if self.debug : print 'subtags: ' + token + ' has ' + str(ntags)
                 for j in xrange(ntags):

@@ -463,7 +487,7 @@ class PageParser(object):
             return result

         # all tokens that need to be processed should be in the hash
         # table if it may indicate a problem, either new token
         # or an out of sync condition
         else:
             result = []

@@ -515,7 +539,7 @@ class PageParser(object):
     # dispatches loop commands bytes with various modes
     # The 0x76 style loops are used to build vectors

     # This was all derived by trial and error and
     # new loop types may exist that are not handled here
     # since they did not appear in the test cases

@@ -534,7 +558,7 @@ class PageParser(object):
         return result


     # add full tag path to injected snippets
     def updateName(self, tag, prefix):
         name = tag[0]

@@ -562,7 +586,7 @@ class PageParser(object):
         argtype = tag[2]
         argList = tag[3]
         nsubtagList = []
         if len(argList) > 0 :
             for j in argList:
                 asnip = self.snippetList[j]
                 aso, atag = self.injectSnippets(asnip)
@@ -594,65 +618,70 @@ class PageParser(object):
         nodename = fullpathname.pop()
         ilvl = len(fullpathname)
         indent = ' ' * (3 * ilvl)
-        result = indent + '<' + nodename + '>'
+        rlst = []
+        rlst.append(indent + '<' + nodename + '>')
         if len(argList) > 0:
-            argres = ''
+            alst = []
             for j in argList:
                 if (argtype == 'text') or (argtype == 'scalar_text') :
-                    argres += j + '|'
+                    alst.append(j + '|')
                 else :
-                    argres += str(j) + ','
+                    alst.append(str(j) + ',')
+            argres = "".join(alst)
             argres = argres[0:-1]
             if argtype == 'snippets' :
-                result += 'snippets:' + argres
+                rlst.append('snippets:' + argres)
             else :
-                result += argres
+                rlst.append(argres)
         if len(subtagList) > 0 :
-            result += '\n'
+            rlst.append('\n')
             for j in subtagList:
                 if len(j) > 0 :
-                    result += self.formatTag(j)
-            result += indent + '</' + nodename + '>\n'
+                    rlst.append(self.formatTag(j))
+            rlst.append(indent + '</' + nodename + '>\n')
         else:
-            result += '</' + nodename + '>\n'
-        return result
+            rlst.append('</' + nodename + '>\n')
+        return "".join(rlst)


     # flatten tag
     def flattenTag(self, node):
         name = node[0]
         subtagList = node[1]
         argtype = node[2]
         argList = node[3]
-        result = name
+        rlst = []
+        rlst.append(name)
         if (len(argList) > 0):
-            argres = ''
+            alst = []
             for j in argList:
                 if (argtype == 'text') or (argtype == 'scalar_text') :
-                    argres += j + '|'
+                    alst.append(j + '|')
                 else :
-                    argres += str(j) + '|'
+                    alst.append(str(j) + '|')
+            argres = "".join(alst)
             argres = argres[0:-1]
             if argtype == 'snippets' :
-                result += '.snippets=' + argres
+                rlst.append('.snippets=' + argres)
             else :
-                result += '=' + argres
-        result += '\n'
+                rlst.append('=' + argres)
+        rlst.append('\n')
         for j in subtagList:
             if len(j) > 0 :
-                result += self.flattenTag(j)
-        return result
+                rlst.append(self.flattenTag(j))
+        return "".join(rlst)


     # reduce create xml output
     def formatDoc(self, flat_xml):
-        result = ''
+        rlst = []
         for j in self.doc :
             if len(j) > 0:
                 if flat_xml:
-                    result += self.flattenTag(j)
+                    rlst.append(self.flattenTag(j))
                 else:
-                    result += self.formatTag(j)
+                    rlst.append(self.formatTag(j))
+        result = "".join(rlst)
         if self.debug : print result
         return result
@@ -697,7 +726,7 @@ class PageParser(object):
             first_token = None
 
             v = self.getNext()
             if (v == None):
                 break
 
             if (v == 0x72):
@@ -708,7 +737,7 @@ class PageParser(object):
                 self.doc.append(tag)
             else:
                 if self.debug:
                     print "Main Loop: Unknown value: %x" % v
                 if (v == 0):
                     if (self.peek(1) == 0x5f):
                         skip = self.fo.read(1)
@@ -730,7 +759,20 @@ class PageParser(object):
         return xmlpage
 
 
+def fromData(dict, fname):
+    flat_xml = True
+    debug = False
+    pp = PageParser(fname, dict, debug, flat_xml)
+    xmlpage = pp.process()
+    return xmlpage
+
+def getXML(dict, fname):
+    flat_xml = False
+    debug = False
+    pp = PageParser(fname, dict, debug, flat_xml)
+    xmlpage = pp.process()
+    return xmlpage
 
 def usage():
     print 'Usage: '
     print '    convert2xml.py dict0000.dat infile.dat '
@@ -748,7 +790,7 @@ def usage():
 
 #
 # Main
 #
 
 def main(argv):
     dictFile = ""
@@ -769,11 +811,11 @@ def main(argv):
         print str(err) # will print something like "option -a not recognized"
         usage()
         sys.exit(2)
 
     if len(opts) == 0 and len(args) == 0 :
         usage()
         sys.exit(2)
 
     for o, a in opts:
         if o =="-d":
             debug=True
@@ -801,4 +843,4 @@ def main(argv):
     return xmlpage
 
 if __name__ == '__main__':
     sys.exit(main(''))
@@ -12,15 +12,14 @@ from struct import unpack
 
 
 class DocParser(object):
-    def __init__(self, flatxml, classlst, fileid, bookDir, fixedimage):
+    def __init__(self, flatxml, classlst, fileid, bookDir, gdict, fixedimage):
         self.id = os.path.basename(fileid).replace('.dat','')
         self.svgcount = 0
         self.docList = flatxml.split('\n')
         self.docSize = len(self.docList)
         self.classList = {}
         self.bookDir = bookDir
-        self.glyphPaths = { }
+        self.gdict = gdict
-        self.numPaths = 0
         tmpList = classlst.split('\n')
         for pclass in tmpList:
             if pclass != '':
@@ -41,9 +40,8 @@ class DocParser(object):
 
     def getGlyph(self, gid):
         result = ''
-        id='gl%d' % gid
+        id='id="gl%d"' % gid
-        return self.glyphPaths[id]
+        return self.gdict.lookup(id)
 
 
     def glyphs_to_image(self, glyphList):
 
@@ -52,31 +50,12 @@ class DocParser(object):
             e = path.find(' ',b)
             return int(path[b:e])
 
-        def extractID(path, key):
-            b = path.find(key) + len(key)
-            e = path.find('"',b)
-            return path[b:e]
 
 
         svgDir = os.path.join(self.bookDir,'svg')
-        glyfile = os.path.join(svgDir,'glyphs.svg')
 
         imgDir = os.path.join(self.bookDir,'img')
         imgname = self.id + '_%04d.svg' % self.svgcount
         imgfile = os.path.join(imgDir,imgname)
 
-        # build hashtable of glyph paths keyed by glyph id
-        if self.numPaths == 0:
-            gfile = open(glyfile, 'r')
-            while True:
-                path = gfile.readline()
-                if (path == ''): break
-                glyphid = extractID(path,'id="')
-                self.glyphPaths[glyphid] = path
-                self.numPaths += 1
-            gfile.close()
 
 
         # get glyph information
         gxList = self.getData('info.glyph.x',0,-1)
         gyList = self.getData('info.glyph.y',0,-1)
@@ -89,7 +68,7 @@ class DocParser(object):
         ys = []
         gdefs = []
 
-        # get path defintions, positions, dimensions for ecah glyph
+        # get path defintions, positions, dimensions for each glyph
         # that makes up the image, and find min x and min y to reposition origin
         minx = -1
         miny = -1
@@ -100,7 +79,7 @@ class DocParser(object):
             xs.append(gxList[j])
             if minx == -1: minx = gxList[j]
             else : minx = min(minx, gxList[j])
 
             ys.append(gyList[j])
             if miny == -1: miny = gyList[j]
             else : miny = min(miny, gyList[j])
@@ -145,12 +124,12 @@ class DocParser(object):
             item = self.docList[pos]
             if item.find('=') >= 0:
                 (name, argres) = item.split('=',1)
             else :
                 name = item
                 argres = ''
             return name, argres
 
 
     # find tag in doc if within pos to end inclusive
     def findinDoc(self, tagpath, pos, end) :
         result = None
@@ -163,10 +142,10 @@ class DocParser(object):
             item = self.docList[j]
             if item.find('=') >= 0:
                 (name, argres) = item.split('=',1)
             else :
                 name = item
                 argres = ''
             if name.endswith(tagpath) :
                 result = argres
                 foundat = j
                 break
@@ -203,13 +182,13 @@ class DocParser(object):
        # class names are an issue given topaz may start them with numerals (not allowed),
        # use a mix of cases (which cause some browsers problems), and actually
        # attach numbers after "_reclustered*" to the end to deal classeses that inherit
        # from a base class (but then not actually provide all of these _reclustereed
        # classes in the stylesheet!
 
        # so we clean this up by lowercasing, prepend 'cl-', and getting any baseclass
        # that exists in the stylesheet first, and then adding this specific class
        # after
 
        # also some class names have spaces in them so need to convert to dashes
         if nclass != None :
             nclass = nclass.replace(' ','-')
@@ -232,7 +211,7 @@ class DocParser(object):
         return nclass
 
 
     # develop a sorted description of the starting positions of
     # groups and regions on the page, as well as the page type
     def PageDescription(self):
 
@@ -288,10 +267,13 @@ class DocParser(object):
         result = []
 
         # paragraph
         (pos, pclass) = self.findinDoc('paragraph.class',start,end)
 
         pclass = self.getClass(pclass)
 
+        # if paragraph uses extratokens (extra glyphs) then make it fixed
+        (pos, extraglyphs) = self.findinDoc('paragraph.extratokens',start,end)
+
         # build up a description of the paragraph in result and return it
         # first check for the basic - all words paragraph
         (pos, sfirst) = self.findinDoc('paragraph.firstWord',start,end)
@@ -299,16 +281,22 @@ class DocParser(object):
         if (sfirst != None) and (slast != None) :
             first = int(sfirst)
             last = int(slast)
 
             makeImage = (regtype == 'vertical') or (regtype == 'table')
+            makeImage = makeImage or (extraglyphs != None)
             if self.fixedimage:
                 makeImage = makeImage or (regtype == 'fixed')
 
             if (pclass != None):
                 makeImage = makeImage or (pclass.find('.inverted') >= 0)
                 if self.fixedimage :
                     makeImage = makeImage or (pclass.find('cl-f-') >= 0)
 
+            # before creating an image make sure glyph info exists
+            gidList = self.getData('info.glyph.glyphID',0,-1)
+
+            makeImage = makeImage & (len(gidList) > 0)
+
             if not makeImage :
                 # standard all word paragraph
                 for wordnum in xrange(first, last):
@@ -326,6 +314,15 @@ class DocParser(object):
                     lastGlyph = firstglyphList[last]
                 else :
                     lastGlyph = len(gidList)
 
+                # handle case of white sapce paragraphs with no actual glyphs in them
+                # by reverting to text based paragraph
+                if firstGlyph >= lastGlyph:
+                    # revert to standard text based paragraph
+                    for wordnum in xrange(first, last):
+                        result.append(('ocr', wordnum))
+                    return pclass, result
+
 
                 for glyphnum in xrange(firstGlyph, lastGlyph):
                     glyphList.append(glyphnum)
                 # include any extratokens if they exist
@@ -340,10 +337,10 @@ class DocParser(object):
                     result.append(('svg', num))
                 return pclass, result
 
             # this type of paragraph may be made up of multiple spans, inline
             # word monograms (images), and words with semantic meaning,
             # plus glyphs used to form starting letter of first word
 
             # need to parse this type line by line
             line = start + 1
             word_class = ''
@@ -352,7 +349,7 @@ class DocParser(object):
             if end == -1 :
                 end = self.docSize
 
             # seems some xml has last* coming before first* so we have to
             # handle any order
             sp_first = -1
             sp_last = -1
@@ -365,6 +362,8 @@ class DocParser(object):
 
             word_class = ''
 
+            word_semantic_type = ''
+
             while (line < end) :
 
                 (name, argres) = self.lineinDoc(line)
@@ -388,10 +387,10 @@ class DocParser(object):
                     ws_last = int(argres)
 
                 elif name.endswith('word.class'):
                     (cname, space) = argres.split('-',1)
                     if space == '' : space = '0'
                     if (cname == 'spaceafter') and (int(space) > 0) :
                         word_class = 'sa'
 
                 elif name.endswith('word.img.src'):
                     result.append(('img' + word_class, int(argres)))
@@ -422,11 +421,11 @@ class DocParser(object):
                         result.append(('ocr', wordnum))
                     ws_first = -1
                     ws_last = -1
 
                 line += 1
 
             return pclass, result
 
 
     def buildParagraph(self, pclass, pdesc, type, regtype) :
         parares = ''
@@ -439,7 +438,7 @@ class DocParser(object):
         br_lb = (regtype == 'fixed') or (regtype == 'chapterheading') or (regtype == 'vertical')
 
         handle_links = len(self.link_id) > 0
 
         if (type == 'full') or (type == 'begin') :
             parares += '<p' + classres + '>'
 
@@ -468,7 +467,7 @@ class DocParser(object):
                     if linktype == 'external' :
                         linkhref = self.link_href[link-1]
                         linkhtml = '<a href="%s">' % linkhref
                     else :
                         if len(self.link_page) >= link :
                             ptarget = self.link_page[link-1] - 1
                             linkhtml = '<a href="#page%04d">' % ptarget
@@ -515,7 +514,7 @@ class DocParser(object):
 
             elif wtype == 'svg' :
                 sep = ''
                 parares += '<img src="img/' + self.id + '_%04d.svg" alt="" />' % num
                 parares += sep
 
         if len(sep) > 0 : parares = parares[0:-1]
@@ -524,13 +523,80 @@ class DocParser(object):
         return parares
 
 
+    def buildTOCEntry(self, pdesc) :
+        parares = ''
+        sep =''
+        tocentry = ''
+        handle_links = len(self.link_id) > 0
+
+        lstart = 0
+
+        cnt = len(pdesc)
+        for j in xrange( 0, cnt) :
+
+            (wtype, num) = pdesc[j]
+
+            if wtype == 'ocr' :
+                word = self.ocrtext[num]
+                sep = ' '
+
+                if handle_links:
+                    link = self.link_id[num]
+                    if (link > 0):
+                        linktype = self.link_type[link-1]
+                        title = self.link_title[link-1]
+                        title = title.rstrip('. ')
+                        alt_title = parares[lstart:]
+                        alt_title = alt_title.strip()
+                        # now strip off the actual printed page number
+                        alt_title = alt_title.rstrip('01234567890ivxldIVXLD-.')
+                        alt_title = alt_title.rstrip('. ')
+                        # skip over any external links - can't have them in a books toc
+                        if linktype == 'external' :
+                            title = ''
+                            alt_title = ''
+                            linkpage = ''
+                        else :
+                            if len(self.link_page) >= link :
+                                ptarget = self.link_page[link-1] - 1
+                                linkpage = '%04d' % ptarget
+                            else :
+                                # just link to the current page
+                                linkpage = self.id[4:]
+                        if len(alt_title) >= len(title):
+                            title = alt_title
+                        if title != '' and linkpage != '':
+                            tocentry += title + '|' + linkpage + '\n'
+                            lstart = len(parares)
+                            if word == '_link_' : word = ''
+                    elif (link < 0) :
+                        if word == '_link_' : word = ''
+
+                if word == '_lb_':
+                    word = ''
+                    sep = ''
+
+                if num in self.dehyphen_rootid :
+                    word = word[0:-1]
+                    sep = ''
+
+                parares += word + sep
+
+            else :
+                continue
+
+        return tocentry
+
+
 
 
     # walk the document tree collecting the information needed
     # to build an html page using the ocrText
 
     def process(self):
 
-        htmlpage = ''
+        tocinfo = ''
+        hlst = []
 
         # get the ocr text
         (pos, argres) = self.findinDoc('info.word.ocrText',0,-1)
@@ -541,8 +607,8 @@ class DocParser(object):
 
         # determine if first paragraph is continued from previous page
         (pos, self.parastems_stemid) = self.findinDoc('info.paraStems.stemID',0,-1)
         first_para_continued = (self.parastems_stemid != None)
 
         # determine if last paragraph is continued onto the next page
         (pos, self.paracont_stemid) = self.findinDoc('info.paraCont.stemID',0,-1)
         last_para_continued = (self.paracont_stemid != None)
@@ -570,25 +636,25 @@ class DocParser(object):
 
         # get a descriptions of the starting points of the regions
         # and groups on the page
         (pagetype, pageDesc) = self.PageDescription()
         regcnt = len(pageDesc) - 1
 
         anchorSet = False
         breakSet = False
         inGroup = False
 
         # process each region on the page and convert what you can to html
 
         for j in xrange(regcnt):
 
             (etype, start) = pageDesc[j]
             (ntype, end) = pageDesc[j+1]
 
 
             # set anchor for link target on this page
             if not anchorSet and not first_para_continued:
-                htmlpage += '<div style="visibility: hidden; height: 0; width: 0;" id="'
-                htmlpage += self.id + '" title="pagetype_' + pagetype + '"></div>\n'
+                hlst.append('<div style="visibility: hidden; height: 0; width: 0;" id="')
+                hlst.append(self.id + '" title="pagetype_' + pagetype + '"></div>\n')
                 anchorSet = True
 
             # handle groups of graphics with text captions
@@ -597,12 +663,12 @@ class DocParser(object):
                 if grptype != None:
                     if grptype == 'graphic':
                         gcstr = ' class="' + grptype + '"'
-                        htmlpage += '<div' + gcstr + '>'
+                        hlst.append('<div' + gcstr + '>')
                         inGroup = True
 
             elif (etype == 'grpend'):
                 if inGroup:
-                    htmlpage += '</div>\n'
+                    hlst.append('</div>\n')
                     inGroup = False
 
             else:
@@ -612,25 +678,25 @@ class DocParser(object):
                     (pos, simgsrc) = self.findinDoc('img.src',start,end)
                     if simgsrc:
                         if inGroup:
-                            htmlpage += '<img src="img/img%04d.jpg" alt="" />' % int(simgsrc)
+                            hlst.append('<img src="img/img%04d.jpg" alt="" />' % int(simgsrc))
                         else:
-                            htmlpage += '<div class="graphic"><img src="img/img%04d.jpg" alt="" /></div>' % int(simgsrc)
+                            hlst.append('<div class="graphic"><img src="img/img%04d.jpg" alt="" /></div>' % int(simgsrc))
 
                 elif regtype == 'chapterheading' :
                     (pclass, pdesc) = self.getParaDescription(start,end, regtype)
                     if not breakSet:
-                        htmlpage += '<div style="page-break-after: always;"> </div>\n'
+                        hlst.append('<div style="page-break-after: always;"> </div>\n')
                         breakSet = True
                     tag = 'h1'
                     if pclass and (len(pclass) >= 7):
                         if pclass[3:7] == 'ch1-' : tag = 'h1'
                         if pclass[3:7] == 'ch2-' : tag = 'h2'
                         if pclass[3:7] == 'ch3-' : tag = 'h3'
-                        htmlpage += '<' + tag + ' class="' + pclass + '">'
+                        hlst.append('<' + tag + ' class="' + pclass + '">')
                     else:
-                        htmlpage += '<' + tag + '>'
+                        hlst.append('<' + tag + '>')
-                    htmlpage += self.buildParagraph(pclass, pdesc, 'middle', regtype)
+                    hlst.append(self.buildParagraph(pclass, pdesc, 'middle', regtype))
-                    htmlpage += '</' + tag + '>'
+                    hlst.append('</' + tag + '>')
 
                 elif (regtype == 'text') or (regtype == 'fixed') or (regtype == 'insert') or (regtype == 'listitem'):
                     ptype = 'full'
@@ -644,11 +710,11 @@ class DocParser(object):
                         if pclass[3:6] == 'h1-' : tag = 'h4'
                         if pclass[3:6] == 'h2-' : tag = 'h5'
                         if pclass[3:6] == 'h3-' : tag = 'h6'
-                        htmlpage += '<' + tag + ' class="' + pclass + '">'
+                        hlst.append('<' + tag + ' class="' + pclass + '">')
-                        htmlpage += self.buildParagraph(pclass, pdesc, 'middle', regtype)
+                        hlst.append(self.buildParagraph(pclass, pdesc, 'middle', regtype))
-                        htmlpage += '</' + tag + '>'
+                        hlst.append('</' + tag + '>')
                     else :
-                        htmlpage += self.buildParagraph(pclass, pdesc, ptype, regtype)
+                        hlst.append(self.buildParagraph(pclass, pdesc, ptype, regtype))
 
                 elif (regtype == 'tocentry') :
                     ptype = 'full'
@@ -656,8 +722,8 @@ class DocParser(object):
                         ptype = 'end'
                         first_para_continued = False
                     (pclass, pdesc) = self.getParaDescription(start,end, regtype)
-                    htmlpage += self.buildParagraph(pclass, pdesc, ptype, regtype)
+                    tocinfo += self.buildTOCEntry(pdesc)
+                    hlst.append(self.buildParagraph(pclass, pdesc, ptype, regtype))
 
                 elif (regtype == 'vertical') or (regtype == 'table') :
                     ptype = 'full'
@@ -667,13 +733,13 @@ class DocParser(object):
                         ptype = 'end'
                         first_para_continued = False
                     (pclass, pdesc) = self.getParaDescription(start, end, regtype)
-                    htmlpage += self.buildParagraph(pclass, pdesc, ptype, regtype)
+                    hlst.append(self.buildParagraph(pclass, pdesc, ptype, regtype))
 
 
                 elif (regtype == 'synth_fcvr.center'):
                     (pos, simgsrc) = self.findinDoc('img.src',start,end)
                     if simgsrc:
-                        htmlpage += '<div class="graphic"><img src="img/img%04d.jpg" alt="" /></div>' % int(simgsrc)
+                        hlst.append('<div class="graphic"><img src="img/img%04d.jpg" alt="" /></div>' % int(simgsrc))
 
                 else :
                     print ' Making region type', regtype,
@@ -699,32 +765,29 @@ class DocParser(object):
                             if pclass[3:6] == 'h1-' : tag = 'h4'
                             if pclass[3:6] == 'h2-' : tag = 'h5'
                             if pclass[3:6] == 'h3-' : tag = 'h6'
-                            htmlpage += '<' + tag + ' class="' + pclass + '">'
+                            hlst.append('<' + tag + ' class="' + pclass + '">')
-                            htmlpage += self.buildParagraph(pclass, pdesc, 'middle', regtype)
+                            hlst.append(self.buildParagraph(pclass, pdesc, 'middle', regtype))
-                            htmlpage += '</' + tag + '>'
+                            hlst.append('</' + tag + '>')
                         else :
-                            htmlpage += self.buildParagraph(pclass, pdesc, ptype, regtype)
+                            hlst.append(self.buildParagraph(pclass, pdesc, ptype, regtype))
                     else :
                         print ' a "graphic" region'
                         (pos, simgsrc) = self.findinDoc('img.src',start,end)
                         if simgsrc:
-                            htmlpage += '<div class="graphic"><img src="img/img%04d.jpg" alt="" /></div>' % int(simgsrc)
+                            hlst.append('<div class="graphic"><img src="img/img%04d.jpg" alt="" /></div>' % int(simgsrc))
 
 
+        htmlpage = "".join(hlst)
         if last_para_continued :
             if htmlpage[-4:] == '</p>':
                 htmlpage = htmlpage[0:-4]
             last_para_continued = False
 
-        return htmlpage
+        return htmlpage, tocinfo
 
 
-def convert2HTML(flatxml, classlst, fileid, bookDir, fixedimage):
+def convert2HTML(flatxml, classlst, fileid, bookDir, gdict, fixedimage):
 
     # create a document parser
-    dp = DocParser(flatxml, classlst, fileid, bookDir, fixedimage)
+    dp = DocParser(flatxml, classlst, fileid, bookDir, gdict, fixedimage)
-    htmlpage = dp.process()
+    htmlpage, tocinfo = dp.process()
-    return htmlpage
+    return htmlpage, tocinfo
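The recurring change across the hunks above replaces incremental string concatenation (`result += ...`, `htmlpage += ...`) with list accumulation followed by a single `"".join(...)`. A minimal standalone sketch of that pattern (function names here are illustrative, not taken from the plugin):

```python
def format_tags_concat(tags):
    # before: build the output by repeated string concatenation,
    # which re-copies the growing string on each iteration
    result = ''
    for t in tags:
        result += t + '\n'
    return result

def format_tags_join(tags):
    # after: accumulate the pieces in a list and join once at the end,
    # which is linear in the total output size
    rlst = []
    for t in tags:
        rlst.append(t + '\n')
    return ''.join(rlst)

tags = ['page', 'region', 'paragraph']
assert format_tags_concat(tags) == format_tags_join(tags)
```

Both functions produce identical output; the join version avoids the quadratic worst case of concatenation in a loop, which matters here because these formatters run over every tag on every page of a book.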
Calibre_Plugins/K4MobiDeDRM_plugin/flatxml2svg.py (new file, 249 lines)
@@ -0,0 +1,249 @@
+#! /usr/bin/python
+# vim:ts=4:sw=4:softtabstop=4:smarttab:expandtab
+
+import sys
+import csv
+import os
+import getopt
+from struct import pack
+from struct import unpack
+
+
+class PParser(object):
+    def __init__(self, gd, flatxml, meta_array):
+        self.gd = gd
+        self.flatdoc = flatxml.split('\n')
+        self.docSize = len(self.flatdoc)
+        self.temp = []
+
+        self.ph = -1
+        self.pw = -1
+        startpos = self.posinDoc('page.h') or self.posinDoc('book.h')
+        for p in startpos:
+            (name, argres) = self.lineinDoc(p)
+            self.ph = max(self.ph, int(argres))
+        startpos = self.posinDoc('page.w') or self.posinDoc('book.w')
+        for p in startpos:
+            (name, argres) = self.lineinDoc(p)
+            self.pw = max(self.pw, int(argres))
+
+        if self.ph <= 0:
+            self.ph = int(meta_array.get('pageHeight', '11000'))
+        if self.pw <= 0:
+            self.pw = int(meta_array.get('pageWidth', '8500'))
+
+        res = []
+        startpos = self.posinDoc('info.glyph.x')
+        for p in startpos:
+            argres = self.getDataatPos('info.glyph.x', p)
+            res.extend(argres)
+        self.gx = res
+
+        res = []
+        startpos = self.posinDoc('info.glyph.y')
+        for p in startpos:
+            argres = self.getDataatPos('info.glyph.y', p)
+            res.extend(argres)
+        self.gy = res
+
+        res = []
+        startpos = self.posinDoc('info.glyph.glyphID')
+        for p in startpos:
+            argres = self.getDataatPos('info.glyph.glyphID', p)
+            res.extend(argres)
+        self.gid = res
+
+
+    # return tag at line pos in document
+    def lineinDoc(self, pos) :
+        if (pos >= 0) and (pos < self.docSize) :
+            item = self.flatdoc[pos]
+            if item.find('=') >= 0:
+                (name, argres) = item.split('=',1)
+            else :
+                name = item
+                argres = ''
+        return name, argres
+
+    # find tag in doc if within pos to end inclusive
+    def findinDoc(self, tagpath, pos, end) :
+        result = None
+        if end == -1 :
+            end = self.docSize
+        else:
+            end = min(self.docSize, end)
+        foundat = -1
+        for j in xrange(pos, end):
+            item = self.flatdoc[j]
+            if item.find('=') >= 0:
+                (name, argres) = item.split('=',1)
+            else :
+                name = item
+                argres = ''
+            if name.endswith(tagpath) :
+                result = argres
+                foundat = j
+                break
+        return foundat, result
+
+    # return list of start positions for the tagpath
+    def posinDoc(self, tagpath):
+        startpos = []
+        pos = 0
+        res = ""
+        while res != None :
+            (foundpos, res) = self.findinDoc(tagpath, pos, -1)
+            if res != None :
+                startpos.append(foundpos)
+                pos = foundpos + 1
+        return startpos
+
+    def getData(self, path):
+        result = None
+        cnt = len(self.flatdoc)
+        for j in xrange(cnt):
+            item = self.flatdoc[j]
+            if item.find('=') >= 0:
+                (name, argt) = item.split('=')
+                argres = argt.split('|')
+            else:
+                name = item
+                argres = []
+            if (name.endswith(path)):
+                result = argres
+                break
+        if (len(argres) > 0) :
+            for j in xrange(0,len(argres)):
+                argres[j] = int(argres[j])
+        return result
+
+    def getDataatPos(self, path, pos):
+        result = None
+        item = self.flatdoc[pos]
+        if item.find('=') >= 0:
+            (name, argt) = item.split('=')
|
||||||
|
argres = argt.split('|')
|
||||||
|
else:
|
||||||
|
name = item
|
||||||
|
argres = []
|
||||||
|
if (len(argres) > 0) :
|
||||||
|
for j in xrange(0,len(argres)):
|
||||||
|
argres[j] = int(argres[j])
|
||||||
|
if (name.endswith(path)):
|
||||||
|
result = argres
|
||||||
|
return result
|
||||||
|
|
||||||
|
def getDataTemp(self, path):
|
||||||
|
result = None
|
||||||
|
cnt = len(self.temp)
|
||||||
|
for j in xrange(cnt):
|
||||||
|
item = self.temp[j]
|
||||||
|
if item.find('=') >= 0:
|
||||||
|
(name, argt) = item.split('=')
|
||||||
|
argres = argt.split('|')
|
||||||
|
else:
|
||||||
|
name = item
|
||||||
|
argres = []
|
||||||
|
if (name.endswith(path)):
|
||||||
|
result = argres
|
||||||
|
self.temp.pop(j)
|
||||||
|
break
|
||||||
|
if (len(argres) > 0) :
|
||||||
|
for j in xrange(0,len(argres)):
|
||||||
|
argres[j] = int(argres[j])
|
||||||
|
return result
|
||||||
|
|
||||||
|
def getImages(self):
|
||||||
|
result = []
|
||||||
|
self.temp = self.flatdoc
|
||||||
|
while (self.getDataTemp('img') != None):
|
||||||
|
h = self.getDataTemp('img.h')[0]
|
||||||
|
w = self.getDataTemp('img.w')[0]
|
||||||
|
x = self.getDataTemp('img.x')[0]
|
||||||
|
y = self.getDataTemp('img.y')[0]
|
||||||
|
src = self.getDataTemp('img.src')[0]
|
||||||
|
result.append('<image xlink:href="../img/img%04d.jpg" x="%d" y="%d" width="%d" height="%d" />\n' % (src, x, y, w, h))
|
||||||
|
return result
|
||||||
|
|
||||||
|
def getGlyphs(self):
|
||||||
|
result = []
|
||||||
|
if (self.gid != None) and (len(self.gid) > 0):
|
||||||
|
glyphs = []
|
||||||
|
for j in set(self.gid):
|
||||||
|
glyphs.append(j)
|
||||||
|
glyphs.sort()
|
||||||
|
for gid in glyphs:
|
||||||
|
id='id="gl%d"' % gid
|
||||||
|
path = self.gd.lookup(id)
|
||||||
|
if path:
|
||||||
|
result.append(id + ' ' + path)
|
||||||
|
return result
|
||||||
|
|
||||||
|
|
||||||
|
def convert2SVG(gdict, flat_xml, pageid, previd, nextid, svgDir, raw, meta_array, scaledpi):
|
||||||
|
mlst = []
|
||||||
|
pp = PParser(gdict, flat_xml, meta_array)
|
||||||
|
mlst.append('<?xml version="1.0" standalone="no"?>\n')
|
||||||
|
if (raw):
|
||||||
|
mlst.append('<!DOCTYPE svg PUBLIC "-//W3C/DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">\n')
|
||||||
|
mlst.append('<svg width="%fin" height="%fin" viewBox="0 0 %d %d" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" version="1.1">\n' % (pp.pw / scaledpi, pp.ph / scaledpi, pp.pw -1, pp.ph -1))
|
||||||
|
mlst.append('<title>Page %d - %s by %s</title>\n' % (pageid, meta_array['Title'],meta_array['Authors']))
|
||||||
|
else:
|
||||||
|
mlst.append('<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">\n')
|
||||||
|
mlst.append('<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" ><head>\n')
|
||||||
|
mlst.append('<title>Page %d - %s by %s</title>\n' % (pageid, meta_array['Title'],meta_array['Authors']))
|
||||||
|
mlst.append('<script><![CDATA[\n')
|
||||||
|
mlst.append('function gd(){var p=window.location.href.replace(/^.*\?dpi=(\d+).*$/i,"$1");return p;}\n')
|
||||||
|
mlst.append('var dpi=%d;\n' % scaledpi)
|
||||||
|
if (previd) :
|
||||||
|
mlst.append('var prevpage="page%04d.xhtml";\n' % (previd))
|
||||||
|
if (nextid) :
|
||||||
|
mlst.append('var nextpage="page%04d.xhtml";\n' % (nextid))
|
||||||
|
mlst.append('var pw=%d;var ph=%d;' % (pp.pw, pp.ph))
|
||||||
|
mlst.append('function zoomin(){dpi=dpi*(0.8);setsize();}\n')
|
||||||
|
mlst.append('function zoomout(){dpi=dpi*1.25;setsize();}\n')
|
||||||
|
mlst.append('function setsize(){var svg=document.getElementById("svgimg");var prev=document.getElementById("prevsvg");var next=document.getElementById("nextsvg");var width=(pw/dpi)+"in";var height=(ph/dpi)+"in";svg.setAttribute("width",width);svg.setAttribute("height",height);prev.setAttribute("height",height);prev.setAttribute("width","50px");next.setAttribute("height",height);next.setAttribute("width","50px");}\n')
|
||||||
|
mlst.append('function ppage(){window.location.href=prevpage+"?dpi="+Math.round(dpi);}\n')
|
||||||
|
mlst.append('function npage(){window.location.href=nextpage+"?dpi="+Math.round(dpi);}\n')
|
||||||
|
mlst.append('var gt=gd();if(gt>0){dpi=gt;}\n')
|
||||||
|
mlst.append('window.onload=setsize;\n')
|
||||||
|
mlst.append(']]></script>\n')
|
||||||
|
mlst.append('</head>\n')
|
||||||
|
mlst.append('<body onLoad="setsize();" style="background-color:#777;text-align:center;">\n')
|
||||||
|
mlst.append('<div style="white-space:nowrap;">\n')
|
||||||
|
if previd == None:
|
||||||
|
mlst.append('<a href="javascript:ppage();"><svg id="prevsvg" viewBox="0 0 100 300" xmlns="http://www.w3.org/2000/svg" version="1.1" style="background-color:#777"></svg></a>\n')
|
||||||
|
else:
|
||||||
|
mlst.append('<a href="javascript:ppage();"><svg id="prevsvg" viewBox="0 0 100 300" xmlns="http://www.w3.org/2000/svg" version="1.1" style="background-color:#777"><polygon points="5,150,95,5,95,295" fill="#AAAAAA" /></svg></a>\n')
|
||||||
|
|
||||||
|
mlst.append('<a href="javascript:npage();"><svg id="svgimg" viewBox="0 0 %d %d" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" version="1.1" style="background-color:#FFF;border:1px solid black;">' % (pp.pw, pp.ph))
|
||||||
|
if (pp.gid != None):
|
||||||
|
mlst.append('<defs>\n')
|
||||||
|
gdefs = pp.getGlyphs()
|
||||||
|
for j in xrange(0,len(gdefs)):
|
||||||
|
mlst.append(gdefs[j])
|
||||||
|
mlst.append('</defs>\n')
|
||||||
|
img = pp.getImages()
|
||||||
|
if (img != None):
|
||||||
|
for j in xrange(0,len(img)):
|
||||||
|
mlst.append(img[j])
|
||||||
|
if (pp.gid != None):
|
||||||
|
for j in xrange(0,len(pp.gid)):
|
||||||
|
mlst.append('<use xlink:href="#gl%d" x="%d" y="%d" />\n' % (pp.gid[j], pp.gx[j], pp.gy[j]))
|
||||||
|
if (img == None or len(img) == 0) and (pp.gid == None or len(pp.gid) == 0):
|
||||||
|
xpos = "%d" % (pp.pw // 3)
|
||||||
|
ypos = "%d" % (pp.ph // 3)
|
||||||
|
mlst.append('<text x="' + xpos + '" y="' + ypos + '" font-size="' + meta_array['fontSize'] + '" font-family="Helvetica" stroke="black">This page intentionally left blank.</text>\n')
|
||||||
|
if (raw) :
|
||||||
|
mlst.append('</svg>')
|
||||||
|
else :
|
||||||
|
mlst.append('</svg></a>\n')
|
||||||
|
if nextid == None:
|
||||||
|
mlst.append('<a href="javascript:npage();"><svg id="nextsvg" viewBox="0 0 100 300" xmlns="http://www.w3.org/2000/svg" version="1.1" style="background-color:#777"></svg></a>\n')
|
||||||
|
else :
|
||||||
|
mlst.append('<a href="javascript:npage();"><svg id="nextsvg" viewBox="0 0 100 300" xmlns="http://www.w3.org/2000/svg" version="1.1" style="background-color:#777"><polygon points="5,5,5,295,95,150" fill="#AAAAAA" /></svg></a>\n')
|
||||||
|
mlst.append('</div>\n')
|
||||||
|
mlst.append('<div><a href="javascript:zoomin();">zoom in</a> - <a href="javascript:zoomout();">zoom out</a></div>\n')
|
||||||
|
mlst.append('</body>\n')
|
||||||
|
mlst.append('</html>\n')
|
||||||
|
return "".join(mlst)
|
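Every `PParser` accessor above walks the same "flattened XML" representation: one dotted tag path per line, optionally followed by `=` and a value, with multi-valued entries joined by `|`. A minimal standalone sketch of that line format (`parse_flat_line` is a hypothetical helper written for illustration, not part of the plugin; it uses a single `maxsplit=1` split, whereas some of the plugin's methods split on every `=`):

```python
# Sketch of the flattened-XML line format consumed by PParser:
# each line is "dotted.tag.path" or "dotted.tag.path=value", and a
# multi-valued entry joins its values with '|'.

def parse_flat_line(line):
    """Return (tag path, list of raw string values) for one line."""
    if '=' in line:
        name, argt = line.split('=', 1)
        values = argt.split('|')
    else:
        name, values = line, []
    return name, values

doc = 'page.h=11000\ninfo.glyph.x=100|220|340\npage'
parsed = [parse_flat_line(line) for line in doc.split('\n')]
# parsed[1] -> ('info.glyph.x', ['100', '220', '340'])
```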
721
Calibre_Plugins/K4MobiDeDRM_plugin/genbook.py
Normal file
@@ -0,0 +1,721 @@
#! /usr/bin/python
|
||||||
|
# vim:ts=4:sw=4:softtabstop=4:smarttab:expandtab
|
||||||
|
|
||||||
|
class Unbuffered:
|
||||||
|
def __init__(self, stream):
|
||||||
|
self.stream = stream
|
||||||
|
def write(self, data):
|
||||||
|
self.stream.write(data)
|
||||||
|
self.stream.flush()
|
||||||
|
def __getattr__(self, attr):
|
||||||
|
return getattr(self.stream, attr)
|
||||||
|
|
||||||
|
import sys
|
||||||
|
sys.stdout=Unbuffered(sys.stdout)
|
||||||
|
|
||||||
|
import csv
|
||||||
|
import os
|
||||||
|
import getopt
|
||||||
|
from struct import pack
|
||||||
|
from struct import unpack
|
||||||
|
|
||||||
|
class TpzDRMError(Exception):
|
||||||
|
pass
|
||||||
|
|
||||||
|
# local support routines
|
||||||
|
if 'calibre' in sys.modules:
|
||||||
|
inCalibre = True
|
||||||
|
else:
|
||||||
|
inCalibre = False
|
||||||
|
|
||||||
|
if inCalibre :
|
||||||
|
from calibre_plugins.k4mobidedrm import convert2xml
|
||||||
|
from calibre_plugins.k4mobidedrm import flatxml2html
|
||||||
|
from calibre_plugins.k4mobidedrm import flatxml2svg
|
||||||
|
from calibre_plugins.k4mobidedrm import stylexml2css
|
||||||
|
else :
|
||||||
|
import convert2xml
|
||||||
|
import flatxml2html
|
||||||
|
import flatxml2svg
|
||||||
|
import stylexml2css
|
||||||
|
|
||||||
|
# global switch
|
||||||
|
buildXML = False
|
||||||
|
|
||||||
|
# Get a 7 bit encoded number from a file
|
||||||
|
def readEncodedNumber(file):
|
||||||
|
flag = False
|
||||||
|
c = file.read(1)
|
||||||
|
if (len(c) == 0):
|
||||||
|
return None
|
||||||
|
data = ord(c)
|
||||||
|
if data == 0xFF:
|
||||||
|
flag = True
|
||||||
|
c = file.read(1)
|
||||||
|
if (len(c) == 0):
|
||||||
|
return None
|
||||||
|
data = ord(c)
|
||||||
|
if data >= 0x80:
|
||||||
|
datax = (data & 0x7F)
|
||||||
|
while data >= 0x80 :
|
||||||
|
c = file.read(1)
|
||||||
|
if (len(c) == 0):
|
||||||
|
return None
|
||||||
|
data = ord(c)
|
||||||
|
datax = (datax <<7) + (data & 0x7F)
|
||||||
|
data = datax
|
||||||
|
if flag:
|
||||||
|
data = -data
|
||||||
|
return data
|
||||||
|
|
||||||
|
# Get a length prefixed string from the file
|
||||||
|
def lengthPrefixString(data):
|
||||||
|
return encodeNumber(len(data))+data
|
||||||
|
|
||||||
|
def readString(file):
|
||||||
|
stringLength = readEncodedNumber(file)
|
||||||
|
if (stringLength == None):
|
||||||
|
return None
|
||||||
|
sv = file.read(stringLength)
|
||||||
|
if (len(sv) != stringLength):
|
||||||
|
return ""
|
||||||
|
return unpack(str(stringLength)+"s",sv)[0]
|
||||||
|
|
||||||
|
def getMetaArray(metaFile):
|
||||||
|
# parse the meta file
|
||||||
|
result = {}
|
||||||
|
fo = file(metaFile,'rb')
|
||||||
|
size = readEncodedNumber(fo)
|
||||||
|
for i in xrange(size):
|
||||||
|
tag = readString(fo)
|
||||||
|
value = readString(fo)
|
||||||
|
result[tag] = value
|
||||||
|
# print tag, value
|
||||||
|
fo.close()
|
||||||
|
return result
|
||||||
|
|
||||||
|
|
||||||
|
# dictionary of all text strings by index value
|
||||||
|
class Dictionary(object):
|
||||||
|
def __init__(self, dictFile):
|
||||||
|
self.filename = dictFile
|
||||||
|
self.size = 0
|
||||||
|
self.fo = file(dictFile,'rb')
|
||||||
|
self.stable = []
|
||||||
|
self.size = readEncodedNumber(self.fo)
|
||||||
|
for i in xrange(self.size):
|
||||||
|
self.stable.append(self.escapestr(readString(self.fo)))
|
||||||
|
self.pos = 0
|
||||||
|
def escapestr(self, str):
|
||||||
|
str = str.replace('&','&')
|
||||||
|
str = str.replace('<','<')
|
||||||
|
str = str.replace('>','>')
|
||||||
|
str = str.replace('=','=')
|
||||||
|
return str
|
||||||
|
def lookup(self,val):
|
||||||
|
if ((val >= 0) and (val < self.size)) :
|
||||||
|
self.pos = val
|
||||||
|
return self.stable[self.pos]
|
||||||
|
else:
|
||||||
|
print "Error - %d outside of string table limits" % val
|
||||||
|
raise TpzDRMError('outside or string table limits')
|
||||||
|
# sys.exit(-1)
|
||||||
|
def getSize(self):
|
||||||
|
return self.size
|
||||||
|
def getPos(self):
|
||||||
|
return self.pos
|
||||||
|
|
||||||
|
|
||||||
|
class PageDimParser(object):
|
||||||
|
def __init__(self, flatxml):
|
||||||
|
self.flatdoc = flatxml.split('\n')
|
||||||
|
# find tag if within pos to end inclusive
|
||||||
|
def findinDoc(self, tagpath, pos, end) :
|
||||||
|
result = None
|
||||||
|
docList = self.flatdoc
|
||||||
|
cnt = len(docList)
|
||||||
|
if end == -1 :
|
||||||
|
end = cnt
|
||||||
|
else:
|
||||||
|
end = min(cnt,end)
|
||||||
|
foundat = -1
|
||||||
|
for j in xrange(pos, end):
|
||||||
|
item = docList[j]
|
||||||
|
if item.find('=') >= 0:
|
||||||
|
(name, argres) = item.split('=')
|
||||||
|
else :
|
||||||
|
name = item
|
||||||
|
argres = ''
|
||||||
|
if name.endswith(tagpath) :
|
||||||
|
result = argres
|
||||||
|
foundat = j
|
||||||
|
break
|
||||||
|
return foundat, result
|
||||||
|
def process(self):
|
||||||
|
(pos, sph) = self.findinDoc('page.h',0,-1)
|
||||||
|
(pos, spw) = self.findinDoc('page.w',0,-1)
|
||||||
|
if (sph == None): sph = '-1'
|
||||||
|
if (spw == None): spw = '-1'
|
||||||
|
return sph, spw
|
||||||
|
|
||||||
|
def getPageDim(flatxml):
|
||||||
|
# create a document parser
|
||||||
|
dp = PageDimParser(flatxml)
|
||||||
|
(ph, pw) = dp.process()
|
||||||
|
return ph, pw
|
||||||
|
|
||||||
|
class GParser(object):
|
||||||
|
def __init__(self, flatxml):
|
||||||
|
self.flatdoc = flatxml.split('\n')
|
||||||
|
self.dpi = 1440
|
||||||
|
self.gh = self.getData('info.glyph.h')
|
||||||
|
self.gw = self.getData('info.glyph.w')
|
||||||
|
self.guse = self.getData('info.glyph.use')
|
||||||
|
if self.guse :
|
||||||
|
self.count = len(self.guse)
|
||||||
|
else :
|
||||||
|
self.count = 0
|
||||||
|
self.gvtx = self.getData('info.glyph.vtx')
|
||||||
|
self.glen = self.getData('info.glyph.len')
|
||||||
|
self.gdpi = self.getData('info.glyph.dpi')
|
||||||
|
self.vx = self.getData('info.vtx.x')
|
||||||
|
self.vy = self.getData('info.vtx.y')
|
||||||
|
self.vlen = self.getData('info.len.n')
|
||||||
|
if self.vlen :
|
||||||
|
self.glen.append(len(self.vlen))
|
||||||
|
elif self.glen:
|
||||||
|
self.glen.append(0)
|
||||||
|
if self.vx :
|
||||||
|
self.gvtx.append(len(self.vx))
|
||||||
|
elif self.gvtx :
|
||||||
|
self.gvtx.append(0)
|
||||||
|
def getData(self, path):
|
||||||
|
result = None
|
||||||
|
cnt = len(self.flatdoc)
|
||||||
|
for j in xrange(cnt):
|
||||||
|
item = self.flatdoc[j]
|
||||||
|
if item.find('=') >= 0:
|
||||||
|
(name, argt) = item.split('=')
|
||||||
|
argres = argt.split('|')
|
||||||
|
else:
|
||||||
|
name = item
|
||||||
|
argres = []
|
||||||
|
if (name == path):
|
||||||
|
result = argres
|
||||||
|
break
|
||||||
|
if (len(argres) > 0) :
|
||||||
|
for j in xrange(0,len(argres)):
|
||||||
|
argres[j] = int(argres[j])
|
||||||
|
return result
|
||||||
|
def getGlyphDim(self, gly):
|
||||||
|
if self.gdpi[gly] == 0:
|
||||||
|
return 0, 0
|
||||||
|
maxh = (self.gh[gly] * self.dpi) / self.gdpi[gly]
|
||||||
|
maxw = (self.gw[gly] * self.dpi) / self.gdpi[gly]
|
||||||
|
return maxh, maxw
|
||||||
|
def getPath(self, gly):
|
||||||
|
path = ''
|
||||||
|
if (gly < 0) or (gly >= self.count):
|
||||||
|
return path
|
||||||
|
tx = self.vx[self.gvtx[gly]:self.gvtx[gly+1]]
|
||||||
|
ty = self.vy[self.gvtx[gly]:self.gvtx[gly+1]]
|
||||||
|
p = 0
|
||||||
|
for k in xrange(self.glen[gly], self.glen[gly+1]):
|
||||||
|
if (p == 0):
|
||||||
|
zx = tx[0:self.vlen[k]+1]
|
||||||
|
zy = ty[0:self.vlen[k]+1]
|
||||||
|
else:
|
||||||
|
zx = tx[self.vlen[k-1]+1:self.vlen[k]+1]
|
||||||
|
zy = ty[self.vlen[k-1]+1:self.vlen[k]+1]
|
||||||
|
p += 1
|
||||||
|
j = 0
|
||||||
|
while ( j < len(zx) ):
|
||||||
|
if (j == 0):
|
||||||
|
# Start Position.
|
||||||
|
path += 'M %d %d ' % (zx[j] * self.dpi / self.gdpi[gly], zy[j] * self.dpi / self.gdpi[gly])
|
||||||
|
elif (j <= len(zx)-3):
|
||||||
|
# Cubic Bezier Curve
|
||||||
|
path += 'C %d %d %d %d %d %d ' % (zx[j] * self.dpi / self.gdpi[gly], zy[j] * self.dpi / self.gdpi[gly], zx[j+1] * self.dpi / self.gdpi[gly], zy[j+1] * self.dpi / self.gdpi[gly], zx[j+2] * self.dpi / self.gdpi[gly], zy[j+2] * self.dpi / self.gdpi[gly])
|
||||||
|
j += 2
|
||||||
|
elif (j == len(zx)-2):
|
||||||
|
# Cubic Bezier Curve to Start Position
|
||||||
|
path += 'C %d %d %d %d %d %d ' % (zx[j] * self.dpi / self.gdpi[gly], zy[j] * self.dpi / self.gdpi[gly], zx[j+1] * self.dpi / self.gdpi[gly], zy[j+1] * self.dpi / self.gdpi[gly], zx[0] * self.dpi / self.gdpi[gly], zy[0] * self.dpi / self.gdpi[gly])
|
||||||
|
j += 1
|
||||||
|
elif (j == len(zx)-1):
|
||||||
|
# Quadratic Bezier Curve to Start Position
|
||||||
|
path += 'Q %d %d %d %d ' % (zx[j] * self.dpi / self.gdpi[gly], zy[j] * self.dpi / self.gdpi[gly], zx[0] * self.dpi / self.gdpi[gly], zy[0] * self.dpi / self.gdpi[gly])
|
||||||
|
|
||||||
|
j += 1
|
||||||
|
path += 'z'
|
||||||
|
return path
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
# dictionary of all text strings by index value
|
||||||
|
class GlyphDict(object):
|
||||||
|
def __init__(self):
|
||||||
|
self.gdict = {}
|
||||||
|
def lookup(self, id):
|
||||||
|
# id='id="gl%d"' % val
|
||||||
|
if id in self.gdict:
|
||||||
|
return self.gdict[id]
|
||||||
|
return None
|
||||||
|
def addGlyph(self, val, path):
|
||||||
|
id='id="gl%d"' % val
|
||||||
|
self.gdict[id] = path
|
||||||
|
|
||||||
|
|
||||||
|
def generateBook(bookDir, raw, fixedimage):
|
||||||
|
# sanity check Topaz file extraction
|
||||||
|
if not os.path.exists(bookDir) :
|
||||||
|
print "Can not find directory with unencrypted book"
|
||||||
|
return 1
|
||||||
|
|
||||||
|
dictFile = os.path.join(bookDir,'dict0000.dat')
|
||||||
|
if not os.path.exists(dictFile) :
|
||||||
|
print "Can not find dict0000.dat file"
|
||||||
|
return 1
|
||||||
|
|
||||||
|
pageDir = os.path.join(bookDir,'page')
|
||||||
|
if not os.path.exists(pageDir) :
|
||||||
|
print "Can not find page directory in unencrypted book"
|
||||||
|
return 1
|
||||||
|
|
||||||
|
imgDir = os.path.join(bookDir,'img')
|
||||||
|
if not os.path.exists(imgDir) :
|
||||||
|
print "Can not find image directory in unencrypted book"
|
||||||
|
return 1
|
||||||
|
|
||||||
|
glyphsDir = os.path.join(bookDir,'glyphs')
|
||||||
|
if not os.path.exists(glyphsDir) :
|
||||||
|
print "Can not find glyphs directory in unencrypted book"
|
||||||
|
return 1
|
||||||
|
|
||||||
|
metaFile = os.path.join(bookDir,'metadata0000.dat')
|
||||||
|
if not os.path.exists(metaFile) :
|
||||||
|
print "Can not find metadata0000.dat in unencrypted book"
|
||||||
|
return 1
|
||||||
|
|
||||||
|
svgDir = os.path.join(bookDir,'svg')
|
||||||
|
if not os.path.exists(svgDir) :
|
||||||
|
os.makedirs(svgDir)
|
||||||
|
|
||||||
|
if buildXML:
|
||||||
|
xmlDir = os.path.join(bookDir,'xml')
|
||||||
|
if not os.path.exists(xmlDir) :
|
||||||
|
os.makedirs(xmlDir)
|
||||||
|
|
||||||
|
otherFile = os.path.join(bookDir,'other0000.dat')
|
||||||
|
if not os.path.exists(otherFile) :
|
||||||
|
print "Can not find other0000.dat in unencrypted book"
|
||||||
|
return 1
|
||||||
|
|
||||||
|
print "Updating to color images if available"
|
||||||
|
spath = os.path.join(bookDir,'color_img')
|
||||||
|
dpath = os.path.join(bookDir,'img')
|
||||||
|
filenames = os.listdir(spath)
|
||||||
|
filenames = sorted(filenames)
|
||||||
|
for filename in filenames:
|
||||||
|
imgname = filename.replace('color','img')
|
||||||
|
sfile = os.path.join(spath,filename)
|
||||||
|
dfile = os.path.join(dpath,imgname)
|
||||||
|
imgdata = file(sfile,'rb').read()
|
||||||
|
file(dfile,'wb').write(imgdata)
|
||||||
|
|
||||||
|
print "Creating cover.jpg"
|
||||||
|
isCover = False
|
||||||
|
cpath = os.path.join(bookDir,'img')
|
||||||
|
cpath = os.path.join(cpath,'img0000.jpg')
|
||||||
|
if os.path.isfile(cpath):
|
||||||
|
cover = file(cpath, 'rb').read()
|
||||||
|
cpath = os.path.join(bookDir,'cover.jpg')
|
||||||
|
file(cpath, 'wb').write(cover)
|
||||||
|
isCover = True
|
||||||
|
|
||||||
|
|
||||||
|
print 'Processing Dictionary'
|
||||||
|
dict = Dictionary(dictFile)
|
||||||
|
|
||||||
|
print 'Processing Meta Data and creating OPF'
|
||||||
|
meta_array = getMetaArray(metaFile)
|
||||||
|
|
||||||
|
# replace special chars in title and authors like & < >
|
||||||
|
title = meta_array.get('Title','No Title Provided')
|
||||||
|
title = title.replace('&','&')
|
||||||
|
title = title.replace('<','<')
|
||||||
|
title = title.replace('>','>')
|
||||||
|
meta_array['Title'] = title
|
||||||
|
authors = meta_array.get('Authors','No Authors Provided')
|
||||||
|
authors = authors.replace('&','&')
|
||||||
|
authors = authors.replace('<','<')
|
||||||
|
authors = authors.replace('>','>')
|
||||||
|
meta_array['Authors'] = authors
|
||||||
|
|
||||||
|
if buildXML:
|
||||||
|
xname = os.path.join(xmlDir, 'metadata.xml')
|
||||||
|
mlst = []
|
||||||
|
for key in meta_array:
|
||||||
|
mlst.append('<meta name="' + key + '" content="' + meta_array[key] + '" />\n')
|
||||||
|
metastr = "".join(mlst)
|
||||||
|
mlst = None
|
||||||
|
file(xname, 'wb').write(metastr)
|
||||||
|
|
||||||
|
print 'Processing StyleSheet'
|
||||||
|
|
||||||
|
# get some scaling info from metadata to use while processing styles
|
||||||
|
# and first page info
|
||||||
|
|
||||||
|
fontsize = '135'
|
||||||
|
if 'fontSize' in meta_array:
|
||||||
|
fontsize = meta_array['fontSize']
|
||||||
|
|
||||||
|
# also get the size of a normal text page
|
||||||
|
# get the total number of pages unpacked as a safety check
|
||||||
|
filenames = os.listdir(pageDir)
|
||||||
|
numfiles = len(filenames)
|
||||||
|
|
||||||
|
spage = '1'
|
||||||
|
if 'firstTextPage' in meta_array:
|
||||||
|
spage = meta_array['firstTextPage']
|
||||||
|
pnum = int(spage)
|
||||||
|
if pnum >= numfiles or pnum < 0:
|
||||||
|
# metadata is wrong so just select a page near the front
|
||||||
|
# 10% of the book to get a normal text page
|
||||||
|
pnum = int(0.10 * numfiles)
|
||||||
|
# print "first normal text page is", spage
|
||||||
|
|
||||||
|
# get page height and width from first text page for use in stylesheet scaling
|
||||||
|
pname = 'page%04d.dat' % (pnum + 1)
|
||||||
|
fname = os.path.join(pageDir,pname)
|
||||||
|
flat_xml = convert2xml.fromData(dict, fname)
|
||||||
|
|
||||||
|
(ph, pw) = getPageDim(flat_xml)
|
||||||
|
if (ph == '-1') or (ph == '0') : ph = '11000'
|
||||||
|
if (pw == '-1') or (pw == '0') : pw = '8500'
|
||||||
|
meta_array['pageHeight'] = ph
|
||||||
|
meta_array['pageWidth'] = pw
|
||||||
|
if 'fontSize' not in meta_array.keys():
|
||||||
|
meta_array['fontSize'] = fontsize
|
||||||
|
|
||||||
|
# process other.dat for css info and for map of page files to svg images
|
||||||
|
# this map is needed because some pages actually are made up of multiple
|
||||||
|
# pageXXXX.xml files
|
||||||
|
xname = os.path.join(bookDir, 'style.css')
|
||||||
|
flat_xml = convert2xml.fromData(dict, otherFile)
|
||||||
|
|
||||||
|
# extract info.original.pid to get original page information
|
||||||
|
pageIDMap = {}
|
||||||
|
pageidnums = stylexml2css.getpageIDMap(flat_xml)
|
||||||
|
if len(pageidnums) == 0:
|
||||||
|
filenames = os.listdir(pageDir)
|
||||||
|
numfiles = len(filenames)
|
||||||
|
for k in range(numfiles):
|
||||||
|
pageidnums.append(k)
|
||||||
|
# create a map from page ids to list of page file nums to process for that page
|
||||||
|
for i in range(len(pageidnums)):
|
||||||
|
id = pageidnums[i]
|
||||||
|
if id in pageIDMap.keys():
|
||||||
|
pageIDMap[id].append(i)
|
||||||
|
else:
|
||||||
|
pageIDMap[id] = [i]
|
||||||
|
|
||||||
|
# now get the css info
|
||||||
|
cssstr , classlst = stylexml2css.convert2CSS(flat_xml, fontsize, ph, pw)
|
||||||
|
file(xname, 'wb').write(cssstr)
|
||||||
|
if buildXML:
|
||||||
|
xname = os.path.join(xmlDir, 'other0000.xml')
|
||||||
|
file(xname, 'wb').write(convert2xml.getXML(dict, otherFile))
|
||||||
|
|
||||||
|
print 'Processing Glyphs'
|
||||||
|
gd = GlyphDict()
|
||||||
|
filenames = os.listdir(glyphsDir)
|
||||||
|
filenames = sorted(filenames)
|
||||||
|
glyfname = os.path.join(svgDir,'glyphs.svg')
|
||||||
|
glyfile = open(glyfname, 'w')
|
||||||
|
glyfile.write('<?xml version="1.0" standalone="no"?>\n')
|
||||||
|
glyfile.write('<!DOCTYPE svg PUBLIC "-//W3C/DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">\n')
|
||||||
|
glyfile.write('<svg width="512" height="512" viewBox="0 0 511 511" xmlns="http://www.w3.org/2000/svg" version="1.1">\n')
|
||||||
|
glyfile.write('<title>Glyphs for %s</title>\n' % meta_array['Title'])
|
||||||
|
glyfile.write('<defs>\n')
|
||||||
|
counter = 0
|
||||||
|
for filename in filenames:
|
||||||
|
# print ' ', filename
|
||||||
|
print '.',
|
||||||
|
fname = os.path.join(glyphsDir,filename)
|
||||||
|
flat_xml = convert2xml.fromData(dict, fname)
|
||||||
|
|
||||||
|
if buildXML:
|
||||||
|
xname = os.path.join(xmlDir, filename.replace('.dat','.xml'))
|
||||||
|
file(xname, 'wb').write(convert2xml.getXML(dict, fname))
|
||||||
|
|
||||||
|
gp = GParser(flat_xml)
|
||||||
|
for i in xrange(0, gp.count):
|
||||||
|
path = gp.getPath(i)
|
||||||
|
maxh, maxw = gp.getGlyphDim(i)
|
||||||
|
fullpath = '<path id="gl%d" d="%s" fill="black" /><!-- width=%d height=%d -->\n' % (counter * 256 + i, path, maxw, maxh)
|
||||||
|
glyfile.write(fullpath)
|
||||||
|
gd.addGlyph(counter * 256 + i, fullpath)
|
||||||
|
counter += 1
|
||||||
|
glyfile.write('</defs>\n')
|
||||||
|
glyfile.write('</svg>\n')
|
||||||
|
glyfile.close()
|
||||||
|
print " "
|
||||||
|
|
||||||
|
|
||||||
|
# start up the html
|
||||||
|
# also build up tocentries while processing html
|
||||||
|
htmlFileName = "book.html"
|
||||||
|
hlst = []
|
||||||
|
hlst.append('<?xml version="1.0" encoding="utf-8"?>\n')
|
||||||
|
hlst.append('<!DOCTYPE HTML PUBLIC "-//W3C//DTD XHTML 1.1 Strict//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11-strict.dtd">\n')
|
||||||
|
hlst.append('<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">\n')
|
||||||
|
hlst.append('<head>\n')
|
||||||
|
hlst.append('<meta http-equiv="content-type" content="text/html; charset=utf-8"/>\n')
|
||||||
|
hlst.append('<title>' + meta_array['Title'] + ' by ' + meta_array['Authors'] + '</title>\n')
|
||||||
|
hlst.append('<meta name="Author" content="' + meta_array['Authors'] + '" />\n')
|
||||||
|
hlst.append('<meta name="Title" content="' + meta_array['Title'] + '" />\n')
|
||||||
|
if 'ASIN' in meta_array:
|
||||||
|
hlst.append('<meta name="ASIN" content="' + meta_array['ASIN'] + '" />\n')
|
||||||
|
if 'GUID' in meta_array:
|
||||||
|
hlst.append('<meta name="GUID" content="' + meta_array['GUID'] + '" />\n')
|
||||||
|
hlst.append('<link href="style.css" rel="stylesheet" type="text/css" />\n')
|
||||||
|
hlst.append('</head>\n<body>\n')
|
||||||
|
|
||||||
|
print 'Processing Pages'
|
||||||
|
# Books are at 1440 DPI. This is rendering at twice that size for
|
||||||
|
# readability when rendering to the screen.
|
||||||
|
scaledpi = 1440.0
|
||||||
|
|
||||||
|
filenames = os.listdir(pageDir)
|
||||||
|
filenames = sorted(filenames)
|
||||||
|
numfiles = len(filenames)
|
||||||
|
|
||||||
|
xmllst = []
|
||||||
|
elst = []
|
||||||
|
|
||||||
|
for filename in filenames:
|
||||||
|
# print ' ', filename
|
||||||
|
print ".",
|
||||||
|
fname = os.path.join(pageDir,filename)
|
||||||
|
flat_xml = convert2xml.fromData(dict, fname)
|
||||||
|
|
||||||
|
# keep flat_xml for later svg processing
|
||||||
|
xmllst.append(flat_xml)
|
||||||
|
|
||||||
|
if buildXML:
|
||||||
|
xname = os.path.join(xmlDir, filename.replace('.dat','.xml'))
|
||||||
|
file(xname, 'wb').write(convert2xml.getXML(dict, fname))
|
||||||
|
|
||||||
|
# first get the html
|
||||||
|
pagehtml, tocinfo = flatxml2html.convert2HTML(flat_xml, classlst, fname, bookDir, gd, fixedimage)
|
||||||
|
elst.append(tocinfo)
|
||||||
|
hlst.append(pagehtml)
|
||||||
|
|
||||||
|
# finish up the html string and output it
|
||||||
|
hlst.append('</body>\n</html>\n')
|
||||||
|
htmlstr = "".join(hlst)
|
||||||
|
hlst = None
|
||||||
|
file(os.path.join(bookDir, htmlFileName), 'wb').write(htmlstr)
|
||||||
|
|
||||||
|
print " "
|
||||||
|
print 'Extracting Table of Contents from Amazon OCR'
|
||||||
|
|
||||||
|
# first create a table of contents file for the svg images
|
||||||
|
tlst = []
|
||||||
|
tlst.append('<?xml version="1.0" encoding="utf-8"?>\n')
|
||||||
|
tlst.append('<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">\n')
|
||||||
|
tlst.append('<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" >')
|
||||||
|
tlst.append('<head>\n')
|
||||||
|
tlst.append('<title>' + meta_array['Title'] + '</title>\n')
|
||||||
|
tlst.append('<meta name="Author" content="' + meta_array['Authors'] + '" />\n')
|
||||||
|
tlst.append('<meta name="Title" content="' + meta_array['Title'] + '" />\n')
|
||||||
|
if 'ASIN' in meta_array:
|
||||||
|
tlst.append('<meta name="ASIN" content="' + meta_array['ASIN'] + '" />\n')
|
||||||
|
if 'GUID' in meta_array:
|
||||||
|
tlst.append('<meta name="GUID" content="' + meta_array['GUID'] + '" />\n')
|
||||||
|
tlst.append('</head>\n')
|
||||||
|
tlst.append('<body>\n')
|
||||||
|
|
||||||
|
tlst.append('<h2>Table of Contents</h2>\n')
|
||||||
|
start = pageidnums[0]
|
||||||
|
if (raw):
|
||||||
|
startname = 'page%04d.svg' % start
|
||||||
|
else:
|
||||||
|
startname = 'page%04d.xhtml' % start
|
||||||
|
|
||||||
|
tlst.append('<h3><a href="' + startname + '">Start of Book</a></h3>\n')
|
||||||
|
# build up a table of contents for the svg xhtml output
|
||||||
|
tocentries = "".join(elst)
|
||||||
|
elst = None
|
||||||
|
toclst = tocentries.split('\n')
|
||||||
|
toclst.pop()
|
||||||
|
for entry in toclst:
|
||||||
|
print entry
|
||||||
|
title, pagenum = entry.split('|')
|
||||||
|
id = pageidnums[int(pagenum)]
|
||||||
|
if (raw):
|
||||||
|
fname = 'page%04d.svg' % id
|
||||||
|
else:
|
||||||
|
fname = 'page%04d.xhtml' % id
|
||||||
|
tlst.append('<h3><a href="'+ fname + '">' + title + '</a></h3>\n')
|
||||||
|
tlst.append('</body>\n')
|
||||||
|
tlst.append('</html>\n')
|
||||||
|
tochtml = "".join(tlst)
|
||||||
|
file(os.path.join(svgDir, 'toc.xhtml'), 'wb').write(tochtml)
|
||||||
|
|
||||||
|
|
||||||
|
    # now create index_svg.xhtml that points to all required files
    slst = []
    slst.append('<?xml version="1.0" encoding="utf-8"?>\n')
    slst.append('<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">\n')
    slst.append('<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" >')
    slst.append('<head>\n')
    slst.append('<title>' + meta_array['Title'] + '</title>\n')
    slst.append('<meta name="Author" content="' + meta_array['Authors'] + '" />\n')
    slst.append('<meta name="Title" content="' + meta_array['Title'] + '" />\n')
    if 'ASIN' in meta_array:
        slst.append('<meta name="ASIN" content="' + meta_array['ASIN'] + '" />\n')
    if 'GUID' in meta_array:
        slst.append('<meta name="GUID" content="' + meta_array['GUID'] + '" />\n')
    slst.append('</head>\n')
    slst.append('<body>\n')

    print "Building svg images of each book page"
    slst.append('<h2>List of Pages</h2>\n')
    slst.append('<div>\n')
    idlst = sorted(pageIDMap.keys())
    numids = len(idlst)
    cnt = len(idlst)
    previd = None
    for j in range(cnt):
        pageid = idlst[j]
        if j < cnt - 1:
            nextid = idlst[j+1]
        else:
            nextid = None
        print '.',
        pagelst = pageIDMap[pageid]
        flst = []
        for page in pagelst:
            flst.append(xmllst[page])
        flat_svg = "".join(flst)
        flst = None
        svgxml = flatxml2svg.convert2SVG(gd, flat_svg, pageid, previd, nextid, svgDir, raw, meta_array, scaledpi)
        if (raw) :
            pfile = open(os.path.join(svgDir,'page%04d.svg' % pageid),'w')
            slst.append('<a href="svg/page%04d.svg">Page %d</a>\n' % (pageid, pageid))
        else :
            pfile = open(os.path.join(svgDir,'page%04d.xhtml' % pageid), 'w')
            slst.append('<a href="svg/page%04d.xhtml">Page %d</a>\n' % (pageid, pageid))
        previd = pageid
        pfile.write(svgxml)
        pfile.close()
        counter += 1
    slst.append('</div>\n')
    slst.append('<h2><a href="svg/toc.xhtml">Table of Contents</a></h2>\n')
    slst.append('</body>\n</html>\n')
    svgindex = "".join(slst)
    slst = None
    file(os.path.join(bookDir, 'index_svg.xhtml'), 'wb').write(svgindex)

    print " "

    # build the opf file
    opfname = os.path.join(bookDir, 'book.opf')
    olst = []
    olst.append('<?xml version="1.0" encoding="utf-8"?>\n')
    olst.append('<package xmlns="http://www.idpf.org/2007/opf" unique-identifier="guid_id">\n')
    # adding metadata
    olst.append(' <metadata xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:opf="http://www.idpf.org/2007/opf">\n')
    if 'GUID' in meta_array:
        olst.append(' <dc:identifier opf:scheme="GUID" id="guid_id">' + meta_array['GUID'] + '</dc:identifier>\n')
    if 'ASIN' in meta_array:
        olst.append(' <dc:identifier opf:scheme="ASIN">' + meta_array['ASIN'] + '</dc:identifier>\n')
    if 'oASIN' in meta_array:
        olst.append(' <dc:identifier opf:scheme="oASIN">' + meta_array['oASIN'] + '</dc:identifier>\n')
    olst.append(' <dc:title>' + meta_array['Title'] + '</dc:title>\n')
    olst.append(' <dc:creator opf:role="aut">' + meta_array['Authors'] + '</dc:creator>\n')
    olst.append(' <dc:language>en</dc:language>\n')
    olst.append(' <dc:date>' + meta_array['UpdateTime'] + '</dc:date>\n')
    if isCover:
        olst.append(' <meta name="cover" content="bookcover"/>\n')
    olst.append(' </metadata>\n')
    olst.append('<manifest>\n')
    olst.append(' <item id="book" href="book.html" media-type="application/xhtml+xml"/>\n')
    olst.append(' <item id="stylesheet" href="style.css" media-type="text/css"/>\n')
    # adding image files to manifest
    filenames = os.listdir(imgDir)
    filenames = sorted(filenames)
    for filename in filenames:
        imgname, imgext = os.path.splitext(filename)
        if imgext == '.jpg':
            imgext = 'jpeg'
        if imgext == '.svg':
            imgext = 'svg+xml'
        olst.append(' <item id="' + imgname + '" href="img/' + filename + '" media-type="image/' + imgext + '"/>\n')
    if isCover:
        olst.append(' <item id="bookcover" href="cover.jpg" media-type="image/jpeg" />\n')
    olst.append('</manifest>\n')
    # adding spine
    olst.append('<spine>\n <itemref idref="book" />\n</spine>\n')
    if isCover:
        olst.append(' <guide>\n')
        olst.append(' <reference href="cover.jpg" type="cover" title="Cover"/>\n')
        olst.append(' </guide>\n')
    olst.append('</package>\n')
    opfstr = "".join(olst)
    olst = None
    file(opfname, 'wb').write(opfstr)

    print 'Processing Complete'

    return 0

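The manifest loop above maps file extensions onto MIME subtypes (`.jpg` becomes `image/jpeg`, `.svg` becomes `image/svg+xml`). A small Python sketch of just that mapping, written so it can be run on its own (it strips the leading dot that `os.path.splitext` keeps, which the original handles implicitly by comparing against `'.jpg'`/`'.svg'`):

```python
import os

def media_type(filename):
    # extension -> MIME subtype mapping used when building the OPF manifest
    imgname, imgext = os.path.splitext(filename)
    imgext = imgext[1:]            # drop the leading dot kept by splitext
    if imgext == 'jpg':
        imgext = 'jpeg'
    if imgext == 'svg':
        imgext = 'svg+xml'
    return 'image/' + imgext

print(media_type('cover.jpg'))    # image/jpeg
print(media_type('img0001.svg'))  # image/svg+xml
```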
def usage():
    print "genbook.py generates a book from the extracted Topaz Files"
    print "Usage:"
    print "  genbook.py [-r] [-h] [--fixed-image] <bookDir>"
    print " "
    print "Options:"
    print "  -h : help - print this usage message"
    print "  -r : generate raw svg files (not wrapped in xhtml)"
    print "  --fixed-image : generate any Fixed Area as an svg image in the html"
    print " "

def main(argv):
    bookDir = ''
    if len(argv) == 0:
        argv = sys.argv

    try:
        opts, args = getopt.getopt(argv[1:], "rh", ["fixed-image"])
    except getopt.GetoptError, err:
        print str(err)
        usage()
        return 1

    if len(opts) == 0 and len(args) == 0 :
        usage()
        return 1

    raw = 0
    fixedimage = True
    for o, a in opts:
        if o == "-h":
            usage()
            return 0
        if o == "-r":
            raw = 1
        if o == "--fixed-image":
            fixedimage = True

    bookDir = args[0]

    rv = generateBook(bookDir, raw, fixedimage)
    return rv


if __name__ == '__main__':
    sys.exit(main(''))
77
Calibre_Plugins/K4MobiDeDRM_plugin/getk4pcpids.py
Normal file
@@ -0,0 +1,77 @@
#!/usr/bin/python
#
# This is a python script. You need a Python interpreter to run it.
# For example, ActiveState Python, which exists for windows.
#
# Changelog
#  1.00 - Initial version

__version__ = '1.00'

import sys

class Unbuffered:
    def __init__(self, stream):
        self.stream = stream
    def write(self, data):
        self.stream.write(data)
        self.stream.flush()
    def __getattr__(self, attr):
        return getattr(self.stream, attr)
sys.stdout=Unbuffered(sys.stdout)

import os
import struct
import binascii
import kgenpids
import topazextract
import mobidedrm
from alfcrypto import Pukall_Cipher

class DrmException(Exception):
    pass

def getK4PCpids(path_to_ebook):
    # Return Kindle4PC PIDs. Assumes the caller has already checked that we
    # are not running on Linux, where this would raise an exception.

    mobi = True
    magic3 = file(path_to_ebook,'rb').read(3)
    if magic3 == 'TPZ':
        mobi = False

    if mobi:
        mb = mobidedrm.MobiBook(path_to_ebook,False)
    else:
        mb = topazextract.TopazBook(path_to_ebook)

    md1, md2 = mb.getPIDMetaInfo()

    return kgenpids.getPidList(md1, md2, True, [], [], [])


def main(argv=sys.argv):
    print ('getk4pcpids.py v%(__version__)s. '
           'Copyright 2012 Apprentice Alf' % globals())

    if len(argv)<2 or len(argv)>3:
        print "Gets the possible book-specific PIDs from K4PC for a particular book"
        print "Usage:"
        print "    %s <bookfile> [<outfile>]" % sys.argv[0]
        return 1
    else:
        infile = argv[1]
        try:
            pidlist = getK4PCpids(infile)
        except DrmException, e:
            print "Error: %s" % e
            return 1
        pidstring = ','.join(pidlist)
        print "Possible PIDs are: ", pidstring
        if len(argv) == 3:
            outfile = argv[2]
            file(outfile, 'w').write(pidstring)

    return 0

if __name__ == "__main__":
    sys.exit(main())
223
Calibre_Plugins/K4MobiDeDRM_plugin/k4mobidedrm_orig.py
Normal file
@@ -0,0 +1,223 @@
#!/usr/bin/env python

from __future__ import with_statement

# engine to remove drm from Kindle for Mac and Kindle for PC books
# for personal use for archiving and converting your ebooks

# PLEASE DO NOT PIRATE EBOOKS!

# We want all authors and publishers, and eBook stores to live
# long and prosperous lives but at the same time we just want to
# be able to read OUR books on whatever device we want and to keep
# them readable for a long, long time

# This borrows very heavily from works by CMBDTC, IHeartCabbages, skindle,
# unswindle, DarkReverser, ApprenticeAlf, DiapDealer, some_updates
# and many many others


__version__ = '4.3'

class Unbuffered:
    def __init__(self, stream):
        self.stream = stream
    def write(self, data):
        self.stream.write(data)
        self.stream.flush()
    def __getattr__(self, attr):
        return getattr(self.stream, attr)

import sys
import os, csv, getopt
import string
import re
import traceback

buildXML = False

class DrmException(Exception):
    pass

if 'calibre' in sys.modules:
    inCalibre = True
else:
    inCalibre = False

if inCalibre:
    from calibre_plugins.k4mobidedrm import mobidedrm
    from calibre_plugins.k4mobidedrm import topazextract
    from calibre_plugins.k4mobidedrm import kgenpids
else:
    import mobidedrm
    import topazextract
    import kgenpids

# cleanup bytestring filenames
# borrowed from calibre from calibre/src/calibre/__init__.py
# added in removal of non-printing chars
# and removal of . at start
# convert underscores to spaces (we're OK with spaces in file names)
def cleanup_name(name):
    _filename_sanitize = re.compile(r'[\xae\0\\|\?\*<":>\+/]')
    substitute='_'
    one = ''.join(char for char in name if char in string.printable)
    one = _filename_sanitize.sub(substitute, one)
    one = re.sub(r'\s', ' ', one).strip()
    one = re.sub(r'^\.+$', '_', one)
    one = one.replace('..', substitute)
    # Windows doesn't like path components that end with a period
    if one.endswith('.'):
        one = one[:-1]+substitute
    # Mac and Unix don't like file names that begin with a full stop
    if len(one) > 0 and one[0] == '.':
        one = substitute+one[1:]
    one = one.replace('_',' ')
    return one

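The filename cleaner is Python 2 code; a Python 3 rendering of the same sanitizing pipeline, useful for checking its behavior in isolation:

```python
import re
import string

def cleanup_name(name):
    # same pipeline as the plugin's cleanup_name, written for Python 3
    _filename_sanitize = re.compile(r'[\xae\0\\|\?\*<":>\+/]')
    substitute = '_'
    one = ''.join(ch for ch in name if ch in string.printable)
    one = _filename_sanitize.sub(substitute, one)
    one = re.sub(r'\s', ' ', one).strip()
    one = re.sub(r'^\.+$', '_', one)
    one = one.replace('..', substitute)
    if one.endswith('.'):                 # Windows: no trailing period
        one = one[:-1] + substitute
    if len(one) > 0 and one[0] == '.':    # Unix: no leading full stop
        one = substitute + one[1:]
    one = one.replace('_', ' ')
    return one

print(cleanup_name('the.dark_tower'))   # the.dark tower
```

Note that the final underscore-to-space pass also converts any `substitute` characters introduced earlier, so a leading dot ends up as a leading space rather than an underscore.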
def decryptBook(infile, outdir, k4, kInfoFiles, serials, pids):
    global buildXML

    # handle the obvious cases at the beginning
    if not os.path.isfile(infile):
        print >>sys.stderr, ('K4MobiDeDrm v%(__version__)s\n' % globals()) + "Error: Input file does not exist"
        return 1

    mobi = True
    magic3 = file(infile,'rb').read(3)
    if magic3 == 'TPZ':
        mobi = False

    bookname = os.path.splitext(os.path.basename(infile))[0]

    if mobi:
        mb = mobidedrm.MobiBook(infile)
    else:
        mb = topazextract.TopazBook(infile)

    title = mb.getBookTitle()
    print "Processing Book: ", title
    filenametitle = cleanup_name(title)
    outfilename = cleanup_name(bookname)

    # generate a 'sensible' filename that will sort with the original name,
    # but is close to the name from the file.
    outlength = len(outfilename)
    comparelength = min(8,min(outlength,len(filenametitle)))
    copylength = min(max(outfilename.find(' '),8),len(outfilename))
    if outlength==0:
        outfilename = filenametitle
    elif comparelength > 0:
        if outfilename[:comparelength] == filenametitle[:comparelength]:
            outfilename = filenametitle
        else:
            outfilename = outfilename[:copylength] + " " + filenametitle

    # avoid excessively long file names
    if len(outfilename)>150:
        outfilename = outfilename[:150]

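The filename-merging rules above can be exercised on their own. A small Python sketch (the sample names are hypothetical) reproducing the prefix comparison between the file's name and the cleaned book title:

```python
def merge_names(outfilename, filenametitle):
    # same prefix-comparison rules as decryptBook's filename logic
    outlength = len(outfilename)
    comparelength = min(8, min(outlength, len(filenametitle)))
    copylength = min(max(outfilename.find(' '), 8), len(outfilename))
    if outlength == 0:
        outfilename = filenametitle
    elif comparelength > 0:
        if outfilename[:comparelength] == filenametitle[:comparelength]:
            outfilename = filenametitle          # same book: prefer the title
        else:
            outfilename = outfilename[:copylength] + " " + filenametitle
    return outfilename[:150]                     # cap overly long names

print(merge_names('B00ABCDEF', 'My Book'))   # B00ABCDE My Book
print(merge_names('My Book-asin', 'My Book'))
```

When the first eight characters already match, the title wins outright; otherwise a short prefix of the original name is kept so the output still sorts next to the input file.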
    # build pid list
    md1, md2 = mb.getPIDMetaInfo()
    pidlst = kgenpids.getPidList(md1, md2, k4, pids, serials, kInfoFiles)

    try:
        mb.processBook(pidlst)

    except mobidedrm.DrmException, e:
        print >>sys.stderr, ('K4MobiDeDrm v%(__version__)s\n' % globals()) + "Error: " + str(e) + "\nDRM Removal Failed.\n"
        return 1
    except topazextract.TpzDRMError, e:
        print >>sys.stderr, ('K4MobiDeDrm v%(__version__)s\n' % globals()) + "Error: " + str(e) + "\nDRM Removal Failed.\n"
        return 1
    except Exception, e:
        print >>sys.stderr, ('K4MobiDeDrm v%(__version__)s\n' % globals()) + "Error: " + str(e) + "\nDRM Removal Failed.\n"
        return 1

    if mobi:
        if mb.getPrintReplica():
            outfile = os.path.join(outdir, outfilename + '_nodrm' + '.azw4')
        elif mb.getMobiVersion() >= 8:
            outfile = os.path.join(outdir, outfilename + '_nodrm' + '.azw3')
        else:
            outfile = os.path.join(outdir, outfilename + '_nodrm' + '.mobi')
        mb.getMobiFile(outfile)
        return 0

    # topaz:
    print "   Creating NoDRM HTMLZ Archive"
    zipname = os.path.join(outdir, outfilename + '_nodrm' + '.htmlz')
    mb.getHTMLZip(zipname)

    print "   Creating SVG ZIP Archive"
    zipname = os.path.join(outdir, outfilename + '_SVG' + '.zip')
    mb.getSVGZip(zipname)

    if buildXML:
        print "   Creating XML ZIP Archive"
        zipname = os.path.join(outdir, outfilename + '_XML' + '.zip')
        mb.getXMLZip(zipname)

    # remove internal temporary directory of Topaz pieces
    mb.cleanup()

    return 0

def usage(progname):
    print "Removes DRM protection from K4PC/M, Kindle, Mobi and Topaz ebooks"
    print "Usage:"
    print "    %s [-k <kindle.info>] [-p <pidnums>] [-s <kindleSerialNumbers>] <infile> <outdir>" % progname

#
# Main
#
def main(argv=sys.argv):
    progname = os.path.basename(argv[0])

    k4 = False
    kInfoFiles = []
    serials = []
    pids = []

    print ('K4MobiDeDrm v%(__version__)s '
           'provided by the work of many including DiapDealer, SomeUpdates, IHeartCabbages, CMBDTC, Skindle, DarkReverser, ApprenticeAlf, etc.' % globals())

    try:
        opts, args = getopt.getopt(sys.argv[1:], "k:p:s:")
    except getopt.GetoptError, err:
        print str(err)
        usage(progname)
        sys.exit(2)
    if len(args)<2:
        usage(progname)
        sys.exit(2)

    for o, a in opts:
        if o == "-k":
            if a == None :
                raise DrmException("Invalid parameter for -k")
            kInfoFiles.append(a)
        if o == "-p":
            if a == None :
                raise DrmException("Invalid parameter for -p")
            pids = a.split(',')
        if o == "-s":
            if a == None :
                raise DrmException("Invalid parameter for -s")
            serials = a.split(',')

    # try with built-in Kindle Info files
    k4 = True
    if sys.platform.startswith('linux'):
        k4 = False
        kInfoFiles = None
    infile = args[0]
    outdir = args[1]
    return decryptBook(infile, outdir, k4, kInfoFiles, serials, pids)


if __name__ == '__main__':
    sys.stdout=Unbuffered(sys.stdout)
    sys.exit(main())
276
Calibre_Plugins/K4MobiDeDRM_plugin/kgenpids.py
Normal file
@@ -0,0 +1,276 @@
#!/usr/bin/env python

from __future__ import with_statement
import sys
import os, csv
import binascii
import zlib
import re
from struct import pack, unpack, unpack_from

class DrmException(Exception):
    pass

global charMap1
global charMap3
global charMap4

if 'calibre' in sys.modules:
    inCalibre = True
else:
    inCalibre = False

if inCalibre:
    if sys.platform.startswith('win'):
        from calibre_plugins.k4mobidedrm.k4pcutils import getKindleInfoFiles, getDBfromFile, GetUserName, GetIDString
    if sys.platform.startswith('darwin'):
        from calibre_plugins.k4mobidedrm.k4mutils import getKindleInfoFiles, getDBfromFile, GetUserName, GetIDString
else:
    if sys.platform.startswith('win'):
        from k4pcutils import getKindleInfoFiles, getDBfromFile, GetUserName, GetIDString
    if sys.platform.startswith('darwin'):
        from k4mutils import getKindleInfoFiles, getDBfromFile, GetUserName, GetIDString


charMap1 = "n5Pr6St7Uv8Wx9YzAb0Cd1Ef2Gh3Jk4M"
charMap3 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"
charMap4 = "ABCDEFGHIJKLMNPQRSTUVWXYZ123456789"

# crypto digest routines
import hashlib

def MD5(message):
    ctx = hashlib.md5()
    ctx.update(message)
    return ctx.digest()

def SHA1(message):
    ctx = hashlib.sha1()
    ctx.update(message)
    return ctx.digest()

# Encode the bytes in data with the characters in map
def encode(data, map):
    result = ""
    for char in data:
        value = ord(char)
        Q = (value ^ 0x80) // len(map)
        R = value % len(map)
        result += map[Q]
        result += map[R]
    return result

# Hash the bytes in data and then encode the digest with the characters in map
def encodeHash(data,map):
    return encode(MD5(data),map)

# Decode the string in data with the characters in map. Returns the decoded bytes
def decode(data,map):
    result = ""
    for i in range (0,len(data)-1,2):
        high = map.find(data[i])
        low = map.find(data[i+1])
        if (high == -1) or (low == -1) :
            break
        value = (((high * len(map)) ^ 0x80) & 0xFF) + low
        result += pack("B",value)
    return result

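The encode/decode pair maps each byte to two characters from the chosen character map; XORing with 0x80 before the division is undone on the way back, so the round trip is exact for every byte value. A Python 3 rendering operating on bytes (charMap1 copied from the source):

```python
# Python 3 sketch of the plugin's encode/decode pair, to show the
# two-characters-per-byte round trip.
charMap1 = "n5Pr6St7Uv8Wx9YzAb0Cd1Ef2Gh3Jk4M"

def encode(data, cmap):
    result = ""
    for value in data:                       # data is bytes
        q = (value ^ 0x80) // len(cmap)      # high bits, with bit 7 flipped
        r = value % len(cmap)                # low bits
        result += cmap[q] + cmap[r]
    return result

def decode(text, cmap):
    result = bytearray()
    for i in range(0, len(text) - 1, 2):
        high = cmap.find(text[i])
        low = cmap.find(text[i + 1])
        if high == -1 or low == -1:
            break
        result.append((((high * len(cmap)) ^ 0x80) & 0xFF) + low)
    return bytes(result)

print(decode(encode(b'secret', charMap1), charMap1))   # b'secret'
```

Since 0x80 only touches bit 7 and the map length is 32, the low five bits pass through the modulo untouched and the XOR cancels itself in `decode`, so `decode(encode(x)) == x` for all 256 byte values.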
#
# PID generation routines
#

# Returns two bits at offset from a bit field
def getTwoBitsFromBitField(bitField,offset):
    byteNumber = offset // 4
    bitPosition = 6 - 2*(offset % 4)
    return ord(bitField[byteNumber]) >> bitPosition & 3

# Returns the six bits at offset from a bit field
def getSixBitsFromBitField(bitField,offset):
    offset *= 3
    value = (getTwoBitsFromBitField(bitField,offset) << 4) + (getTwoBitsFromBitField(bitField,offset+1) << 2) + getTwoBitsFromBitField(bitField,offset+2)
    return value

# 8 bits to six bits encoding from hash to generate PID string
def encodePID(hash):
    global charMap3
    PID = ""
    for position in range (0,8):
        PID += charMap3[getSixBitsFromBitField(hash,position)]
    return PID

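The PID string is built by pulling 6-bit groups out of a digest, two bits at a time. A Python 3 sketch of the two bit-twiddling helpers above, with a small worked check:

```python
def get_two_bits(bitfield, offset):
    # each byte holds four 2-bit groups, most significant first
    byte_number = offset // 4
    bit_position = 6 - 2 * (offset % 4)
    return (bitfield[byte_number] >> bit_position) & 3

def get_six_bits(bitfield, offset):
    # a 6-bit group is three consecutive 2-bit groups
    offset *= 3
    return (get_two_bits(bitfield, offset) << 4) + \
           (get_two_bits(bitfield, offset + 1) << 2) + \
           get_two_bits(bitfield, offset + 2)

# 0b10110100: the first six bits are 101101 = 45
print(get_six_bits(bytes([0b10110100, 0x00]), 0))   # 45
```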
# Encryption table used to generate the device PID
def generatePidEncryptionTable() :
    table = []
    for counter1 in range (0,0x100):
        value = counter1
        for counter2 in range (0,8):
            if (value & 1 == 0) :
                value = value >> 1
            else :
                value = value >> 1
                value = value ^ 0xEDB88320
        table.append(value)
    return table

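Despite the name, `generatePidEncryptionTable` builds the standard reflected CRC-32 lookup table (polynomial 0xEDB88320). A Python 3 copy of the loop, checked against the well-known table values:

```python
def generate_pid_encryption_table():
    # bit-for-bit the plugin's table: the reflected CRC-32 table
    table = []
    for counter1 in range(0x100):
        value = counter1
        for _ in range(8):
            if value & 1 == 0:
                value = value >> 1
            else:
                value = (value >> 1) ^ 0xEDB88320
        table.append(value)
    return table

table = generate_pid_encryption_table()
print(hex(table[1]))   # 0x77073096 -- the standard CRC-32 table entry for 1
```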
# Seed value used to generate the device PID
def generatePidSeed(table,dsn) :
    value = 0
    for counter in range (0,4) :
        index = (ord(dsn[counter]) ^ value) &0xFF
        value = (value >> 8) ^ table[index]
    return value

# Generate the device PID
def generateDevicePID(table,dsn,nbRoll):
    global charMap4
    seed = generatePidSeed(table,dsn)
    pidAscii = ""
    pid = [(seed >>24) &0xFF,(seed >> 16) &0xff,(seed >> 8) &0xFF,(seed) & 0xFF,(seed>>24) & 0xFF,(seed >> 16) &0xff,(seed >> 8) &0xFF,(seed) & 0xFF]
    index = 0
    for counter in range (0,nbRoll):
        pid[index] = pid[index] ^ ord(dsn[counter])
        index = (index+1) %8
    for counter in range (0,8):
        index = ((((pid[counter] >>5) & 3) ^ pid[counter]) & 0x1f) + (pid[counter] >> 7)
        pidAscii += charMap4[index]
    return pidAscii

def crc32(s):
    return (~binascii.crc32(s,-1))&0xFFFFFFFF

# convert from 8 digit PID to 10 digit PID with checksum
def checksumPid(s):
    global charMap4
    crc = crc32(s)
    crc = crc ^ (crc >> 16)
    res = s
    l = len(charMap4)
    for i in (0,1):
        b = crc & 0xff
        pos = (b // l) ^ (b % l)
        res += charMap4[pos%l]
        crc >>= 8
    return res

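`checksumPid` appends two check characters derived from the CRC of the 8-character PID, yielding the familiar 10-character Mobipocket PID. A Python 3 sketch (the sample PID `'A1B2C3D4'` is hypothetical; strings must be encoded to bytes for `binascii.crc32` on Python 3):

```python
import binascii

charMap4 = "ABCDEFGHIJKLMNPQRSTUVWXYZ123456789"

def crc32(s):
    return (~binascii.crc32(s, -1)) & 0xFFFFFFFF

def checksum_pid(s):
    # append two check characters derived from the CRC of the 8-char PID
    crc = crc32(s.encode('ascii'))
    crc = crc ^ (crc >> 16)
    res = s
    l = len(charMap4)
    for i in (0, 1):
        b = crc & 0xff
        pos = (b // l) ^ (b % l)
        res += charMap4[pos % l]
        crc >>= 8
    return res

pid10 = checksum_pid('A1B2C3D4')
print(pid10, len(pid10))
```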
# old kindle serial number to fixed pid
def pidFromSerial(s, l):
    global charMap4
    crc = crc32(s)
    arr1 = [0]*l
    for i in xrange(len(s)):
        arr1[i%l] ^= ord(s[i])
    crc_bytes = [crc >> 24 & 0xff, crc >> 16 & 0xff, crc >> 8 & 0xff, crc & 0xff]
    for i in xrange(l):
        arr1[i] ^= crc_bytes[i&3]
    pid = ""
    for i in xrange(l):
        b = arr1[i] & 0xff
        pid += charMap4[(b >> 7) + ((b >> 5 & 3) ^ (b & 0x1f))]
    return pid

# Parse the EXTH header records and use the Kindle serial number to calculate the book pid.
def getKindlePid(pidlst, rec209, token, serialnum):
    # Compute book PID
    pidHash = SHA1(serialnum+rec209+token)
    bookPID = encodePID(pidHash)
    bookPID = checksumPid(bookPID)
    pidlst.append(bookPID)

    # compute fixed pid for old pre 2.5 firmware update pid as well
    bookPID = pidFromSerial(serialnum, 7) + "*"
    bookPID = checksumPid(bookPID)
    pidlst.append(bookPID)

    return pidlst

# parse the Kindleinfo file to calculate the book pid.

keynames = ["kindle.account.tokens","kindle.cookie.item","eulaVersionAccepted","login_date","kindle.token.item","login","kindle.key.item","kindle.name.info","kindle.device.info", "MazamaRandomNumber"]

def getK4Pids(pidlst, rec209, token, kInfoFile):
    global charMap1
    kindleDatabase = None
    try:
        kindleDatabase = getDBfromFile(kInfoFile)
    except Exception, message:
        print(message)
        kindleDatabase = None
        pass

    if kindleDatabase == None :
        return pidlst

    try:
        # Get the Mazama Random number
        MazamaRandomNumber = kindleDatabase["MazamaRandomNumber"]

        # Get the kindle account token
        kindleAccountToken = kindleDatabase["kindle.account.tokens"]
    except KeyError:
        print "Keys not found in " + kInfoFile
        return pidlst

    # Get the ID string used
    encodedIDString = encodeHash(GetIDString(),charMap1)

    # Get the current user name
    encodedUsername = encodeHash(GetUserName(),charMap1)

    # concat, hash and encode to calculate the DSN
    DSN = encode(SHA1(MazamaRandomNumber+encodedIDString+encodedUsername),charMap1)

    # Compute the device PID (for all I can tell, it is used for nothing).
    table = generatePidEncryptionTable()
    devicePID = generateDevicePID(table,DSN,4)
    devicePID = checksumPid(devicePID)
    pidlst.append(devicePID)

    # Compute book PIDs

    # book pid
    pidHash = SHA1(DSN+kindleAccountToken+rec209+token)
    bookPID = encodePID(pidHash)
    bookPID = checksumPid(bookPID)
    pidlst.append(bookPID)

    # variant 1
    pidHash = SHA1(kindleAccountToken+rec209+token)
    bookPID = encodePID(pidHash)
    bookPID = checksumPid(bookPID)
    pidlst.append(bookPID)

    # variant 2
    pidHash = SHA1(DSN+rec209+token)
    bookPID = encodePID(pidHash)
    bookPID = checksumPid(bookPID)
    pidlst.append(bookPID)

    return pidlst

def getPidList(md1, md2, k4, pids, serials, kInfoFiles):
    pidlst = []
    if kInfoFiles is None:
        kInfoFiles = []
    if k4:
        kInfoFiles = getKindleInfoFiles(kInfoFiles)
    for infoFile in kInfoFiles:
        try:
            pidlst = getK4Pids(pidlst, md1, md2, infoFile)
        except Exception, message:
            print("Error getting PIDs from " + infoFile + ": " + message)
    for serialnum in serials:
        try:
            pidlst = getKindlePid(pidlst, md1, md2, serialnum)
        except Exception, message:
            print("Error getting PIDs from " + serialnum + ": " + message)
    for pid in pids:
        pidlst.append(pid)
    return pidlst
BIN
Calibre_Plugins/K4MobiDeDRM_plugin/libalfcrypto.dylib
Normal file
Binary file not shown.
BIN
Calibre_Plugins/K4MobiDeDRM_plugin/libalfcrypto32.so
Normal file
Binary file not shown.
BIN
Calibre_Plugins/K4MobiDeDRM_plugin/libalfcrypto64.so
Normal file
Binary file not shown.
68
Calibre_Plugins/K4MobiDeDRM_plugin/pbkdf2.py
Normal file
@@ -0,0 +1,68 @@
# A simple implementation of pbkdf2 using stock python modules. See RFC2898
# for details. Basically, it derives a key from a password and salt.

# Copyright 2004 Matt Johnston <matt @ ucc asn au>
# Copyright 2009 Daniel Holth <dholth@fastmail.fm>
# This code may be freely used and modified for any purpose.

# Revision history
# v0.1 October 2004 - Initial release
# v0.2 8 March 2007 - Make usable with hashlib in Python 2.5 and use
# v0.3 ""             the correct digest_size rather than always 20
# v0.4 Oct 2009 - Rescue from chandler svn, test and optimize.

import sys
import hmac
from struct import pack
try:
    # only in python 2.5
    import hashlib
    sha = hashlib.sha1
    md5 = hashlib.md5
    sha256 = hashlib.sha256
except ImportError: # pragma: NO COVERAGE
    # fallback
    import sha
    import md5

# this is what you want to call.
def pbkdf2( password, salt, itercount, keylen, hashfn = sha ):
    try:
        # depending whether the hashfn is from hashlib or sha/md5
        digest_size = hashfn().digest_size
    except TypeError: # pragma: NO COVERAGE
        digest_size = hashfn.digest_size
    # l - number of output blocks to produce
    l = keylen / digest_size
    if keylen % digest_size != 0:
        l += 1

    h = hmac.new( password, None, hashfn )

    T = ""
    for i in range(1, l+1):
        T += pbkdf2_F( h, salt, itercount, i )

    return T[0: keylen]

def xorstr( a, b ):
    if len(a) != len(b):
        raise ValueError("xorstr(): lengths differ")
    return ''.join((chr(ord(x)^ord(y)) for x, y in zip(a, b)))

def prf( h, data ):
    hm = h.copy()
    hm.update( data )
    return hm.digest()

# Helper as per the spec. h is a hmac which has been created seeded with the
# password, it will be copy()ed and not modified.
def pbkdf2_F( h, salt, itercount, blocknum ):
    U = prf( h, salt + pack('>i',blocknum ) )
    T = U

    for i in range(2, itercount+1):
        U = prf( h, U )
        T = xorstr( T, U )

    return T
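The hand-rolled PBKDF2 above predates `hashlib.pbkdf2_hmac`; on a modern Python the same derivation is a single standard-library call, checked here against the first RFC 6070 test vector for PBKDF2-HMAC-SHA1:

```python
import hashlib

# RFC 6070 test vector 1: P = "password", S = "salt", c = 1, dkLen = 20
dk = hashlib.pbkdf2_hmac('sha1', b'password', b'salt', 1, 20)
print(dk.hex())   # 0c60c80f961f0e71f3a9b524af6012062fe037a6
```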
@@ -6,6 +6,7 @@ import csv
|
|||||||
import sys
|
import sys
|
||||||
import os
|
import os
|
||||||
import getopt
|
import getopt
|
||||||
|
import re
|
||||||
from struct import pack
|
from struct import pack
|
||||||
from struct import unpack
|
from struct import unpack
|
||||||
|
|
||||||
@@ -43,8 +44,8 @@ class DocParser(object):
|
|||||||
'pos-right' : 'text-align: right;',
|
'pos-right' : 'text-align: right;',
|
||||||
'pos-center' : 'text-align: center; margin-left: auto; margin-right: auto;',
|
'pos-center' : 'text-align: center; margin-left: auto; margin-right: auto;',
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
# find tag if within pos to end inclusive
|
# find tag if within pos to end inclusive
|
||||||
def findinDoc(self, tagpath, pos, end) :
|
def findinDoc(self, tagpath, pos, end) :
|
||||||
result = None
|
result = None
|
||||||
@@ -59,10 +60,10 @@ class DocParser(object):
             item = docList[j]
             if item.find('=') >= 0:
                 (name, argres) = item.split('=',1)
             else :
                 name = item
                 argres = ''
             if name.endswith(tagpath) :
                 result = argres
                 foundat = j
                 break
@@ -81,6 +82,21 @@ class DocParser(object):
             pos = foundpos + 1
         return startpos

+    # returns a vector of integers for the tagpath
+    def getData(self, tagpath, pos, end, clean=False):
+        if clean:
+            digits_only = re.compile(r'''([0-9]+)''')
+        argres=[]
+        (foundat, argt) = self.findinDoc(tagpath, pos, end)
+        if (argt != None) and (len(argt) > 0) :
+            argList = argt.split('|')
+            for strval in argList:
+                if clean:
+                    m = re.search(digits_only, strval)
+                    if m != None:
+                        strval = m.group()
+                argres.append(int(strval))
+        return argres
+
     def process(self):
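The getData method introduced in this hunk does three things in one loop: look up a tag's value, split it on '|', and convert each piece to an int, optionally keeping only the first run of digits. A standalone Python 3 sketch of just the parsing step (the function name is mine, not part of the plugin):

```python
import re

def parse_int_vector(argtext, clean=False):
    # Split a '|'-separated attribute value into a list of ints; with
    # clean=True, keep only the first digit run in each piece, mirroring
    # the clean branch of DocParser.getData.
    digits_only = re.compile(r'([0-9]+)')
    result = []
    for strval in argtext.split('|'):
        if clean:
            m = digits_only.search(strval)
            if m is not None:
                strval = m.group()
        result.append(int(strval))
    return result

print(parse_int_vector('12|7|44'))           # plain integer fields
print(parse_int_vector('p12|p7|p44', True))  # digit runs extracted first
```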
@@ -104,7 +120,7 @@ class DocParser(object):
             (pos, tag) = self.findinDoc('style._tag',start,end)
             if tag == None :
                 (pos, tag) = self.findinDoc('style.type',start,end)

             # Is this something we know how to convert to css
             if tag in self.stags :
@@ -113,7 +129,7 @@ class DocParser(object):
                 if sclass != None:
                     sclass = sclass.replace(' ','-')
                     sclass = '.cl-' + sclass.lower()
                 else :
                     sclass = ''

                 # check for any "after class" specifiers
@@ -121,7 +137,7 @@ class DocParser(object):
                 if aftclass != None:
                     aftclass = aftclass.replace(' ','-')
                     aftclass = '.cl-' + aftclass.lower()
                 else :
                     aftclass = ''

                 cssargs = {}
@@ -132,7 +148,7 @@ class DocParser(object):
                     (pos2, val) = self.findinDoc('style.rule.value', start, end)

                     if attr == None : break

                     if (attr == 'display') or (attr == 'pos') or (attr == 'align'):
                         # handle text based attributes
                         attr = attr + '-' + val
@@ -148,6 +164,9 @@ class DocParser(object):
                             scale = self.pw
                         elif attr == 'line-space':
                             scale = self.fontsize * 2.0

+                        if val == "":
+                            val = 0
+
                         if not ((attr == 'hang') and (int(val) == 0)) :
                             pv = float(val)/scale
@@ -160,7 +179,7 @@ class DocParser(object):
                 if aftclass != "" : keep = False

                 if keep :
                     # make sure line-space does not go below 100% or above 300% since
                     # it can be wacky in some styles
                     if 'line-space' in cssargs:
                         seg = cssargs['line-space'][0]
@@ -170,7 +189,7 @@ class DocParser(object):
                         del cssargs['line-space']
                         cssargs['line-space'] = (self.attr_val_map['line-space'], val)

                 # handle modifications for css style hanging indents
                 if 'hang' in cssargs:
                     hseg = cssargs['hang'][0]
@@ -203,7 +222,7 @@ class DocParser(object):

             if sclass != '' :
                 classlst += sclass + '\n'

             # handle special case of paragraph class used inside chapter heading
             # and non-chapter headings
             if sclass != '' :
@@ -224,7 +243,7 @@ class DocParser(object):
             if cssline != ' { }':
                 csspage += self.stags[tag] + cssline + '\n'

         return csspage, classlst
@@ -237,7 +256,11 @@ def convert2CSS(flatxml, fontsize, ph, pw):

     # create a document parser
     dp = DocParser(flatxml, fontsize, ph, pw)

     csspage = dp.process()

     return csspage

+def getpageIDMap(flatxml):
+    dp = DocParser(flatxml, 0, 0, 0)
+    pageidnumbers = dp.getData('info.original.pid', 0, -1, True)
+    return pageidnumbers
@@ -52,7 +52,7 @@ class Process(object):
         self.__stdout_thread = threading.Thread(
             name="stdout-thread",
             target=self.__reader, args=(self.__collected_outdata,
                                         self.__process.stdout))
         self.__stdout_thread.setDaemon(True)
         self.__stdout_thread.start()
@@ -60,7 +60,7 @@ class Process(object):
         self.__stderr_thread = threading.Thread(
             name="stderr-thread",
             target=self.__reader, args=(self.__collected_errdata,
                                         self.__process.stderr))
         self.__stderr_thread.setDaemon(True)
         self.__stderr_thread.start()
@@ -146,4 +146,3 @@ class Process(object):
             self.__quit = True
             self.__inputsem.release()
         self.__lock.release()
Calibre_Plugins/K4MobiDeDRM_plugin/topazextract.py (new file, 482 lines)
@@ -0,0 +1,482 @@
#!/usr/bin/env python

class Unbuffered:
    def __init__(self, stream):
        self.stream = stream
    def write(self, data):
        self.stream.write(data)
        self.stream.flush()
    def __getattr__(self, attr):
        return getattr(self.stream, attr)

import sys

if 'calibre' in sys.modules:
    inCalibre = True
else:
    inCalibre = False

buildXML = False

import os, csv, getopt
import zlib, zipfile, tempfile, shutil
from struct import pack
from struct import unpack
from alfcrypto import Topaz_Cipher

class TpzDRMError(Exception):
    pass

# local support routines
if inCalibre:
    from calibre_plugins.k4mobidedrm import kgenpids
else:
    import kgenpids

# recursive zip creation support routine
def zipUpDir(myzip, tdir, localname):
    currentdir = tdir
    if localname != "":
        currentdir = os.path.join(currentdir,localname)
    list = os.listdir(currentdir)
    for file in list:
        afilename = file
        localfilePath = os.path.join(localname, afilename)
        realfilePath = os.path.join(currentdir,file)
        if os.path.isfile(realfilePath):
            myzip.write(realfilePath, localfilePath)
        elif os.path.isdir(realfilePath):
            zipUpDir(myzip, tdir, localfilePath)
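zipUpDir() above walks the tree by explicit recursion; in Python 3 the same effect can be had with os.walk. A sketch (function and variable names are mine) that stores each file under its path relative to the top directory, as the recursive version does:

```python
import os
import tempfile
import zipfile

def zip_up_dir(myzip, tdir, localname):
    # Walk tdir/localname and store each file under its path relative
    # to tdir, like the recursive zipUpDir() in the plugin.
    base = os.path.join(tdir, localname) if localname else tdir
    for root, _dirs, files in os.walk(base):
        for fname in files:
            realpath = os.path.join(root, fname)
            myzip.write(realpath, os.path.relpath(realpath, tdir))

# Demo with a throwaway directory layout resembling the book output dir.
tdir = tempfile.mkdtemp()
os.makedirs(os.path.join(tdir, 'img'))
with open(os.path.join(tdir, 'img', 'page0001.jpg'), 'wb') as f:
    f.write(b'fake image data')
zipname = os.path.join(tdir, 'out.zip')
with zipfile.ZipFile(zipname, 'w', zipfile.ZIP_DEFLATED) as z:
    zip_up_dir(z, tdir, 'img')
with zipfile.ZipFile(zipname) as z:
    names = z.namelist()
```

ZipFile.write normalizes path separators in the archive name, so the entry comes out as 'img/page0001.jpg' on any platform.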
#
# Utility routines
#

# Get a 7 bit encoded number from file
def bookReadEncodedNumber(fo):
    flag = False
    data = ord(fo.read(1))
    if data == 0xFF:
        flag = True
        data = ord(fo.read(1))
    if data >= 0x80:
        datax = (data & 0x7F)
        while data >= 0x80 :
            data = ord(fo.read(1))
            datax = (datax <<7) + (data & 0x7F)
        data = datax
    if flag:
        data = -data
    return data
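The 7-bit encoded numbers read by bookReadEncodedNumber() are big-endian base-128 varints: every byte but the last has the high bit set, and a leading 0xFF byte flags a negative value. A Python 3 round-trip sketch — the reader mirrors the function above; the writer is my own inverse, matched to the reader's logic rather than taken from the plugin:

```python
import io

def read_encoded_number(fo):
    # Decoder as in the plugin: optional 0xFF negative flag, then a
    # big-endian base-128 integer; continuation bytes have bit 7 set.
    flag = False
    data = fo.read(1)[0]
    if data == 0xFF:
        flag = True
        data = fo.read(1)[0]
    if data >= 0x80:
        datax = data & 0x7F
        while data >= 0x80:
            data = fo.read(1)[0]
            datax = (datax << 7) + (data & 0x7F)
        data = datax
    return -data if flag else data

def write_encoded_number(value):
    # My own encoder: emit 7-bit groups most-significant first, setting
    # bit 7 on all but the final group; prefix 0xFF for negatives.
    out = bytearray()
    if value < 0:
        out.append(0xFF)
        value = -value
    groups = []
    while True:
        groups.append(value & 0x7F)
        value >>= 7
        if value == 0:
            break
    for g in reversed(groups[1:]):
        out.append(0x80 | g)
    out.append(groups[0])
    return bytes(out)

for v in (0, 5, 127, 128, 300, 16384, -1, -300):
    assert read_encoded_number(io.BytesIO(write_encoded_number(v))) == v
```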
# Get a length prefixed string from file
def bookReadString(fo):
    stringLength = bookReadEncodedNumber(fo)
    return unpack(str(stringLength)+"s",fo.read(stringLength))[0]

#
# crypto routines
#

# Context initialisation for the Topaz Crypto
def topazCryptoInit(key):
    return Topaz_Cipher().ctx_init(key)

    # ctx1 = 0x0CAFFE19E
    # for keyChar in key:
    #     keyByte = ord(keyChar)
    #     ctx2 = ctx1
    #     ctx1 = ((((ctx1 >>2) * (ctx1 >>7))&0xFFFFFFFF) ^ (keyByte * keyByte * 0x0F902007)& 0xFFFFFFFF )
    # return [ctx1,ctx2]

# decrypt data with the context prepared by topazCryptoInit()
def topazCryptoDecrypt(data, ctx):
    return Topaz_Cipher().decrypt(data, ctx)

    # ctx1 = ctx[0]
    # ctx2 = ctx[1]
    # plainText = ""
    # for dataChar in data:
    #     dataByte = ord(dataChar)
    #     m = (dataByte ^ ((ctx1 >> 3) &0xFF) ^ ((ctx2<<3) & 0xFF)) &0xFF
    #     ctx2 = ctx1
    #     ctx1 = (((ctx1 >> 2) * (ctx1 >> 7)) &0xFFFFFFFF) ^((m * m * 0x0F902007) &0xFFFFFFFF)
    #     plainText += chr(m)
    # return plainText
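The commented-out blocks above document the cipher that alfcrypto's Topaz_Cipher implements natively. A pure-Python 3 transcription of those comments (bytes-based; assumes a non-empty key). The encrypt half is my own inverse, added for demonstration: the keystream byte depends only on the state, and the state is advanced with the plaintext byte, so decryption regenerates the same keystream.

```python
def topaz_ctx_init(key):
    # Transcribed from the commented reference next to topazCryptoInit().
    ctx1 = 0x0CAFFE19E
    for keyByte in key:  # iterating bytes yields ints in Python 3
        ctx2 = ctx1
        ctx1 = (((ctx1 >> 2) * (ctx1 >> 7)) & 0xFFFFFFFF) \
               ^ ((keyByte * keyByte * 0x0F902007) & 0xFFFFFFFF)
    return [ctx1, ctx2]

def topaz_decrypt(data, ctx):
    # Transcribed from the commented reference next to topazCryptoDecrypt().
    ctx1, ctx2 = ctx
    plain = bytearray()
    for c in data:
        m = (c ^ ((ctx1 >> 3) & 0xFF) ^ ((ctx2 << 3) & 0xFF)) & 0xFF
        ctx2 = ctx1
        ctx1 = (((ctx1 >> 2) * (ctx1 >> 7)) & 0xFFFFFFFF) \
               ^ ((m * m * 0x0F902007) & 0xFFFFFFFF)
        plain.append(m)
    return bytes(plain)

def topaz_encrypt(data, ctx):
    # My addition: same keystream, but the state advances with the
    # plaintext byte m, so topaz_decrypt() inverts it exactly.
    ctx1, ctx2 = ctx
    out = bytearray()
    for m in data:
        c = (m ^ ((ctx1 >> 3) & 0xFF) ^ ((ctx2 << 3) & 0xFF)) & 0xFF
        ctx2 = ctx1
        ctx1 = (((ctx1 >> 2) * (ctx1 >> 7)) & 0xFFFFFFFF) \
               ^ ((m * m * 0x0F902007) & 0xFFFFFFFF)
        out.append(c)
    return bytes(out)

key = b'12345678'          # placeholder key, same length as an 8-digit PID
msg = b'Topaz test payload'
ct = topaz_encrypt(msg, topaz_ctx_init(key))
assert topaz_decrypt(ct, topaz_ctx_init(key)) == msg
```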
# Decrypt data with the PID
def decryptRecord(data,PID):
    ctx = topazCryptoInit(PID)
    return topazCryptoDecrypt(data, ctx)

# Try to decrypt a dkey record (contains the bookPID)
def decryptDkeyRecord(data,PID):
    record = decryptRecord(data,PID)
    fields = unpack("3sB8sB8s3s",record)
    if fields[0] != "PID" or fields[5] != "pid" :
        raise TpzDRMError("Didn't find PID magic numbers in record")
    elif fields[1] != 8 or fields[3] != 8 :
        raise TpzDRMError("Record didn't contain correct length fields")
    elif fields[2] != PID :
        raise TpzDRMError("Record didn't contain PID")
    return fields[4]

# Decrypt all dkey records (contain the book PID)
def decryptDkeyRecords(data,PID):
    nbKeyRecords = ord(data[0])
    records = []
    data = data[1:]
    for i in range (0,nbKeyRecords):
        length = ord(data[0])
        try:
            key = decryptDkeyRecord(data[1:length+1],PID)
            records.append(key)
        except TpzDRMError:
            pass
        data = data[1+length:]
    if len(records) == 0:
        raise TpzDRMError("BookKey Not Found")
    return records


class TopazBook:
    def __init__(self, filename):
        self.fo = file(filename, 'rb')
        self.outdir = tempfile.mkdtemp()
        # self.outdir = 'rawdat'
        self.bookPayloadOffset = 0
        self.bookHeaderRecords = {}
        self.bookMetadata = {}
        self.bookKey = None
        magic = unpack("4s",self.fo.read(4))[0]
        if magic != 'TPZ0':
            raise TpzDRMError("Parse Error : Invalid Header, not a Topaz file")
        self.parseTopazHeaders()
        self.parseMetadata()

    def parseTopazHeaders(self):
        def bookReadHeaderRecordData():
            # Read and return the data of one header record at the current book file position
            # [[offset,decompressedLength,compressedLength],...]
            nbValues = bookReadEncodedNumber(self.fo)
            values = []
            for i in range (0,nbValues):
                values.append([bookReadEncodedNumber(self.fo),bookReadEncodedNumber(self.fo),bookReadEncodedNumber(self.fo)])
            return values
        def parseTopazHeaderRecord():
            # Read and parse one header record at the current book file position and return the associated data
            # [[offset,decompressedLength,compressedLength],...]
            if ord(self.fo.read(1)) != 0x63:
                raise TpzDRMError("Parse Error : Invalid Header")
            tag = bookReadString(self.fo)
            record = bookReadHeaderRecordData()
            return [tag,record]
        nbRecords = bookReadEncodedNumber(self.fo)
        for i in range (0,nbRecords):
            result = parseTopazHeaderRecord()
            # print result[0], result[1]
            self.bookHeaderRecords[result[0]] = result[1]
        if ord(self.fo.read(1)) != 0x64 :
            raise TpzDRMError("Parse Error : Invalid Header")
        self.bookPayloadOffset = self.fo.tell()

    def parseMetadata(self):
        # Parse the metadata record from the book payload and return a list of [key,values]
        self.fo.seek(self.bookPayloadOffset + self.bookHeaderRecords["metadata"][0][0])
        tag = bookReadString(self.fo)
        if tag != "metadata" :
            raise TpzDRMError("Parse Error : Record Names Don't Match")
        flags = ord(self.fo.read(1))
        nbRecords = ord(self.fo.read(1))
        # print nbRecords
        for i in range (0,nbRecords) :
            keyval = bookReadString(self.fo)
            content = bookReadString(self.fo)
            # print keyval
            # print content
            self.bookMetadata[keyval] = content
        return self.bookMetadata

    def getPIDMetaInfo(self):
        keysRecord = self.bookMetadata.get('keys','')
        keysRecordRecord = ''
        if keysRecord != '':
            keylst = keysRecord.split(',')
            for keyval in keylst:
                keysRecordRecord += self.bookMetadata.get(keyval,'')
        return keysRecord, keysRecordRecord

    def getBookTitle(self):
        title = ''
        if 'Title' in self.bookMetadata:
            title = self.bookMetadata['Title']
        return title

    def setBookKey(self, key):
        self.bookKey = key

    def getBookPayloadRecord(self, name, index):
        # Get a record in the book payload, given its name and index.
        # decrypted and decompressed if necessary
        encrypted = False
        compressed = False
        try:
            recordOffset = self.bookHeaderRecords[name][index][0]
        except:
            raise TpzDRMError("Parse Error : Invalid Record, record not found")

        self.fo.seek(self.bookPayloadOffset + recordOffset)

        tag = bookReadString(self.fo)
        if tag != name :
            raise TpzDRMError("Parse Error : Invalid Record, record name doesn't match")

        recordIndex = bookReadEncodedNumber(self.fo)
        if recordIndex < 0 :
            encrypted = True
            recordIndex = -recordIndex -1

        if recordIndex != index :
            raise TpzDRMError("Parse Error : Invalid Record, index doesn't match")

        if (self.bookHeaderRecords[name][index][2] > 0):
            compressed = True
            record = self.fo.read(self.bookHeaderRecords[name][index][2])
        else:
            record = self.fo.read(self.bookHeaderRecords[name][index][1])

        if encrypted:
            if self.bookKey:
                ctx = topazCryptoInit(self.bookKey)
                record = topazCryptoDecrypt(record,ctx)
            else :
                raise TpzDRMError("Error: Attempt to decrypt without bookKey")

        if compressed:
            record = zlib.decompress(record)

        return record

    def processBook(self, pidlst):
        raw = 0
        fixedimage=True
        try:
            keydata = self.getBookPayloadRecord('dkey', 0)
        except TpzDRMError, e:
            print "no dkey record found, book may not be encrypted"
            print "attempting to extract files without a book key"
            self.createBookDirectory()
            self.extractFiles()
            print "Successfully Extracted Topaz contents"
            if inCalibre:
                from calibre_plugins.k4mobidedrm import genbook
            else:
                import genbook

            rv = genbook.generateBook(self.outdir, raw, fixedimage)
            if rv == 0:
                print "\nBook Successfully generated"
            return rv

        # try each pid to decode the file
        bookKey = None
        for pid in pidlst:
            # use 8 digit pids here
            pid = pid[0:8]
            print "\nTrying: ", pid
            bookKeys = []
            data = keydata
            try:
                bookKeys+=decryptDkeyRecords(data,pid)
            except TpzDRMError, e:
                pass
            else:
                bookKey = bookKeys[0]
                print "Book Key Found!"
                break

        if not bookKey:
            raise TpzDRMError('Decryption Unsuccessful; No valid pid found')

        self.setBookKey(bookKey)
        self.createBookDirectory()
        self.extractFiles()
        print "Successfully Extracted Topaz contents"
        if inCalibre:
            from calibre_plugins.k4mobidedrm import genbook
        else:
            import genbook

        rv = genbook.generateBook(self.outdir, raw, fixedimage)
        if rv == 0:
            print "\nBook Successfully generated"
        return rv

    def createBookDirectory(self):
        outdir = self.outdir
        # create output directory structure
        if not os.path.exists(outdir):
            os.makedirs(outdir)
        destdir = os.path.join(outdir,'img')
        if not os.path.exists(destdir):
            os.makedirs(destdir)
        destdir = os.path.join(outdir,'color_img')
        if not os.path.exists(destdir):
            os.makedirs(destdir)
        destdir = os.path.join(outdir,'page')
        if not os.path.exists(destdir):
            os.makedirs(destdir)
        destdir = os.path.join(outdir,'glyphs')
        if not os.path.exists(destdir):
            os.makedirs(destdir)

    def extractFiles(self):
        outdir = self.outdir
        for headerRecord in self.bookHeaderRecords:
            name = headerRecord
            if name != "dkey" :
                ext = '.dat'
                if name == 'img' : ext = '.jpg'
                if name == 'color' : ext = '.jpg'
                print "\nProcessing Section: %s " % name
                for index in range (0,len(self.bookHeaderRecords[name])) :
                    fnum = "%04d" % index
                    fname = name + fnum + ext
                    destdir = outdir
                    if name == 'img':
                        destdir = os.path.join(outdir,'img')
                    if name == 'color':
                        destdir = os.path.join(outdir,'color_img')
                    if name == 'page':
                        destdir = os.path.join(outdir,'page')
                    if name == 'glyphs':
                        destdir = os.path.join(outdir,'glyphs')
                    outputFile = os.path.join(destdir,fname)
                    print ".",
                    record = self.getBookPayloadRecord(name,index)
                    if record != '':
                        file(outputFile, 'wb').write(record)
                print " "

    def getHTMLZip(self, zipname):
        htmlzip = zipfile.ZipFile(zipname,'w',zipfile.ZIP_DEFLATED, False)
        htmlzip.write(os.path.join(self.outdir,'book.html'),'book.html')
        htmlzip.write(os.path.join(self.outdir,'book.opf'),'book.opf')
        if os.path.isfile(os.path.join(self.outdir,'cover.jpg')):
            htmlzip.write(os.path.join(self.outdir,'cover.jpg'),'cover.jpg')
        htmlzip.write(os.path.join(self.outdir,'style.css'),'style.css')
        zipUpDir(htmlzip, self.outdir, 'img')
        htmlzip.close()

    def getSVGZip(self, zipname):
        svgzip = zipfile.ZipFile(zipname,'w',zipfile.ZIP_DEFLATED, False)
        svgzip.write(os.path.join(self.outdir,'index_svg.xhtml'),'index_svg.xhtml')
        zipUpDir(svgzip, self.outdir, 'svg')
        zipUpDir(svgzip, self.outdir, 'img')
        svgzip.close()

    def getXMLZip(self, zipname):
        xmlzip = zipfile.ZipFile(zipname,'w',zipfile.ZIP_DEFLATED, False)
        targetdir = os.path.join(self.outdir,'xml')
        zipUpDir(xmlzip, targetdir, '')
        zipUpDir(xmlzip, self.outdir, 'img')
        xmlzip.close()

    def cleanup(self):
        if os.path.isdir(self.outdir):
            shutil.rmtree(self.outdir, True)

def usage(progname):
    print "Removes DRM protection from Topaz ebooks and extracts the contents"
    print "Usage:"
    print "    %s [-k <kindle.info>] [-p <pidnums>] [-s <kindleSerialNumbers>] <infile> <outdir> " % progname

# Main
def main(argv=sys.argv):
    global buildXML
    progname = os.path.basename(argv[0])
    k4 = False
    pids = []
    serials = []
    kInfoFiles = []

    try:
        opts, args = getopt.getopt(sys.argv[1:], "k:p:s:")
    except getopt.GetoptError, err:
        print str(err)
        usage(progname)
        return 1
    if len(args)<2:
        usage(progname)
        return 1

    for o, a in opts:
        if o == "-k":
            if a == None :
                print "Invalid parameter for -k"
                return 1
            kInfoFiles.append(a)
        if o == "-p":
            if a == None :
                print "Invalid parameter for -p"
                return 1
            pids = a.split(',')
        if o == "-s":
            if a == None :
                print "Invalid parameter for -s"
                return 1
            serials = a.split(',')
            k4 = True

    infile = args[0]
    outdir = args[1]

    if not os.path.isfile(infile):
        print "Input File Does Not Exist"
        return 1

    bookname = os.path.splitext(os.path.basename(infile))[0]

    tb = TopazBook(infile)
    title = tb.getBookTitle()
    print "Processing Book: ", title
    keysRecord, keysRecordRecord = tb.getPIDMetaInfo()
    pidlst = kgenpids.getPidList(keysRecord, keysRecordRecord, k4, pids, serials, kInfoFiles)

    try:
        print "Decrypting Book"
        tb.processBook(pidlst)

        print "   Creating HTML ZIP Archive"
        zipname = os.path.join(outdir, bookname + '_nodrm' + '.htmlz')
        tb.getHTMLZip(zipname)

        print "   Creating SVG ZIP Archive"
        zipname = os.path.join(outdir, bookname + '_SVG' + '.zip')
        tb.getSVGZip(zipname)

        if buildXML:
            print "   Creating XML ZIP Archive"
            zipname = os.path.join(outdir, bookname + '_XML' + '.zip')
            tb.getXMLZip(zipname)

        # removing internal temporary directory of pieces
        tb.cleanup()

    except TpzDRMError, e:
        print str(e)
        # tb.cleanup()
        return 1

    except Exception, e:
        print str(e)
        # tb.cleanup
        return 1

    return 0

if __name__ == '__main__':
    sys.stdout=Unbuffered(sys.stdout)
    sys.exit(main())
@@ -1,23 +0,0 @@
Plugin for K4PC, K4Mac and Mobi Books

Will work on Linux (standard DRM Mobi books only), Mac OS X (standard DRM Mobi books and "Kindle for Mac" books), and Windows (standard DRM Mobi books and "Kindle for PC" books).

This plugin supersedes the MobiDeDRM, K4DeDRM, and K4PCDeDRM plugins. If you install this plugin, those plugins can be safely removed.

This plugin is meant to convert "Kindle for PC", "Kindle for Mac" and "Mobi" ebooks with DRM to unlocked Mobi files. Calibre can then convert them to whatever format you desire. It is meant to function without having to install any dependencies, except that Calibre must be on the same machine and in the same account as your "Kindle for PC" or "Kindle for Mac" application if you are going to remove the DRM from those types of books.

Installation:
Go to Calibre's Preferences page and click on the Plugins button. Use the file dialog button to select the plugin's zip file (k4mobidedrm_vXX_plugin.zip) and click the 'Add' button. You're done.

Configuration:
Highlight the plugin (K4MobiDeDRM under the "File type plugins" category) and click the "Customize Plugin" button on Calibre's Preferences->Plugins page. Enter a comma-separated list of your 10 digit PIDs. This is not needed if you only want to decode "Kindle for PC" or "Kindle for Mac" books.

Troubleshooting:
If you find that it's not working for you (imported azw's are not converted to mobi format), you can save a lot of time and trouble by trying to add the azw file to Calibre with the command line tools. This will print out a lot of helpful debugging info that can be copied into any online help requests. I'm going to ask you to do it first, anyway, so you might as well get used to it. ;)

Open a command prompt (terminal) and change to the directory where the ebook you're trying to import resides. Then type the command "calibredb add your_ebook.azw". Don't type the quotes and obviously change 'your_ebook.azw' to whatever the filename of your book is. Copy the resulting output and paste it into any online help request you make.

** Note: the Mac version of Calibre doesn't install the command line tools by default. If you go to the 'Preferences' page and click on the miscellaneous button, you'll see the option to install the command line tools.
@@ -1,21 +0,0 @@
eReader PDB2PML - eReaderPDB2PML_vXX_plugin.zip

All credit given to The Dark Reverser for the original standalone script. I had the much easier job of converting it to a Calibre plugin.

This plugin is meant to convert secure eReader files (PDB) to unsecured PMLZ files. Calibre can then convert them to whatever format you desire. It is meant to function without having to install any dependencies... other than having Calibre installed, of course. I've included the psyco libraries (compiled for each platform) for speed. If your system can use them, great! Otherwise, they won't be used and things will just work slower.

Installation:
Go to Calibre's Preferences page and click on the Plugins button. Use the file dialog button to select the plugin's zip file (eReaderPDB2PML_vXX_plugin.zip) and click the 'Add' button. You're done.

Configuration:
Highlight the plugin (eReader PDB 2 PML under the "File type plugins" category) and click the "Customize Plugin" button on Calibre's Preferences->Plugins page. Enter your name and the last 8 digits of the credit card number separated by a comma: Your Name,12341234

If you've purchased books with more than one credit card, separate the info with a colon: Your Name,12341234:Other Name,23452345 (NOTE: Do NOT put quotes around your name like you do with the original script!!)

Troubleshooting:
If you find that it's not working for you (imported pdb's are not converted to pmlz format), you can save a lot of time and trouble by trying to add the pdb to Calibre with the command line tools. This will print out a lot of helpful debugging info that can be copied into any online help requests. I'm going to ask you to do it first, anyway, so you might as well get used to it. ;)

Open a command prompt (terminal) and change to the directory where the ebook you're trying to import resides. Then type the command "calibredb add your_ebook.pdb". Don't type the quotes and obviously change 'your_ebook.pdb' to whatever the filename of your book is. Copy the resulting output and paste it into any online help request you make.

** Note: the Mac version of Calibre doesn't install the command line tools by default. If you go to the 'Preferences' page and click on the miscellaneous button, you'll see the option to install the command line tools.
@@ -1,38 +0,0 @@
|
|||||||
Ignoble Epub DeDRM - ignobleepub_vXX_plugin.zip
|
|
||||||
Requires Calibre version 0.6.44 or higher.
|
|
||||||
|
|
||||||
All credit given to I <3 Cabbages for the original standalone scripts.
|
|
||||||
I had the much easier job of converting them to a Calibre plugin.
|
|
||||||
|
|
||||||
This plugin is meant to decrypt Barnes & Noble ePubs that are protected with Adobe's Adept encryption. It is meant to function without having to install any dependencies... other than having Calibre installed, of course. It will still work if you have Python and PyCrypto already installed, but they aren't necessary.

Installation:

Go to Calibre's Preferences page... click on the Plugins button. Use the file dialog button to select the plugin's zip file (ignobleepub_vXX_plugin.zip) and click the 'Add' button. You're done.

Configuration:

1) The easiest way to configure the plugin is to enter your name (Barnes & Noble account name) and credit card number (the one used to purchase the books) into the plugin's customization window. It's the same info you would enter into the ignoblekeygen script. Highlight the plugin (Ignoble Epub DeDRM) and click the "Customize Plugin" button on Calibre's Preferences->Plugins page. Enter the name and credit card number separated by a comma: Your Name,1234123412341234

If you've purchased books with more than one credit card, separate the additional info with a colon: Your Name,1234123412341234:Other Name,2345234523452345

** NOTE ** The above method is your only option if you don't have, or can't run, the original I <3 Cabbages scripts on your particular machine.

** NOTE ** Your credit card number will be on display in Calibre's Plugin configuration page when using the above method. If other people have access to your computer, you may want to use the second configuration method below.

2) If you already have keyfiles generated with I <3 Cabbages' ignoblekeygen.pyw script, you can put those keyfiles into Calibre's configuration directory. The easiest way to find the correct directory is to go to Calibre's Preferences page... click on the 'Miscellaneous' button (looks like a gear), and then click the 'Open Calibre configuration directory' button. Paste your keyfiles in there. Just make sure that they have different names and are saved with the '.b64' extension (like the ignoblekeygen script produces). This directory isn't touched when upgrading Calibre, so it's quite safe to leave them there.

All keyfiles from method 2 and all data entered from method 1 will be used to attempt to decrypt a book. You can use method 1 or method 2, or a combination of both.
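Both this plugin and the eReader plugin later in this page parse the customization string the same way: split on ':' to separate accounts, then on ',' to separate the name from the number. A minimal sketch of that parsing (the function name is illustrative, not part of either plugin):

```python
def parse_customization(keydata):
    """Split 'Name,number:Other Name,number' into (name, number) pairs."""
    pairs = []
    for entry in keydata.split(':'):
        # A malformed entry (no comma, or too many) raises ValueError,
        # which the plugins treat as bad user-supplied data.
        name, number = entry.split(',')
        pairs.append((name, number))
    return pairs
```

This is why quotes around the name must be omitted: the split keeps every character of each field verbatim.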
Troubleshooting:

If you find that it's not working for you (imported epubs still have DRM), you can save a lot of time and trouble by trying to add the epub to Calibre with the command line tools. This will print out a lot of helpful debugging info that can be copied into any online help requests. I'm going to ask you to do it first, anyway, so you might as well get used to it. ;)

Open a command prompt (terminal) and change to the directory where the ebook you're trying to import resides. Then type the command "calibredb add your_ebook.epub". Don't type the quotes and obviously change 'your_ebook.epub' to whatever the filename of your book is. Copy the resulting output and paste it into any online help request you make.

** Note: the Mac version of Calibre doesn't install the command line tools by default. If you go to the 'Preferences' page and click on the miscellaneous button, you'll see the option to install the command line tools.
@@ -1,21 +1,31 @@
-eReader PDB2PML - eReaderPDB2PML_vXX_plugin.zip
+eReader PDB2PML - eReaderPDB2PML_v06_plugin.zip
 
 All credit given to The Dark Reverser for the original standalone script. I had the much easier job of converting it to a Calibre plugin.
 
 This plugin is meant to convert secure Ereader files (PDB) to unsecured PMLZ files. Calibre can then convert it to whatever format you desire. It is meant to function without having to install any dependencies... other than having Calibre installed, of course. I've included the psyco libraries (compiled for each platform) for speed. If your system can use them, great! Otherwise, they won't be used and things will just work slower.
 
 Installation:
 
-Go to Calibre's Preferences page... click on the Plugins button. Use the file dialog button to select the plugin's zip file (eReaderPDB2PML_vXX_plugin.zip) and click the 'Add' button. You're done.
+Go to Calibre's Preferences page. Do **NOT** select "Get Plugins to enhance calibre" as this is reserved for "official" calibre plugins, instead select "Change calibre behavior". Under "Advanced" click on the Plugins button. Use the "Load plugin from file" button to select the plugin's zip file (eReaderPDB2PML_vXX_plugin.zip) and click the 'Add' button. You're done.
+
+Please note: Calibre does not provide any immediate feedback to indicate that adding the plugin was a success. You can always click on the File-Type plugins to see if the plugin was added.
 
 Configuration:
 
 Highlight the plugin (eReader PDB 2 PML under the "File type plugins" category) and click the "Customize Plugin" button on Calibre's Preferences->Plugins page. Enter your name and last 8 digits of the credit card number separated by a comma: Your Name,12341234
 
 If you've purchased books with more than one credit card, separate the info with a colon: Your Name,12341234:Other Name,23452345 (NOTE: Do NOT put quotes around your name like you do with the original script!!)
 
 Troubleshooting:
 
-If you find that it's not working for you (imported pdb's are not converted to pmlz format), you can save a lot of time and trouble by trying to add the pdb to Calibre with the command line tools. This will print out a lot of helpful debugging info that can be copied into any online help requests. I'm going to ask you to do it first, anyway, so you might
-as well get used to it. ;)
+If you find that it's not working for you (imported pdb's are not converted to pmlz format), you can save a lot of time and trouble by trying to add the pdb to Calibre with the command line tools. This will print out a lot of helpful debugging info that can be copied into any online help requests. I'm going to ask you to do it first, anyway, so you might as well get used to it. ;)
 
 Open a command prompt (terminal) and change to the directory where the ebook you're trying to import resides. Then type the command "calibredb add your_ebook.pdb". Don't type the quotes and obviously change 'your_ebook.pdb' to whatever the filename of your book is. Copy the resulting output and paste it into any online help request you make.
 
 ** Note: the Mac version of Calibre doesn't install the command line tools by default. If you go to the 'Preferences' page and click on the miscellaneous button, you'll see the option to install the command line tools.
Binary file not shown
140 Calibre_Plugins/eReaderPDB2PML_plugin/__init__.py Normal file
@@ -0,0 +1,140 @@
#!/usr/bin/env python

# eReaderPDB2PML_plugin.py
# Released under the terms of the GNU General Public Licence, version 3 or
# later. <http://www.gnu.org/licenses/>
#
# All credit given to The Dark Reverser for the original standalone script.
# I had the much easier job of converting it to a Calibre plugin.
#
# This plugin is meant to convert secure Ereader files (PDB) to unsecured PMLZ files.
# Calibre can then convert it to whatever format you desire.
# It is meant to function without having to install any dependencies...
# other than having Calibre installed, of course.
#
# Installation:
# Go to Calibre's Preferences page... click on the Plugins button. Use the file
# dialog button to select the plugin's zip file (eReaderPDB2PML_vXX_plugin.zip) and
# click the 'Add' button. You're done.
#
# Configuration:
# Highlight the plugin (eReader PDB 2 PML) and click the
# "Customize Plugin" button on Calibre's Preferences->Plugins page.
# Enter your name and the last 8 digits of the credit card number separated by
# a comma: Your Name,12341234
#
# If you've purchased books with more than one credit card, separate the info with
# a colon: Your Name,12341234:Other Name,23452345
# NOTE: Do NOT put quotes around your name like you do with the original script!!
#
# Revision history:
#   0.0.1 - Initial release
#   0.0.2 - updated to distinguish it from earlier non-openssl version
#   0.0.3 - removed psyco code as it is not supported under Calibre's Python 2.7
#   0.0.4 - minor typos fixed
#   0.0.5 - updated to the new calibre plugin interface

import sys, os

from calibre.customize import FileTypePlugin
from calibre.ptempfile import PersistentTemporaryDirectory
from calibre.constants import iswindows, isosx
from calibre_plugins.erdrpdb2pml import erdr2pml

class eRdrDeDRM(FileTypePlugin):
    name = 'eReader PDB 2 PML' # Name of the plugin
    description = 'Removes DRM from secure pdb files. \
Credit given to The Dark Reverser for the original standalone script.'
    supported_platforms = ['linux', 'osx', 'windows'] # Platforms this plugin will run on
    author = 'DiapDealer' # The author of this plugin
    version = (0, 0, 6) # The version number of this plugin
    file_types = set(['pdb']) # The file types that this plugin will be applied to
    on_import = True # Run this plugin during the import
    minimum_calibre_version = (0, 7, 55)

    def run(self, path_to_ebook):

        global bookname, erdr2pml

        infile = path_to_ebook
        bookname = os.path.splitext(os.path.basename(infile))[0]
        outdir = PersistentTemporaryDirectory()
        pmlzfile = self.temporary_file(bookname + '.pmlz')

        if self.site_customization:
            keydata = self.site_customization
            ar = keydata.split(':')
            for i in ar:
                try:
                    name, cc = i.split(',')
                except ValueError:
                    print ' Error parsing user supplied data.'
                    return path_to_ebook

                try:
                    print "Processing..."
                    import time
                    start_time = time.time()
                    pmlfilepath = self.convertEreaderToPml(infile, name, cc, outdir)

                    if pmlfilepath and pmlfilepath != 1:
                        import zipfile
                        print " Creating PMLZ file"
                        myZipFile = zipfile.ZipFile(pmlzfile.name,'w',zipfile.ZIP_STORED, False)
                        list = os.listdir(outdir)
                        for file in list:
                            localname = file
                            filePath = os.path.join(outdir,file)
                            if os.path.isfile(filePath):
                                myZipFile.write(filePath, localname)
                            elif os.path.isdir(filePath):
                                imageList = os.listdir(filePath)
                                localimgdir = os.path.basename(filePath)
                                for image in imageList:
                                    localname = os.path.join(localimgdir,image)
                                    imagePath = os.path.join(filePath,image)
                                    if os.path.isfile(imagePath):
                                        myZipFile.write(imagePath, localname)
                        myZipFile.close()
                        end_time = time.time()
                        search_time = end_time - start_time
                        print 'elapsed time: %.2f seconds' % (search_time, )
                        print "done"
                        return pmlzfile.name
                    else:
                        raise ValueError('Error Creating PML file.')
                except ValueError, e:
                    print "Error: %s" % e
                    pass
            raise Exception('Couldn\'t decrypt pdb file.')
        else:
            raise Exception('No name and CC# provided.')

    def convertEreaderToPml(self, infile, name, cc, outdir):

        print " Decoding File"
        sect = erdr2pml.Sectionizer(infile, 'PNRdPPrs')
        er = erdr2pml.EreaderProcessor(sect, name, cc)

        if er.getNumImages() > 0:
            print " Extracting images"
            #imagedir = bookname + '_img/'
            imagedir = 'images/'
            imagedirpath = os.path.join(outdir,imagedir)
            if not os.path.exists(imagedirpath):
                os.makedirs(imagedirpath)
            for i in xrange(er.getNumImages()):
                name, contents = er.getImage(i)
                file(os.path.join(imagedirpath, name), 'wb').write(contents)

        print " Extracting pml"
        pml_string = er.getText()
        pmlfilename = bookname + ".pml"
        try:
            file(os.path.join(outdir, pmlfilename),'wb').write(erdr2pml.cleanPML(pml_string))
            return os.path.join(outdir, pmlfilename)
        except:
            return 1

    def customization_help(self, gui=False):
        return 'Enter Account Name & Last 8 digits of Credit Card number (separate with a comma)'
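The run() method above packs the decoded PML file and its images/ subdirectory into a stored (uncompressed) zip. A self-contained Python 3 sketch of that packaging step, with hypothetical paths (the helper name is not part of the plugin):

```python
import os
import zipfile

def make_pmlz(srcdir, pmlzpath):
    """Pack a directory (PML text plus an images/ subfolder) into a
    stored, i.e. uncompressed, .pmlz archive, mirroring the plugin's
    use of zipfile.ZIP_STORED in run()."""
    with zipfile.ZipFile(pmlzpath, 'w', zipfile.ZIP_STORED) as zf:
        for root, _dirs, files in os.walk(srcdir):
            for fname in files:
                fpath = os.path.join(root, fname)
                # Archive names are stored relative to the source directory,
                # so images end up under 'images/' inside the zip.
                zf.write(fpath, os.path.relpath(fpath, srcdir))
    return pmlzpath
```

ZIP_STORED matters here: PMLZ readers expect the members uncompressed, which is why the plugin does not use ZIP_DEFLATED.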
@@ -54,26 +54,16 @@
 # 0.13 - change to unbuffered stdout for use with gui front ends
 # 0.14 - contributed enhancement to support --make-pmlz switch
 # 0.15 - enabled high-ascii to pml character encoding. DropBook now works on Mac.
+# 0.16 - convert to use openssl DES (very very fast) or pure python DES if openssl's libcrypto is not available
+# 0.17 - added support for pycrypto's DES as well
+# 0.18 - on Windows try PyCrypto first and OpenSSL next
+# 0.19 - Modify the interface to allow use of import
+# 0.20 - modify to allow use inside new interface for calibre plugins
+# 0.21 - Support eReader (drm) version 11.
+#      - Don't reject dictionary format.
+#      - Ignore sidebars for dictionaries (different format?)
 
-__version__='0.15'
+__version__='0.21'
 
-# Import Psyco if available
-try:
-    # Dumb speed hack 1
-    # http://psyco.sourceforge.net
-    import psyco
-    psyco.full()
-    pass
-except ImportError:
-    pass
-try:
-    # Dumb speed hack 2
-    # All map() calls converted to list comprehension (some use zip)
-    # override zip with izip - saves memory and in rough testing
-    # appears to be faster zip() is only used in the converted map() calls
-    from itertools import izip as zip
-except ImportError:
-    pass
 
 class Unbuffered:
     def __init__(self, stream):
@@ -85,246 +75,86 @@ class Unbuffered:
         return getattr(self.stream, attr)
 
 import sys
-sys.stdout=Unbuffered(sys.stdout)
 
 import struct, binascii, getopt, zlib, os, os.path, urllib, tempfile
 
+if 'calibre' in sys.modules:
+    inCalibre = True
+else:
+    inCalibre = False
+
+Des = None
+if sys.platform.startswith('win'):
+    # first try with pycrypto
+    if inCalibre:
+        from calibre_plugins.erdrpdb2pml import pycrypto_des
+    else:
+        import pycrypto_des
+    Des = pycrypto_des.load_pycrypto()
+    if Des == None:
+        # then try with openssl
+        if inCalibre:
+            from calibre_plugins.erdrpdb2pml import openssl_des
+        else:
+            import openssl_des
+        Des = openssl_des.load_libcrypto()
+else:
+    # first try with openssl
+    if inCalibre:
+        from calibre_plugins.erdrpdb2pml import openssl_des
+    else:
+        import openssl_des
+    Des = openssl_des.load_libcrypto()
+    if Des == None:
+        # then try with pycrypto
+        if inCalibre:
+            from calibre_plugins.erdrpdb2pml import pycrypto_des
+        else:
+            import pycrypto_des
+        Des = pycrypto_des.load_pycrypto()
+
+# if that did not work then use pure python implementation
+# of DES and try to speed it up with Psyco
+if Des == None:
+    if inCalibre:
+        from calibre_plugins.erdrpdb2pml import python_des
+    else:
+        import python_des
+    Des = python_des.Des
+    # Import Psyco if available
+    try:
+        # http://psyco.sourceforge.net
+        import psyco
+        psyco.full()
+    except ImportError:
+        pass
|
try:
|
||||||
from hashlib import sha1
|
from hashlib import sha1
|
||||||
except ImportError:
|
except ImportError:
|
||||||
# older Python release
|
# older Python release
|
||||||
import sha
|
import sha
|
||||||
sha1 = lambda s: sha.new(s)
|
sha1 = lambda s: sha.new(s)
|
||||||
|
|
||||||
import cgi
|
import cgi
|
||||||
import logging
|
import logging
|
||||||
|
|
||||||
logging.basicConfig()
|
logging.basicConfig()
|
||||||
#logging.basicConfig(level=logging.DEBUG)
|
#logging.basicConfig(level=logging.DEBUG)
|
||||||
|
|
||||||
ECB = 0
|
|
||||||
CBC = 1
|
|
||||||
class Des(object):
|
|
||||||
__pc1 = [56, 48, 40, 32, 24, 16, 8, 0, 57, 49, 41, 33, 25, 17,
|
|
||||||
9, 1, 58, 50, 42, 34, 26, 18, 10, 2, 59, 51, 43, 35,
|
|
||||||
62, 54, 46, 38, 30, 22, 14, 6, 61, 53, 45, 37, 29, 21,
|
|
||||||
13, 5, 60, 52, 44, 36, 28, 20, 12, 4, 27, 19, 11, 3]
|
|
||||||
__left_rotations = [1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1]
|
|
||||||
__pc2 = [13, 16, 10, 23, 0, 4,2, 27, 14, 5, 20, 9,
|
|
||||||
22, 18, 11, 3, 25, 7, 15, 6, 26, 19, 12, 1,
|
|
||||||
40, 51, 30, 36, 46, 54, 29, 39, 50, 44, 32, 47,
|
|
||||||
43, 48, 38, 55, 33, 52, 45, 41, 49, 35, 28, 31]
|
|
||||||
__ip = [57, 49, 41, 33, 25, 17, 9, 1, 59, 51, 43, 35, 27, 19, 11, 3,
|
|
||||||
61, 53, 45, 37, 29, 21, 13, 5, 63, 55, 47, 39, 31, 23, 15, 7,
|
|
||||||
56, 48, 40, 32, 24, 16, 8, 0, 58, 50, 42, 34, 26, 18, 10, 2,
|
|
||||||
60, 52, 44, 36, 28, 20, 12, 4, 62, 54, 46, 38, 30, 22, 14, 6]
|
|
||||||
__expansion_table = [31, 0, 1, 2, 3, 4, 3, 4, 5, 6, 7, 8,
|
|
||||||
7, 8, 9, 10, 11, 12,11, 12, 13, 14, 15, 16,
|
|
||||||
15, 16, 17, 18, 19, 20,19, 20, 21, 22, 23, 24,
|
|
||||||
23, 24, 25, 26, 27, 28,27, 28, 29, 30, 31, 0]
|
|
||||||
__sbox = [[14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7,
|
|
||||||
0, 15, 7, 4, 14, 2, 13, 1, 10, 6, 12, 11, 9, 5, 3, 8,
|
|
||||||
4, 1, 14, 8, 13, 6, 2, 11, 15, 12, 9, 7, 3, 10, 5, 0,
|
|
||||||
15, 12, 8, 2, 4, 9, 1, 7, 5, 11, 3, 14, 10, 0, 6, 13],
|
|
||||||
[15, 1, 8, 14, 6, 11, 3, 4, 9, 7, 2, 13, 12, 0, 5, 10,
|
|
||||||
3, 13, 4, 7, 15, 2, 8, 14, 12, 0, 1, 10, 6, 9, 11, 5,
|
|
||||||
0, 14, 7, 11, 10, 4, 13, 1, 5, 8, 12, 6, 9, 3, 2, 15,
|
|
||||||
13, 8, 10, 1, 3, 15, 4, 2, 11, 6, 7, 12, 0, 5, 14, 9],
|
|
||||||
[10, 0, 9, 14, 6, 3, 15, 5, 1, 13, 12, 7, 11, 4, 2, 8,
|
|
||||||
13, 7, 0, 9, 3, 4, 6, 10, 2, 8, 5, 14, 12, 11, 15, 1,
|
|
||||||
13, 6, 4, 9, 8, 15, 3, 0, 11, 1, 2, 12, 5, 10, 14, 7,
|
|
||||||
1, 10, 13, 0, 6, 9, 8, 7, 4, 15, 14, 3, 11, 5, 2, 12],
|
|
||||||
[7, 13, 14, 3, 0, 6, 9, 10, 1, 2, 8, 5, 11, 12, 4, 15,
|
|
||||||
13, 8, 11, 5, 6, 15, 0, 3, 4, 7, 2, 12, 1, 10, 14, 9,
|
|
||||||
10, 6, 9, 0, 12, 11, 7, 13, 15, 1, 3, 14, 5, 2, 8, 4,
|
|
||||||
3, 15, 0, 6, 10, 1, 13, 8, 9, 4, 5, 11, 12, 7, 2, 14],
|
|
||||||
[2, 12, 4, 1, 7, 10, 11, 6, 8, 5, 3, 15, 13, 0, 14, 9,
|
|
||||||
14, 11, 2, 12, 4, 7, 13, 1, 5, 0, 15, 10, 3, 9, 8, 6,
|
|
||||||
4, 2, 1, 11, 10, 13, 7, 8, 15, 9, 12, 5, 6, 3, 0, 14,
|
|
||||||
11, 8, 12, 7, 1, 14, 2, 13, 6, 15, 0, 9, 10, 4, 5, 3],
|
|
||||||
[12, 1, 10, 15, 9, 2, 6, 8, 0, 13, 3, 4, 14, 7, 5, 11,
|
|
||||||
10, 15, 4, 2, 7, 12, 9, 5, 6, 1, 13, 14, 0, 11, 3, 8,
|
|
||||||
9, 14, 15, 5, 2, 8, 12, 3, 7, 0, 4, 10, 1, 13, 11, 6,
|
|
||||||
4, 3, 2, 12, 9, 5, 15, 10, 11, 14, 1, 7, 6, 0, 8, 13],
|
|
||||||
[4, 11, 2, 14, 15, 0, 8, 13, 3, 12, 9, 7, 5, 10, 6, 1,
|
|
||||||
13, 0, 11, 7, 4, 9, 1, 10, 14, 3, 5, 12, 2, 15, 8, 6,
|
|
||||||
1, 4, 11, 13, 12, 3, 7, 14, 10, 15, 6, 8, 0, 5, 9, 2,
|
|
||||||
6, 11, 13, 8, 1, 4, 10, 7, 9, 5, 0, 15, 14, 2, 3, 12],
|
|
||||||
[13, 2, 8, 4, 6, 15, 11, 1, 10, 9, 3, 14, 5, 0, 12, 7,
|
|
||||||
1, 15, 13, 8, 10, 3, 7, 4, 12, 5, 6, 11, 0, 14, 9, 2,
|
|
||||||
7, 11, 4, 1, 9, 12, 14, 2, 0, 6, 10, 13, 15, 3, 5, 8,
|
|
||||||
2, 1, 14, 7, 4, 10, 8, 13, 15, 12, 9, 0, 3, 5, 6, 11],]
|
|
||||||
__p = [15, 6, 19, 20, 28, 11,27, 16, 0, 14, 22, 25,
|
|
||||||
4, 17, 30, 9, 1, 7,23,13, 31, 26, 2, 8,18, 12, 29, 5, 21, 10,3, 24]
|
|
||||||
__fp = [39, 7, 47, 15, 55, 23, 63, 31,38, 6, 46, 14, 54, 22, 62, 30,
|
|
||||||
37, 5, 45, 13, 53, 21, 61, 29,36, 4, 44, 12, 52, 20, 60, 28,
|
|
||||||
35, 3, 43, 11, 51, 19, 59, 27,34, 2, 42, 10, 50, 18, 58, 26,
|
|
||||||
33, 1, 41, 9, 49, 17, 57, 25,32, 0, 40, 8, 48, 16, 56, 24]
|
|
||||||
# Type of crypting being done
|
|
||||||
ENCRYPT = 0x00
|
|
||||||
DECRYPT = 0x01
|
|
||||||
def __init__(self, key, mode=ECB, IV=None):
|
|
||||||
if len(key) != 8:
|
|
||||||
raise ValueError("Invalid DES key size. Key must be exactly 8 bytes long.")
|
|
||||||
self.block_size = 8
|
|
||||||
self.key_size = 8
|
|
||||||
self.__padding = ''
|
|
||||||
self.setMode(mode)
|
|
||||||
if IV:
|
|
||||||
self.setIV(IV)
|
|
||||||
self.L = []
|
|
||||||
self.R = []
|
|
||||||
self.Kn = [ [0] * 48 ] * 16 # 16 48-bit keys (K1 - K16)
|
|
||||||
self.final = []
|
|
||||||
self.setKey(key)
|
|
||||||
def getKey(self):
|
|
||||||
return self.__key
|
|
||||||
def setKey(self, key):
|
|
||||||
self.__key = key
|
|
||||||
self.__create_sub_keys()
|
|
||||||
def getMode(self):
|
|
||||||
return self.__mode
|
|
||||||
def setMode(self, mode):
|
|
||||||
self.__mode = mode
|
|
||||||
def getIV(self):
|
|
||||||
return self.__iv
|
|
||||||
def setIV(self, IV):
|
|
||||||
if not IV or len(IV) != self.block_size:
|
|
||||||
raise ValueError("Invalid Initial Value (IV), must be a multiple of " + str(self.block_size) + " bytes")
|
|
||||||
self.__iv = IV
|
|
||||||
def getPadding(self):
|
|
||||||
return self.__padding
|
|
||||||
def __String_to_BitList(self, data):
|
|
||||||
l = len(data) * 8
|
|
||||||
result = [0] * l
|
|
||||||
pos = 0
|
|
||||||
for c in data:
|
|
||||||
i = 7
|
|
||||||
ch = ord(c)
|
|
||||||
while i >= 0:
|
|
||||||
if ch & (1 << i) != 0:
|
|
||||||
result[pos] = 1
|
|
||||||
else:
|
|
||||||
result[pos] = 0
|
|
||||||
pos += 1
|
|
||||||
i -= 1
|
|
||||||
return result
|
|
||||||
def __BitList_to_String(self, data):
|
|
||||||
result = ''
|
|
||||||
pos = 0
|
|
||||||
c = 0
|
|
||||||
while pos < len(data):
|
|
||||||
c += data[pos] << (7 - (pos % 8))
|
|
||||||
if (pos % 8) == 7:
|
|
||||||
result += chr(c)
|
|
||||||
c = 0
|
|
||||||
pos += 1
|
|
||||||
return result
|
|
||||||
def __permutate(self, table, block):
|
|
||||||
return [block[x] for x in table]
|
|
||||||
def __create_sub_keys(self):
|
|
||||||
key = self.__permutate(Des.__pc1, self.__String_to_BitList(self.getKey()))
|
|
||||||
i = 0
|
|
||||||
self.L = key[:28]
|
|
||||||
self.R = key[28:]
|
|
||||||
while i < 16:
|
|
||||||
j = 0
|
|
||||||
while j < Des.__left_rotations[i]:
|
|
||||||
self.L.append(self.L[0])
|
|
||||||
del self.L[0]
|
|
||||||
self.R.append(self.R[0])
|
|
||||||
del self.R[0]
|
|
||||||
j += 1
|
|
||||||
self.Kn[i] = self.__permutate(Des.__pc2, self.L + self.R)
|
|
||||||
i += 1
|
|
||||||
def __des_crypt(self, block, crypt_type):
|
|
||||||
block = self.__permutate(Des.__ip, block)
|
|
||||||
self.L = block[:32]
|
|
||||||
self.R = block[32:]
|
|
||||||
if crypt_type == Des.ENCRYPT:
|
|
||||||
iteration = 0
|
|
||||||
iteration_adjustment = 1
|
|
||||||
else:
|
|
||||||
iteration = 15
|
|
||||||
iteration_adjustment = -1
|
|
||||||
i = 0
|
|
||||||
while i < 16:
|
|
||||||
tempR = self.R[:]
|
|
||||||
self.R = self.__permutate(Des.__expansion_table, self.R)
|
|
||||||
self.R = [x ^ y for x,y in zip(self.R, self.Kn[iteration])]
|
|
||||||
B = [self.R[:6], self.R[6:12], self.R[12:18], self.R[18:24], self.R[24:30], self.R[30:36], self.R[36:42], self.R[42:]]
|
|
||||||
j = 0
|
|
||||||
Bn = [0] * 32
|
|
||||||
pos = 0
|
|
||||||
while j < 8:
|
|
||||||
m = (B[j][0] << 1) + B[j][5]
|
|
||||||
n = (B[j][1] << 3) + (B[j][2] << 2) + (B[j][3] << 1) + B[j][4]
|
|
||||||
v = Des.__sbox[j][(m << 4) + n]
|
|
||||||
Bn[pos] = (v & 8) >> 3
|
|
||||||
Bn[pos + 1] = (v & 4) >> 2
|
|
||||||
Bn[pos + 2] = (v & 2) >> 1
|
|
||||||
Bn[pos + 3] = v & 1
|
|
||||||
pos += 4
|
|
||||||
j += 1
|
|
||||||
self.R = self.__permutate(Des.__p, Bn)
|
|
||||||
self.R = [x ^ y for x, y in zip(self.R, self.L)]
|
|
||||||
self.L = tempR
|
|
||||||
i += 1
|
|
||||||
iteration += iteration_adjustment
|
|
||||||
self.final = self.__permutate(Des.__fp, self.R + self.L)
|
|
||||||
return self.final
|
|
||||||
def crypt(self, data, crypt_type):
|
|
||||||
if not data:
|
|
||||||
return ''
|
|
||||||
if len(data) % self.block_size != 0:
|
|
||||||
if crypt_type == Des.DECRYPT: # Decryption must work on 8 byte blocks
|
|
||||||
raise ValueError("Invalid data length, data must be a multiple of " + str(self.block_size) + " bytes\n.")
|
|
||||||
if not self.getPadding():
|
|
||||||
raise ValueError("Invalid data length, data must be a multiple of " + str(self.block_size) + " bytes\n. Try setting the optional padding character")
|
|
||||||
else:
|
|
||||||
data += (self.block_size - (len(data) % self.block_size)) * self.getPadding()
|
|
||||||
if self.getMode() == CBC:
|
|
||||||
if self.getIV():
|
|
||||||
iv = self.__String_to_BitList(self.getIV())
|
|
||||||
else:
|
|
||||||
raise ValueError("For CBC mode, you must supply the Initial Value (IV) for ciphering")
|
|
||||||
i = 0
|
|
||||||
dict = {}
|
|
||||||
result = []
|
|
||||||
while i < len(data):
|
|
||||||
block = self.__String_to_BitList(data[i:i+8])
|
|
||||||
if self.getMode() == CBC:
|
|
||||||
if crypt_type == Des.ENCRYPT:
|
|
||||||
block = [x ^ y for x, y in zip(block, iv)]
|
|
||||||
processed_block = self.__des_crypt(block, crypt_type)
|
|
||||||
if crypt_type == Des.DECRYPT:
|
|
||||||
processed_block = [x ^ y for x, y in zip(processed_block, iv)]
|
|
||||||
iv = block
|
|
||||||
else:
|
|
||||||
iv = processed_block
|
|
||||||
else:
|
|
||||||
processed_block = self.__des_crypt(block, crypt_type)
|
|
||||||
result.append(self.__BitList_to_String(processed_block))
|
|
||||||
i += 8
|
|
||||||
if crypt_type == Des.DECRYPT and self.getPadding():
|
|
||||||
s = result[-1]
|
|
||||||
while s[-1] == self.getPadding():
|
|
||||||
s = s[:-1]
|
|
||||||
result[-1] = s
|
|
||||||
return ''.join(result)
|
|
||||||
def encrypt(self, data, pad=''):
|
|
||||||
self.__padding = pad
|
|
||||||
return self.crypt(data, Des.ENCRYPT)
|
|
||||||
def decrypt(self, data, pad=''):
|
|
||||||
self.__padding = pad
|
|
||||||
return self.crypt(data, Des.DECRYPT)
|
|
||||||
|
|
||||||
class Sectionizer(object):
|
class Sectionizer(object):
|
||||||
|
bkType = "Book"
|
||||||
|
|
||||||
def __init__(self, filename, ident):
|
def __init__(self, filename, ident):
|
||||||
self.contents = file(filename, 'rb').read()
|
self.contents = file(filename, 'rb').read()
|
||||||
self.header = self.contents[0:72]
|
self.header = self.contents[0:72]
|
||||||
self.num_sections, = struct.unpack('>H', self.contents[76:78])
|
self.num_sections, = struct.unpack('>H', self.contents[76:78])
|
||||||
|
# Dictionary or normal content (TODO: Not hard-coded)
|
||||||
if self.header[0x3C:0x3C+8] != ident:
|
if self.header[0x3C:0x3C+8] != ident:
|
||||||
raise ValueError('Invalid file format')
|
if self.header[0x3C:0x3C+8] == "PDctPPrs":
|
||||||
|
self.bkType = "Dict"
|
||||||
|
else:
|
||||||
|
raise ValueError('Invalid file format')
|
||||||
self.sections = []
|
self.sections = []
|
||||||
for i in xrange(self.num_sections):
|
for i in xrange(self.num_sections):
|
||||||
offset, a1,a2,a3,a4 = struct.unpack('>LBBBB', self.contents[78+i*8:78+i*8+8])
|
offset, a1,a2,a3,a4 = struct.unpack('>LBBBB', self.contents[78+i*8:78+i*8+8])
|
||||||
@@ -361,15 +191,15 @@ def deXOR(text, sp, table):
     return r
 
 class EreaderProcessor(object):
-    def __init__(self, section_reader, username, creditcard):
-        self.section_reader = section_reader
-        data = section_reader(0)
+    def __init__(self, sect, username, creditcard):
+        self.section_reader = sect.loadSection
+        data = self.section_reader(0)
         version, = struct.unpack('>H', data[0:2])
         self.version = version
         logging.info('eReader file format version %s', version)
         if version != 272 and version != 260 and version != 259:
             raise ValueError('incorrect eReader version %d (error 1)' % version)
-        data = section_reader(1)
+        data = self.section_reader(1)
         self.data = data
         des = Des(fixKey(data[0:8]))
         cookie_shuf, cookie_size = struct.unpack('>LL', des.decrypt(data[-8:]))
@@ -398,11 +228,17 @@ class EreaderProcessor(object):
         self.num_text_pages = struct.unpack('>H', r[2:4])[0] - 1
         self.num_image_pages = struct.unpack('>H', r[26:26+2])[0]
         self.first_image_page = struct.unpack('>H', r[24:24+2])[0]
+        # Default values
+        self.num_footnote_pages = 0
+        self.num_sidebar_pages = 0
+        self.first_footnote_page = -1
+        self.first_sidebar_page = -1
         if self.version == 272:
             self.num_footnote_pages = struct.unpack('>H', r[46:46+2])[0]
             self.first_footnote_page = struct.unpack('>H', r[44:44+2])[0]
-            self.num_sidebar_pages = struct.unpack('>H', r[38:38+2])[0]
-            self.first_sidebar_page = struct.unpack('>H', r[36:36+2])[0]
+            if (sect.bkType == "Book"):
+                self.num_sidebar_pages = struct.unpack('>H', r[38:38+2])[0]
+                self.first_sidebar_page = struct.unpack('>H', r[36:36+2])[0]
             # self.num_bookinfo_pages = struct.unpack('>H', r[34:34+2])[0]
             # self.first_bookinfo_page = struct.unpack('>H', r[32:32+2])[0]
             # self.num_chapter_pages = struct.unpack('>H', r[22:22+2])[0]
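Every field in the hunks above is a big-endian 16-bit count pulled out of the header record with struct.unpack('>H', ...). A self-contained illustration with fabricated sample values (272 is a real format version from the code; the page count is made up):

```python
import struct

# Build a fake 4-byte header: version 272, then a page count of 12.
header = struct.pack('>HH', 272, 12)

# '>H' = big-endian unsigned 16-bit; the trailing comma unpacks the
# 1-tuple that struct.unpack always returns.
version, = struct.unpack('>H', header[0:2])
num_pages, = struct.unpack('>H', header[2:4])
print(version, num_pages)  # -> 272 12
```

The trailing comma after `version,` is the same idiom the plugin uses (e.g. `self.num_sections, = struct.unpack(...)`).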
@@ -418,10 +254,8 @@ class EreaderProcessor(object):
             self.xortable_size = struct.unpack('>H', r[42:42+2])[0]
             self.xortable = self.data[self.xortable_offset:self.xortable_offset + self.xortable_size]
         else:
-            self.num_footnote_pages = 0
-            self.num_sidebar_pages = 0
-            self.first_footnote_page = -1
-            self.first_sidebar_page = -1
+            # Nothing needs to be done
+            pass
             # self.num_bookinfo_pages = 0
             # self.num_chapter_pages = 0
             # self.num_link_pages = 0
@@ -446,10 +280,14 @@ class EreaderProcessor(object):
             encrypted_key_sha = r[44:44+20]
             encrypted_key = r[64:64+8]
         elif version == 260:
-            if drm_sub_version != 13:
+            if drm_sub_version != 13 and drm_sub_version != 11:
                 raise ValueError('incorrect eReader version %d (error 3)' % drm_sub_version)
-            encrypted_key = r[44:44+8]
-            encrypted_key_sha = r[52:52+20]
+            if drm_sub_version == 13:
+                encrypted_key = r[44:44+8]
+                encrypted_key_sha = r[52:52+20]
+            else:
+                encrypted_key = r[64:64+8]
+                encrypted_key_sha = r[44:44+20]
         elif version == 272:
             encrypted_key = r[172:172+8]
             encrypted_key_sha = r[56:56+20]
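The hunks above read every header field with `struct.unpack('>H', ...)` at a fixed offset, i.e. big-endian unsigned shorts pulled out of a raw record. As a quick illustration of that pattern, here is a self-contained Python 3 sketch; the three-field header and its values are fabricated for the example, not the real eReader record layout:

```python
import struct

# A fabricated 6-byte "header": three big-endian unsigned shorts,
# standing in for the fixed-offset fields read from record 0 above.
header = struct.pack('>HHH', 5, 2, 3)

# Each field is read from its own fixed offset, as in the diff:
num_text_pages = struct.unpack('>H', header[0:2])[0] - 1
num_image_pages = struct.unpack('>H', header[2:2+2])[0]
first_image_page = struct.unpack('>H', header[4:4+2])[0]

print(num_text_pages, num_image_pages, first_image_page)
```

`'>H'` is what makes the parse endian-safe: the PDB format stores counts big-endian regardless of the host machine.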
@@ -535,6 +373,12 @@ class EreaderProcessor(object):
                 r += fmarker
                 fnote_ids = fnote_ids[id_len+4:]

+        # TODO: Handle dictionary index (?) pages - which are also marked as
+        # sidebar_pages (?). For now dictionary sidebars are ignored
+        # For dictionaries - record 0 is null terminated strings, followed by
+        # blocks of around 62000 bytes and a final block. Not sure of the
+        # encoding
+
         # now handle sidebar pages
         if self.num_sidebar_pages > 0:
             r += '\n'
@@ -547,7 +391,7 @@ class EreaderProcessor(object):
                 id_len = ord(sbar_ids[2])
                 id = sbar_ids[3:3+id_len]
                 smarker = '<sidebar id="%s">\n' % id
-                smarker += zlib.decompress(des.decrypt(self.section_reader(self.first_footnote_page + i)))
+                smarker += zlib.decompress(des.decrypt(self.section_reader(self.first_sidebar_page + i)))
                 smarker += '\n</sidebar>\n'
                 r += smarker
                 sbar_ids = sbar_ids[id_len+4:]
@@ -565,10 +409,10 @@ def cleanPML(pml):
 def convertEreaderToPml(infile, name, cc, outdir):
     if not os.path.exists(outdir):
         os.makedirs(outdir)
+    bookname = os.path.splitext(os.path.basename(infile))[0]
     print " Decoding File"
     sect = Sectionizer(infile, 'PNRdPPrs')
-    er = EreaderProcessor(sect.loadSection, name, cc)
+    er = EreaderProcessor(sect, name, cc)

     if er.getNumImages() > 0:
         print " Extracting images"
@@ -591,6 +435,47 @@ def convertEreaderToPml(infile, name, cc, outdir):
     # file(os.path.join(outdir, 'bookinfo.txt'),'wb').write(bkinfo)

+
+def decryptBook(infile, outdir, name, cc, make_pmlz):
+    if make_pmlz :
+        # ignore specified outdir, use tempdir instead
+        outdir = tempfile.mkdtemp()
+    try:
+        print "Processing..."
+        convertEreaderToPml(infile, name, cc, outdir)
+        if make_pmlz :
+            import zipfile
+            import shutil
+            print " Creating PMLZ file"
+            zipname = infile[:-4] + '.pmlz'
+            myZipFile = zipfile.ZipFile(zipname,'w',zipfile.ZIP_STORED, False)
+            list = os.listdir(outdir)
+            for file in list:
+                localname = file
+                filePath = os.path.join(outdir,file)
+                if os.path.isfile(filePath):
+                    myZipFile.write(filePath, localname)
+                elif os.path.isdir(filePath):
+                    imageList = os.listdir(filePath)
+                    localimgdir = os.path.basename(filePath)
+                    for image in imageList:
+                        localname = os.path.join(localimgdir,image)
+                        imagePath = os.path.join(filePath,image)
+                        if os.path.isfile(imagePath):
+                            myZipFile.write(imagePath, localname)
+            myZipFile.close()
+            # remove temporary directory
+            shutil.rmtree(outdir, True)
+            print 'output is %s' % zipname
+        else :
+            print 'output in %s' % outdir
+        print "done"
+    except ValueError, e:
+        print "Error: %s" % e
+        return 1
+    return 0
+
+
 def usage():
     print "Converts DRMed eReader books to PML Source"
     print "Usage:"
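The PMLZ step in `decryptBook` walks the output directory one level deep and stores every file uncompressed (`zipfile.ZIP_STORED`): top-level files keep their names, and files inside image subdirectories keep a one-level relative path. A Python 3 sketch of the same walk, with a throwaway directory layout invented for the example (the original instead derives the zip name from the input file):

```python
import os
import tempfile
import zipfile

# Build a throwaway output directory: one top-level PML file plus an
# image subdirectory, mirroring the PML + images layout in the diff.
outdir = tempfile.mkdtemp()
with open(os.path.join(outdir, 'book.pml'), 'w') as f:
    f.write('dummy PML text')
imgdir = os.path.join(outdir, 'book_img')
os.makedirs(imgdir)
with open(os.path.join(imgdir, 'cover.png'), 'wb') as f:
    f.write(b'\x89PNG')

zipname = outdir + '.pmlz'
with zipfile.ZipFile(zipname, 'w', zipfile.ZIP_STORED) as z:
    for entry in os.listdir(outdir):
        path = os.path.join(outdir, entry)
        if os.path.isfile(path):
            z.write(path, entry)                     # top-level files
        elif os.path.isdir(path):
            for image in os.listdir(path):           # one level of images
                z.write(os.path.join(path, image),
                        os.path.join(entry, image))

with zipfile.ZipFile(zipname) as z:
    print(sorted(z.namelist()))
```

`ZIP_STORED` matters here: PMLZ readers expect a plain stored archive, and the second argument to `write()` controls the archive-internal name so the absolute temp path never leaks into the zip.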
@@ -605,8 +490,8 @@ def usage():
     print " It's enough to enter the last 8 digits of the credit card number"
     return


 def main(argv=None):
-    global bookname
     try:
         opts, args = getopt.getopt(sys.argv[1:], "h", ["make-pmlz"])
     except getopt.GetoptError, err:
@@ -614,79 +499,29 @@ def main(argv=None):
         usage()
         return 1
     make_pmlz = False
-    zipname = None
     for o, a in opts:
         if o == "-h":
             usage()
             return 0
         elif o == "--make-pmlz":
             make_pmlz = True
-            zipname = ''

     print "eRdr2Pml v%s. Copyright (c) 2009 The Dark Reverser" % __version__

     if len(args)!=3 and len(args)!=4:
         usage()
         return 1
-    else:
-        if len(args)==3:
-            infile, name, cc = args[0], args[1], args[2]
-            outdir = infile[:-4] + '_Source'
-        elif len(args)==4:
-            infile, outdir, name, cc = args[0], args[1], args[2], args[3]
-
-    if make_pmlz :
-        # ignore specified outdir, use tempdir instead
-        outdir = tempfile.mkdtemp()
-
-    bookname = os.path.splitext(os.path.basename(infile))[0]
-
-    try:
-        print "Processing..."
-        import time
-        start_time = time.time()
-        convertEreaderToPml(infile, name, cc, outdir)
-
-        if make_pmlz :
-            import zipfile
-            import shutil
-            print " Creating PMLZ file"
-            zipname = infile[:-4] + '.pmlz'
-            myZipFile = zipfile.ZipFile(zipname,'w',zipfile.ZIP_STORED, False)
-            list = os.listdir(outdir)
-            for file in list:
-                localname = file
-                filePath = os.path.join(outdir,file)
-                if os.path.isfile(filePath):
-                    myZipFile.write(filePath, localname)
-                elif os.path.isdir(filePath):
-                    imageList = os.listdir(filePath)
-                    localimgdir = os.path.basename(filePath)
-                    for image in imageList:
-                        localname = os.path.join(localimgdir,image)
-                        imagePath = os.path.join(filePath,image)
-                        if os.path.isfile(imagePath):
-                            myZipFile.write(imagePath, localname)
-            myZipFile.close()
-            # remove temporary directory
-            shutil.rmtree(outdir)
-
-        end_time = time.time()
-        search_time = end_time - start_time
-        print 'elapsed time: %.2f seconds' % (search_time, )
-        if make_pmlz :
-            print 'output is %s' % zipname
-        else :
-            print 'output in %s' % outdir
-        print "done"
-    except ValueError, e:
-        print "Error: %s" % e
-        return 1
-    return 0
+    if len(args)==3:
+        infile, name, cc = args[0], args[1], args[2]
+        outdir = infile[:-4] + '_Source'
+    elif len(args)==4:
+        infile, outdir, name, cc = args[0], args[1], args[2], args[3]
+
+    return decryptBook(infile, outdir, name, cc, make_pmlz)

 if __name__ == "__main__":
-    #import cProfile
-    #command = """sys.exit(main())"""
-    #cProfile.runctx( command, globals(), locals(), filename="cprofile.profile" )
-
+    sys.stdout=Unbuffered(sys.stdout)
     sys.exit(main())
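The rewritten `main()` keeps the same getopt-based command line: an optional `--make-pmlz` flag plus either 3 positional arguments (with the output directory derived from the input name) or 4 (with an explicit output directory). A Python 3 sketch of that parse; the helper name `parse_cli` and the argument values are invented for illustration:

```python
import getopt

def parse_cli(argv):
    # Same option grammar as the diff: short -h, long --make-pmlz.
    opts, args = getopt.getopt(argv, "h", ["make-pmlz"])
    make_pmlz = any(o == "--make-pmlz" for o, _ in opts)
    if len(args) == 3:
        infile, name, cc = args
        outdir = infile[:-4] + '_Source'   # strip ".pdb", append suffix
    elif len(args) == 4:
        infile, outdir, name, cc = args
    else:
        raise SystemExit("usage: erdr2pml.py [--make-pmlz] infile [outdir] name cc")
    return infile, outdir, name, cc, make_pmlz

print(parse_cli(["--make-pmlz", "book.pdb", "John Doe", "12345678"]))
```

The `infile[:-4]` slice silently assumes a four-character extension such as `.pdb`, which is exactly what the original code does too.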
90  Calibre_Plugins/eReaderPDB2PML_plugin/openssl_des.py  Normal file
@@ -0,0 +1,90 @@
+#!/usr/bin/env python
+# vim:ts=4:sw=4:softtabstop=4:smarttab:expandtab
+
+# implement just enough of des from openssl to make erdr2pml.py happy
+
+def load_libcrypto():
+    from ctypes import CDLL, POINTER, c_void_p, c_char_p, c_char, c_int, c_long, \
+        Structure, c_ulong, create_string_buffer, cast
+    from ctypes.util import find_library
+    import sys
+
+    if sys.platform.startswith('win'):
+        libcrypto = find_library('libeay32')
+    else:
+        libcrypto = find_library('crypto')
+
+    if libcrypto is None:
+        return None
+
+    libcrypto = CDLL(libcrypto)
+
+    # typedef struct DES_ks
+    #     {
+    #     union
+    #         {
+    #         DES_cblock cblock;
+    #         /* make sure things are correct size on machines with
+    #          * 8 byte longs */
+    #         DES_LONG deslong[2];
+    #         } ks[16];
+    #     } DES_key_schedule;
+
+    # just create a big enough place to hold everything
+    # it will have alignment of structure so we should be okay (16 byte aligned?)
+    class DES_KEY_SCHEDULE(Structure):
+        _fields_ = [('DES_cblock1', c_char * 16),
+                    ('DES_cblock2', c_char * 16),
+                    ('DES_cblock3', c_char * 16),
+                    ('DES_cblock4', c_char * 16),
+                    ('DES_cblock5', c_char * 16),
+                    ('DES_cblock6', c_char * 16),
+                    ('DES_cblock7', c_char * 16),
+                    ('DES_cblock8', c_char * 16),
+                    ('DES_cblock9', c_char * 16),
+                    ('DES_cblock10', c_char * 16),
+                    ('DES_cblock11', c_char * 16),
+                    ('DES_cblock12', c_char * 16),
+                    ('DES_cblock13', c_char * 16),
+                    ('DES_cblock14', c_char * 16),
+                    ('DES_cblock15', c_char * 16),
+                    ('DES_cblock16', c_char * 16)]
+
+    DES_KEY_SCHEDULE_p = POINTER(DES_KEY_SCHEDULE)
+
+    def F(restype, name, argtypes):
+        func = getattr(libcrypto, name)
+        func.restype = restype
+        func.argtypes = argtypes
+        return func
+
+    DES_set_key = F(None, 'DES_set_key',[c_char_p, DES_KEY_SCHEDULE_p])
+    DES_ecb_encrypt = F(None, 'DES_ecb_encrypt',[c_char_p, c_char_p, DES_KEY_SCHEDULE_p, c_int])
+
+    class DES(object):
+        def __init__(self, key):
+            if len(key) != 8 :
+                raise Error('DES improper key used')
+                return
+            self.key = key
+            self.keyschedule = DES_KEY_SCHEDULE()
+            DES_set_key(self.key, self.keyschedule)
+        def desdecrypt(self, data):
+            ob = create_string_buffer(len(data))
+            DES_ecb_encrypt(data, ob, self.keyschedule, 0)
+            return ob.raw
+        def decrypt(self, data):
+            if not data:
+                return ''
+            i = 0
+            result = []
+            while i < len(data):
+                block = data[i:i+8]
+                processed_block = self.desdecrypt(block)
+                result.append(processed_block)
+                i += 8
+            return ''.join(result)
+
+    return DES
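The `decrypt()` method of the new file works in ECB fashion: the input is split into 8-byte blocks and each block is decrypted independently, with no chaining between blocks. A self-contained Python 3 sketch of just that block loop, using a toy XOR transform as a stand-in for the libcrypto DES call (XOR is used only so the example runs without OpenSSL; it is not DES):

```python
def ecb_apply(data, transform_block, block_size=8):
    """Apply a per-block transform independently to each block (ECB style)."""
    result = []
    for i in range(0, len(data), block_size):
        result.append(transform_block(data[i:i + block_size]))
    return b''.join(result)

# Stand-in single-block "cipher": XOR every byte with a constant.
# Real DES lives in libcrypto; XOR just demonstrates the ECB structure.
def toy_block_cipher(block):
    return bytes(b ^ 0x5A for b in block)

ciphertext = ecb_apply(b'sixteen byte msg', toy_block_cipher)
plaintext = ecb_apply(ciphertext, toy_block_cipher)  # XOR is its own inverse
print(plaintext)
```

Because no block depends on its neighbours, the accumulate-and-join structure above is exactly the `while i < len(data)` loop in `DES.decrypt()`, just written with `range(..., block_size)`.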
@@ -1,47 +0,0 @@
-K 25
-svn:wc:ra_dav:version-url
-V 41
-/svn/!svn/ver/70200/psyco/dist/py-support
-END
-core.py
-K 25
-svn:wc:ra_dav:version-url
-V 49
-/svn/!svn/ver/70200/psyco/dist/py-support/core.py
-END
-support.py
-K 25
-svn:wc:ra_dav:version-url
-V 52
-/svn/!svn/ver/49315/psyco/dist/py-support/support.py
-END
-classes.py
-K 25
-svn:wc:ra_dav:version-url
-V 52
-/svn/!svn/ver/35003/psyco/dist/py-support/classes.py
-END
-__init__.py
-K 25
-svn:wc:ra_dav:version-url
-V 53
-/svn/!svn/ver/35003/psyco/dist/py-support/__init__.py
-END
-logger.py
-K 25
-svn:wc:ra_dav:version-url
-V 51
-/svn/!svn/ver/23284/psyco/dist/py-support/logger.py
-END
-kdictproxy.py
-K 25
-svn:wc:ra_dav:version-url
-V 55
-/svn/!svn/ver/35003/psyco/dist/py-support/kdictproxy.py
-END
-profiler.py
-K 25
-svn:wc:ra_dav:version-url
-V 53
-/svn/!svn/ver/70200/psyco/dist/py-support/profiler.py
-END
@@ -1,7 +0,0 @@
-K 10
-svn:ignore
-V 14
-*~
-*.pyc
-*.pyo
-END
@@ -1,266 +0,0 @@
-10
-
-dir
-78269
-http://codespeak.net/svn/psyco/dist/py-support
-http://codespeak.net/svn
-
-
-
-2009-12-18T16:35:35.119276Z
-70200
-arigo
-has-props
-
-
-
-
-
-
-
-
-
-
-
-
-
-fd0d7bf2-dfb6-0310-8d31-b7ecfe96aada
-
-core.py
-file
-
-
-
-
-2010-10-25T15:10:42.000000Z
-3b362177a839893c9e867880b3a7cef3
-2009-12-18T16:35:35.119276Z
-70200
-arigo
-has-props
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-8144
-
-support.py
-file
-
-
-
-
-2010-10-25T15:10:42.000000Z
-b0551e975d774f2f7f58a29ed4b6b90e
-2007-12-03T12:27:25.632574Z
-49315
-arigo
-has-props
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-6043
-
-classes.py
-file
-
-
-
-
-2010-10-25T15:10:42.000000Z
-5932ed955198d16ec17285dfb195d341
-2006-11-26T13:03:26.949973Z
-35003
-arigo
-has-props
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-1440
-
-__init__.py
-file
-
-
-
-
-2010-10-25T15:10:42.000000Z
-219582b5182dfa38a9119d059a71965f
-2006-11-26T13:03:26.949973Z
-35003
-arigo
-has-props
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-1895
-
-logger.py
-file
-
-
-
-
-2010-10-25T15:10:42.000000Z
-aa21f905df036af43082e1ea2a2561ee
-2006-02-13T15:02:51.744168Z
-23284
-arigo
-has-props
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-2678
-
-kdictproxy.py
-file
-
-
-
-
-2010-10-25T15:10:42.000000Z
-1c8611748dcee5b29848bf25be3ec473
-2006-11-26T13:03:26.949973Z
-35003
-arigo
-has-props
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-4369
-
-profiler.py
-file
-
-
-
-
-2010-10-25T15:10:42.000000Z
-858162366cbc39cd9e249e35e6f510c4
-2009-12-18T16:35:35.119276Z
-70200
-arigo
-has-props
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-11238
-
@@ -1,9 +0,0 @@
-K 13
-svn:eol-style
-V 6
-native
-K 12
-svn:keywords
-V 23
-Author Date Id Revision
-END
@@ -1,9 +0,0 @@
-K 13
-svn:eol-style
-V 6
-native
-K 12
-svn:keywords
-V 23
-Author Date Id Revision
-END
@@ -1,9 +0,0 @@
-K 13
-svn:eol-style
-V 6
-native
-K 12
-svn:keywords
-V 23
-Author Date Id Revision
-END
@@ -1,9 +0,0 @@
-K 13
-svn:eol-style
-V 6
-native
-K 12
-svn:keywords
-V 23
-Author Date Id Revision
-END
@@ -1,9 +0,0 @@
-K 13
-svn:eol-style
-V 6
-native
-K 12
-svn:keywords
-V 23
-Author Date Id Revision
-END
@@ -1,9 +0,0 @@
-K 13
-svn:eol-style
-V 6
-native
-K 12
-svn:keywords
-V 23
-Author Date Id Revision
-END
@@ -1,9 +0,0 @@
-K 13
-svn:eol-style
-V 6
-native
-K 12
-svn:keywords
-V 23
-Author Date Id Revision
-END
@@ -1,54 +0,0 @@
-###########################################################################
-#
-#  Psyco top-level file of the Psyco package.
-#   Copyright (C) 2001-2002  Armin Rigo et.al.
-
-"""Psyco -- the Python Specializing Compiler.
-
-Typical usage: add the following lines to your application's main module,
-preferably after the other imports:
-
-    try:
-        import psyco
-        psyco.full()
-    except ImportError:
-        print 'Psyco not installed, the program will just run slower'
-"""
-###########################################################################
-
-
-#
-# This module is present to make 'psyco' a package and to
-# publish the main functions and variables.
-#
-# More documentation can be found in core.py.
-#
-
-
-# Try to import the dynamic-loading _psyco and report errors
-try:
-    import _psyco
-except ImportError, e:
-    extramsg = ''
-    import sys, imp
-    try:
-        file, filename, (suffix, mode, type) = imp.find_module('_psyco', __path__)
-    except ImportError:
-        ext = [suffix for suffix, mode, type in imp.get_suffixes()
-               if type == imp.C_EXTENSION]
-        if ext:
-            extramsg = (" (cannot locate the compiled extension '_psyco%s' "
-                        "in the package path '%s')" % (ext[0], '; '.join(__path__)))
-    else:
-        extramsg = (" (check that the compiled extension '%s' is for "
-                    "the correct Python version; this is Python %s)" %
-                    (filename, sys.version.split()[0]))
-    raise ImportError, str(e) + extramsg
-
-# Publish important data by importing them in the package
-from support import __version__, error, warning, _getrealframe, _getemulframe
-from support import version_info, __version__ as hexversion
-from core import full, profile, background, runonly, stop, cannotcompile
-from core import log, bind, unbind, proxy, unproxy, dumpcodebuf
-from _psyco import setfilter
-from _psyco import compact, compacttype
@@ -1,42 +0,0 @@
-###########################################################################
-#
-#  Psyco class support module.
-#   Copyright (C) 2001-2002  Armin Rigo et.al.
-
-"""Psyco class support module.
-
-'psyco.classes.psyobj' is an alternate Psyco-optimized root for classes.
-Any class inheriting from it or using the metaclass '__metaclass__' might
-get optimized specifically for Psyco. It is equivalent to call
-psyco.bind() on the class object after its creation.
-
-Importing everything from psyco.classes in a module will import the
-'__metaclass__' name, so all classes defined after a
-
-    from psyco.classes import *
-
-will automatically use the Psyco-optimized metaclass.
-"""
-###########################################################################
-
-__all__ = ['psyobj', 'psymetaclass', '__metaclass__']
-
-
-from _psyco import compacttype
-import core
-from types import FunctionType
-
-class psymetaclass(compacttype):
-    "Psyco-optimized meta-class. Turns all methods into Psyco proxies."
-
-    def __new__(cls, name, bases, dict):
-        bindlist = dict.get('__psyco__bind__')
-        if bindlist is None:
-            bindlist = [key for key, value in dict.items()
-                        if isinstance(value, FunctionType)]
-        for attr in bindlist:
-            dict[attr] = core.proxy(dict[attr])
-        return super(psymetaclass, cls).__new__(cls, name, bases, dict)
-
-psyobj = psymetaclass("psyobj", (), {})
-__metaclass__ = psymetaclass
@@ -1,231 +0,0 @@
|
|||||||
###########################################################################
|
|
||||||
#
|
|
||||||
# Psyco main functions.
|
|
||||||
# Copyright (C) 2001-2002 Armin Rigo et.al.
|
|
||||||
|
|
||||||
"""Psyco main functions.
|
|
||||||
|
|
||||||
Here are the routines that you can use from your applications.
|
|
||||||
These are mostly interfaces to the C core, but they depend on
|
|
||||||
the Python version.
|
|
||||||
|
|
||||||
You can use these functions from the 'psyco' module instead of
|
|
||||||
'psyco.core', e.g.
|
|
||||||
|
|
||||||
import psyco
|
|
||||||
psyco.log('/tmp/psyco.log')
|
|
||||||
psyco.profile()
|
|
||||||
"""
|
|
||||||
###########################################################################
|
|
||||||
|
|
||||||
import _psyco
|
|
||||||
import types
|
|
||||||
from support import *
|
|
||||||
|
|
||||||
newfunction = types.FunctionType
|
|
||||||
newinstancemethod = types.MethodType
|
|
||||||
|
|
||||||
|
|
||||||
# Default charge profiler values
|
|
||||||
default_watermark = 0.09 # between 0.0 (0%) and 1.0 (100%)
|
|
||||||
default_halflife = 0.5 # seconds
|
|
||||||
default_pollfreq_profile = 20 # Hz
|
|
||||||
default_pollfreq_background = 100 # Hz -- a maximum for sleep's resolution
|
|
||||||
default_parentframe = 0.25 # should not be more than 0.5 (50%)
|
|
||||||
|
|
||||||
|
|
||||||
def full(memory=None, time=None, memorymax=None, timemax=None):
|
|
||||||
"""Compile as much as possible.
|
|
||||||
|
|
||||||
Typical use is for small scripts performing intensive computations
|
|
||||||
or string handling."""
|
|
||||||
import profiler
|
|
||||||
p = profiler.FullCompiler()
|
|
||||||
p.run(memory, time, memorymax, timemax)
|
|
||||||
|
|
||||||
|
|
||||||
def profile(watermark = default_watermark,
|
|
||||||
halflife = default_halflife,
|
|
||||||
pollfreq = default_pollfreq_profile,
|
|
||||||
parentframe = default_parentframe,
|
|
||||||
memory=None, time=None, memorymax=None, timemax=None):
|
|
||||||
"""Turn on profiling.
|
|
||||||
|
|
||||||
The 'watermark' parameter controls how easily running functions will
|
|
||||||
be compiled. The smaller the value, the more functions are compiled."""
|
|
||||||
import profiler
|
|
||||||
p = profiler.ActivePassiveProfiler(watermark, halflife,
|
|
||||||
pollfreq, parentframe)
|
|
||||||
p.run(memory, time, memorymax, timemax)
|
|
||||||
|
|
||||||
|
|
||||||
def background(watermark = default_watermark,
|
|
||||||
halflife = default_halflife,
|
|
||||||
pollfreq = default_pollfreq_background,
|
|
||||||
parentframe = default_parentframe,
|
|
||||||
memory=None, time=None, memorymax=None, timemax=None):
|
|
||||||
"""Turn on passive profiling.
|
|
||||||
|
|
||||||
This is a very lightweight mode in which only intensively computing
|
|
||||||
functions can be detected. The smaller the 'watermark', the more functions
|
|
||||||
are compiled."""
|
|
||||||
import profiler
|
|
||||||
p = profiler.PassiveProfiler(watermark, halflife, pollfreq, parentframe)
|
|
||||||
p.run(memory, time, memorymax, timemax)
|
|
||||||
|
|
||||||
|
|
||||||
def runonly(memory=None, time=None, memorymax=None, timemax=None):
|
|
||||||
"""Nonprofiler.
|
|
||||||
|
|
||||||
XXX check if this is useful and document."""
|
|
||||||
import profiler
|
|
||||||
p = profiler.RunOnly()
|
|
||||||
p.run(memory, time, memorymax, timemax)
|
|
||||||
|
|
||||||
|
|
||||||
def stop():
|
|
||||||
"""Turn off all automatic compilation. bind() calls remain in effect."""
|
|
||||||
import profiler
|
|
||||||
profiler.go([])
|
|
||||||
|
|
||||||
|
|
||||||
def log(logfile='', mode='w', top=10):
|
|
||||||
"""Enable logging to the given file.
|
|
||||||
|
|
||||||
If the file name is unspecified, a default name is built by appending
|
|
||||||
a 'log-psyco' extension to the main script name.
|
|
||||||
|
|
||||||
Mode is 'a' to append to a possibly existing file or 'w' to overwrite
|
|
||||||
an existing file. Note that the log file may grow quickly in 'a' mode."""
|
|
||||||
import profiler, logger
|
|
||||||
if not logfile:
|
|
||||||
import os
|
|
||||||
logfile, dummy = os.path.splitext(sys.argv[0])
|
|
||||||
if os.path.basename(logfile):
|
|
||||||
logfile += '.'
|
|
||||||
logfile += 'log-psyco'
|
|
||||||
if hasattr(_psyco, 'VERBOSE_LEVEL'):
|
|
||||||
print >> sys.stderr, 'psyco: logging to', logfile
|
|
||||||
# logger.current should be a real file object; subtle problems
|
|
||||||
# will show up if its write() and flush() methods are written
|
|
||||||
# in Python, as Psyco will invoke them while compiling.
|
|
||||||
logger.current = open(logfile, mode)
|
|
||||||
logger.print_charges = top
|
|
||||||
profiler.logger = logger
|
|
||||||
logger.writedate('Logging started')
|
|
||||||
cannotcompile(logger.psycowrite)
|
|
||||||
_psyco.statwrite(logger=logger.psycowrite)
|
|
||||||
|
|
||||||
|
|
||||||
def bind(x, rec=None):
|
|
||||||
"""Enable compilation of the given function, method, or class object.
|
|
||||||
|
|
||||||
If C is a class (or anything with a '__dict__' attribute), bind(C) will
|
|
||||||
rebind all functions and methods found in C.__dict__ (which means, for
|
|
||||||
classes, all methods defined in the class but not in its parents).
|
|
||||||
|
|
||||||
The optional second argument specifies the number of recursive
|
|
||||||
compilation levels: all functions called by func are compiled
|
|
||||||
up to the given depth of indirection."""
|
|
||||||
if isinstance(x, types.MethodType):
|
|
||||||
x = x.im_func
|
|
||||||
if isinstance(x, types.FunctionType):
|
|
||||||
if rec is None:
|
|
||||||
x.func_code = _psyco.proxycode(x)
|
|
||||||
else:
|
|
||||||
x.func_code = _psyco.proxycode(x, rec)
|
|
||||||
return
|
|
||||||
if hasattr(x, '__dict__'):
|
|
||||||
funcs = [o for o in x.__dict__.values()
|
|
||||||
if isinstance(o, types.MethodType)
|
|
||||||
or isinstance(o, types.FunctionType)]
|
|
||||||
if not funcs:
|
|
||||||
raise error, ("nothing bindable found in %s object" %
|
|
||||||
type(x).__name__)
|
|
||||||
for o in funcs:
|
|
||||||
bind(o, rec)
|
|
||||||
return
|
|
||||||
    raise TypeError, "cannot bind %s objects" % type(x).__name__


def unbind(x):
    """Reverse of bind()."""
    if isinstance(x, types.MethodType):
        x = x.im_func
    if isinstance(x, types.FunctionType):
        try:
            f = _psyco.unproxycode(x.func_code)
        except error:
            pass
        else:
            x.func_code = f.func_code
        return
    if hasattr(x, '__dict__'):
        for o in x.__dict__.values():
            if (isinstance(o, types.MethodType)
                or isinstance(o, types.FunctionType)):
                unbind(o)
        return
    raise TypeError, "cannot unbind %s objects" % type(x).__name__


def proxy(x, rec=None):
    """Return a Psyco-enabled copy of the function.

    The original function is still available for non-compiled calls.
    The optional second argument specifies the number of recursive
    compilation levels: all functions called by func are compiled
    up to the given depth of indirection."""
    if isinstance(x, types.FunctionType):
        if rec is None:
            code = _psyco.proxycode(x)
        else:
            code = _psyco.proxycode(x, rec)
        return newfunction(code, x.func_globals, x.func_name)
    if isinstance(x, types.MethodType):
        p = proxy(x.im_func, rec)
        return newinstancemethod(p, x.im_self, x.im_class)
    raise TypeError, "cannot proxy %s objects" % type(x).__name__


def unproxy(proxy):
    """Return a new copy of the original function or method behind a proxy.
    The result behaves like the original function in that calling it
    does not trigger compilation nor execution of any compiled code."""
    if isinstance(proxy, types.FunctionType):
        return _psyco.unproxycode(proxy.func_code)
    if isinstance(proxy, types.MethodType):
        f = unproxy(proxy.im_func)
        return newinstancemethod(f, proxy.im_self, proxy.im_class)
    raise TypeError, "%s objects cannot be proxies" % type(proxy).__name__


def cannotcompile(x):
    """Instruct Psyco never to compile the given function, method
    or code object."""
    if isinstance(x, types.MethodType):
        x = x.im_func
    if isinstance(x, types.FunctionType):
        x = x.func_code
    if isinstance(x, types.CodeType):
        _psyco.cannotcompile(x)
    else:
        raise TypeError, "unexpected %s object" % type(x).__name__


def dumpcodebuf():
    """Write in file psyco.dump a copy of the emitted machine code,
    provided Psyco was compiled with a non-zero CODE_DUMP.
    See py-utils/httpxam.py to examine psyco.dump."""
    if hasattr(_psyco, 'dumpcodebuf'):
        _psyco.dumpcodebuf()


###########################################################################
# Psyco variables
#   error         * the error raised by Psyco
#   warning       * the warning raised by Psyco
#   __in_psyco__  * a new built-in variable which is always zero, but which
#                   Psyco special-cases by returning 1 instead.  So
#                   __in_psyco__ can be used in a function to know if
#                   that function is being executed by Psyco or not.
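An aside on the mechanism (not part of the diff above): bind() and unbind() work by swapping a function's code object in place, so existing references to the function pick up the new behavior. The sketch below shows the same trick in Python 3, where `func_code` is spelled `__code__`; the function names are made up for illustration.

```python
def greet(name):
    return "original " + name

def compiled_greet(name):        # stand-in for a Psyco-compiled version
    return "compiled " + name

saved = greet.__code__                       # what unbind() must restore
greet.__code__ = compiled_greet.__code__     # in-place rebind, like bind()
print(greet("f"))                            # prints "compiled f"

greet.__code__ = saved                       # restore, like unbind()
print(greet("f"))                            # prints "original f"
```

The swap only works between code objects with compatible shapes (same closure layout), which is why bind() goes out of its way to build a proxy code object rather than an arbitrary replacement.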
@@ -1,133 +0,0 @@
###########################################################################
#
#  Support code for the 'psyco.compact' type.

from __future__ import generators

try:
    from UserDict import DictMixin
except ImportError:

    # backported from Python 2.3 to Python 2.2
    class DictMixin:
        # Mixin defining all dictionary methods for classes that already have
        # a minimum dictionary interface including getitem, setitem, delitem,
        # and keys. Without knowledge of the subclass constructor, the mixin
        # does not define __init__() or copy().  In addition to the four base
        # methods, progressively more efficiency comes with defining
        # __contains__(), __iter__(), and iteritems().

        # second level definitions support higher levels
        def __iter__(self):
            for k in self.keys():
                yield k
        def has_key(self, key):
            try:
                value = self[key]
            except KeyError:
                return False
            return True
        def __contains__(self, key):
            return self.has_key(key)

        # third level takes advantage of second level definitions
        def iteritems(self):
            for k in self:
                yield (k, self[k])
        def iterkeys(self):
            return self.__iter__()

        # fourth level uses definitions from lower levels
        def itervalues(self):
            for _, v in self.iteritems():
                yield v
        def values(self):
            return [v for _, v in self.iteritems()]
        def items(self):
            return list(self.iteritems())
        def clear(self):
            for key in self.keys():
                del self[key]
        def setdefault(self, key, default):
            try:
                return self[key]
            except KeyError:
                self[key] = default
            return default
        def pop(self, key, *args):
            if len(args) > 1:
                raise TypeError, "pop expected at most 2 arguments, got "\
                                 + repr(1 + len(args))
            try:
                value = self[key]
            except KeyError:
                if args:
                    return args[0]
                raise
            del self[key]
            return value
        def popitem(self):
            try:
                k, v = self.iteritems().next()
            except StopIteration:
                raise KeyError, 'container is empty'
            del self[k]
            return (k, v)
        def update(self, other):
            # Make progressively weaker assumptions about "other"
            if hasattr(other, 'iteritems'):  # iteritems saves memory and lookups
                for k, v in other.iteritems():
                    self[k] = v
            elif hasattr(other, '__iter__'):  # iter saves memory
                for k in other:
                    self[k] = other[k]
            else:
                for k in other.keys():
                    self[k] = other[k]
        def get(self, key, default=None):
            try:
                return self[key]
            except KeyError:
                return default
        def __repr__(self):
            return repr(dict(self.iteritems()))
        def __cmp__(self, other):
            if other is None:
                return 1
            if isinstance(other, DictMixin):
                other = dict(other.iteritems())
            return cmp(dict(self.iteritems()), other)
        def __len__(self):
            return len(self.keys())


###########################################################################

from _psyco import compact


class compactdictproxy(DictMixin):

    def __init__(self, ko):
        self._ko = ko    # compact object of which 'self' is the dict

    def __getitem__(self, key):
        return compact.__getslot__(self._ko, key)

    def __setitem__(self, key, value):
        compact.__setslot__(self._ko, key, value)

    def __delitem__(self, key):
        compact.__delslot__(self._ko, key)

    def keys(self):
        return compact.__members__.__get__(self._ko)

    def clear(self):
        keys = self.keys()
        keys.reverse()
        for key in keys:
            del self[key]

    def __repr__(self):
        keys = ', '.join(self.keys())
        return '<compactdictproxy object {%s}>' % (keys,)
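For context (not part of the diff): the DictMixin backport above derives the full dictionary API from four primitives (getitem, setitem, delitem, keys). Modern Python gets the same effect from `collections.abc.MutableMapping`, which compactdictproxy would subclass today. A minimal sketch, with a hypothetical class name:

```python
from collections.abc import MutableMapping

class CompactProxy(MutableMapping):
    """Mapping built from the same few primitives DictMixin requires."""
    def __init__(self):
        self._slots = {}
    def __getitem__(self, key):
        return self._slots[key]
    def __setitem__(self, key, value):
        self._slots[key] = value
    def __delitem__(self, key):
        del self._slots[key]
    def __iter__(self):
        return iter(self._slots)
    def __len__(self):
        return len(self._slots)

p = CompactProxy()
p.update({'a': 1, 'b': 2})        # update(), get(), pop() etc. come free
print(p.get('c', 0))              # prints 0
print(p.pop('a'))                 # prints 1
```

As with DictMixin, defining the primitives is enough; the mixin supplies `get`, `pop`, `setdefault`, `items`, and the rest.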
@@ -1,96 +0,0 @@
###########################################################################
#
#  Psyco logger.
#   Copyright (C) 2001-2002  Armin Rigo et.al.

"""Psyco logger.

See log() in core.py.
"""
###########################################################################


import _psyco
from time import time, localtime, strftime


current = None
print_charges = 10
dump_delay = 0.2
dump_last = 0.0

def write(s, level):
    t = time()
    f = t - int(t)
    try:
        current.write("%s.%02d %-*s%s\n" % (
            strftime("%X", localtime(int(t))),
            int(f*100.0), 63-level, s,
            "%"*level))
        current.flush()
    except (OSError, IOError):
        pass

def psycowrite(s):
    t = time()
    f = t - int(t)
    try:
        current.write("%s.%02d %-*s%s\n" % (
            strftime("%X", localtime(int(t))),
            int(f*100.0), 60, s.strip(),
            "% %"))
        current.flush()
    except (OSError, IOError):
        pass

##def writelines(lines, level=0):
##    if lines:
##        t = time()
##        f = t - int(t)
##        timedesc = strftime("%x %X", localtime(int(t)))
##        print >> current, "%s.%03d %-*s %s" % (
##            timedesc, int(f*1000),
##            50-level, lines[0],
##            "+"*level)
##        timedesc = " " * (len(timedesc)+5)
##        for line in lines[1:]:
##            print >> current, timedesc, line

def writememory():
    write("memory usage: %d+ kb" % _psyco.memory(), 1)

def dumpcharges():
    global dump_last
    if print_charges:
        t = time()
        if not (dump_last <= t < dump_last+dump_delay):
            if t <= dump_last+1.5*dump_delay:
                dump_last += dump_delay
            else:
                dump_last = t
            #write("%s: charges:" % who, 0)
            lst = _psyco.stattop(print_charges)
            if lst:
                f = t - int(t)
                lines = ["%s.%02d ______\n" % (
                    strftime("%X", localtime(int(t))),
                    int(f*100.0))]
                i = 1
                for co, charge in lst:
                    detail = co.co_filename
                    if len(detail) > 19:
                        detail = '...' + detail[-17:]
                    lines.append(" #%-3d |%4.1f %%| %-26s%20s:%d\n" %
                                 (i, charge*100.0, co.co_name, detail,
                                  co.co_firstlineno))
                    i += 1
                current.writelines(lines)
                current.flush()

def writefinalstats():
    dumpcharges()
    writememory()
    writedate("program exit")

def writedate(msg):
    write('%s, %s' % (msg, strftime("%x")), 20)
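A note on the timestamp arithmetic above (not part of the diff): write() splits the float time into whole seconds, formatted via `strftime("%X", ...)`, and a two-digit centisecond fraction. The same computation in Python 3, as a standalone helper with a made-up name:

```python
import time

def stamp(t):
    """Format a timestamp the way logger.write() does: HH:MM:SS.cc."""
    frac = t - int(t)
    return "%s.%02d" % (time.strftime("%X", time.localtime(int(t))),
                        int(frac * 100.0))

print(stamp(time.time()))
```

Truncating rather than rounding the fraction keeps the centiseconds in 00-99, so a time of x.999 seconds never rolls over into an inconsistent ".100".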
@@ -1,379 +0,0 @@
###########################################################################
#
#  Psyco profiler (Python part).
#   Copyright (C) 2001-2002  Armin Rigo et.al.

"""Psyco profiler (Python part).

The implementation of the non-time-critical parts of the profiler.
See profile() and full() in core.py for the easy interface.
"""
###########################################################################

import _psyco
from support import *
import math, time, types, atexit
now = time.time
try:
    import thread
except ImportError:
    import dummy_thread as thread


# current profiler instance
current = None

# enabled profilers, in order of priority
profilers = []

# logger module (when enabled by core.log())
logger = None

# a lock for a thread-safe go()
go_lock = thread.allocate_lock()

def go(stop=0):
    # run the highest-priority profiler in 'profilers'
    global current
    go_lock.acquire()
    try:
        prev = current
        if stop:
            del profilers[:]
        if prev:
            if profilers and profilers[0] is prev:
                return    # best profiler already running
            prev.stop()
            current = None
        for p in profilers[:]:
            if p.start():
                current = p
                if logger:  # and p is not prev:
                    logger.write("%s: starting" % p.__class__.__name__, 5)
                return
    finally:
        go_lock.release()
    # no profiler is running now
    if stop:
        if logger:
            logger.writefinalstats()
    else:
        tag2bind()

atexit.register(go, 1)


def buildfncache(globals, cache):
    if hasattr(types.IntType, '__dict__'):
        clstypes = (types.ClassType, types.TypeType)
    else:
        clstypes = types.ClassType
    for x in globals.values():
        if isinstance(x, types.MethodType):
            x = x.im_func
        if isinstance(x, types.FunctionType):
            cache[x.func_code] = x, ''
        elif isinstance(x, clstypes):
            for y in x.__dict__.values():
                if isinstance(y, types.MethodType):
                    y = y.im_func
                if isinstance(y, types.FunctionType):
                    cache[y.func_code] = y, x.__name__

# code-to-function mapping (cache)
function_cache = {}

def trytobind(co, globals, log=1):
    try:
        f, clsname = function_cache[co]
    except KeyError:
        buildfncache(globals, function_cache)
        try:
            f, clsname = function_cache[co]
        except KeyError:
            if logger:
                logger.write('warning: cannot find function %s in %s' %
                             (co.co_name, globals.get('__name__', '?')), 3)
            return  # give up
    if logger and log:
        modulename = globals.get('__name__', '?')
        if clsname:
            modulename += '.' + clsname
        logger.write('bind function: %s.%s' % (modulename, co.co_name), 1)
    f.func_code = _psyco.proxycode(f)


# the list of code objects that have been tagged
tagged_codes = []

def tag(co, globals):
    if logger:
        try:
            f, clsname = function_cache[co]
        except KeyError:
            buildfncache(globals, function_cache)
            try:
                f, clsname = function_cache[co]
            except KeyError:
                clsname = ''  # give up
        modulename = globals.get('__name__', '?')
        if clsname:
            modulename += '.' + clsname
        logger.write('tag function: %s.%s' % (modulename, co.co_name), 1)
    tagged_codes.append((co, globals))
    _psyco.turbo_frame(co)
    _psyco.turbo_code(co)

def tag2bind():
    if tagged_codes:
        if logger:
            logger.write('profiling stopped, binding %d functions' %
                         len(tagged_codes), 2)
        for co, globals in tagged_codes:
            trytobind(co, globals, 0)
        function_cache.clear()
        del tagged_codes[:]


class Profiler:
    MemoryTimerResolution = 0.103

    def run(self, memory, time, memorymax, timemax):
        self.memory = memory
        self.memorymax = memorymax
        self.time = time
        if timemax is None:
            self.endtime = None
        else:
            self.endtime = now() + timemax
        self.alarms = []
        profilers.append(self)
        go()

    def start(self):
        curmem = _psyco.memory()
        memlimits = []
        if self.memorymax is not None:
            if curmem >= self.memorymax:
                if logger:
                    logger.writememory()
                return self.limitreached('memorymax')
            memlimits.append(self.memorymax)
        if self.memory is not None:
            if self.memory <= 0:
                if logger:
                    logger.writememory()
                return self.limitreached('memory')
            memlimits.append(curmem + self.memory)
        self.memory_at_start = curmem

        curtime = now()
        timelimits = []
        if self.endtime is not None:
            if curtime >= self.endtime:
                return self.limitreached('timemax')
            timelimits.append(self.endtime - curtime)
        if self.time is not None:
            if self.time <= 0.0:
                return self.limitreached('time')
            timelimits.append(self.time)
        self.time_at_start = curtime

        try:
            self.do_start()
        except error, e:
            if logger:
                logger.write('%s: disabled by psyco.error:' % (
                    self.__class__.__name__), 4)
                logger.write('  %s' % str(e), 3)
            return 0

        if memlimits:
            self.memlimits_args = (time.sleep, (self.MemoryTimerResolution,),
                                   self.check_memory, (min(memlimits),))
            self.alarms.append(_psyco.alarm(*self.memlimits_args))
        if timelimits:
            self.alarms.append(_psyco.alarm(time.sleep, (min(timelimits),),
                                            self.time_out))
        return 1

    def stop(self):
        for alarm in self.alarms:
            alarm.stop(0)
        for alarm in self.alarms:
            alarm.stop(1)   # wait for parallel threads to stop
        del self.alarms[:]
        if self.time is not None:
            self.time -= now() - self.time_at_start
        if self.memory is not None:
            self.memory -= _psyco.memory() - self.memory_at_start

        try:
            self.do_stop()
        except error:
            return 0
        return 1

    def check_memory(self, limit):
        if _psyco.memory() < limit:
            return self.memlimits_args
        go()

    def time_out(self):
        self.time = 0.0
        go()

    def limitreached(self, limitname):
        try:
            profilers.remove(self)
        except ValueError:
            pass
        if logger:
            logger.write('%s: disabled (%s limit reached)' % (
                self.__class__.__name__, limitname), 4)
        return 0


class FullCompiler(Profiler):

    def do_start(self):
        _psyco.profiling('f')

    def do_stop(self):
        _psyco.profiling('.')


class RunOnly(Profiler):

    def do_start(self):
        _psyco.profiling('n')

    def do_stop(self):
        _psyco.profiling('.')


class ChargeProfiler(Profiler):

    def __init__(self, watermark, parentframe):
        self.watermark = watermark
        self.parent2 = parentframe * 2.0
        self.lock = thread.allocate_lock()

    def init_charges(self):
        _psyco.statwrite(watermark = self.watermark,
                         parent2 = self.parent2)

    def do_stop(self):
        _psyco.profiling('.')
        _psyco.statwrite(callback = None)


class ActiveProfiler(ChargeProfiler):

    def active_start(self):
        _psyco.profiling('p')

    def do_start(self):
        self.init_charges()
        self.active_start()
        _psyco.statwrite(callback = self.charge_callback)

    def charge_callback(self, frame, charge):
        tag(frame.f_code, frame.f_globals)


class PassiveProfiler(ChargeProfiler):

    initial_charge_unit = _psyco.statread('unit')
    reset_stats_after = 120      # half-lives (maximum 200!)
    reset_limit = initial_charge_unit * (2.0 ** reset_stats_after)

    def __init__(self, watermark, halflife, pollfreq, parentframe):
        ChargeProfiler.__init__(self, watermark, parentframe)
        self.pollfreq = pollfreq
        # self.progress is slightly more than 1.0, and computed so that
        # do_profile() will double the charge_unit every 'halflife' seconds.
        self.progress = 2.0 ** (1.0 / (halflife * pollfreq))

    def reset(self):
        _psyco.statwrite(unit = self.initial_charge_unit, callback = None)
        _psyco.statreset()
        if logger:
            logger.write("%s: resetting stats" % self.__class__.__name__, 1)

    def passive_start(self):
        self.passivealarm_args = (time.sleep, (1.0 / self.pollfreq,),
                                  self.do_profile)
        self.alarms.append(_psyco.alarm(*self.passivealarm_args))

    def do_start(self):
        tag2bind()
        self.init_charges()
        self.passive_start()

    def do_profile(self):
        _psyco.statcollect()
        if logger:
            logger.dumpcharges()
        nunit = _psyco.statread('unit') * self.progress
        if nunit > self.reset_limit:
            self.reset()
        else:
            _psyco.statwrite(unit = nunit, callback = self.charge_callback)
        return self.passivealarm_args

    def charge_callback(self, frame, charge):
        trytobind(frame.f_code, frame.f_globals)


class ActivePassiveProfiler(PassiveProfiler, ActiveProfiler):

    def do_start(self):
        self.init_charges()
        self.active_start()
        self.passive_start()

    def charge_callback(self, frame, charge):
        tag(frame.f_code, frame.f_globals)


#
#  we register our own version of sys.settrace(), sys.setprofile()
#  and thread.start_new_thread().
#

def psyco_settrace(*args, **kw):
    "This is the Psyco-aware version of sys.settrace()."
    result = original_settrace(*args, **kw)
    go()
    return result

def psyco_setprofile(*args, **kw):
    "This is the Psyco-aware version of sys.setprofile()."
    result = original_setprofile(*args, **kw)
    go()
    return result

def psyco_thread_stub(callable, args, kw):
    _psyco.statcollect()
    if kw is None:
        return callable(*args)
    else:
        return callable(*args, **kw)

def psyco_start_new_thread(callable, args, kw=None):
    "This is the Psyco-aware version of thread.start_new_thread()."
    return original_start_new_thread(psyco_thread_stub, (callable, args, kw))

original_settrace = sys.settrace
original_setprofile = sys.setprofile
original_start_new_thread = thread.start_new_thread
sys.settrace = psyco_settrace
sys.setprofile = psyco_setprofile
thread.start_new_thread = psyco_start_new_thread

# hack to patch threading._start_new_thread if the module is
# already loaded
if ('threading' in sys.modules and
    hasattr(sys.modules['threading'], '_start_new_thread')):
    sys.modules['threading']._start_new_thread = psyco_start_new_thread
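A note on PassiveProfiler's decay arithmetic (not part of the diff): each poll multiplies the charge unit by `progress = 2 ** (1 / (halflife * pollfreq))`, so after `halflife` seconds (that is, `halflife * pollfreq` polls) the unit has exactly doubled, halving the relative weight of older charges. The sketch below verifies this with example values, not Psyco's defaults:

```python
# Illustrative values; pollfreq = polls per second, halflife in seconds.
halflife, pollfreq = 0.5, 20

# Per-poll growth factor, as computed in PassiveProfiler.__init__().
progress = 2.0 ** (1.0 / (halflife * pollfreq))

unit = 1.0
for _ in range(int(halflife * pollfreq)):   # one half-life worth of polls
    unit *= progress

print(unit)   # 2.0, up to float rounding
```

This is why `reset_limit` is expressed in half-lives: once the unit has doubled `reset_stats_after` times, the stats are reset to avoid overflow.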
@@ -1,191 +0,0 @@
###########################################################################
#
#  Psyco general support module.
#   Copyright (C) 2001-2002  Armin Rigo et.al.

"""Psyco general support module.

For internal use.
"""
###########################################################################

import sys, _psyco, __builtin__

error = _psyco.error
class warning(Warning):
    pass

_psyco.NoLocalsWarning = warning

def warn(msg):
    from warnings import warn
    warn(msg, warning, stacklevel=2)

#
# Version checks
#
__version__ = 0x010600f0
if _psyco.PSYVER != __version__:
    raise error, "version mismatch between Psyco parts, reinstall it"

version_info = (__version__ >> 24,
                (__version__ >> 16) & 0xff,
                (__version__ >> 8) & 0xff,
                {0xa0: 'alpha',
                 0xb0: 'beta',
                 0xc0: 'candidate',
                 0xf0: 'final'}[__version__ & 0xf0],
                __version__ & 0xf)


VERSION_LIMITS = [0x02020200,   # 2.2.2
                  0x02030000,   # 2.3
                  0x02040000]   # 2.4

if ([v for v in VERSION_LIMITS if v <= sys.hexversion] !=
    [v for v in VERSION_LIMITS if v <= _psyco.PYVER  ]):
    if sys.hexversion < VERSION_LIMITS[0]:
        warn("Psyco requires Python version 2.2.2 or later")
    else:
        warn("Psyco version does not match Python version. "
             "Psyco must be updated or recompiled")


if hasattr(_psyco, 'ALL_CHECKS') and hasattr(_psyco, 'VERBOSE_LEVEL'):
    print >> sys.stderr, ('psyco: running in debugging mode on %s' %
                          _psyco.PROCESSOR)


###########################################################################
# sys._getframe() gives strange results on a mixed Psyco- and Python-style
# stack frame.  Psyco provides a replacement that partially emulates Python
# frames from Psyco frames.  The new sys._getframe() may return objects of
# a custom "Psyco frame" type, which is a subtype of the normal frame type.
#
# The same problems require some other built-in functions to be replaced
# as well.  Note that the local variables are not available in any
# dictionary with Psyco.


class Frame:
    pass


class PythonFrame(Frame):

    def __init__(self, frame):
        self.__dict__.update({
            '_frame': frame,
            })

    def __getattr__(self, attr):
        if attr == 'f_back':
            try:
                result = embedframe(_psyco.getframe(self._frame))
            except ValueError:
                result = None
            except error:
                warn("f_back is skipping dead Psyco frames")
                result = self._frame.f_back
            self.__dict__['f_back'] = result
            return result
        else:
            return getattr(self._frame, attr)

    def __setattr__(self, attr, value):
        setattr(self._frame, attr, value)

    def __delattr__(self, attr):
        delattr(self._frame, attr)


class PsycoFrame(Frame):

    def __init__(self, tag):
        self.__dict__.update({
            '_tag'     : tag,
            'f_code'   : tag[0],
            'f_globals': tag[1],
            })

    def __getattr__(self, attr):
        if attr == 'f_back':
            try:
                result = embedframe(_psyco.getframe(self._tag))
            except ValueError:
                result = None
        elif attr == 'f_lineno':
            result = self.f_code.co_firstlineno  # better than nothing
        elif attr == 'f_builtins':
            result = self.f_globals['__builtins__']
        elif attr == 'f_restricted':
            result = self.f_builtins is not __builtins__
        elif attr == 'f_locals':
            raise AttributeError, ("local variables of functions run by Psyco "
                                   "cannot be accessed in any way, sorry")
        else:
            raise AttributeError, ("emulated Psyco frames have "
                                   "no '%s' attribute" % attr)
        self.__dict__[attr] = result
        return result

    def __setattr__(self, attr, value):
        raise AttributeError, "Psyco frame objects are read-only"

    def __delattr__(self, attr):
        if attr == 'f_trace':
            # for bdb which relies on CPython frames exhibiting a slightly
            # buggy behavior: you can 'del f.f_trace' as often as you like
            # even without having set it previously.
            return
        raise AttributeError, "Psyco frame objects are read-only"


def embedframe(result):
    if type(result) is type(()):
        return PsycoFrame(result)
    else:
        return PythonFrame(result)

def _getframe(depth=0):
    """Return a frame object from the call stack.  This is a replacement for
    sys._getframe() which is aware of Psyco frames.

    The returned objects are instances of either PythonFrame or PsycoFrame
    instead of being real Python-level frame objects, so that they can emulate
    the common attributes of frame objects.

    The original sys._getframe() ignoring Psyco frames altogether is stored in
    psyco._getrealframe().  See also psyco._getemulframe()."""
    # 'depth+1' to account for this _getframe() Python function
    return embedframe(_psyco.getframe(depth+1))

def _getemulframe(depth=0):
    """As _getframe(), but the returned objects are real Python frame objects
    emulating Psyco frames.  Some of their attributes can be wrong or missing,
    however."""
    # 'depth+1' to account for this _getemulframe() Python function
    return _psyco.getframe(depth+1, 1)

def patch(name, module=__builtin__):
    f = getattr(_psyco, name)
    org = getattr(module, name)
    if org is not f:
        setattr(module, name, f)
        setattr(_psyco, 'original_' + name, org)

_getrealframe = sys._getframe
sys._getframe = _getframe
patch('globals')
patch('eval')
patch('execfile')
patch('locals')
patch('vars')
patch('dir')
patch('input')
_psyco.original_raw_input = raw_input
__builtin__.__in_psyco__ = 0==1   # False

if hasattr(_psyco, 'compact'):
    import kdictproxy
    _psyco.compactdictproxy = kdictproxy.compactdictproxy
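For reference (not part of the diff): the `version_info` tuple above unpacks a packed hex word of the form `0xMMmmppLS` (major, minor, patch, release level, serial), the same scheme CPython uses for `sys.hexversion`. A standalone decoder with a made-up name:

```python
def decode_hexversion(v):
    """Unpack a Psyco-style hex version word into a version_info tuple."""
    levels = {0xa0: 'alpha', 0xb0: 'beta', 0xc0: 'candidate', 0xf0: 'final'}
    return (v >> 24,            # major
            (v >> 16) & 0xff,   # minor
            (v >> 8) & 0xff,    # patch
            levels[v & 0xf0],   # release level
            v & 0xf)            # serial

print(decode_hexversion(0x010600f0))   # prints (1, 6, 0, 'final', 0)
```

So `__version__ = 0x010600f0` is Psyco 1.6.0 final, and the `VERSION_LIMITS` comparison against `sys.hexversion` uses the same encoding.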
@@ -1,54 +0,0 @@
###########################################################################
#
#  Psyco top-level file of the Psyco package.
#   Copyright (C) 2001-2002  Armin Rigo et.al.

"""Psyco -- the Python Specializing Compiler.

Typical usage: add the following lines to your application's main module,
preferably after the other imports:

try:
    import psyco
    psyco.full()
except ImportError:
    print 'Psyco not installed, the program will just run slower'
"""
###########################################################################


#
# This module is present to make 'psyco' a package and to
# publish the main functions and variables.
#
# More documentation can be found in core.py.
#


# Try to import the dynamic-loading _psyco and report errors
try:
    import _psyco
except ImportError, e:
    extramsg = ''
    import sys, imp
    try:
        file, filename, (suffix, mode, type) = imp.find_module('_psyco', __path__)
    except ImportError:
        ext = [suffix for suffix, mode, type in imp.get_suffixes()
               if type == imp.C_EXTENSION]
        if ext:
            extramsg = (" (cannot locate the compiled extension '_psyco%s' "
                        "in the package path '%s')" % (ext[0], '; '.join(__path__)))
    else:
        extramsg = (" (check that the compiled extension '%s' is for "
                    "the correct Python version; this is Python %s)" %
                    (filename, sys.version.split()[0]))
    raise ImportError, str(e) + extramsg

# Publish important data by importing them in the package
from support import __version__, error, warning, _getrealframe, _getemulframe
from support import version_info, __version__ as hexversion
from core import full, profile, background, runonly, stop, cannotcompile
from core import log, bind, unbind, proxy, unproxy, dumpcodebuf
from _psyco import setfilter
from _psyco import compact, compacttype
@@ -1,42 +0,0 @@
###########################################################################
#
# Psyco class support module.
# Copyright (C) 2001-2002 Armin Rigo et.al.

"""Psyco class support module.

'psyco.classes.psyobj' is an alternate Psyco-optimized root for classes.
Any class inheriting from it or using the metaclass '__metaclass__' might
get optimized specifically for Psyco. It is equivalent to calling
psyco.bind() on the class object after its creation.

Importing everything from psyco.classes in a module will import the
'__metaclass__' name, so all classes defined after a

    from psyco.classes import *

will automatically use the Psyco-optimized metaclass.
"""
###########################################################################

__all__ = ['psyobj', 'psymetaclass', '__metaclass__']


from _psyco import compacttype
import core
from types import FunctionType

class psymetaclass(compacttype):
    "Psyco-optimized meta-class. Turns all methods into Psyco proxies."

    def __new__(cls, name, bases, dict):
        bindlist = dict.get('__psyco__bind__')
        if bindlist is None:
            bindlist = [key for key, value in dict.items()
                        if isinstance(value, FunctionType)]
        for attr in bindlist:
            dict[attr] = core.proxy(dict[attr])
        return super(psymetaclass, cls).__new__(cls, name, bases, dict)

psyobj = psymetaclass("psyobj", (), {})
__metaclass__ = psymetaclass
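`psymetaclass` rewrites every plain function in the class namespace through `core.proxy()` before the class object is created. The same pattern in Python 3 metaclass syntax, with a stand-in `proxy` that merely wraps the function instead of compiling it (all names in this sketch are illustrative, not part of Psyco):

```python
import functools
from types import FunctionType

def proxy(func):
    # Stand-in for core.proxy(): wrap the function, preserving its metadata.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    wrapper.proxied = True
    return wrapper

class ProxyMeta(type):
    """Rewrite plain functions in the class dict, like psymetaclass does."""
    def __new__(mcls, name, bases, namespace):
        bindlist = namespace.get('__psyco__bind__')
        if bindlist is None:
            bindlist = [key for key, value in namespace.items()
                        if isinstance(value, FunctionType)]
        for attr in bindlist:
            namespace[attr] = proxy(namespace[attr])
        return super().__new__(mcls, name, bases, namespace)

class Point(metaclass=ProxyMeta):
    def __init__(self, x):
        self.x = x
    def double(self):
        return 2 * self.x

print(Point(21).double())     # 42 -- the method still works through the proxy
print(Point.double.proxied)   # True -- the metaclass rewrote it at class creation
```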
@@ -1,231 +0,0 @@
###########################################################################
#
# Psyco main functions.
# Copyright (C) 2001-2002 Armin Rigo et.al.

"""Psyco main functions.

Here are the routines that you can use from your applications.
These are mostly interfaces to the C core, but they depend on
the Python version.

You can use these functions from the 'psyco' module instead of
'psyco.core', e.g.

    import psyco
    psyco.log('/tmp/psyco.log')
    psyco.profile()
"""
###########################################################################

import _psyco
import types
from support import *

newfunction = types.FunctionType
newinstancemethod = types.MethodType


# Default charge profiler values
default_watermark = 0.09            # between 0.0 (0%) and 1.0 (100%)
default_halflife = 0.5              # seconds
default_pollfreq_profile = 20       # Hz
default_pollfreq_background = 100   # Hz -- a maximum for sleep's resolution
default_parentframe = 0.25          # should not be more than 0.5 (50%)


def full(memory=None, time=None, memorymax=None, timemax=None):
    """Compile as much as possible.

    Typical use is for small scripts performing intensive computations
    or string handling."""
    import profiler
    p = profiler.FullCompiler()
    p.run(memory, time, memorymax, timemax)


def profile(watermark = default_watermark,
            halflife = default_halflife,
            pollfreq = default_pollfreq_profile,
            parentframe = default_parentframe,
            memory=None, time=None, memorymax=None, timemax=None):
    """Turn on profiling.

    The 'watermark' parameter controls how easily running functions will
    be compiled. The smaller the value, the more functions are compiled."""
    import profiler
    p = profiler.ActivePassiveProfiler(watermark, halflife,
                                       pollfreq, parentframe)
    p.run(memory, time, memorymax, timemax)


def background(watermark = default_watermark,
               halflife = default_halflife,
               pollfreq = default_pollfreq_background,
               parentframe = default_parentframe,
               memory=None, time=None, memorymax=None, timemax=None):
    """Turn on passive profiling.

    This is a very lightweight mode in which only intensively computing
    functions can be detected. The smaller the 'watermark', the more functions
    are compiled."""
    import profiler
    p = profiler.PassiveProfiler(watermark, halflife, pollfreq, parentframe)
    p.run(memory, time, memorymax, timemax)


def runonly(memory=None, time=None, memorymax=None, timemax=None):
    """Nonprofiler.

    XXX check if this is useful and document."""
    import profiler
    p = profiler.RunOnly()
    p.run(memory, time, memorymax, timemax)


def stop():
    """Turn off all automatic compilation. bind() calls remain in effect."""
    import profiler
    profiler.go([])


def log(logfile='', mode='w', top=10):
    """Enable logging to the given file.

    If the file name is unspecified, a default name is built by appending
    a 'log-psyco' extension to the main script name.

    Mode is 'a' to append to a possibly existing file or 'w' to overwrite
    an existing file. Note that the log file may grow quickly in 'a' mode."""
    import profiler, logger
    if not logfile:
        import os
        logfile, dummy = os.path.splitext(sys.argv[0])
        if os.path.basename(logfile):
            logfile += '.'
        logfile += 'log-psyco'
    if hasattr(_psyco, 'VERBOSE_LEVEL'):
        print >> sys.stderr, 'psyco: logging to', logfile
    # logger.current should be a real file object; subtle problems
    # will show up if its write() and flush() methods are written
    # in Python, as Psyco will invoke them while compiling.
    logger.current = open(logfile, mode)
    logger.print_charges = top
    profiler.logger = logger
    logger.writedate('Logging started')
    cannotcompile(logger.psycowrite)
    _psyco.statwrite(logger=logger.psycowrite)


def bind(x, rec=None):
    """Enable compilation of the given function, method, or class object.

    If C is a class (or anything with a '__dict__' attribute), bind(C) will
    rebind all functions and methods found in C.__dict__ (which means, for
    classes, all methods defined in the class but not in its parents).

    The optional second argument specifies the number of recursive
    compilation levels: all functions called by func are compiled
    up to the given depth of indirection."""
    if isinstance(x, types.MethodType):
        x = x.im_func
    if isinstance(x, types.FunctionType):
        if rec is None:
            x.func_code = _psyco.proxycode(x)
        else:
            x.func_code = _psyco.proxycode(x, rec)
        return
    if hasattr(x, '__dict__'):
        funcs = [o for o in x.__dict__.values()
                 if isinstance(o, types.MethodType)
                 or isinstance(o, types.FunctionType)]
        if not funcs:
            raise error, ("nothing bindable found in %s object" %
                          type(x).__name__)
        for o in funcs:
            bind(o, rec)
        return
    raise TypeError, "cannot bind %s objects" % type(x).__name__


def unbind(x):
    """Reverse of bind()."""
    if isinstance(x, types.MethodType):
        x = x.im_func
    if isinstance(x, types.FunctionType):
        try:
            f = _psyco.unproxycode(x.func_code)
        except error:
            pass
        else:
            x.func_code = f.func_code
        return
    if hasattr(x, '__dict__'):
        for o in x.__dict__.values():
            if (isinstance(o, types.MethodType)
                or isinstance(o, types.FunctionType)):
                unbind(o)
        return
    raise TypeError, "cannot unbind %s objects" % type(x).__name__


def proxy(x, rec=None):
    """Return a Psyco-enabled copy of the function.

    The original function is still available for non-compiled calls.
    The optional second argument specifies the number of recursive
    compilation levels: all functions called by func are compiled
    up to the given depth of indirection."""
    if isinstance(x, types.FunctionType):
        if rec is None:
            code = _psyco.proxycode(x)
        else:
            code = _psyco.proxycode(x, rec)
        return newfunction(code, x.func_globals, x.func_name)
    if isinstance(x, types.MethodType):
        p = proxy(x.im_func, rec)
        return newinstancemethod(p, x.im_self, x.im_class)
    raise TypeError, "cannot proxy %s objects" % type(x).__name__


def unproxy(proxy):
    """Return a new copy of the original function or method behind a proxy.
    The result behaves like the original function in that calling it
    does not trigger compilation nor execution of any compiled code."""
    if isinstance(proxy, types.FunctionType):
        return _psyco.unproxycode(proxy.func_code)
    if isinstance(proxy, types.MethodType):
        f = unproxy(proxy.im_func)
        return newinstancemethod(f, proxy.im_self, proxy.im_class)
    raise TypeError, "%s objects cannot be proxies" % type(proxy).__name__


def cannotcompile(x):
    """Instruct Psyco never to compile the given function, method
    or code object."""
    if isinstance(x, types.MethodType):
        x = x.im_func
    if isinstance(x, types.FunctionType):
        x = x.func_code
    if isinstance(x, types.CodeType):
        _psyco.cannotcompile(x)
    else:
        raise TypeError, "unexpected %s object" % type(x).__name__


def dumpcodebuf():
    """Write in file psyco.dump a copy of the emitted machine code,
    provided Psyco was compiled with a non-zero CODE_DUMP.
    See py-utils/httpxam.py to examine psyco.dump."""
    if hasattr(_psyco, 'dumpcodebuf'):
        _psyco.dumpcodebuf()


###########################################################################
# Psyco variables
#   error          * the error raised by Psyco
#   warning        * the warning raised by Psyco
#   __in_psyco__   * a new built-in variable which is always zero, but which
#                      Psyco special-cases by returning 1 instead. So
#                      __in_psyco__ can be used in a function to know if
#                      that function is being executed by Psyco or not.
@@ -1,133 +0,0 @@
###########################################################################
#
# Support code for the 'psyco.compact' type.

from __future__ import generators

try:
    from UserDict import DictMixin
except ImportError:

    # backported from Python 2.3 to Python 2.2
    class DictMixin:
        # Mixin defining all dictionary methods for classes that already have
        # a minimum dictionary interface including getitem, setitem, delitem,
        # and keys. Without knowledge of the subclass constructor, the mixin
        # does not define __init__() or copy(). In addition to the four base
        # methods, progressively more efficiency comes with defining
        # __contains__(), __iter__(), and iteritems().

        # second level definitions support higher levels
        def __iter__(self):
            for k in self.keys():
                yield k
        def has_key(self, key):
            try:
                value = self[key]
            except KeyError:
                return False
            return True
        def __contains__(self, key):
            return self.has_key(key)

        # third level takes advantage of second level definitions
        def iteritems(self):
            for k in self:
                yield (k, self[k])
        def iterkeys(self):
            return self.__iter__()

        # fourth level uses definitions from lower levels
        def itervalues(self):
            for _, v in self.iteritems():
                yield v
        def values(self):
            return [v for _, v in self.iteritems()]
        def items(self):
            return list(self.iteritems())
        def clear(self):
            for key in self.keys():
                del self[key]
        def setdefault(self, key, default):
            try:
                return self[key]
            except KeyError:
                self[key] = default
            return default
        def pop(self, key, *args):
            if len(args) > 1:
                raise TypeError, "pop expected at most 2 arguments, got "\
                                 + repr(1 + len(args))
            try:
                value = self[key]
            except KeyError:
                if args:
                    return args[0]
                raise
            del self[key]
            return value
        def popitem(self):
            try:
                k, v = self.iteritems().next()
            except StopIteration:
                raise KeyError, 'container is empty'
            del self[k]
            return (k, v)
        def update(self, other):
            # Make progressively weaker assumptions about "other"
            if hasattr(other, 'iteritems'):   # iteritems saves memory and lookups
                for k, v in other.iteritems():
                    self[k] = v
            elif hasattr(other, '__iter__'):  # iter saves memory
                for k in other:
                    self[k] = other[k]
            else:
                for k in other.keys():
                    self[k] = other[k]
        def get(self, key, default=None):
            try:
                return self[key]
            except KeyError:
                return default
        def __repr__(self):
            return repr(dict(self.iteritems()))
        def __cmp__(self, other):
            if other is None:
                return 1
            if isinstance(other, DictMixin):
                other = dict(other.iteritems())
            return cmp(dict(self.iteritems()), other)
        def __len__(self):
            return len(self.keys())

###########################################################################

from _psyco import compact


class compactdictproxy(DictMixin):

    def __init__(self, ko):
        self._ko = ko    # compact object of which 'self' is the dict

    def __getitem__(self, key):
        return compact.__getslot__(self._ko, key)

    def __setitem__(self, key, value):
        compact.__setslot__(self._ko, key, value)

    def __delitem__(self, key):
        compact.__delslot__(self._ko, key)

    def keys(self):
        return compact.__members__.__get__(self._ko)

    def clear(self):
        keys = self.keys()
        keys.reverse()
        for key in keys:
            del self[key]

    def __repr__(self):
        keys = ', '.join(self.keys())
        return '<compactdictproxy object {%s}>' % (keys,)
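The `DictMixin` backport above derives the full mapping API from just `__getitem__`, `__setitem__`, `__delitem__`, and `keys()`. In Python 3 the same layering lives in `collections.abc.MutableMapping`; a minimal sketch of the idea (the `SlotDict` name and its dict-backed storage are ours, standing in for the `compact` slots):

```python
from collections.abc import MutableMapping

class SlotDict(MutableMapping):
    """Full mapping behaviour from a few primitives, DictMixin-style."""
    def __init__(self):
        self._data = {}
    def __getitem__(self, key):
        return self._data[key]
    def __setitem__(self, key, value):
        self._data[key] = value
    def __delitem__(self, key):
        del self._data[key]
    def __iter__(self):
        return iter(self._data)
    def __len__(self):
        return len(self._data)

d = SlotDict()
d['a'] = 1
d.update({'b': 2})       # update() comes free from the mixin
print(d.get('c', 0))     # so do get(), pop(), setdefault(), ...
print(sorted(d.items()))
```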
@@ -1,96 +0,0 @@
###########################################################################
#
# Psyco logger.
# Copyright (C) 2001-2002 Armin Rigo et.al.

"""Psyco logger.

See log() in core.py.
"""
###########################################################################


import _psyco
from time import time, localtime, strftime


current = None
print_charges = 10
dump_delay = 0.2
dump_last = 0.0

def write(s, level):
    t = time()
    f = t-int(t)
    try:
        current.write("%s.%02d  %-*s%s\n" % (
            strftime("%X", localtime(int(t))),
            int(f*100.0), 63-level, s,
            "%"*level))
        current.flush()
    except (OSError, IOError):
        pass

def psycowrite(s):
    t = time()
    f = t-int(t)
    try:
        current.write("%s.%02d  %-*s%s\n" % (
            strftime("%X", localtime(int(t))),
            int(f*100.0), 60, s.strip(),
            "% %"))
        current.flush()
    except (OSError, IOError):
        pass

##def writelines(lines, level=0):
##    if lines:
##        t = time()
##        f = t-int(t)
##        timedesc = strftime("%x %X", localtime(int(t)))
##        print >> current, "%s.%03d  %-*s %s" % (
##            timedesc, int(f*1000),
##            50-level, lines[0],
##            "+"*level)
##        timedesc = " " * (len(timedesc)+5)
##        for line in lines[1:]:
##            print >> current, timedesc, line

def writememory():
    write("memory usage: %d+ kb" % _psyco.memory(), 1)

def dumpcharges():
    global dump_last
    if print_charges:
        t = time()
        if not (dump_last <= t < dump_last+dump_delay):
            if t <= dump_last+1.5*dump_delay:
                dump_last += dump_delay
            else:
                dump_last = t
            #write("%s: charges:" % who, 0)
            lst = _psyco.stattop(print_charges)
            if lst:
                f = t-int(t)
                lines = ["%s.%02d  ______\n" % (
                    strftime("%X", localtime(int(t))),
                    int(f*100.0))]
                i = 1
                for co, charge in lst:
                    detail = co.co_filename
                    if len(detail) > 19:
                        detail = '...' + detail[-17:]
                    lines.append("  #%-3d |%4.1f %%|  %-26s%20s:%d\n" %
                                 (i, charge*100.0, co.co_name, detail,
                                  co.co_firstlineno))
                    i += 1
                current.writelines(lines)
                current.flush()

def writefinalstats():
    dumpcharges()
    writememory()
    writedate("program exit")

def writedate(msg):
    write('%s, %s' % (msg, strftime("%x")), 20)
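`dumpcharges()` rate-limits its output with `dump_last`/`dump_delay`: a dump happens only when the current time falls outside the window `[dump_last, dump_last + dump_delay)`, and when the clock has only slightly overrun the window it is advanced by exactly one delay to keep a regular cadence. The same throttle, isolated as a small class with an injectable clock for clarity (our framing, not Psyco's):

```python
class Throttle:
    """Allow at most one event per 'delay' seconds, like logger.dumpcharges()."""
    def __init__(self, delay=0.2):
        self.delay = delay
        self.last = 0.0

    def ready(self, t):
        if self.last <= t < self.last + self.delay:
            return False                 # still inside the current window
        if t <= self.last + 1.5 * self.delay:
            self.last += self.delay      # slightly late: keep a regular cadence
        else:
            self.last = t                # long gap: restart the window at 'now'
        return True

th = Throttle(delay=0.2)
print(th.ready(10.0))    # first call fires; window starts at 10.0
print(th.ready(10.1))    # inside [10.0, 10.2): suppressed
print(th.ready(10.25))   # slightly late: fires, window advances by one delay
```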
@@ -1,379 +0,0 @@
###########################################################################
#
# Psyco profiler (Python part).
# Copyright (C) 2001-2002 Armin Rigo et.al.

"""Psyco profiler (Python part).

The implementation of the non-time-critical parts of the profiler.
See profile() and full() in core.py for the easy interface.
"""
###########################################################################

import _psyco
from support import *
import math, time, types, atexit
now = time.time
try:
    import thread
except ImportError:
    import dummy_thread as thread


# current profiler instance
current = None

# enabled profilers, in order of priority
profilers = []

# logger module (when enabled by core.log())
logger = None

# a lock for a thread-safe go()
go_lock = thread.allocate_lock()

def go(stop=0):
    # run the highest-priority profiler in 'profilers'
    global current
    go_lock.acquire()
    try:
        prev = current
        if stop:
            del profilers[:]
        if prev:
            if profilers and profilers[0] is prev:
                return    # best profiler already running
            prev.stop()
            current = None
        for p in profilers[:]:
            if p.start():
                current = p
                if logger:  # and p is not prev:
                    logger.write("%s: starting" % p.__class__.__name__, 5)
                return
    finally:
        go_lock.release()
    # no profiler is running now
    if stop:
        if logger:
            logger.writefinalstats()
    else:
        tag2bind()

atexit.register(go, 1)


def buildfncache(globals, cache):
    if hasattr(types.IntType, '__dict__'):
        clstypes = (types.ClassType, types.TypeType)
    else:
        clstypes = types.ClassType
    for x in globals.values():
        if isinstance(x, types.MethodType):
            x = x.im_func
        if isinstance(x, types.FunctionType):
            cache[x.func_code] = x, ''
        elif isinstance(x, clstypes):
            for y in x.__dict__.values():
                if isinstance(y, types.MethodType):
                    y = y.im_func
                if isinstance(y, types.FunctionType):
                    cache[y.func_code] = y, x.__name__

# code-to-function mapping (cache)
function_cache = {}

def trytobind(co, globals, log=1):
    try:
        f, clsname = function_cache[co]
    except KeyError:
        buildfncache(globals, function_cache)
        try:
            f, clsname = function_cache[co]
        except KeyError:
            if logger:
                logger.write('warning: cannot find function %s in %s' %
                             (co.co_name, globals.get('__name__', '?')), 3)
            return  # give up
    if logger and log:
        modulename = globals.get('__name__', '?')
        if clsname:
            modulename += '.' + clsname
        logger.write('bind function: %s.%s' % (modulename, co.co_name), 1)
    f.func_code = _psyco.proxycode(f)


# the list of code objects that have been tagged
tagged_codes = []

def tag(co, globals):
    if logger:
        try:
            f, clsname = function_cache[co]
        except KeyError:
            buildfncache(globals, function_cache)
            try:
                f, clsname = function_cache[co]
            except KeyError:
                clsname = ''  # give up
        modulename = globals.get('__name__', '?')
        if clsname:
            modulename += '.' + clsname
        logger.write('tag function: %s.%s' % (modulename, co.co_name), 1)
    tagged_codes.append((co, globals))
    _psyco.turbo_frame(co)
    _psyco.turbo_code(co)

def tag2bind():
    if tagged_codes:
        if logger:
            logger.write('profiling stopped, binding %d functions' %
                         len(tagged_codes), 2)
        for co, globals in tagged_codes:
            trytobind(co, globals, 0)
        function_cache.clear()
        del tagged_codes[:]


class Profiler:
    MemoryTimerResolution = 0.103

    def run(self, memory, time, memorymax, timemax):
        self.memory = memory
        self.memorymax = memorymax
        self.time = time
        if timemax is None:
            self.endtime = None
        else:
            self.endtime = now() + timemax
        self.alarms = []
        profilers.append(self)
        go()

    def start(self):
        curmem = _psyco.memory()
        memlimits = []
        if self.memorymax is not None:
            if curmem >= self.memorymax:
                if logger:
                    logger.writememory()
                return self.limitreached('memorymax')
            memlimits.append(self.memorymax)
        if self.memory is not None:
            if self.memory <= 0:
                if logger:
                    logger.writememory()
                return self.limitreached('memory')
            memlimits.append(curmem + self.memory)
            self.memory_at_start = curmem

        curtime = now()
        timelimits = []
        if self.endtime is not None:
            if curtime >= self.endtime:
                return self.limitreached('timemax')
            timelimits.append(self.endtime - curtime)
        if self.time is not None:
            if self.time <= 0.0:
                return self.limitreached('time')
            timelimits.append(self.time)
            self.time_at_start = curtime

        try:
            self.do_start()
        except error, e:
            if logger:
                logger.write('%s: disabled by psyco.error:' % (
                    self.__class__.__name__), 4)
                logger.write('    %s' % str(e), 3)
            return 0

        if memlimits:
            self.memlimits_args = (time.sleep, (self.MemoryTimerResolution,),
                                   self.check_memory, (min(memlimits),))
            self.alarms.append(_psyco.alarm(*self.memlimits_args))
        if timelimits:
            self.alarms.append(_psyco.alarm(time.sleep, (min(timelimits),),
                                            self.time_out))
        return 1

    def stop(self):
        for alarm in self.alarms:
            alarm.stop(0)
        for alarm in self.alarms:
            alarm.stop(1)   # wait for parallel threads to stop
        del self.alarms[:]
        if self.time is not None:
            self.time -= now() - self.time_at_start
        if self.memory is not None:
            self.memory -= _psyco.memory() - self.memory_at_start

        try:
            self.do_stop()
        except error:
            return 0
        return 1

    def check_memory(self, limit):
        if _psyco.memory() < limit:
            return self.memlimits_args
        go()

    def time_out(self):
        self.time = 0.0
        go()

    def limitreached(self, limitname):
        try:
            profilers.remove(self)
        except ValueError:
            pass
        if logger:
            logger.write('%s: disabled (%s limit reached)' % (
                self.__class__.__name__, limitname), 4)
        return 0


class FullCompiler(Profiler):

    def do_start(self):
        _psyco.profiling('f')

    def do_stop(self):
        _psyco.profiling('.')


class RunOnly(Profiler):

    def do_start(self):
        _psyco.profiling('n')

    def do_stop(self):
        _psyco.profiling('.')


class ChargeProfiler(Profiler):

    def __init__(self, watermark, parentframe):
        self.watermark = watermark
        self.parent2 = parentframe * 2.0
        self.lock = thread.allocate_lock()

    def init_charges(self):
        _psyco.statwrite(watermark = self.watermark,
                         parent2 = self.parent2)

    def do_stop(self):
        _psyco.profiling('.')
        _psyco.statwrite(callback = None)


class ActiveProfiler(ChargeProfiler):

    def active_start(self):
        _psyco.profiling('p')

    def do_start(self):
        self.init_charges()
        self.active_start()
        _psyco.statwrite(callback = self.charge_callback)

    def charge_callback(self, frame, charge):
        tag(frame.f_code, frame.f_globals)


class PassiveProfiler(ChargeProfiler):

    initial_charge_unit = _psyco.statread('unit')
    reset_stats_after = 120   # half-lives (maximum 200!)
    reset_limit = initial_charge_unit * (2.0 ** reset_stats_after)

    def __init__(self, watermark, halflife, pollfreq, parentframe):
        ChargeProfiler.__init__(self, watermark, parentframe)
        self.pollfreq = pollfreq
        # self.progress is slightly more than 1.0, and computed so that
        # do_profile() will double the change_unit every 'halflife' seconds.
        self.progress = 2.0 ** (1.0 / (halflife * pollfreq))

    def reset(self):
        _psyco.statwrite(unit = self.initial_charge_unit, callback = None)
        _psyco.statreset()
        if logger:
            logger.write("%s: resetting stats" % self.__class__.__name__, 1)

    def passive_start(self):
        self.passivealarm_args = (time.sleep, (1.0 / self.pollfreq,),
                                  self.do_profile)
        self.alarms.append(_psyco.alarm(*self.passivealarm_args))

    def do_start(self):
        tag2bind()
        self.init_charges()
        self.passive_start()

    def do_profile(self):
        _psyco.statcollect()
        if logger:
            logger.dumpcharges()
        nunit = _psyco.statread('unit') * self.progress
        if nunit > self.reset_limit:
            self.reset()
        else:
            _psyco.statwrite(unit = nunit, callback = self.charge_callback)
        return self.passivealarm_args

    def charge_callback(self, frame, charge):
        trytobind(frame.f_code, frame.f_globals)


class ActivePassiveProfiler(PassiveProfiler, ActiveProfiler):

    def do_start(self):
        self.init_charges()
        self.active_start()
        self.passive_start()

    def charge_callback(self, frame, charge):
        tag(frame.f_code, frame.f_globals)



#
# we register our own version of sys.settrace(), sys.setprofile()
# and thread.start_new_thread().
#

def psyco_settrace(*args, **kw):
    "This is the Psyco-aware version of sys.settrace()."
    result = original_settrace(*args, **kw)
    go()
    return result

def psyco_setprofile(*args, **kw):
    "This is the Psyco-aware version of sys.setprofile()."
    result = original_setprofile(*args, **kw)
    go()
    return result

def psyco_thread_stub(callable, args, kw):
    _psyco.statcollect()
    if kw is None:
        return callable(*args)
    else:
        return callable(*args, **kw)

def psyco_start_new_thread(callable, args, kw=None):
    "This is the Psyco-aware version of thread.start_new_thread()."
    return original_start_new_thread(psyco_thread_stub, (callable, args, kw))

original_settrace = sys.settrace
original_setprofile = sys.setprofile
original_start_new_thread = thread.start_new_thread
sys.settrace = psyco_settrace
sys.setprofile = psyco_setprofile
thread.start_new_thread = psyco_start_new_thread
# hack to patch threading._start_new_thread if the module is
# already loaded
if ('threading' in sys.modules and
    hasattr(sys.modules['threading'], '_start_new_thread')):
    sys.modules['threading']._start_new_thread = psyco_start_new_thread
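In `PassiveProfiler` above, `progress = 2.0 ** (1.0 / (halflife * pollfreq))` is chosen so that multiplying the charge unit by `progress` once per poll doubles it every `halflife` seconds, which halves the relative weight of old charges per half-life. A quick check of that arithmetic with the defaults from core.py:

```python
halflife = 0.5   # seconds (default_halflife in core.py)
pollfreq = 20    # polls per second (default_pollfreq_profile)

progress = 2.0 ** (1.0 / (halflife * pollfreq))

# One half-life spans halflife * pollfreq polls; applying 'progress'
# that many times should double the unit (up to float rounding).
unit = 1.0
for _ in range(int(halflife * pollfreq)):
    unit *= progress
print(round(unit, 9))   # the unit has doubled after one half-life
```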
@@ -1,191 +0,0 @@
###########################################################################
#
#  Psyco general support module.
#   Copyright (C) 2001-2002  Armin Rigo et.al.

"""Psyco general support module.

For internal use.
"""
###########################################################################

import sys, _psyco, __builtin__

error = _psyco.error
class warning(Warning):
    pass

_psyco.NoLocalsWarning = warning

def warn(msg):
    from warnings import warn
    warn(msg, warning, stacklevel=2)

#
# Version checks
#
__version__ = 0x010600f0
if _psyco.PSYVER != __version__:
    raise error, "version mismatch between Psyco parts, reinstall it"

version_info = (__version__ >> 24,
                (__version__ >> 16) & 0xff,
                (__version__ >> 8) & 0xff,
                {0xa0: 'alpha',
                 0xb0: 'beta',
                 0xc0: 'candidate',
                 0xf0: 'final'}[__version__ & 0xf0],
                __version__ & 0xf)


VERSION_LIMITS = [0x02020200,   # 2.2.2
                  0x02030000,   # 2.3
                  0x02040000]   # 2.4

if ([v for v in VERSION_LIMITS if v <= sys.hexversion] !=
    [v for v in VERSION_LIMITS if v <= _psyco.PYVER  ]):
    if sys.hexversion < VERSION_LIMITS[0]:
        warn("Psyco requires Python version 2.2.2 or later")
    else:
        warn("Psyco version does not match Python version. "
             "Psyco must be updated or recompiled")


if hasattr(_psyco, 'ALL_CHECKS') and hasattr(_psyco, 'VERBOSE_LEVEL'):
    print >> sys.stderr, ('psyco: running in debugging mode on %s' %
                          _psyco.PROCESSOR)


###########################################################################
# sys._getframe() gives strange results on a mixed Psyco- and Python-style
# stack frame. Psyco provides a replacement that partially emulates Python
# frames from Psyco frames. The new sys._getframe() may return objects of
# a custom "Psyco frame" type, which is a subtype of the normal frame type.
#
# The same problems require some other built-in functions to be replaced
# as well. Note that the local variables are not available in any
# dictionary with Psyco.


class Frame:
    pass


class PythonFrame(Frame):

    def __init__(self, frame):
        self.__dict__.update({
            '_frame': frame,
            })

    def __getattr__(self, attr):
        if attr == 'f_back':
            try:
                result = embedframe(_psyco.getframe(self._frame))
            except ValueError:
                result = None
            except error:
                warn("f_back is skipping dead Psyco frames")
                result = self._frame.f_back
            self.__dict__['f_back'] = result
            return result
        else:
            return getattr(self._frame, attr)

    def __setattr__(self, attr, value):
        setattr(self._frame, attr, value)

    def __delattr__(self, attr):
        delattr(self._frame, attr)


class PsycoFrame(Frame):

    def __init__(self, tag):
        self.__dict__.update({
            '_tag'     : tag,
            'f_code'   : tag[0],
            'f_globals': tag[1],
            })

    def __getattr__(self, attr):
        if attr == 'f_back':
            try:
                result = embedframe(_psyco.getframe(self._tag))
            except ValueError:
                result = None
        elif attr == 'f_lineno':
            result = self.f_code.co_firstlineno   # better than nothing
        elif attr == 'f_builtins':
            result = self.f_globals['__builtins__']
        elif attr == 'f_restricted':
            result = self.f_builtins is not __builtins__
        elif attr == 'f_locals':
            raise AttributeError, ("local variables of functions run by Psyco "
                                   "cannot be accessed in any way, sorry")
        else:
            raise AttributeError, ("emulated Psyco frames have "
                                   "no '%s' attribute" % attr)
        self.__dict__[attr] = result
        return result

    def __setattr__(self, attr, value):
        raise AttributeError, "Psyco frame objects are read-only"

    def __delattr__(self, attr):
        if attr == 'f_trace':
            # for bdb which relies on CPython frames exhibiting a slightly
            # buggy behavior: you can 'del f.f_trace' as often as you like
            # even without having set it previously.
            return
        raise AttributeError, "Psyco frame objects are read-only"


def embedframe(result):
    if type(result) is type(()):
        return PsycoFrame(result)
    else:
        return PythonFrame(result)

def _getframe(depth=0):
    """Return a frame object from the call stack. This is a replacement for
    sys._getframe() which is aware of Psyco frames.

    The returned objects are instances of either PythonFrame or PsycoFrame
    instead of being real Python-level frame object, so that they can emulate
    the common attributes of frame objects.

    The original sys._getframe() ignoring Psyco frames altogether is stored in
    psyco._getrealframe(). See also psyco._getemulframe()."""
    # 'depth+1' to account for this _getframe() Python function
    return embedframe(_psyco.getframe(depth+1))

def _getemulframe(depth=0):
    """As _getframe(), but the returned objects are real Python frame objects
    emulating Psyco frames. Some of their attributes can be wrong or missing,
    however."""
    # 'depth+1' to account for this _getemulframe() Python function
    return _psyco.getframe(depth+1, 1)

def patch(name, module=__builtin__):
    f = getattr(_psyco, name)
    org = getattr(module, name)
    if org is not f:
        setattr(module, name, f)
        setattr(_psyco, 'original_' + name, org)

_getrealframe = sys._getframe
sys._getframe = _getframe
patch('globals')
patch('eval')
patch('execfile')
patch('locals')
patch('vars')
patch('dir')
patch('input')
_psyco.original_raw_input = raw_input
__builtin__.__in_psyco__ = 0==1   # False

if hasattr(_psyco, 'compact'):
    import kdictproxy
    _psyco.compactdictproxy = kdictproxy.compactdictproxy
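`version_info` above is unpacked from the packed hex word `__version__`: one byte each for major, minor and micro, then a release-level nibble and a serial nibble. A small Python 3 sketch of the same unpacking (the helper name is made up for illustration):

```python
def decode_version(word):
    # same field layout as support.py's version_info unpacking
    levels = {0xa0: 'alpha', 0xb0: 'beta', 0xc0: 'candidate', 0xf0: 'final'}
    return (word >> 24,            # major
            (word >> 16) & 0xff,   # minor
            (word >> 8) & 0xff,    # micro
            levels[word & 0xf0],   # release level
            word & 0xf)            # serial

print(decode_version(0x010600f0))  # (1, 6, 0, 'final', 0)
```

So `0x010600f0` is Psyco 1.6.0 final, matching the `sys.version_info`-style tuple CPython itself exposes.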
31
Calibre_Plugins/eReaderPDB2PML_plugin/pycrypto_des.py
Normal file
@@ -0,0 +1,31 @@
#!/usr/bin/env python
# vim:ts=4:sw=4:softtabstop=4:smarttab:expandtab


def load_pycrypto():
    try:
        from Crypto.Cipher import DES as _DES
    except:
        return None

    class DES(object):
        def __init__(self, key):
            if len(key) != 8:
                raise Error('DES improper key used')
            self.key = key
            self._des = _DES.new(key, _DES.MODE_ECB)
        def desdecrypt(self, data):
            return self._des.decrypt(data)
        def decrypt(self, data):
            if not data:
                return ''
            i = 0
            result = []
            while i < len(data):
                block = data[i:i+8]
                processed_block = self.desdecrypt(block)
                result.append(processed_block)
                i += 8
            return ''.join(result)
    return DES
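`decrypt()` above walks the ciphertext in fixed 8-byte steps and joins the decrypted pieces. The chunking itself, separated out as a Python 3 sketch on `bytes` (the function name is illustrative, not part of the plugin):

```python
def split_blocks(data, block_size=8):
    # the same fixed-size walk decrypt() performs before handing
    # each block to the DES-ECB primitive
    result = []
    i = 0
    while i < len(data):
        result.append(data[i:i + block_size])
        i += block_size
    return result

print(split_blocks(b"0123456789abcdef"))  # [b'01234567', b'89abcdef']
```

Because the mode is ECB, each 8-byte block is independent, which is why the loop can simply concatenate the results.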
220
Calibre_Plugins/eReaderPDB2PML_plugin/python_des.py
Normal file
@@ -0,0 +1,220 @@
#!/usr/bin/env python
# vim:ts=4:sw=4:softtabstop=4:smarttab:expandtab
import sys

ECB = 0
CBC = 1

class Des(object):
    __pc1 = [56, 48, 40, 32, 24, 16,  8,  0, 57, 49, 41, 33, 25, 17,
              9,  1, 58, 50, 42, 34, 26, 18, 10,  2, 59, 51, 43, 35,
             62, 54, 46, 38, 30, 22, 14,  6, 61, 53, 45, 37, 29, 21,
             13,  5, 60, 52, 44, 36, 28, 20, 12,  4, 27, 19, 11,  3]
    __left_rotations = [1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1]
    __pc2 = [13, 16, 10, 23,  0,  4,  2, 27, 14,  5, 20,  9,
             22, 18, 11,  3, 25,  7, 15,  6, 26, 19, 12,  1,
             40, 51, 30, 36, 46, 54, 29, 39, 50, 44, 32, 47,
             43, 48, 38, 55, 33, 52, 45, 41, 49, 35, 28, 31]
    __ip = [57, 49, 41, 33, 25, 17,  9,  1, 59, 51, 43, 35, 27, 19, 11,  3,
            61, 53, 45, 37, 29, 21, 13,  5, 63, 55, 47, 39, 31, 23, 15,  7,
            56, 48, 40, 32, 24, 16,  8,  0, 58, 50, 42, 34, 26, 18, 10,  2,
            60, 52, 44, 36, 28, 20, 12,  4, 62, 54, 46, 38, 30, 22, 14,  6]
    __expansion_table = [31,  0,  1,  2,  3,  4,  3,  4,  5,  6,  7,  8,
                          7,  8,  9, 10, 11, 12, 11, 12, 13, 14, 15, 16,
                         15, 16, 17, 18, 19, 20, 19, 20, 21, 22, 23, 24,
                         23, 24, 25, 26, 27, 28, 27, 28, 29, 30, 31,  0]
    __sbox = [[14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7,
               0, 15, 7, 4, 14, 2, 13, 1, 10, 6, 12, 11, 9, 5, 3, 8,
               4, 1, 14, 8, 13, 6, 2, 11, 15, 12, 9, 7, 3, 10, 5, 0,
               15, 12, 8, 2, 4, 9, 1, 7, 5, 11, 3, 14, 10, 0, 6, 13],
              [15, 1, 8, 14, 6, 11, 3, 4, 9, 7, 2, 13, 12, 0, 5, 10,
               3, 13, 4, 7, 15, 2, 8, 14, 12, 0, 1, 10, 6, 9, 11, 5,
               0, 14, 7, 11, 10, 4, 13, 1, 5, 8, 12, 6, 9, 3, 2, 15,
               13, 8, 10, 1, 3, 15, 4, 2, 11, 6, 7, 12, 0, 5, 14, 9],
              [10, 0, 9, 14, 6, 3, 15, 5, 1, 13, 12, 7, 11, 4, 2, 8,
               13, 7, 0, 9, 3, 4, 6, 10, 2, 8, 5, 14, 12, 11, 15, 1,
               13, 6, 4, 9, 8, 15, 3, 0, 11, 1, 2, 12, 5, 10, 14, 7,
               1, 10, 13, 0, 6, 9, 8, 7, 4, 15, 14, 3, 11, 5, 2, 12],
              [7, 13, 14, 3, 0, 6, 9, 10, 1, 2, 8, 5, 11, 12, 4, 15,
               13, 8, 11, 5, 6, 15, 0, 3, 4, 7, 2, 12, 1, 10, 14, 9,
               10, 6, 9, 0, 12, 11, 7, 13, 15, 1, 3, 14, 5, 2, 8, 4,
               3, 15, 0, 6, 10, 1, 13, 8, 9, 4, 5, 11, 12, 7, 2, 14],
              [2, 12, 4, 1, 7, 10, 11, 6, 8, 5, 3, 15, 13, 0, 14, 9,
               14, 11, 2, 12, 4, 7, 13, 1, 5, 0, 15, 10, 3, 9, 8, 6,
               4, 2, 1, 11, 10, 13, 7, 8, 15, 9, 12, 5, 6, 3, 0, 14,
               11, 8, 12, 7, 1, 14, 2, 13, 6, 15, 0, 9, 10, 4, 5, 3],
              [12, 1, 10, 15, 9, 2, 6, 8, 0, 13, 3, 4, 14, 7, 5, 11,
               10, 15, 4, 2, 7, 12, 9, 5, 6, 1, 13, 14, 0, 11, 3, 8,
               9, 14, 15, 5, 2, 8, 12, 3, 7, 0, 4, 10, 1, 13, 11, 6,
               4, 3, 2, 12, 9, 5, 15, 10, 11, 14, 1, 7, 6, 0, 8, 13],
              [4, 11, 2, 14, 15, 0, 8, 13, 3, 12, 9, 7, 5, 10, 6, 1,
               13, 0, 11, 7, 4, 9, 1, 10, 14, 3, 5, 12, 2, 15, 8, 6,
               1, 4, 11, 13, 12, 3, 7, 14, 10, 15, 6, 8, 0, 5, 9, 2,
               6, 11, 13, 8, 1, 4, 10, 7, 9, 5, 0, 15, 14, 2, 3, 12],
              [13, 2, 8, 4, 6, 15, 11, 1, 10, 9, 3, 14, 5, 0, 12, 7,
               1, 15, 13, 8, 10, 3, 7, 4, 12, 5, 6, 11, 0, 14, 9, 2,
               7, 11, 4, 1, 9, 12, 14, 2, 0, 6, 10, 13, 15, 3, 5, 8,
               2, 1, 14, 7, 4, 10, 8, 13, 15, 12, 9, 0, 3, 5, 6, 11]]
    __p = [15, 6, 19, 20, 28, 11, 27, 16,  0, 14, 22, 25,
            4, 17, 30,  9,  1,  7, 23, 13, 31, 26,  2,  8,
           18, 12, 29,  5, 21, 10,  3, 24]
    __fp = [39, 7, 47, 15, 55, 23, 63, 31, 38, 6, 46, 14, 54, 22, 62, 30,
            37, 5, 45, 13, 53, 21, 61, 29, 36, 4, 44, 12, 52, 20, 60, 28,
            35, 3, 43, 11, 51, 19, 59, 27, 34, 2, 42, 10, 50, 18, 58, 26,
            33, 1, 41,  9, 49, 17, 57, 25, 32, 0, 40,  8, 48, 16, 56, 24]

    # Type of crypting being done
    ENCRYPT = 0x00
    DECRYPT = 0x01

    def __init__(self, key, mode=ECB, IV=None):
        if len(key) != 8:
            raise ValueError("Invalid DES key size. Key must be exactly 8 bytes long.")
        self.block_size = 8
        self.key_size = 8
        self.__padding = ''
        self.setMode(mode)
        if IV:
            self.setIV(IV)
        self.L = []
        self.R = []
        self.Kn = [[0] * 48] * 16   # 16 48-bit keys (K1 - K16)
        self.final = []
        self.setKey(key)

    def getKey(self):
        return self.__key

    def setKey(self, key):
        self.__key = key
        self.__create_sub_keys()

    def getMode(self):
        return self.__mode

    def setMode(self, mode):
        self.__mode = mode

    def getIV(self):
        return self.__iv

    def setIV(self, IV):
        if not IV or len(IV) != self.block_size:
            raise ValueError("Invalid Initial Value (IV), must be a multiple of " + str(self.block_size) + " bytes")
        self.__iv = IV

    def getPadding(self):
        return self.__padding

    def __String_to_BitList(self, data):
        l = len(data) * 8
        result = [0] * l
        pos = 0
        for c in data:
            i = 7
            ch = ord(c)
            while i >= 0:
                if ch & (1 << i) != 0:
                    result[pos] = 1
                else:
                    result[pos] = 0
                pos += 1
                i -= 1
        return result

    def __BitList_to_String(self, data):
        result = ''
        pos = 0
        c = 0
        while pos < len(data):
            c += data[pos] << (7 - (pos % 8))
            if (pos % 8) == 7:
                result += chr(c)
                c = 0
            pos += 1
        return result

    def __permutate(self, table, block):
        return [block[x] for x in table]

    def __create_sub_keys(self):
        key = self.__permutate(Des.__pc1, self.__String_to_BitList(self.getKey()))
        i = 0
        self.L = key[:28]
        self.R = key[28:]
        while i < 16:
            j = 0
            while j < Des.__left_rotations[i]:
                self.L.append(self.L[0])
                del self.L[0]
                self.R.append(self.R[0])
                del self.R[0]
                j += 1
            self.Kn[i] = self.__permutate(Des.__pc2, self.L + self.R)
            i += 1

    def __des_crypt(self, block, crypt_type):
        block = self.__permutate(Des.__ip, block)
        self.L = block[:32]
        self.R = block[32:]
        if crypt_type == Des.ENCRYPT:
            iteration = 0
            iteration_adjustment = 1
        else:
            iteration = 15
            iteration_adjustment = -1
        i = 0
        while i < 16:
            tempR = self.R[:]
            self.R = self.__permutate(Des.__expansion_table, self.R)
            self.R = [x ^ y for x, y in zip(self.R, self.Kn[iteration])]
            B = [self.R[:6], self.R[6:12], self.R[12:18], self.R[18:24], self.R[24:30], self.R[30:36], self.R[36:42], self.R[42:]]
            j = 0
            Bn = [0] * 32
            pos = 0
            while j < 8:
                m = (B[j][0] << 1) + B[j][5]
                n = (B[j][1] << 3) + (B[j][2] << 2) + (B[j][3] << 1) + B[j][4]
                v = Des.__sbox[j][(m << 4) + n]
                Bn[pos] = (v & 8) >> 3
                Bn[pos + 1] = (v & 4) >> 2
                Bn[pos + 2] = (v & 2) >> 1
                Bn[pos + 3] = v & 1
                pos += 4
                j += 1
            self.R = self.__permutate(Des.__p, Bn)
            self.R = [x ^ y for x, y in zip(self.R, self.L)]
            self.L = tempR
            i += 1
            iteration += iteration_adjustment
        self.final = self.__permutate(Des.__fp, self.R + self.L)
        return self.final

    def crypt(self, data, crypt_type):
        if not data:
            return ''
        if len(data) % self.block_size != 0:
            if crypt_type == Des.DECRYPT:   # Decryption must work on 8 byte blocks
                raise ValueError("Invalid data length, data must be a multiple of " + str(self.block_size) + " bytes\n.")
            if not self.getPadding():
                raise ValueError("Invalid data length, data must be a multiple of " + str(self.block_size) + " bytes\n. Try setting the optional padding character")
            else:
                data += (self.block_size - (len(data) % self.block_size)) * self.getPadding()
        if self.getMode() == CBC:
            if self.getIV():
                iv = self.__String_to_BitList(self.getIV())
            else:
                raise ValueError("For CBC mode, you must supply the Initial Value (IV) for ciphering")
        i = 0
        dict = {}
        result = []
        while i < len(data):
            block = self.__String_to_BitList(data[i:i+8])
            if self.getMode() == CBC:
                if crypt_type == Des.ENCRYPT:
                    block = [x ^ y for x, y in zip(block, iv)]
                processed_block = self.__des_crypt(block, crypt_type)
                if crypt_type == Des.DECRYPT:
                    processed_block = [x ^ y for x, y in zip(processed_block, iv)]
                    iv = block
                else:
                    iv = processed_block
            else:
                processed_block = self.__des_crypt(block, crypt_type)
            result.append(self.__BitList_to_String(processed_block))
            i += 8
        if crypt_type == Des.DECRYPT and self.getPadding():
            s = result[-1]
            while s[-1] == self.getPadding():
                s = s[:-1]
            result[-1] = s
        return ''.join(result)

    def encrypt(self, data, pad=''):
        self.__padding = pad
        return self.crypt(data, Des.ENCRYPT)

    def decrypt(self, data, pad=''):
        self.__padding = pad
        return self.crypt(data, Des.DECRYPT)
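`Des` above works on bit lists: `__String_to_BitList` expands each byte MSB-first into eight 0/1 entries, and `__BitList_to_String` packs them back. A Python 3 equivalent operating on `bytes` (the names are mine, not the module's), handy for checking that the two conversions are inverses:

```python
def string_to_bitlist(data):
    # expand each byte into eight bits, most significant first,
    # as Des.__String_to_BitList does
    bits = []
    for byte in data:
        for i in range(7, -1, -1):
            bits.append((byte >> i) & 1)
    return bits

def bitlist_to_bytes(bits):
    # inverse packing, mirroring Des.__BitList_to_String
    out = bytearray()
    for pos in range(0, len(bits), 8):
        c = 0
        for bit in bits[pos:pos + 8]:
            c = (c << 1) | bit
        out.append(c)
    return bytes(out)

print(string_to_bitlist(b"\x80"))  # [1, 0, 0, 0, 0, 0, 0, 0]
```

All of the permutation tables (`__pc1`, `__ip`, `__fp`, ...) index into bit lists of exactly this shape, which is why an 8-byte key becomes a 64-entry list before `__pc1` selects 56 of its positions.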
Binary file not shown.
@@ -1,65 +0,0 @@
Ignoble Epub DeDRM - ignobleepub_vXX_plugin.zip
Requires Calibre version 0.6.44 or higher.

All credit given to I <3 Cabbages for the original standalone scripts.
I had the much easier job of converting them to a Calibre plugin.

This plugin is meant to decrypt Barnes & Noble Epubs that are protected
with Adobe's Adept encryption. It is meant to function without having to install
any dependencies... other than having Calibre installed, of course. It will still
work if you have Python and PyCrypto already installed, but they aren't necessary.

Installation:

Go to Calibre's Preferences page... click on the Plugins button. Use the file
dialog button to select the plugin's zip file (ignobleepub_vXX_plugin.zip) and
click the 'Add' button. You're done.

Configuration:

1) The easiest way to configure the plugin is to enter your name (Barnes & Noble account
name) and credit card number (the one used to purchase the books) into the plugin's
customization window. It's the same info you would enter into the ignoblekeygen script.
Highlight the plugin (Ignoble Epub DeDRM) and click the "Customize Plugin" button on
Calibre's Preferences->Plugins page. Enter the name and credit card number separated
by a comma: Your Name,1234123412341234

If you've purchased books with more than one credit card, separate that other info with
a colon: Your Name,1234123412341234:Other Name,2345234523452345

** NOTE ** The above method is your only option if you don't have/can't run the original
I <3 Cabbages scripts on your particular machine.

** NOTE ** Your credit card number will be on display in Calibre's Plugin configuration
page when using the above method. If other people have access to your computer,
you may want to use the second configuration method below.

2) If you already have keyfiles generated with I <3 Cabbages' ignoblekeygen.pyw
script, you can put those keyfiles into Calibre's configuration directory. The easiest
way to find the correct directory is to go to Calibre's Preferences page... click
on the 'Miscellaneous' button (looks like a gear), and then click the 'Open Calibre
configuration directory' button. Paste your keyfiles in there. Just make sure that
they have different names and are saved with the '.b64' extension (like the ignoblekeygen
script produces). This directory isn't touched when upgrading Calibre, so it's quite safe
to leave them there.

All keyfiles from method 2 and all data entered from method 1 will be used to attempt
to decrypt a book. You can use method 1 or method 2, or a combination of both.

Troubleshooting:

If you find that it's not working for you (imported epubs still have DRM), you can
save a lot of time and trouble by trying to add the epub to Calibre with the command
line tools. This will print out a lot of helpful debugging info that can be copied into
any online help requests. I'm going to ask you to do it first, anyway, so you might
as well get used to it. ;)

Open a command prompt (terminal) and change to the directory where the ebook you're
trying to import resides. Then type the command "calibredb add your_ebook.epub".
Don't type the quotes and obviously change 'your_ebook.epub' to whatever the
filename of your book is. Copy the resulting output and paste it into any online
help request you make.

** Note: the Mac version of Calibre doesn't install the command line tools by default.
If you go to the 'Preferences' page and click on the miscellaneous button, you'll
see the option to install the command line tools.
@@ -1,10 +1,10 @@
 #!/usr/bin/env python

-# ignobleepub_v01_plugin.py
+# ignobleepub_plugin.py
 # Released under the terms of the GNU General Public Licence, version 3 or
 # later. <http://www.gnu.org/licenses/>
 #
-# Requires Calibre version 0.6.44 or higher.
+# Requires Calibre version 0.7.55 or higher.
 #
 # All credit given to I <3 Cabbages for the original standalone scripts.
 # I had the much easier job of converting them to Calibre a plugin.
@@ -41,8 +41,14 @@
 #
 #
 # Revision history:
-#   0.1 - Initial release
+#   0.1.0 - Initial release
+#   0.1.1 - Allow Windows users to make use of openssl if they have it installed.
+#         - Incorporated SomeUpdates zipfix routine.
+#   0.1.2 - bug fix for non-ascii file names in encryption.xml
+#   0.1.3 - Try PyCrypto on Windows first
+#   0.1.4 - update zipfix to deal with mimetype not in correct place
+#   0.1.5 - update zipfix to deal with completely missing mimetype files
+#   0.1.6 - update ot the new calibre plugin interface

 """
 Decrypt Barnes & Noble ADEPT encrypted EPUB books.
@@ -77,7 +83,10 @@ def _load_crypto_libcrypto():
         Structure, c_ulong, create_string_buffer, cast
     from ctypes.util import find_library

-    libcrypto = find_library('crypto')
+    if sys.platform.startswith('win'):
+        libcrypto = find_library('libeay32')
+    else:
+        libcrypto = find_library('crypto')
     if libcrypto is None:
         raise IGNOBLEError('libcrypto not found')
     libcrypto = CDLL(libcrypto)
@@ -164,7 +173,10 @@ def _load_crypto_pycrypto():

 def _load_crypto():
     _aes = _aes2 = None
-    for loader in (_load_crypto_libcrypto, _load_crypto_pycrypto):
+    cryptolist = (_load_crypto_libcrypto, _load_crypto_pycrypto)
+    if sys.platform.startswith('win'):
+        cryptolist = (_load_crypto_pycrypto, _load_crypto_libcrypto)
+    for loader in cryptolist:
         try:
             _aes, _aes2 = loader()
             break
@@ -204,6 +216,7 @@ class Decryptor(object):
                      enc('CipherReference'))
         for elem in encryption.findall(expr):
             path = elem.get('URI', None)
+            path = path.encode('utf-8')
             if path is not None:
                 encrypted.add(path)

@@ -254,6 +267,7 @@ def plugin_main(userkey, inpath, outpath):
     return 0

 from calibre.customize import FileTypePlugin
+from calibre.constants import iswindows, isosx

 class IgnobleDeDRM(FileTypePlugin):
     name = 'Ignoble Epub DeDRM'
@@ -261,8 +275,8 @@ class IgnobleDeDRM(FileTypePlugin):
     Credit given to I <3 Cabbages for the original stand-alone scripts.'
     supported_platforms = ['linux', 'osx', 'windows']
     author = 'DiapDealer'
-    version = (0, 1, 0)
-    minimum_calibre_version = (0, 6, 44)  # Compiled python libraries cannot be imported in earlier versions.
+    version = (0, 1, 6)
+    minimum_calibre_version = (0, 7, 55)  # Compiled python libraries cannot be imported in earlier versions.
     file_types = set(['epub'])
     on_import = True

@@ -270,21 +284,10 @@ class IgnobleDeDRM(FileTypePlugin):
         global AES
         global AES2

-        from calibre.gui2 import is_ok_to_use_qt
-        from PyQt4.Qt import QMessageBox
-        from calibre.constants import iswindows, isosx
-
-        # Add the included pycrypto import directory for Windows users.
-        pdir = 'windows' if iswindows else 'osx' if isosx else 'linux'
-        ppath = os.path.join(self.sys_insertion_path, pdir)
-        #sys.path.insert(0, ppath)
-        sys.path.append(ppath)
-
         AES, AES2 = _load_crypto()

         if AES == None or AES2 == None:
-            # Failed to load libcrypto or PyCrypto... Adobe Epubs can\'t be decrypted.'
-            sys.path.remove(ppath)
+            # Failed to load libcrypto or PyCrypto... Adobe Epubs can't be decrypted.'
             raise IGNOBLEError('IgnobleEpub - Failed to load crypto libs.')
             return

@@ -313,7 +316,6 @@ class IgnobleDeDRM(FileTypePlugin):
         # Get name and credit card number from Plugin Customization
         if not userkeys and not self.site_customization:
             # Plugin hasn't been configured... do nothing.
-            sys.path.remove(ppath)
             raise IGNOBLEError('IgnobleEpub - No keys found. Plugin not configured.')
             return

@@ -326,7 +328,6 @@ class IgnobleDeDRM(FileTypePlugin):
                     name, ccn = i.split(',')
                     keycount += 1
                 except ValueError:
-                    sys.path.remove(ppath)
                     raise IGNOBLEError('IgnobleEpub - Error parsing user supplied data.')
                     return

@@ -337,17 +338,25 @@ class IgnobleDeDRM(FileTypePlugin):
         # Attempt to decrypt epub with each encryption key (generated or provided).
         for userkey in userkeys:
             # Create a TemporaryPersistent file to work with.
+            # Check original epub archive for zip errors.
+            from calibre_plugins.ignobleepub import zipfix
+            inf = self.temporary_file('.epub')
+            try:
+                fr = zipfix.fixZip(path_to_ebook, inf.name)
+                fr.fix()
+            except Exception, e:
+                raise Exception(e)
+                return
             of = self.temporary_file('.epub')

             # Give the user key, ebook and TemporaryPersistent file to the Stripper function.
-            result = plugin_main(userkey, path_to_ebook, of.name)
+            result = plugin_main(userkey, inf.name, of.name)

             # Ebook is not a B&N Adept epub... do nothing and pass it on.
             # This allows a non-encrypted epub to be imported without error messages.
             if result == 1:
                 print 'IgnobleEpub: Not a B&N Adept Epub... punting.'
                 of.close()
-                sys.path.remove(ppath)
                 return path_to_ebook
                 break

@@ -356,7 +365,6 @@ class IgnobleDeDRM(FileTypePlugin):
             if result == 0:
                 print 'IgnobleEpub: Encryption successfully removed.'
                 of.close()
-                sys.path.remove(ppath)
                 return of.name
                 break

@@ -366,10 +374,9 @@ class IgnobleDeDRM(FileTypePlugin):
         # Something went wrong with decryption.
         # Import the original unmolested epub.
         of.close
-        sys.path.remove(ppath)
         raise IGNOBLEError('IgnobleEpub - Ultimately failed to decrypt.')
         return


     def customization_help(self, gui=False):
         return 'Enter B&N Account name and CC# (separate name and CC# with a comma)'
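The `_load_crypto()` hunk above changes only the loader order: PyCrypto is tried first on Windows, libcrypto first everywhere else, and the first loader that succeeds wins. A Python 3 sketch of that selection logic (the loader names here are stand-ins, not the plugin's real loaders):

```python
def loader_order(load_libcrypto, load_pycrypto, platform):
    # mirrors the patched _load_crypto() ordering
    cryptolist = (load_libcrypto, load_pycrypto)
    if platform.startswith('win'):
        cryptolist = (load_pycrypto, load_libcrypto)
    return cryptolist

def first_working(loaders):
    # first loader that does not raise wins, as in the plugin's for-loop
    for loader in loaders:
        try:
            return loader()
        except Exception:
            continue
    return None

pycrypto = lambda: "pycrypto"
libcrypto = lambda: "libcrypto"
print(first_working(loader_order(libcrypto, pycrypto, "win32")))  # pycrypto
print(first_working(loader_order(libcrypto, pycrypto, "linux")))  # libcrypto
```

Keeping the loaders as callables means a failed backend costs nothing but a caught exception, so the plugin degrades gracefully when only one crypto library is present.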
@@ -1,51 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# ===================================================================
-# The contents of this file are dedicated to the public domain. To
-# the extent that dedication to the public domain is not available,
-# everyone is granted a worldwide, perpetual, royalty-free,
-# non-exclusive license to exercise all rights associated with the
-# contents of this file for any purpose whatsoever.
-# No rights are reserved.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
-# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
-# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
-# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-# ===================================================================
-
-"""Secret-key encryption algorithms.
-
-Secret-key encryption algorithms transform plaintext in some way that
-is dependent on a key, producing ciphertext. This transformation can
-easily be reversed, if (and, hopefully, only if) one knows the key.
-
-The encryption modules here all support the interface described in PEP
-272, "API for Block Encryption Algorithms".
-
-If you don't know which algorithm to choose, use AES because it's
-standard and has undergone a fair bit of examination.
-
-Crypto.Cipher.AES         Advanced Encryption Standard
-Crypto.Cipher.ARC2        Alleged RC2
-Crypto.Cipher.ARC4        Alleged RC4
-Crypto.Cipher.Blowfish
-Crypto.Cipher.CAST
-Crypto.Cipher.DES         The Data Encryption Standard. Very commonly used
-                          in the past, but today its 56-bit keys are too small.
-Crypto.Cipher.DES3        Triple DES.
-Crypto.Cipher.XOR         The simple XOR cipher.
-"""
-
-__all__ = ['AES', 'ARC2', 'ARC4',
-           'Blowfish', 'CAST', 'DES', 'DES3',
-           'XOR'
-          ]
-
-__revision__ = "$Id$"
@@ -1,46 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# ===================================================================
-# The contents of this file are dedicated to the public domain. To
-# the extent that dedication to the public domain is not available,
-# everyone is granted a worldwide, perpetual, royalty-free,
-# non-exclusive license to exercise all rights associated with the
-# contents of this file for any purpose whatsoever.
-# No rights are reserved.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
-# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
-# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
-# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-# ===================================================================
-
-"""Python Cryptography Toolkit
-
-A collection of cryptographic modules implementing various algorithms
-and protocols.
-
-Subpackages:
-Crypto.Cipher             Secret-key encryption algorithms (AES, DES, ARC4)
-Crypto.Hash               Hashing algorithms (MD5, SHA, HMAC)
-Crypto.Protocol           Cryptographic protocols (Chaffing, all-or-nothing
-                          transform). This package does not contain any
-                          network protocols.
-Crypto.PublicKey          Public-key encryption and signature algorithms
-                          (RSA, DSA)
-Crypto.Util               Various useful modules and functions (long-to-string
-                          conversion, random number generation, number
-                          theoretic functions)
-"""
-
-__all__ = ['Cipher', 'Hash', 'Protocol', 'PublicKey', 'Util']
-
-__version__ = '2.3'     # See also below and setup.py
-__revision__ = "$Id$"
-
-# New software should look at this instead of at __version__ above.
-version_info = (2, 1, 0, 'final', 0)    # See also above and setup.py
@@ -1,57 +0,0 @@
-# -*- coding: ascii -*-
-#
-# pct_warnings.py : PyCrypto warnings file
-#
-# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net>
-#
-# ===================================================================
-# The contents of this file are dedicated to the public domain. To
-# the extent that dedication to the public domain is not available,
-# everyone is granted a worldwide, perpetual, royalty-free,
-# non-exclusive license to exercise all rights associated with the
-# contents of this file for any purpose whatsoever.
-# No rights are reserved.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
-# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
-# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
-# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-# ===================================================================
-
-#
-# Base classes. All our warnings inherit from one of these in order to allow
-# the user to specifically filter them.
-#
-
-class CryptoWarning(Warning):
-    """Base class for PyCrypto warnings"""
-
-class CryptoDeprecationWarning(DeprecationWarning, CryptoWarning):
-    """Base PyCrypto DeprecationWarning class"""
-
-class CryptoRuntimeWarning(RuntimeWarning, CryptoWarning):
-    """Base PyCrypto RuntimeWarning class"""
-
-#
-# Warnings that we might actually use
-#
-
-class RandomPool_DeprecationWarning(CryptoDeprecationWarning):
-    """Issued when Crypto.Util.randpool.RandomPool is instantiated."""
-
-class ClockRewindWarning(CryptoRuntimeWarning):
-    """Warning for when the system clock moves backwards."""
-
-class GetRandomNumber_DeprecationWarning(CryptoDeprecationWarning):
-    """Issued when Crypto.Util.number.getRandomNumber is invoked."""
-
-# By default, we want this warning to be shown every time we compensate for
-# clock rewinding.
-import warnings as _warnings
-_warnings.filterwarnings('always', category=ClockRewindWarning, append=1)
-
-# vim:set ts=4 sw=4 sts=4 expandtab:
156 Calibre_Plugins/ignobleepub_plugin/zipfix.py Normal file
@@ -0,0 +1,156 @@
+#!/usr/bin/env python
+
+import sys
+import zlib
+import zipfile
+import os
+import os.path
+import getopt
+from struct import unpack
+
+
+_FILENAME_LEN_OFFSET = 26
+_EXTRA_LEN_OFFSET = 28
+_FILENAME_OFFSET = 30
+_MAX_SIZE = 64 * 1024
+_MIMETYPE = 'application/epub+zip'
+
+class ZipInfo(zipfile.ZipInfo):
+    def __init__(self, *args, **kwargs):
+        if 'compress_type' in kwargs:
+            compress_type = kwargs.pop('compress_type')
+        super(ZipInfo, self).__init__(*args, **kwargs)
+        self.compress_type = compress_type
+
+class fixZip:
+    def __init__(self, zinput, zoutput):
+        self.ztype = 'zip'
+        if zinput.lower().find('.epub') >= 0 :
+            self.ztype = 'epub'
+        self.inzip = zipfile.ZipFile(zinput,'r')
+        self.outzip = zipfile.ZipFile(zoutput,'w')
+        # open the input zip for reading only as a raw file
+        self.bzf = file(zinput,'rb')
+
+    def getlocalname(self, zi):
+        local_header_offset = zi.header_offset
+        self.bzf.seek(local_header_offset + _FILENAME_LEN_OFFSET)
+        leninfo = self.bzf.read(2)
+        local_name_length, = unpack('<H', leninfo)
+        self.bzf.seek(local_header_offset + _FILENAME_OFFSET)
+        local_name = self.bzf.read(local_name_length)
+        return local_name
+
+    def uncompress(self, cmpdata):
+        dc = zlib.decompressobj(-15)
+        data = ''
+        while len(cmpdata) > 0:
+            if len(cmpdata) > _MAX_SIZE :
+                newdata = cmpdata[0:_MAX_SIZE]
+                cmpdata = cmpdata[_MAX_SIZE:]
+            else:
+                newdata = cmpdata
+                cmpdata = ''
+            newdata = dc.decompress(newdata)
+            unprocessed = dc.unconsumed_tail
+            if len(unprocessed) == 0:
+                newdata += dc.flush()
+            data += newdata
+            cmpdata += unprocessed
+            unprocessed = ''
+        return data
+
+    def getfiledata(self, zi):
+        # get file name length and exta data length to find start of file data
+        local_header_offset = zi.header_offset
+
+        self.bzf.seek(local_header_offset + _FILENAME_LEN_OFFSET)
+        leninfo = self.bzf.read(2)
+        local_name_length, = unpack('<H', leninfo)
+
+        self.bzf.seek(local_header_offset + _EXTRA_LEN_OFFSET)
+        exinfo = self.bzf.read(2)
+        extra_field_length, = unpack('<H', exinfo)
+
+        self.bzf.seek(local_header_offset + _FILENAME_OFFSET + local_name_length + extra_field_length)
+        data = None
+
+        # if not compressed we are good to go
+        if zi.compress_type == zipfile.ZIP_STORED:
+            data = self.bzf.read(zi.file_size)
+
+        # if compressed we must decompress it using zlib
+        if zi.compress_type == zipfile.ZIP_DEFLATED:
+            cmpdata = self.bzf.read(zi.compress_size)
+            data = self.uncompress(cmpdata)
+
+        return data
+
+
+    def fix(self):
+        # get the zipinfo for each member of the input archive
+        # and copy member over to output archive
+        # if problems exist with local vs central filename, fix them
+
+        # if epub write mimetype file first, with no compression
+        if self.ztype == 'epub':
+            nzinfo = ZipInfo('mimetype', compress_type=zipfile.ZIP_STORED)
+            self.outzip.writestr(nzinfo, _MIMETYPE)
+
+        # write the rest of the files
+        for zinfo in self.inzip.infolist():
+            if zinfo.filename != "mimetype" or self.ztype == '.zip':
+                data = None
+                nzinfo = zinfo
+                try:
+                    data = self.inzip.read(zinfo.filename)
+                except zipfile.BadZipfile or zipfile.error:
+                    local_name = self.getlocalname(zinfo)
+                    data = self.getfiledata(zinfo)
+                    nzinfo.filename = local_name
+
+                nzinfo.date_time = zinfo.date_time
+                nzinfo.compress_type = zinfo.compress_type
+                nzinfo.flag_bits = 0
+                nzinfo.internal_attr = 0
+                self.outzip.writestr(nzinfo,data)
+
+        self.bzf.close()
+        self.inzip.close()
+        self.outzip.close()
+
+
+def usage():
+    print """usage: zipfix.py inputzip outputzip
+    inputzip is the source zipfile to fix
+    outputzip is the fixed zip archive
+    """
+
+
+def repairBook(infile, outfile):
+    if not os.path.exists(infile):
+        print "Error: Input Zip File does not exist"
+        return 1
+    try:
+        fr = fixZip(infile, outfile)
+        fr.fix()
+        return 0
+    except Exception, e:
+        print "Error Occurred ", e
+        return 2
+
+
+def main(argv=sys.argv):
+    if len(argv)!=3:
+        usage()
+        return 1
+    infile = argv[1]
+    outfile = argv[2]
+    return repairBook(infile, outfile)
+
+
+if __name__ == '__main__' :
+    sys.exit(main())
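The new zipfix.py repairs archives by reading the ZIP *local* file header directly, using fixed byte offsets (filename length at 26, filename at 30) rather than trusting the central directory. The same parsing can be demonstrated in a few lines of modern Python 3 (the plugin itself is Python 2); `local_name` is a hypothetical helper mirroring `fixZip.getlocalname`:

```python
import io
import zipfile
from struct import unpack

# Fixed offsets within a ZIP local file header (PKWARE APPNOTE layout),
# the same constants zipfix.py defines.
_FILENAME_LEN_OFFSET = 26
_FILENAME_OFFSET = 30

def local_name(raw, header_offset):
    # The local header stores its own copy of the member name; a corrupt
    # archive may disagree with the central directory, which is exactly
    # the mismatch zipfix detects and repairs.
    name_len, = unpack('<H', raw[header_offset + _FILENAME_LEN_OFFSET:
                                 header_offset + _FILENAME_LEN_OFFSET + 2])
    start = header_offset + _FILENAME_OFFSET
    return raw[start:start + name_len]

# Build a tiny epub-like zip in memory and parse its first local header.
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as z:
    z.writestr('mimetype', 'application/epub+zip')
raw = buf.getvalue()
with zipfile.ZipFile(io.BytesIO(raw)) as z:
    offset = z.infolist()[0].header_offset
print(local_name(raw, offset))  # b'mimetype'
```

`ZipInfo.header_offset` comes from the central directory, so comparing its filename against the locally stored name is how a repair tool decides which copy to keep.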
Binary file not shown.
@@ -1,62 +0,0 @@
-Inept Epub DeDRM - ineptepub_vXX_plugin.zip
-Requires Calibre version 0.6.44 or higher.
-
-All credit given to I <3 Cabbages for the original standalone scripts.
-I had the much easier job of converting them to a Calibre plugin.
-
-This plugin is meant to decrypt Adobe Digital Edition Epubs that are protected
-with Adobe's Adept encryption. It is meant to function without having to install
-any dependencies... other than having Calibre installed, of course. It will still
-work if you have Python and PyCrypto already installed, but they aren't necessary.
-
-Installation:
-
-Go to Calibre's Preferences page... click on the Plugins button. Use the file
-dialog button to select the plugin's zip file (ineptepub_vXX_plugin.zip) and
-click the 'Add' button. you're done.
-
-Configuration:
-
-When first run, the plugin will attempt to find your Adobe Digital Editions installation
-(on Windows and Mac OS's). If successful, it will create an 'adeptkey.der' file and
-save it in Calibre's configuration directory. It will use that file on subsequent runs.
-If there are already '*.der' files in the directory, the plugin won't attempt to
-find the Adobe Digital Editions installation installation.
-
-So if you have Adobe Digital Editions installation installed on the same machine as Calibre...
-you are ready to go. If not... keep reading.
-
-If you already have keyfiles generated with I <3 Cabbages' ineptkey.pyw script,
-you can put those keyfiles in Calibre's configuration directory. The easiest
-way to find the correct directory is to go to Calibre's Preferences page... click
-on the 'Miscellaneous' button (looks like a gear), and then click the 'Open Calibre
-configuration directory' button. Paste your keyfiles in there. Just make sure that
-they have different names and are saved with the '.der' extension (like the ineptkey
-script produces). This directory isn't touched when upgrading Calibre, so it's quite
-safe to leave them there.
-
-Since there is no Linux version of Adobe Digital Editions, Linux users will have to
-obtain a keyfile through other methods and put the file in Calibre's configuration directory.
-
-All keyfiles with a '.der' extension found in Calibre's configuration directory will
-be used to attempt to decrypt a book.
-
-** NOTE ** There is no plugin customization data for the Inept Epub DeDRM plugin.
-
-Troubleshooting:
-
-If you find that it's not working for you (imported epubs still have DRM), you can
-save a lot of time and trouble by trying to add the epub to Calibre with the command
-line tools. This will print out a lot of helpful debugging info that can be copied into
-any online help requests. I'm going to ask you to do it first, anyway, so you might
-as well get used to it. ;)
-
-Open a command prompt (terminal) and change to the directory where the ebook you're
-trying to import resides. Then type the command "calibredb add your_ebook.epub".
-Don't type the quotes and obviously change the 'your_ebook.epub' to whatever the
-filename of your book is. Copy the resulting output and paste it into any online
-help request you make.
-
-** Note: the Mac version of Calibre doesn't install the command line tools by default.
-If you go to the 'Preferences' page and click on the miscellaneous button, you'll
-see the option to install the command line tools.
@@ -1,10 +1,10 @@
 #! /usr/bin/python
 
-# ineptepub_v01_plugin.py
+# ineptepub_plugin.py
 # Released under the terms of the GNU General Public Licence, version 3 or
 # later. <http://www.gnu.org/licenses/>
 #
-# Requires Calibre version 0.6.44 or higher.
+# Requires Calibre version 0.7.55 or higher.
 #
 # All credit given to I <3 Cabbages for the original standalone scripts.
 # I had the much easier job of converting them to a Calibre plugin.
@@ -41,7 +41,15 @@
 #
 # Revision history:
 #   0.1 - Initial release
+#   0.1.1 - Allow Windows users to make use of openssl if they have it installed.
+#         - Incorporated SomeUpdates zipfix routine.
+#   0.1.2 - Removed Carbon dependency for Mac users. Fixes an issue that was a
+#           result of Calibre changing to python 2.7.
+#   0.1.3 - bug fix for epubs with non-ascii chars in file names
+#   0.1.4 - default to try PyCrypto first on Windows
+#   0.1.5 - update zipfix to handle out of position mimetypes
+#   0.1.6 - update zipfix to handle completely missing mimetype files
+#   0.1.7 - update to new calibre plugin interface
 
 """
 Decrypt Adobe ADEPT-encrypted EPUB books.
@@ -76,7 +84,10 @@ def _load_crypto_libcrypto():
         Structure, c_ulong, create_string_buffer, cast
     from ctypes.util import find_library
 
-    libcrypto = find_library('crypto')
+    if sys.platform.startswith('win'):
+        libcrypto = find_library('libeay32')
+    else:
+        libcrypto = find_library('crypto')
     if libcrypto is None:
         raise ADEPTError('libcrypto not found')
     libcrypto = CDLL(libcrypto)
@@ -276,7 +287,10 @@ def _load_crypto_pycrypto():
 
 def _load_crypto():
     _aes = _rsa = None
-    for loader in (_load_crypto_libcrypto, _load_crypto_pycrypto):
+    cryptolist = (_load_crypto_libcrypto, _load_crypto_pycrypto)
+    if sys.platform.startswith('win'):
+        cryptolist = (_load_crypto_pycrypto, _load_crypto_libcrypto)
+    for loader in cryptolist:
         try:
             _aes, _rsa = loader()
             break
@@ -301,6 +315,7 @@ class Decryptor(object):
                enc('CipherReference'))
        for elem in encryption.findall(expr):
            path = elem.get('URI', None)
+           path = path.encode('utf-8')
            if path is not None:
                encrypted.add(path)
 
@@ -351,6 +366,7 @@ def plugin_main(userkey, inpath, outpath):
     return 0
 
 from calibre.customize import FileTypePlugin
+from calibre.constants import iswindows, isosx
 
 class IneptDeDRM(FileTypePlugin):
     name = 'Inept Epub DeDRM'
@@ -358,8 +374,8 @@ class IneptDeDRM(FileTypePlugin):
         Credit given to I <3 Cabbages for the original stand-alone scripts.'
     supported_platforms = ['linux', 'osx', 'windows']
     author = 'DiapDealer'
-    version = (0, 1, 0)
-    minimum_calibre_version = (0, 6, 44)  # Compiled python libraries cannot be imported in earlier versions.
+    version = (0, 1, 7)
+    minimum_calibre_version = (0, 7, 55)  # Compiled python libraries cannot be imported in earlier versions.
     file_types = set(['epub'])
     on_import = True
     priority = 100
@@ -368,22 +384,10 @@ class IneptDeDRM(FileTypePlugin):
         global AES
         global RSA
 
-        from calibre.gui2 import is_ok_to_use_qt
-        from PyQt4.Qt import QMessageBox
-        from calibre.constants import iswindows, isosx
-
-        # Add the included pycrypto import directory for Windows users.
-        # Add the included Carbon import directory for Mac users.
-        pdir = 'windows' if iswindows else 'osx' if isosx else 'linux'
-        ppath = os.path.join(self.sys_insertion_path, pdir)
-        #sys.path.insert(0, ppath)
-        sys.path.append(ppath)
-
         AES, RSA = _load_crypto()
 
         if AES == None or RSA == None:
             # Failed to load libcrypto or PyCrypto... Adobe Epubs can\'t be decrypted.'
-            sys.path.remove(ppath)
             raise ADEPTError('IneptEpub: Failed to load crypto libs... Adobe Epubs can\'t be decrypted.')
             return
 
@@ -412,11 +416,11 @@ class IneptDeDRM(FileTypePlugin):
         # Calibre's configuration directory for future use.
         if iswindows or isosx:
             # ADE key retrieval script included in respective OS folder.
-            from ade_key import retrieve_key
+            from calibre_plugins.ineptepub.ade_key import retrieve_key
             try:
                 keydata = retrieve_key()
                 userkeys.append(keydata)
-                keypath = os.path.join(confpath, 'adeptkey.der')
+                keypath = os.path.join(confpath, 'calibre-adeptkey.der')
                 with open(keypath, 'wb') as f:
                     f.write(keydata)
                 print 'IneptEpub: Created keyfile from ADE install.'
@@ -426,24 +430,31 @@ class IneptDeDRM(FileTypePlugin):
 
         if not userkeys:
             # No user keys found... bail out.
-            sys.path.remove(ppath)
             raise ADEPTError('IneptEpub - No keys found. Check keyfile(s)/ADE install')
             return
 
         # Attempt to decrypt epub with each encryption key found.
         for userkey in userkeys:
             # Create a TemporaryPersistent file to work with.
+            # Check original epub archive for zip errors.
+            from calibre_plugins.ineptepub import zipfix
+            inf = self.temporary_file('.epub')
+            try:
+                fr = zipfix.fixZip(path_to_ebook, inf.name)
+                fr.fix()
+            except Exception, e:
+                raise Exception(e)
+                return
             of = self.temporary_file('.epub')
 
             # Give the user key, ebook and TemporaryPersistent file to the plugin_main function.
-            result = plugin_main(userkey, path_to_ebook, of.name)
+            result = plugin_main(userkey, inf.name, of.name)
 
             # Ebook is not an Adobe Adept epub... do nothing and pass it on.
             # This allows a non-encrypted epub to be imported without error messages.
             if result == 1:
                 print 'IneptEpub: Not an Adobe Adept Epub... punting.'
                 of.close()
-                sys.path.remove(ppath)
                 return path_to_ebook
                 break
 
@@ -452,7 +463,6 @@ class IneptDeDRM(FileTypePlugin):
             if result == 0:
                 print 'IneptEpub: Encryption successfully removed.'
                 of.close
-                sys.path.remove(ppath)
                 return of.name
                 break
 
@@ -462,7 +472,6 @@ class IneptDeDRM(FileTypePlugin):
             # Something went wrong with decryption.
             # Import the original unmolested epub.
             of.close
-            sys.path.remove(ppath)
             raise ADEPTError('IneptEpub - Ultimately failed to decrypt')
             return
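The `_load_crypto` change above (revision 0.1.4) tries PyCrypto before OpenSSL's libcrypto on Windows, and the reverse elsewhere. The selection logic reduces to a small pure function; `loader_order` is a hypothetical name used only for illustration:

```python
def loader_order(platform):
    # Default: try OpenSSL's libcrypto first (Mac OS X and Linux ship it);
    # on Windows, the bundled PyCrypto is the more reliable first choice,
    # with libeay32.dll as the fallback.
    order = ('libcrypto', 'pycrypto')
    if platform.startswith('win'):
        order = ('pycrypto', 'libcrypto')
    return order

print(loader_order('win32'))   # ('pycrypto', 'libcrypto')
print(loader_order('darwin'))  # ('libcrypto', 'pycrypto')
```

The plugin applies the same idea directly to a tuple of loader functions, breaking out of the loop on the first loader that succeeds.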
@@ -19,24 +19,86 @@ class ADEPTError(Exception):
 if iswindows:
     from ctypes import windll, c_char_p, c_wchar_p, c_uint, POINTER, byref, \
         create_unicode_buffer, create_string_buffer, CFUNCTYPE, addressof, \
-        string_at, Structure, c_void_p, cast, c_size_t, memmove
+        string_at, Structure, c_void_p, cast, c_size_t, memmove, CDLL, c_int, \
+        c_long, c_ulong
     from ctypes.wintypes import LPVOID, DWORD, BOOL
     import _winreg as winreg
 
-    try:
-        from Crypto.Cipher import AES as _aes
-    except ImportError:
-        _aes = None
+    def _load_crypto_libcrypto():
+        from ctypes.util import find_library
+        libcrypto = find_library('libeay32')
+        if libcrypto is None:
+            raise ADEPTError('libcrypto not found')
+        libcrypto = CDLL(libcrypto)
+        AES_MAXNR = 14
+        c_char_pp = POINTER(c_char_p)
+        c_int_p = POINTER(c_int)
+        class AES_KEY(Structure):
+            _fields_ = [('rd_key', c_long * (4 * (AES_MAXNR + 1))),
+                        ('rounds', c_int)]
+        AES_KEY_p = POINTER(AES_KEY)
+
+        def F(restype, name, argtypes):
+            func = getattr(libcrypto, name)
+            func.restype = restype
+            func.argtypes = argtypes
+            return func
+
+        AES_set_decrypt_key = F(c_int, 'AES_set_decrypt_key',
+                                [c_char_p, c_int, AES_KEY_p])
+        AES_cbc_encrypt = F(None, 'AES_cbc_encrypt',
+                            [c_char_p, c_char_p, c_ulong, AES_KEY_p, c_char_p,
+                             c_int])
+        class AES(object):
+            def __init__(self, userkey):
+                self._blocksize = len(userkey)
+                if (self._blocksize != 16) and (self._blocksize != 24) and (self._blocksize != 32) :
+                    raise ADEPTError('AES improper key used')
+                key = self._key = AES_KEY()
+                rv = AES_set_decrypt_key(userkey, len(userkey) * 8, key)
+                if rv < 0:
+                    raise ADEPTError('Failed to initialize AES key')
+            def decrypt(self, data):
+                out = create_string_buffer(len(data))
+                iv = ("\x00" * self._blocksize)
+                rv = AES_cbc_encrypt(data, out, len(data), self._key, iv, 0)
+                if rv == 0:
+                    raise ADEPTError('AES decryption failed')
+                return out.raw
+        return AES
+
+    def _load_crypto_pycrypto():
+        from Crypto.Cipher import AES as _AES
+        class AES(object):
+            def __init__(self, key):
+                self._aes = _AES.new(key, _AES.MODE_CBC)
+            def decrypt(self, data):
+                return self._aes.decrypt(data)
+        return AES
+
+    def _load_crypto():
+        AES = None
+        for loader in (_load_crypto_pycrypto, _load_crypto_libcrypto):
+            try:
+                AES = loader()
+                break
+            except (ImportError, ADEPTError):
+                pass
+        return AES
+
+    AES = _load_crypto()
+
 
     DEVICE_KEY_PATH = r'Software\Adobe\Adept\Device'
     PRIVATE_LICENCE_KEY_PATH = r'Software\Adobe\Adept\Activation'
 
     MAX_PATH = 255
 
     kernel32 = windll.kernel32
     advapi32 = windll.advapi32
     crypt32 = windll.crypt32
 
     def GetSystemDirectory():
         GetSystemDirectoryW = kernel32.GetSystemDirectoryW
         GetSystemDirectoryW.argtypes = [c_wchar_p, c_uint]
@@ -47,7 +109,7 @@ if iswindows:
             return buffer.value
         return GetSystemDirectory
     GetSystemDirectory = GetSystemDirectory()
 
     def GetVolumeSerialNumber():
         GetVolumeInformationW = kernel32.GetVolumeInformationW
         GetVolumeInformationW.argtypes = [c_wchar_p, c_wchar_p, c_uint,
@@ -61,7 +123,7 @@ if iswindows:
             return vsn.value
         return GetVolumeSerialNumber
     GetVolumeSerialNumber = GetVolumeSerialNumber()
 
     def GetUserName():
         GetUserNameW = advapi32.GetUserNameW
         GetUserNameW.argtypes = [c_wchar_p, POINTER(c_uint)]
@@ -75,11 +137,11 @@ if iswindows:
             return buffer.value.encode('utf-16-le')[::2]
         return GetUserName
     GetUserName = GetUserName()
 
     PAGE_EXECUTE_READWRITE = 0x40
     MEM_COMMIT = 0x1000
     MEM_RESERVE = 0x2000
 
     def VirtualAlloc():
         _VirtualAlloc = kernel32.VirtualAlloc
         _VirtualAlloc.argtypes = [LPVOID, c_size_t, DWORD, DWORD]
@@ -89,9 +151,9 @@ if iswindows:
             return _VirtualAlloc(addr, size, alloctype, protect)
         return VirtualAlloc
     VirtualAlloc = VirtualAlloc()
 
     MEM_RELEASE = 0x8000
 
     def VirtualFree():
         _VirtualFree = kernel32.VirtualFree
         _VirtualFree.argtypes = [LPVOID, c_size_t, DWORD]
@@ -100,22 +162,22 @@ if iswindows:
             return _VirtualFree(addr, size, freetype)
         return VirtualFree
     VirtualFree = VirtualFree()
 
     class NativeFunction(object):
         def __init__(self, restype, argtypes, insns):
            self._buf = buf = VirtualAlloc(None, len(insns))
            memmove(buf, insns, len(insns))
            ftype = CFUNCTYPE(restype, *argtypes)
|
||||||
self._native = ftype(buf)
|
self._native = ftype(buf)
|
||||||
|
|
||||||
def __call__(self, *args):
|
def __call__(self, *args):
|
||||||
return self._native(*args)
|
return self._native(*args)
|
||||||
|
|
||||||
def __del__(self):
|
def __del__(self):
|
||||||
if self._buf is not None:
|
if self._buf is not None:
|
||||||
VirtualFree(self._buf)
|
VirtualFree(self._buf)
|
||||||
self._buf = None
|
self._buf = None
|
||||||
|
|
||||||
if struct.calcsize("P") == 4:
|
if struct.calcsize("P") == 4:
|
||||||
CPUID0_INSNS = (
|
CPUID0_INSNS = (
|
||||||
"\x53" # push %ebx
|
"\x53" # push %ebx
|
||||||
@@ -157,7 +219,7 @@ if iswindows:
|
|||||||
"\x5b" # pop %rbx
|
"\x5b" # pop %rbx
|
||||||
"\xc3" # retq
|
"\xc3" # retq
|
||||||
)
|
)
|
||||||
|
|
||||||
def cpuid0():
|
def cpuid0():
|
||||||
_cpuid0 = NativeFunction(None, [c_char_p], CPUID0_INSNS)
|
_cpuid0 = NativeFunction(None, [c_char_p], CPUID0_INSNS)
|
||||||
buf = create_string_buffer(12)
|
buf = create_string_buffer(12)
|
||||||
@@ -166,14 +228,14 @@ if iswindows:
|
|||||||
return buf.raw
|
return buf.raw
|
||||||
return cpuid0
|
return cpuid0
|
||||||
cpuid0 = cpuid0()
|
cpuid0 = cpuid0()
|
||||||
|
|
||||||
cpuid1 = NativeFunction(c_uint, [], CPUID1_INSNS)
|
cpuid1 = NativeFunction(c_uint, [], CPUID1_INSNS)
|
||||||
|
|
||||||
class DataBlob(Structure):
|
class DataBlob(Structure):
|
||||||
_fields_ = [('cbData', c_uint),
|
_fields_ = [('cbData', c_uint),
|
||||||
('pbData', c_void_p)]
|
('pbData', c_void_p)]
|
||||||
DataBlob_p = POINTER(DataBlob)
|
DataBlob_p = POINTER(DataBlob)
|
||||||
|
|
||||||
def CryptUnprotectData():
|
def CryptUnprotectData():
|
||||||
_CryptUnprotectData = crypt32.CryptUnprotectData
|
_CryptUnprotectData = crypt32.CryptUnprotectData
|
||||||
_CryptUnprotectData.argtypes = [DataBlob_p, c_wchar_p, DataBlob_p,
|
_CryptUnprotectData.argtypes = [DataBlob_p, c_wchar_p, DataBlob_p,
|
||||||
@@ -191,10 +253,14 @@ if iswindows:
|
|||||||
return string_at(outdata.pbData, outdata.cbData)
|
return string_at(outdata.pbData, outdata.cbData)
|
||||||
return CryptUnprotectData
|
return CryptUnprotectData
|
||||||
CryptUnprotectData = CryptUnprotectData()
|
CryptUnprotectData = CryptUnprotectData()
|
||||||
|
|
||||||
def retrieve_key():
|
def retrieve_key():
|
||||||
if _aes is None:
|
if AES is None:
|
||||||
raise ADEPTError("Couldn\'t load PyCrypto")
|
tkMessageBox.showerror(
|
||||||
|
"ADEPT Key",
|
||||||
|
"This script requires PyCrypto or OpenSSL which must be installed "
|
||||||
|
"separately. Read the top-of-script comment for details.")
|
||||||
|
return False
|
||||||
root = GetSystemDirectory().split('\\')[0] + '\\'
|
root = GetSystemDirectory().split('\\')[0] + '\\'
|
||||||
serial = GetVolumeSerialNumber(root)
|
serial = GetVolumeSerialNumber(root)
|
||||||
vendor = cpuid0()
|
vendor = cpuid0()
|
||||||
@@ -236,42 +302,39 @@ if iswindows:
|
|||||||
if userkey is None:
|
if userkey is None:
|
||||||
raise ADEPTError('Could not locate privateLicenseKey')
|
raise ADEPTError('Could not locate privateLicenseKey')
|
||||||
userkey = userkey.decode('base64')
|
userkey = userkey.decode('base64')
|
||||||
userkey = _aes.new(keykey, _aes.MODE_CBC).decrypt(userkey)
|
aes = AES(keykey)
|
||||||
|
userkey = aes.decrypt(userkey)
|
||||||
userkey = userkey[26:-ord(userkey[-1])]
|
userkey = userkey[26:-ord(userkey[-1])]
|
||||||
return userkey
|
return userkey
|
||||||
|
|
||||||
else:
|
else:
|
||||||
|
|
||||||
import xml.etree.ElementTree as etree
|
import xml.etree.ElementTree as etree
|
||||||
import Carbon.File
|
import subprocess
|
||||||
import Carbon.Folder
|
|
||||||
import Carbon.Folders
|
|
||||||
import MacOS
|
|
||||||
|
|
||||||
ACTIVATION_PATH = 'Adobe/Digital Editions/activation.dat'
|
|
||||||
NSMAP = {'adept': 'http://ns.adobe.com/adept',
|
NSMAP = {'adept': 'http://ns.adobe.com/adept',
|
||||||
'enc': 'http://www.w3.org/2001/04/xmlenc#'}
|
'enc': 'http://www.w3.org/2001/04/xmlenc#'}
|
||||||
|
|
||||||
def find_folder(domain, dtype):
|
def findActivationDat():
|
||||||
try:
|
home = os.getenv('HOME')
|
||||||
fsref = Carbon.Folder.FSFindFolder(domain, dtype, False)
|
cmdline = 'find "' + home + '/Library/Application Support/Adobe/Digital Editions" -name "activation.dat"'
|
||||||
return Carbon.File.pathname(fsref)
|
cmdline = cmdline.encode(sys.getfilesystemencoding())
|
||||||
except MacOS.Error:
|
p2 = subprocess.Popen(cmdline, shell=True, stdin=None, stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=False)
|
||||||
return None
|
out1, out2 = p2.communicate()
|
||||||
|
reslst = out1.split('\n')
|
||||||
def find_app_support_file(subpath):
|
cnt = len(reslst)
|
||||||
dtype = Carbon.Folders.kApplicationSupportFolderType
|
for j in xrange(cnt):
|
||||||
for domain in Carbon.Folders.kUserDomain, Carbon.Folders.kLocalDomain:
|
resline = reslst[j]
|
||||||
path = find_folder(domain, dtype)
|
pp = resline.find('activation.dat')
|
||||||
if path is None:
|
if pp >= 0:
|
||||||
continue
|
ActDatPath = resline
|
||||||
path = os.path.join(path, subpath)
|
break
|
||||||
if os.path.isfile(path):
|
if os.path.exists(ActDatPath):
|
||||||
return path
|
return ActDatPath
|
||||||
return None
|
return None
|
||||||
|
|
||||||
def retrieve_key():
|
def retrieve_key():
|
||||||
actpath = find_app_support_file(ACTIVATION_PATH)
|
actpath = findActivationDat()
|
||||||
if actpath is None:
|
if actpath is None:
|
||||||
raise ADEPTError("Could not locate ADE activation")
|
raise ADEPTError("Could not locate ADE activation")
|
||||||
tree = etree.parse(actpath)
|
tree = etree.parse(actpath)
|
||||||
|
|||||||
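The new `findActivationDat` shells out to `find`, which is fragile: the path is interpolated into a shell command unquoted beyond the outer quotes, and if no match is found the loop never assigns `ActDatPath`, so the final `os.path.exists(ActDatPath)` check raises a NameError. A minimal portable sketch of the same search using `os.walk` (my illustration, not code from this repository — `find_activation_dat` and the temp-directory layout are hypothetical names for the demo):

```python
import os
import tempfile

def find_activation_dat(root):
    # Walk the tree under `root` looking for ADE's activation.dat,
    # the same job findActivationDat does by shelling out to `find`.
    for dirpath, dirnames, filenames in os.walk(root):
        if 'activation.dat' in filenames:
            return os.path.join(dirpath, 'activation.dat')
    return None  # no match: an explicit None, never an unbound name

# Demo against a throwaway directory layout mimicking ADE's.
base = tempfile.mkdtemp()
target = os.path.join(base, 'Adobe', 'Digital Editions')
os.makedirs(target)
with open(os.path.join(target, 'activation.dat'), 'w') as f:
    f.write('<activation/>')

print(find_activation_dat(base))
```

Unlike the `find`-based version, this needs no shell quoting and behaves the same on any platform.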
@@ -1,51 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# ===================================================================
-# The contents of this file are dedicated to the public domain. To
-# the extent that dedication to the public domain is not available,
-# everyone is granted a worldwide, perpetual, royalty-free,
-# non-exclusive license to exercise all rights associated with the
-# contents of this file for any purpose whatsoever.
-# No rights are reserved.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
-# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
-# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
-# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-# ===================================================================
-
-"""Secret-key encryption algorithms.
-
-Secret-key encryption algorithms transform plaintext in some way that
-is dependent on a key, producing ciphertext. This transformation can
-easily be reversed, if (and, hopefully, only if) one knows the key.
-
-The encryption modules here all support the interface described in PEP
-272, "API for Block Encryption Algorithms".
-
-If you don't know which algorithm to choose, use AES because it's
-standard and has undergone a fair bit of examination.
-
-Crypto.Cipher.AES         Advanced Encryption Standard
-Crypto.Cipher.ARC2        Alleged RC2
-Crypto.Cipher.ARC4        Alleged RC4
-Crypto.Cipher.Blowfish
-Crypto.Cipher.CAST
-Crypto.Cipher.DES         The Data Encryption Standard. Very commonly used
-                          in the past, but today its 56-bit keys are too small.
-Crypto.Cipher.DES3        Triple DES.
-Crypto.Cipher.XOR         The simple XOR cipher.
-"""
-
-__all__ = ['AES', 'ARC2', 'ARC4',
-           'Blowfish', 'CAST', 'DES', 'DES3',
-           'XOR'
-           ]
-
-__revision__ = "$Id$"
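The PEP 272 interface this deleted package describes is small: a module-level `new(key, mode, ...)` factory returning an object with `encrypt`/`decrypt` methods and `block_size`/`key_size` attributes. A dependency-free sketch using XOR (so it stays self-contained; `XORCipher` is my illustrative name, not PyCrypto code, and real use should of course prefer AES):

```python
MODE_ECB = 1  # PEP 272 modes are module-level integer constants

class XORCipher(object):
    block_size = 1  # stream-like: no block alignment required
    key_size = 0    # 0 means "variable-length key" per PEP 272

    def __init__(self, key):
        self._key = key

    def encrypt(self, data):
        k = self._key
        return bytes(b ^ k[i % len(k)] for i, b in enumerate(data))

    decrypt = encrypt  # XOR is its own inverse

def new(key, mode=MODE_ECB):
    # PEP 272 factory; the mode argument is irrelevant for XOR.
    return XORCipher(key)

cipher = new(b'secret')
ct = cipher.encrypt(b'attack at dawn')
assert new(b'secret').decrypt(ct) == b'attack at dawn'
```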
@@ -1,44 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# ===================================================================
-# The contents of this file are dedicated to the public domain. To
-# the extent that dedication to the public domain is not available,
-# everyone is granted a worldwide, perpetual, royalty-free,
-# non-exclusive license to exercise all rights associated with the
-# contents of this file for any purpose whatsoever.
-# No rights are reserved.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
-# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
-# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
-# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-# ===================================================================
-
-"""Hashing algorithms
-
-Hash functions take arbitrary strings as input, and produce an output
-of fixed size that is dependent on the input; it should never be
-possible to derive the input data given only the hash function's
-output. Hash functions can be used simply as a checksum, or, in
-association with a public-key algorithm, can be used to implement
-digital signatures.
-
-The hashing modules here all support the interface described in PEP
-247, "API for Cryptographic Hash Functions".
-
-Submodules:
-Crypto.Hash.HMAC         RFC 2104: Keyed-Hashing for Message Authentication
-Crypto.Hash.MD2
-Crypto.Hash.MD4
-Crypto.Hash.MD5
-Crypto.Hash.RIPEMD160
-Crypto.Hash.SHA
-"""
-
-__all__ = ['HMAC', 'MD2', 'MD4', 'MD5', 'RIPEMD', 'RIPEMD160', 'SHA', 'SHA256']
-__revision__ = "$Id$"
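The PEP 247 interface mentioned above (`new()`, `update()`, `digest()`, `hexdigest()`, `copy()`) is exactly the shape of the standard library's `hashlib`, which is why removing these Crypto.Hash modules loses nothing for plain hashing. A quick demonstration of the incremental-update and `copy()` semantics:

```python
import hashlib

h = hashlib.sha1()
h.update(b'hello ')
snapshot = h.copy()   # independent clone of the running hash state
h.update(b'world')

# Incremental updates are equivalent to hashing the concatenation...
assert h.hexdigest() == hashlib.sha1(b'hello world').hexdigest()

# ...and the clone diverges without disturbing the original.
snapshot.update(b'there')
assert snapshot.hexdigest() == hashlib.sha1(b'hello there').hexdigest()

print(h.hexdigest())
```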
@@ -1,184 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# PublicKey/RSA.py : RSA public key primitive
-#
-# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net>
-#
-# ===================================================================
-# The contents of this file are dedicated to the public domain. To
-# the extent that dedication to the public domain is not available,
-# everyone is granted a worldwide, perpetual, royalty-free,
-# non-exclusive license to exercise all rights associated with the
-# contents of this file for any purpose whatsoever.
-# No rights are reserved.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
-# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
-# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
-# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-# ===================================================================
-
-"""RSA public-key cryptography algorithm."""
-
-__revision__ = "$Id$"
-
-__all__ = ['generate', 'construct', 'error']
-
-from Crypto.Util.python_compat import *
-
-from Crypto.PublicKey import _RSA, _slowmath, pubkey
-from Crypto import Random
-
-try:
-    from Crypto.PublicKey import _fastmath
-except ImportError:
-    _fastmath = None
-
-class _RSAobj(pubkey.pubkey):
-    keydata = ['n', 'e', 'd', 'p', 'q', 'u']
-
-    def __init__(self, implementation, key):
-        self.implementation = implementation
-        self.key = key
-
-    def __getattr__(self, attrname):
-        if attrname in self.keydata:
-            # For backward compatibility, allow the user to get (not set) the
-            # RSA key parameters directly from this object.
-            return getattr(self.key, attrname)
-        else:
-            raise AttributeError("%s object has no %r attribute" % (self.__class__.__name__, attrname,))
-
-    def _encrypt(self, c, K):
-        return (self.key._encrypt(c),)
-
-    def _decrypt(self, c):
-        #(ciphertext,) = c
-        (ciphertext,) = c[:1]  # HACK - We should use the previous line
-                               # instead, but this is more compatible and we're
-                               # going to replace the Crypto.PublicKey API soon
-                               # anyway.
-        return self.key._decrypt(ciphertext)
-
-    def _blind(self, m, r):
-        return self.key._blind(m, r)
-
-    def _unblind(self, m, r):
-        return self.key._unblind(m, r)
-
-    def _sign(self, m, K=None):
-        return (self.key._sign(m),)
-
-    def _verify(self, m, sig):
-        #(s,) = sig
-        (s,) = sig[:1]  # HACK - We should use the previous line instead, but
-                        # this is more compatible and we're going to replace
-                        # the Crypto.PublicKey API soon anyway.
-        return self.key._verify(m, s)
-
-    def has_private(self):
-        return self.key.has_private()
-
-    def size(self):
-        return self.key.size()
-
-    def can_blind(self):
-        return True
-
-    def can_encrypt(self):
-        return True
-
-    def can_sign(self):
-        return True
-
-    def publickey(self):
-        return self.implementation.construct((self.key.n, self.key.e))
-
-    def __getstate__(self):
-        d = {}
-        for k in self.keydata:
-            try:
-                d[k] = getattr(self.key, k)
-            except AttributeError:
-                pass
-        return d
-
-    def __setstate__(self, d):
-        if not hasattr(self, 'implementation'):
-            self.implementation = RSAImplementation()
-        t = []
-        for k in self.keydata:
-            if not d.has_key(k):
-                break
-            t.append(d[k])
-        self.key = self.implementation._math.rsa_construct(*tuple(t))
-
-    def __repr__(self):
-        attrs = []
-        for k in self.keydata:
-            if k == 'n':
-                attrs.append("n(%d)" % (self.size()+1,))
-            elif hasattr(self.key, k):
-                attrs.append(k)
-        if self.has_private():
-            attrs.append("private")
-        return "<%s @0x%x %s>" % (self.__class__.__name__, id(self), ",".join(attrs))
-
-class RSAImplementation(object):
-    def __init__(self, **kwargs):
-        # 'use_fast_math' parameter:
-        #   None (default) - Use fast math if available; Use slow math if not.
-        #   True - Use fast math, and raise RuntimeError if it's not available.
-        #   False - Use slow math.
-        use_fast_math = kwargs.get('use_fast_math', None)
-        if use_fast_math is None:   # Automatic
-            if _fastmath is not None:
-                self._math = _fastmath
-            else:
-                self._math = _slowmath
-
-        elif use_fast_math:         # Explicitly select fast math
-            if _fastmath is not None:
-                self._math = _fastmath
-            else:
-                raise RuntimeError("fast math module not available")
-
-        else:                       # Explicitly select slow math
-            self._math = _slowmath
-
-        self.error = self._math.error
-
-        # 'default_randfunc' parameter:
-        #   None (default) - use Random.new().read
-        #   not None - use the specified function
-        self._default_randfunc = kwargs.get('default_randfunc', None)
-        self._current_randfunc = None
-
-    def _get_randfunc(self, randfunc):
-        if randfunc is not None:
-            return randfunc
-        elif self._current_randfunc is None:
-            self._current_randfunc = Random.new().read
-        return self._current_randfunc
-
-    def generate(self, bits, randfunc=None, progress_func=None):
-        rf = self._get_randfunc(randfunc)
-        obj = _RSA.generate_py(bits, rf, progress_func)  # TODO: Don't use legacy _RSA module
-        key = self._math.rsa_construct(obj.n, obj.e, obj.d, obj.p, obj.q, obj.u)
-        return _RSAobj(self, key)
-
-    def construct(self, tup):
-        key = self._math.rsa_construct(*tup)
-        return _RSAobj(self, key)
-
-_impl = RSAImplementation()
-generate = _impl.generate
-construct = _impl.construct
-error = _impl.error
-
-# vim:set ts=4 sw=4 sts=4 expandtab:
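The `_blind`/`_unblind` pair in the deleted `_RSAobj` implements RSA blinding: multiply the message by `r**e mod n` before the private-key operation, then divide `r` back out, so the signer never sees the raw message. The arithmetic can be checked at toy scale (the textbook key `p=61, q=53, e=17, d=2753` below is for illustration only, not a real key):

```python
# Toy RSA key: n = 61*53 = 3233, phi = 3120, e*d = 17*2753 ≡ 1 (mod phi)
n, e, d = 61 * 53, 17, 2753

m, r = 65, 7                       # message and blinding factor, gcd(r, n) == 1
blinded = m * pow(r, e, n) % n     # _blind: m * r**e (mod n)
sig_b = pow(blinded, d, n)         # private-key op on the blinded value
sig = sig_b * pow(r, -1, n) % n    # _unblind: divide by r (mod n)

# (m * r**e)**d == m**d * r**(e*d) == m**d * r (mod n), so unblinding
# leaves exactly the ordinary signature on m:
assert sig == pow(m, d, n)
assert pow(sig, e, n) == m         # and it verifies against the public key
```

`pow(r, -1, n)` (modular inverse via the built-in `pow`) needs Python 3.8+; the deleted code used its own `inverse` helper instead.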
@@ -1,95 +0,0 @@
-#
-# RSA.py : RSA encryption/decryption
-#
-# Part of the Python Cryptography Toolkit
-#
-# Written by Andrew Kuchling, Paul Swartz, and others
-#
-# ===================================================================
-# The contents of this file are dedicated to the public domain. To
-# the extent that dedication to the public domain is not available,
-# everyone is granted a worldwide, perpetual, royalty-free,
-# non-exclusive license to exercise all rights associated with the
-# contents of this file for any purpose whatsoever.
-# No rights are reserved.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
-# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
-# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
-# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-# ===================================================================
-#
-
-__revision__ = "$Id$"
-
-from Crypto.PublicKey import pubkey
-from Crypto.Util import number
-
-def generate_py(bits, randfunc, progress_func=None):
-    """generate(bits:int, randfunc:callable, progress_func:callable)
-
-    Generate an RSA key of length 'bits', using 'randfunc' to get
-    random data and 'progress_func', if present, to display
-    the progress of the key generation.
-    """
-    obj=RSAobj()
-    obj.e = 65537L
-
-    # Generate the prime factors of n
-    if progress_func:
-        progress_func('p,q\n')
-    p = q = 1L
-    while number.size(p*q) < bits:
-        # Note that q might be one bit longer than p if somebody specifies an odd
-        # number of bits for the key. (Why would anyone do that? You don't get
-        # more security.)
-        #
-        # Note also that we ensure that e is coprime to (p-1) and (q-1).
-        # This is needed for encryption to work properly, according to the 1997
-        # paper by Robert D. Silverman of RSA Labs, "Fast generation of random,
-        # strong RSA primes", available at
-        #   http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.17.2713&rep=rep1&type=pdf
-        # Since e=65537 is prime, it is sufficient to check that e divides
-        # neither (p-1) nor (q-1).
-        p = 1L
-        while (p - 1) % obj.e == 0:
-            if progress_func:
-                progress_func('p\n')
-            p = pubkey.getPrime(bits/2, randfunc)
-        q = 1L
-        while (q - 1) % obj.e == 0:
-            if progress_func:
-                progress_func('q\n')
-            q = pubkey.getPrime(bits - (bits/2), randfunc)
-
-    # p shall be smaller than q (for calc of u)
-    if p > q:
-        (p, q)=(q, p)
-    obj.p = p
-    obj.q = q
-
-    if progress_func:
-        progress_func('u\n')
-    obj.u = pubkey.inverse(obj.p, obj.q)
-    obj.n = obj.p*obj.q
-
-    if progress_func:
-        progress_func('d\n')
-    obj.d=pubkey.inverse(obj.e, (obj.p-1)*(obj.q-1))
-
-    assert bits <= 1+obj.size(), "Generated key is too small"
-
-    return obj
-
-class RSAobj(pubkey.pubkey):
-
-    def size(self):
-        """size() : int
-        Return the maximum number of bits that can be handled by this key.
-        """
-        return number.size(self.n) - 1
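Stripped of the progress hooks, the deleted `generate_py` boils down to: pick primes `p`, `q` such that `e` divides neither `p-1` nor `q-1`, then derive `d` as the inverse of `e` modulo `(p-1)(q-1)`. The same derivation at deliberately tiny scale (small primes of my choosing; real keys come from `pubkey.getPrime` with hundreds of bits):

```python
e = 65537
p, q = 10007, 10009                # both prime; far too small for real use

# generate_py's coprimality condition: e must not divide (p-1) or (q-1)
assert (p - 1) % e != 0 and (q - 1) % e != 0

n = p * q
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                # pubkey.inverse(e, (p-1)*(q-1))

m = 1234
assert pow(pow(m, e, n), d, n) == m   # encrypt then decrypt round-trips
```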
@@ -1,37 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# ===================================================================
-# The contents of this file are dedicated to the public domain. To
-# the extent that dedication to the public domain is not available,
-# everyone is granted a worldwide, perpetual, royalty-free,
-# non-exclusive license to exercise all rights associated with the
-# contents of this file for any purpose whatsoever.
-# No rights are reserved.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
-# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
-# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
-# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-# ===================================================================
-
-"""Public-key encryption and signature algorithms.
-
-Public-key encryption uses two different keys, one for encryption and
-one for decryption. The encryption key can be made public, and the
-decryption key is kept private. Many public-key algorithms can also
-be used to sign messages, and some can *only* be used for signatures.
-
-Crypto.PublicKey.DSA        Digital Signature Algorithm. (Signature only)
-Crypto.PublicKey.ElGamal    (Signing and encryption)
-Crypto.PublicKey.RSA        (Signing, encryption, and blinding)
-Crypto.PublicKey.qNEW       (Signature only)
-
-"""
-
-__all__ = ['RSA', 'DSA', 'ElGamal', 'qNEW']
-__revision__ = "$Id$"
@@ -1,134 +0,0 @@
-# -*- coding: utf-8 -*-
-#
-# PubKey/RSA/_slowmath.py : Pure Python implementation of the RSA portions of _fastmath
-#
-# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net>
-#
-# ===================================================================
-# The contents of this file are dedicated to the public domain. To
-# the extent that dedication to the public domain is not available,
-# everyone is granted a worldwide, perpetual, royalty-free,
-# non-exclusive license to exercise all rights associated with the
-# contents of this file for any purpose whatsoever.
-# No rights are reserved.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
-# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
-# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
-# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE.
-# ===================================================================
-
-"""Pure Python implementation of the RSA-related portions of Crypto.PublicKey._fastmath."""
-
-__revision__ = "$Id$"
-
-__all__ = ['rsa_construct']
-
-from Crypto.Util.python_compat import *
-
-from Crypto.Util.number import size, inverse
-
-class error(Exception):
-    pass
-
-class _RSAKey(object):
-    def _blind(self, m, r):
-        # compute r**e * m (mod n)
-        return m * pow(r, self.e, self.n)
-
-    def _unblind(self, m, r):
-        # compute m / r (mod n)
-        return inverse(r, self.n) * m % self.n
-
-    def _decrypt(self, c):
-        # compute c**d (mod n)
-        if not self.has_private():
-            raise TypeError("No private key")
-        return pow(c, self.d, self.n)  # TODO: CRT exponentiation
-
-    def _encrypt(self, m):
-        # compute m**d (mod n)
-        return pow(m, self.e, self.n)
-
-    def _sign(self, m):  # alias for _decrypt
-        if not self.has_private():
-            raise TypeError("No private key")
-        return self._decrypt(m)
-
-    def _verify(self, m, sig):
-        return self._encrypt(sig) == m
-
-    def has_private(self):
-        return hasattr(self, 'd')
-
-    def size(self):
-        """Return the maximum number of bits that can be encrypted"""
-        return size(self.n) - 1
-
-def rsa_construct(n, e, d=None, p=None, q=None, u=None):
-    """Construct an RSAKey object"""
-    assert isinstance(n, long)
-    assert isinstance(e, long)
-    assert isinstance(d, (long, type(None)))
-    assert isinstance(p, (long, type(None)))
-    assert isinstance(q, (long, type(None)))
-    assert isinstance(u, (long, type(None)))
-    obj = _RSAKey()
-    obj.n = n
-    obj.e = e
-    if d is not None: obj.d = d
-    if p is not None: obj.p = p
-    if q is not None: obj.q = q
-    if u is not None: obj.u = u
-    return obj
-
-class _DSAKey(object):
-    def size(self):
-        """Return the maximum number of bits that can be encrypted"""
-        return size(self.p) - 1
-
-    def has_private(self):
-        return hasattr(self, 'x')
-
-    def _sign(self, m, k):  # alias for _decrypt
-        # SECURITY TODO - We _should_ be computing SHA1(m), but we don't because that's the API.
-        if not self.has_private():
-            raise TypeError("No private key")
-        if not (1L < k < self.q):
-            raise ValueError("k is not between 2 and q-1")
-        inv_k = inverse(k, self.q)   # Compute k**-1 mod q
-        r = pow(self.g, k, self.p) % self.q  # r = (g**k mod p) mod q
-        s = (inv_k * (m + self.x * r)) % self.q
-        return (r, s)
-
-    def _verify(self, m, r, s):
-        # SECURITY TODO - We _should_ be computing SHA1(m), but we don't because that's the API.
-        if not (0 < r < self.q) or not (0 < s < self.q):
-            return False
-        w = inverse(s, self.q)
-        u1 = (m*w) % self.q
-        u2 = (r*w) % self.q
-        v = (pow(self.g, u1, self.p) * pow(self.y, u2, self.p) % self.p) % self.q
-        return v == r
-
-def dsa_construct(y, g, p, q, x=None):
-    assert isinstance(y, long)
-    assert isinstance(g, long)
-    assert isinstance(p, long)
-    assert isinstance(q, long)
-    assert isinstance(x, (long, type(None)))
-    obj = _DSAKey()
-    obj.y = y
-    obj.g = g
-    obj.p = p
-    obj.q = q
-    if x is not None: obj.x = x
-    return obj
-
-
-# vim:set ts=4 sw=4 sts=4 expandtab:
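The `_DSAKey._sign`/`_verify` arithmetic above can be traced by hand on toy parameters. Here `p=23, q=11` (with `q` dividing `p-1`) and `g=4`, which has order 11 mod 23; `x` is the private key and `y = g**x mod p` the public one. These numbers are mine, chosen only to make the modular arithmetic checkable:

```python
p, q, g = 23, 11, 4
x = 3
y = pow(g, x, p)                  # public key

m, k = 6, 5                       # "message hash" and per-signature nonce k

# _sign: r = (g**k mod p) mod q,  s = k**-1 * (m + x*r) mod q
inv_k = pow(k, -1, q)
r = pow(g, k, p) % q
s = (inv_k * (m + x * r)) % q

# _verify: v = (g**u1 * y**u2 mod p) mod q must equal r
w = pow(s, -1, q)
u1, u2 = (m * w) % q, (r * w) % q
v = (pow(g, u1, p) * pow(y, u2, p) % p) % q
assert v == r
```

Note the deleted code's own SECURITY TODO still applies: `m` here stands in for a hash of the message, and `k` must be unpredictable and never reused.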
@@ -1,192 +0,0 @@
#
# pubkey.py : Internal functions for public key operations
#
# Part of the Python Cryptography Toolkit
#
# Written by Andrew Kuchling, Paul Swartz, and others
#
# ===================================================================
# The contents of this file are dedicated to the public domain. To
# the extent that dedication to the public domain is not available,
# everyone is granted a worldwide, perpetual, royalty-free,
# non-exclusive license to exercise all rights associated with the
# contents of this file for any purpose whatsoever.
# No rights are reserved.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# ===================================================================
#

__revision__ = "$Id$"

import types, warnings
from Crypto.Util.number import *

# Basic public key class
class pubkey:
    def __init__(self):
        pass

    def __getstate__(self):
        """To keep key objects platform-independent, the key data is
        converted to standard Python long integers before being
        written out. It will then be reconverted as necessary on
        restoration."""
        d=self.__dict__
        for key in self.keydata:
            if d.has_key(key): d[key]=long(d[key])
        return d

    def __setstate__(self, d):
        """On unpickling a key object, the key data is converted to the big
        number representation being used, whether that is Python long
        integers, MPZ objects, or whatever."""
        for key in self.keydata:
            if d.has_key(key): self.__dict__[key]=bignum(d[key])

    def encrypt(self, plaintext, K):
        """encrypt(plaintext:string|long, K:string|long) : tuple
        Encrypt the string or integer plaintext. K is a random
        parameter required by some algorithms.
        """
        wasString=0
        if isinstance(plaintext, types.StringType):
            plaintext=bytes_to_long(plaintext) ; wasString=1
        if isinstance(K, types.StringType):
            K=bytes_to_long(K)
        ciphertext=self._encrypt(plaintext, K)
        if wasString: return tuple(map(long_to_bytes, ciphertext))
        else: return ciphertext

    def decrypt(self, ciphertext):
        """decrypt(ciphertext:tuple|string|long): string
        Decrypt 'ciphertext' using this key.
        """
        wasString=0
        if not isinstance(ciphertext, types.TupleType):
            ciphertext=(ciphertext,)
        if isinstance(ciphertext[0], types.StringType):
            ciphertext=tuple(map(bytes_to_long, ciphertext)) ; wasString=1
        plaintext=self._decrypt(ciphertext)
        if wasString: return long_to_bytes(plaintext)
        else: return plaintext

    def sign(self, M, K):
        """sign(M : string|long, K:string|long) : tuple
        Return a tuple containing the signature for the message M.
        K is a random parameter required by some algorithms.
        """
        if (not self.has_private()):
            raise TypeError('Private key not available in this object')
        if isinstance(M, types.StringType): M=bytes_to_long(M)
        if isinstance(K, types.StringType): K=bytes_to_long(K)
        return self._sign(M, K)

    def verify (self, M, signature):
        """verify(M:string|long, signature:tuple) : bool
        Verify that the signature is valid for the message M;
        returns true if the signature checks out.
        """
        if isinstance(M, types.StringType): M=bytes_to_long(M)
        return self._verify(M, signature)

    # alias to compensate for the old validate() name
    def validate (self, M, signature):
        warnings.warn("validate() method name is obsolete; use verify()",
                      DeprecationWarning)

    def blind(self, M, B):
        """blind(M : string|long, B : string|long) : string|long
        Blind message M using blinding factor B.
        """
        wasString=0
        if isinstance(M, types.StringType):
            M=bytes_to_long(M) ; wasString=1
        if isinstance(B, types.StringType): B=bytes_to_long(B)
        blindedmessage=self._blind(M, B)
        if wasString: return long_to_bytes(blindedmessage)
        else: return blindedmessage

    def unblind(self, M, B):
        """unblind(M : string|long, B : string|long) : string|long
        Unblind message M using blinding factor B.
        """
        wasString=0
        if isinstance(M, types.StringType):
            M=bytes_to_long(M) ; wasString=1
        if isinstance(B, types.StringType): B=bytes_to_long(B)
        unblindedmessage=self._unblind(M, B)
        if wasString: return long_to_bytes(unblindedmessage)
        else: return unblindedmessage

    # The following methods will usually be left alone, except for
    # signature-only algorithms. They both return Boolean values
    # recording whether this key's algorithm can sign and encrypt.
    def can_sign (self):
        """can_sign() : bool
        Return a Boolean value recording whether this algorithm can
        generate signatures. (This does not imply that this
        particular key object has the private information required
        to generate a signature.)
        """
        return 1

    def can_encrypt (self):
        """can_encrypt() : bool
        Return a Boolean value recording whether this algorithm can
        encrypt data. (This does not imply that this
        particular key object has the private information required
        to decrypt a message.)
        """
        return 1

    def can_blind (self):
        """can_blind() : bool
        Return a Boolean value recording whether this algorithm can
        blind data. (This does not imply that this
        particular key object has the private information required
        to blind a message.)
        """
        return 0

    # The following methods will certainly be overridden by
    # subclasses.

    def size (self):
        """size() : int
        Return the maximum number of bits that can be handled by this key.
        """
        return 0

    def has_private (self):
        """has_private() : bool
        Return a Boolean denoting whether the object contains
        private components.
        """
        return 0

    def publickey (self):
        """publickey(): object
        Return a new key object containing only the public information.
        """
        return self

    def __eq__ (self, other):
        """__eq__(other): 0, 1
        Compare us to other for equality.
        """
        return self.__getstate__() == other.__getstate__()

    def __ne__ (self, other):
        """__ne__(other): 0, 1
        Compare us to other for inequality.
        """
        return not self.__eq__(other)
@@ -1,139 +0,0 @@
# -*- coding: ascii -*-
#
# FortunaAccumulator.py : Fortuna's internal accumulator
#
# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net>
#
# ===================================================================
# The contents of this file are dedicated to the public domain. To
# the extent that dedication to the public domain is not available,
# everyone is granted a worldwide, perpetual, royalty-free,
# non-exclusive license to exercise all rights associated with the
# contents of this file for any purpose whatsoever.
# No rights are reserved.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# ===================================================================

__revision__ = "$Id$"

from Crypto.Util.python_compat import *

from binascii import b2a_hex
import time
import warnings

from Crypto.pct_warnings import ClockRewindWarning
import SHAd256

import FortunaGenerator

class FortunaPool(object):
    """Fortuna pool type

    This object acts like a hash object, with the following differences:

    - It keeps a count (the .length attribute) of the number of bytes that
      have been added to the pool
    - It supports a .reset() method for in-place reinitialization
    - The method to add bytes to the pool is .append(), not .update().
    """

    digest_size = SHAd256.digest_size

    def __init__(self):
        self.reset()

    def append(self, data):
        self._h.update(data)
        self.length += len(data)

    def digest(self):
        return self._h.digest()

    def hexdigest(self):
        return b2a_hex(self.digest())

    def reset(self):
        self._h = SHAd256.new()
        self.length = 0

def which_pools(r):
    """Return a list of pool indexes (in range(32)) that are to be included during reseed number r.

    According to _Practical Cryptography_, chapter 10.5.2 "Pools":

        "Pool P_i is included if 2**i is a divisor of r. Thus P_0 is used
        every reseed, P_1 every other reseed, P_2 every fourth reseed, etc."
    """
    # This is a separate function so that it can be unit-tested.
    assert r >= 1
    retval = []
    mask = 0
    for i in range(32):
        # "Pool P_i is included if 2**i is a divisor of [reseed_count]"
        if (r & mask) == 0:
            retval.append(i)
        else:
            break   # optimization. once this fails, it always fails
        mask = (mask << 1) | 1L
    return retval

class FortunaAccumulator(object):

    min_pool_size = 64          # TODO: explain why
    reseed_interval = 0.100     # 100 ms    TODO: explain why

    def __init__(self):
        self.reseed_count = 0
        self.generator = FortunaGenerator.AESGenerator()
        self.last_reseed = None

        # Initialize 32 FortunaPool instances.
        # NB: This is _not_ equivalent to [FortunaPool()]*32, which would give
        # us 32 references to the _same_ FortunaPool instance (and cause the
        # assertion below to fail).
        self.pools = [FortunaPool() for i in range(32)]     # 32 pools
        assert(self.pools[0] is not self.pools[1])

    def random_data(self, bytes):
        current_time = time.time()
        if self.last_reseed > current_time:
            warnings.warn("Clock rewind detected. Resetting last_reseed.", ClockRewindWarning)
            self.last_reseed = None
        if (self.pools[0].length >= self.min_pool_size and
            (self.last_reseed is None or
             current_time > self.last_reseed + self.reseed_interval)):
            self._reseed(current_time)
        # The following should fail if we haven't seeded the pool yet.
        return self.generator.pseudo_random_data(bytes)

    def _reseed(self, current_time=None):
        if current_time is None:
            current_time = time.time()
        seed = []
        self.reseed_count += 1
        self.last_reseed = current_time
        for i in which_pools(self.reseed_count):
            seed.append(self.pools[i].digest())
            self.pools[i].reset()

        seed = "".join(seed)
        self.generator.reseed(seed)

    def add_random_event(self, source_number, pool_number, data):
        assert 1 <= len(data) <= 32
        assert 0 <= source_number <= 255
        assert 0 <= pool_number <= 31
        self.pools[pool_number].append(chr(source_number))
        self.pools[pool_number].append(chr(len(data)))
        self.pools[pool_number].append(data)

# vim:set ts=4 sw=4 sts=4 expandtab:
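The `which_pools()` schedule above determines which of the 32 entropy pools are drained on a given reseed. A minimal Python 3 re-implementation (illustrative, not library code) makes the pattern concrete:

```python
def which_pools(r):
    """Pools P_i with 2**i dividing reseed number r, per Fortuna's design."""
    assert r >= 1
    retval = []
    mask = 0
    for i in range(32):
        if (r & mask) == 0:
            retval.append(i)
        else:
            break  # once 2**i stops dividing r, larger powers fail too
        mask = (mask << 1) | 1
    return retval
```

So P_0 is used on every reseed, P_1 on every other reseed, P_2 on every fourth, and so on; each pool thus accumulates entropy for exponentially longer before being consumed.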
@@ -1,128 +0,0 @@
# -*- coding: ascii -*-
#
# FortunaGenerator.py : Fortuna's internal PRNG
#
# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net>
#
# ===================================================================
# The contents of this file are dedicated to the public domain. To
# the extent that dedication to the public domain is not available,
# everyone is granted a worldwide, perpetual, royalty-free,
# non-exclusive license to exercise all rights associated with the
# contents of this file for any purpose whatsoever.
# No rights are reserved.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# ===================================================================

__revision__ = "$Id$"

from Crypto.Util.python_compat import *

import struct

from Crypto.Util.number import ceil_shift, exact_log2, exact_div
from Crypto.Util import Counter
from Crypto.Cipher import AES

import SHAd256

class AESGenerator(object):
    """The Fortuna "generator"

    This is used internally by the Fortuna PRNG to generate arbitrary amounts
    of pseudorandom data from a smaller amount of seed data.

    The output is generated by running AES-256 in counter mode and re-keying
    after every mebibyte (2**16 blocks) of output.
    """

    block_size = AES.block_size     # output block size in octets (128 bits)
    key_size = 32                   # key size in octets (256 bits)

    # Because of the birthday paradox, we expect to find approximately one
    # collision for every 2**64 blocks of output from a real random source.
    # However, this code generates pseudorandom data by running AES in
    # counter mode, so there will be no collisions until the counter
    # (theoretically) wraps around at 2**128 blocks. Thus, in order to prevent
    # Fortuna's pseudorandom output from deviating perceptibly from a true
    # random source, Ferguson and Schneier specify a limit of 2**16 blocks
    # without rekeying.
    max_blocks_per_request = 2**16  # Allow no more than this number of blocks per _pseudo_random_data request

    _four_kiblocks_of_zeros = "\0" * block_size * 4096

    def __init__(self):
        self.counter = Counter.new(nbits=self.block_size*8, initial_value=0, little_endian=True)
        self.key = None

        # Set some helper constants
        self.block_size_shift = exact_log2(self.block_size)
        assert (1 << self.block_size_shift) == self.block_size

        self.blocks_per_key = exact_div(self.key_size, self.block_size)
        assert self.key_size == self.blocks_per_key * self.block_size

        self.max_bytes_per_request = self.max_blocks_per_request * self.block_size

    def reseed(self, seed):
        if self.key is None:
            self.key = "\0" * self.key_size
        self._set_key(SHAd256.new(self.key + seed).digest())
        self.counter()  # increment counter
        assert len(self.key) == self.key_size

    def pseudo_random_data(self, bytes):
        assert bytes >= 0

        num_full_blocks = bytes >> 20
        remainder = bytes & ((1<<20)-1)

        retval = []
        for i in xrange(num_full_blocks):
            retval.append(self._pseudo_random_data(1<<20))
        retval.append(self._pseudo_random_data(remainder))

        return "".join(retval)

    def _set_key(self, key):
        self.key = key
        self._cipher = AES.new(key, AES.MODE_CTR, counter=self.counter)

    def _pseudo_random_data(self, bytes):
        if not (0 <= bytes <= self.max_bytes_per_request):
            raise AssertionError("You cannot ask for more than 1 MiB of data per request")

        num_blocks = ceil_shift(bytes, self.block_size_shift)   # num_blocks = ceil(bytes / self.block_size)

        # Compute the output
        retval = self._generate_blocks(num_blocks)[:bytes]

        # Switch to a new key to avoid later compromises of this output (i.e.
        # state compromise extension attacks)
        self._set_key(self._generate_blocks(self.blocks_per_key))

        assert len(retval) == bytes
        assert len(self.key) == self.key_size

        return retval

    def _generate_blocks(self, num_blocks):
        if self.key is None:
            raise AssertionError("generator must be seeded before use")
        assert 0 <= num_blocks <= self.max_blocks_per_request
        retval = []
        for i in xrange(num_blocks >> 12):      # xrange(num_blocks / 4096)
            retval.append(self._cipher.encrypt(self._four_kiblocks_of_zeros))
        remaining_bytes = (num_blocks & 4095) << self.block_size_shift  # (num_blocks % 4096) * self.block_size
        retval.append(self._cipher.encrypt(self._four_kiblocks_of_zeros[:remaining_bytes]))
        return "".join(retval)

# vim:set ts=4 sw=4 sts=4 expandtab:
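The request-splitting in `pseudo_random_data()` above is pure arithmetic: a request is broken into 1 MiB (2**20 byte) chunks, and each chunk is rounded up to whole 16-byte AES blocks with a shift-based ceiling division (mirroring `Crypto.Util.number.ceil_shift`). A small Python 3 sketch of just that arithmetic, with no AES involved:

```python
BLOCK_SIZE_SHIFT = 4  # log2(16), as computed by exact_log2(AES.block_size)

def ceil_shift(n, b):
    """ceil(n / 2**b) without division, as the library's ceil_shift does."""
    mask = (1 << b) - 1
    return (n + mask) >> b

def split_request(nbytes):
    """Per-call sizes that pseudo_random_data() would pass to
    _pseudo_random_data(): full 1 MiB chunks, then the remainder
    (possibly zero-length, matching the code above)."""
    num_full = nbytes >> 20
    remainder = nbytes & ((1 << 20) - 1)
    return [1 << 20] * num_full + [remainder]
```

The cap of 2**16 blocks (1 MiB) per internal request is what forces the rekey frequency described in the class docstring.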
@@ -1,88 +0,0 @@
# -*- coding: ascii -*-
#
# Random/Fortuna/SHAd256.py : SHA_d-256 hash function implementation
#
# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net>
#
# ===================================================================
# The contents of this file are dedicated to the public domain. To
# the extent that dedication to the public domain is not available,
# everyone is granted a worldwide, perpetual, royalty-free,
# non-exclusive license to exercise all rights associated with the
# contents of this file for any purpose whatsoever.
# No rights are reserved.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# ===================================================================

"""\
SHA_d-256 hash function implementation.

This module should comply with PEP 247.
"""

__revision__ = "$Id$"
__all__ = ['new', 'digest_size']

from Crypto.Util.python_compat import *

from binascii import b2a_hex

from Crypto.Hash import SHA256

assert SHA256.digest_size == 32

class _SHAd256(object):
    """SHA-256, doubled.

    Returns SHA-256(SHA-256(data)).
    """

    digest_size = SHA256.digest_size

    _internal = object()

    def __init__(self, internal_api_check, sha256_hash_obj):
        if internal_api_check is not self._internal:
            raise AssertionError("Do not instantiate this class directly. Use %s.new()" % (__name__,))
        self._h = sha256_hash_obj

    # PEP 247 "copy" method
    def copy(self):
        """Return a copy of this hashing object"""
        return _SHAd256(_SHAd256._internal, self._h.copy())

    # PEP 247 "digest" method
    def digest(self):
        """Return the hash value of this object as a binary string"""
        retval = SHA256.new(self._h.digest()).digest()
        assert len(retval) == 32
        return retval

    # PEP 247 "hexdigest" method
    def hexdigest(self):
        """Return the hash value of this object as a (lowercase) hexadecimal string"""
        retval = b2a_hex(self.digest())
        assert len(retval) == 64
        return retval

    # PEP 247 "update" method
    def update(self, data):
        self._h.update(data)

# PEP 247 module-level "digest_size" variable
digest_size = _SHAd256.digest_size

# PEP 247 module-level "new" function
def new(data=""):
    """Return a new SHAd256 hashing object"""
    return _SHAd256(_SHAd256._internal, SHA256.new(data))

# vim:set ts=4 sw=4 sts=4 expandtab:
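As the `_SHAd256` docstring says, SHA_d-256 is simply SHA-256 applied twice: `SHA-256(SHA-256(data))`. A one-line equivalent using only the standard library's `hashlib`, independent of this module:

```python
import hashlib

def shad256(data: bytes) -> bytes:
    """SHA_d-256: the outer SHA-256 of the inner SHA-256 digest."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()
```

Doubling the hash is a defense against length-extension attacks on the plain Merkle-Damgard construction, which matters here because pool digests feed directly into the generator key.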
@@ -1,40 +0,0 @@
#
# Random/OSRNG/__init__.py : Platform-independent OS RNG API
#
# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net>
#
# ===================================================================
# The contents of this file are dedicated to the public domain. To
# the extent that dedication to the public domain is not available,
# everyone is granted a worldwide, perpetual, royalty-free,
# non-exclusive license to exercise all rights associated with the
# contents of this file for any purpose whatsoever.
# No rights are reserved.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# ===================================================================

"""Provides a platform-independent interface to the random number generators
supplied by various operating systems."""

__revision__ = "$Id$"

import os

if os.name == 'posix':
    from Crypto.Random.OSRNG.posix import new
elif os.name == 'nt':
    from Crypto.Random.OSRNG.nt import new
elif hasattr(os, 'urandom'):
    from Crypto.Random.OSRNG.fallback import new
else:
    raise ImportError("Not implemented")

# vim:set ts=4 sw=4 sts=4 expandtab:
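The module above picks an OS entropy source once, at import time, based on the platform. The same dispatch pattern can be sketched in modern Python; the branches below are illustrative stand-ins (all falling back to `os.urandom`), not the package's real platform modules:

```python
import os

def pick_rng():
    """Choose an entropy-reading callable using the dispatch order above:
    POSIX source, then Windows source, then the os.urandom fallback."""
    if os.name == 'posix':
        return os.urandom      # stand-in for Crypto.Random.OSRNG.posix.new
    elif os.name == 'nt':
        return os.urandom      # stand-in for Crypto.Random.OSRNG.nt.new
    elif hasattr(os, 'urandom'):
        return os.urandom      # the module's last-resort fallback
    else:
        raise ImportError("Not implemented")
```

Raising `ImportError` in the final branch means a platform with no usable entropy source fails loudly at import rather than silently degrading.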
@@ -1,46 +0,0 @@
#
# Random/OSRNG/fallback.py : Fallback entropy source for systems with os.urandom
#
# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net>
#
# ===================================================================
# The contents of this file are dedicated to the public domain. To
# the extent that dedication to the public domain is not available,
# everyone is granted a worldwide, perpetual, royalty-free,
# non-exclusive license to exercise all rights associated with the
# contents of this file for any purpose whatsoever.
# No rights are reserved.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# ===================================================================

__revision__ = "$Id$"
__all__ = ['PythonOSURandomRNG']

import os

from rng_base import BaseRNG

class PythonOSURandomRNG(BaseRNG):

    name = "<os.urandom>"

    def __init__(self):
        self._read = os.urandom
        BaseRNG.__init__(self)

    def _close(self):
        self._read = None

def new(*args, **kwargs):
    return PythonOSURandomRNG(*args, **kwargs)

# vim:set ts=4 sw=4 sts=4 expandtab:
@@ -1,74 +0,0 @@
#
# Random/OSRNG/nt.py : OS entropy source for MS Windows
#
# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net>
#
# ===================================================================
# The contents of this file are dedicated to the public domain. To
# the extent that dedication to the public domain is not available,
# everyone is granted a worldwide, perpetual, royalty-free,
# non-exclusive license to exercise all rights associated with the
# contents of this file for any purpose whatsoever.
# No rights are reserved.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# ===================================================================

__revision__ = "$Id$"
__all__ = ['WindowsRNG']

import winrandom
from rng_base import BaseRNG

class WindowsRNG(BaseRNG):

    name = "<CryptGenRandom>"

    def __init__(self):
        self.__winrand = winrandom.new()
        BaseRNG.__init__(self)

    def flush(self):
        """Work around weakness in Windows RNG.

        The CryptGenRandom mechanism in some versions of Windows allows an
        attacker to learn 128 KiB of past and future output. As a workaround,
        this function reads 128 KiB of 'random' data from Windows and discards
        it.

        For more information about the weaknesses in CryptGenRandom, see
        _Cryptanalysis of the Random Number Generator of the Windows Operating
        System_, by Leo Dorrendorf and Zvi Gutterman and Benny Pinkas
        http://eprint.iacr.org/2007/419
        """
        if self.closed:
            raise ValueError("I/O operation on closed file")
        data = self.__winrand.get_bytes(128*1024)
        assert (len(data) == 128*1024)
        BaseRNG.flush(self)

    def _close(self):
        self.__winrand = None

    def _read(self, N):
        # Unfortunately, research shows that CryptGenRandom doesn't provide
        # forward secrecy and fails the next-bit test unless we apply a
        # workaround, which we do here. See http://eprint.iacr.org/2007/419
        # for information on the vulnerability.
        self.flush()
        data = self.__winrand.get_bytes(N)
        self.flush()
        return data

def new(*args, **kwargs):
    return WindowsRNG(*args, **kwargs)

# vim:set ts=4 sw=4 sts=4 expandtab:
@@ -1,86 +0,0 @@
|
|||||||
#
|
|
||||||
# Random/OSRNG/rng_base.py : Base class for OSRNG
|
|
||||||
#
|
|
||||||
# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net>
|
|
||||||
#
|
|
||||||
# ===================================================================
# The contents of this file are dedicated to the public domain. To
# the extent that dedication to the public domain is not available,
# everyone is granted a worldwide, perpetual, royalty-free,
# non-exclusive license to exercise all rights associated with the
# contents of this file for any purpose whatsoever.
# No rights are reserved.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# ===================================================================

__revision__ = "$Id$"

from Crypto.Util.python_compat import *

class BaseRNG(object):

    def __init__(self):
        self.closed = False
        self._selftest()

    def __del__(self):
        self.close()

    def _selftest(self):
        # Test that urandom can return data
        data = self.read(16)
        if len(data) != 16:
            raise AssertionError("read truncated")

        # Test that we get different data every time (if we don't, the RNG is
        # probably malfunctioning)
        data2 = self.read(16)
        if data == data2:
            raise AssertionError("OS RNG returned duplicate data")

    # PEP 343: Support for the "with" statement
    def __enter__(self):
        """PEP 343 support"""
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        """PEP 343 support"""
        self.close()

    def close(self):
        if not self.closed:
            self._close()
            self.closed = True

    def flush(self):
        pass

    def read(self, N=-1):
        """Return N bytes from the RNG."""
        if self.closed:
            raise ValueError("I/O operation on closed file")
        if not isinstance(N, (long, int)):
            raise TypeError("an integer is required")
        if N < 0:
            raise ValueError("cannot read to end of infinite stream")
        elif N == 0:
            return ""
        data = self._read(N)
        if len(data) != N:
            raise AssertionError("%s produced truncated output (requested %d, got %d)" % (self.name, N, len(data)))
        return data

    def _close(self):
        raise NotImplementedError("child class must implement this")

    def _read(self, N):
        raise NotImplementedError("child class must implement this")

# vim:set ts=4 sw=4 sts=4 expandtab:
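The `_selftest` pattern above (read twice, require full-length and non-repeating output) can be sketched as a standalone check against Python 3's `os.urandom`. This is an illustration only; `selftest_osrng` is a hypothetical helper, not part of the module:

```python
import os

def selftest_osrng(read=os.urandom, nbytes=16):
    """Sanity-check an OS RNG source, mirroring BaseRNG._selftest:
    the output must be full-length, and two reads must differ."""
    a = read(nbytes)
    if len(a) != nbytes:
        raise AssertionError("read truncated")
    b = read(nbytes)
    if a == b:
        raise AssertionError("OS RNG returned duplicate data")
    return True
```

Two equal 16-byte reads would indicate a broken generator; for a working RNG the collision probability is negligible (2**-128).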
@@ -1,213 +0,0 @@
# -*- coding: utf-8 -*-
#
# Random/_UserFriendlyRNG.py : A user-friendly random number generator
#
# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net>
#
# ===================================================================
# The contents of this file are dedicated to the public domain. To
# the extent that dedication to the public domain is not available,
# everyone is granted a worldwide, perpetual, royalty-free,
# non-exclusive license to exercise all rights associated with the
# contents of this file for any purpose whatsoever.
# No rights are reserved.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# ===================================================================

__revision__ = "$Id$"

from Crypto.Util.python_compat import *

import os
import threading
import struct
import time
from math import floor

from Crypto.Random import OSRNG
from Crypto.Random.Fortuna import FortunaAccumulator

class _EntropySource(object):
    def __init__(self, accumulator, src_num):
        self._fortuna = accumulator
        self._src_num = src_num
        self._pool_num = 0

    def feed(self, data):
        self._fortuna.add_random_event(self._src_num, self._pool_num, data)
        self._pool_num = (self._pool_num + 1) & 31

class _EntropyCollector(object):

    def __init__(self, accumulator):
        self._osrng = OSRNG.new()
        self._osrng_es = _EntropySource(accumulator, 255)
        self._time_es = _EntropySource(accumulator, 254)
        self._clock_es = _EntropySource(accumulator, 253)

    def reinit(self):
        # Add 256 bits to each of the 32 pools, twice. (For a total of 16384
        # bits collected from the operating system.)
        for i in range(2):
            block = self._osrng.read(32*32)
            for p in range(32):
                self._osrng_es.feed(block[p*32:(p+1)*32])
            block = None
        self._osrng.flush()

    def collect(self):
        # Collect 64 bits of entropy from the operating system and feed it to Fortuna.
        self._osrng_es.feed(self._osrng.read(8))

        # Add the fractional part of time.time()
        t = time.time()
        self._time_es.feed(struct.pack("@I", int(2**30 * (t - floor(t)))))

        # Add the fractional part of time.clock()
        t = time.clock()
        self._clock_es.feed(struct.pack("@I", int(2**30 * (t - floor(t)))))

class _UserFriendlyRNG(object):

    def __init__(self):
        self.closed = False
        self._fa = FortunaAccumulator.FortunaAccumulator()
        self._ec = _EntropyCollector(self._fa)
        self.reinit()

    def reinit(self):
        """Initialize the random number generator and seed it with entropy from
        the operating system.
        """
        self._pid = os.getpid()
        self._ec.reinit()

    def close(self):
        self.closed = True
        self._osrng = None
        self._fa = None

    def flush(self):
        pass

    def read(self, N):
        """Return N bytes from the RNG."""
        if self.closed:
            raise ValueError("I/O operation on closed file")
        if not isinstance(N, (long, int)):
            raise TypeError("an integer is required")
        if N < 0:
            raise ValueError("cannot read to end of infinite stream")

        # Collect some entropy and feed it to Fortuna
        self._ec.collect()

        # Ask Fortuna to generate some bytes
        retval = self._fa.random_data(N)

        # Check that we haven't forked in the meantime. (If we have, we don't
        # want to use the data, because it might have been duplicated in the
        # parent process.)
        self._check_pid()

        # Return the random data.
        return retval

    def _check_pid(self):
        # Lame fork detection to remind developers to invoke Random.atfork()
        # after every call to os.fork(). Note that this check is not reliable,
        # since process IDs can be reused on most operating systems.
        #
        # You need to do Random.atfork() in the child process after every call
        # to os.fork() to avoid reusing PRNG state. If you want to avoid
        # leaking PRNG state to child processes (for example, if you are using
        # os.setuid()) then you should also invoke Random.atfork() in the
        # *parent* process.
        if os.getpid() != self._pid:
            raise AssertionError("PID check failed. RNG must be re-initialized after fork(). Hint: Try Random.atfork()")

class _LockingUserFriendlyRNG(_UserFriendlyRNG):
    def __init__(self):
        self._lock = threading.Lock()
        _UserFriendlyRNG.__init__(self)

    def close(self):
        self._lock.acquire()
        try:
            return _UserFriendlyRNG.close(self)
        finally:
            self._lock.release()

    def reinit(self):
        self._lock.acquire()
        try:
            return _UserFriendlyRNG.reinit(self)
        finally:
            self._lock.release()

    def read(self, bytes):
        self._lock.acquire()
        try:
            return _UserFriendlyRNG.read(self, bytes)
        finally:
            self._lock.release()

class RNGFile(object):
    def __init__(self, singleton):
        self.closed = False
        self._singleton = singleton

    # PEP 343: Support for the "with" statement
    def __enter__(self):
        """PEP 343 support"""
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        """PEP 343 support"""
        self.close()

    def close(self):
        # Don't actually close the singleton, just close this RNGFile instance.
        self.closed = True
        self._singleton = None

    def read(self, bytes):
        if self.closed:
            raise ValueError("I/O operation on closed file")
        return self._singleton.read(bytes)

    def flush(self):
        if self.closed:
            raise ValueError("I/O operation on closed file")

_singleton_lock = threading.Lock()
_singleton = None
def _get_singleton():
    global _singleton
    _singleton_lock.acquire()
    try:
        if _singleton is None:
            _singleton = _LockingUserFriendlyRNG()
        return _singleton
    finally:
        _singleton_lock.release()

def new():
    return RNGFile(_get_singleton())

def reinit():
    _get_singleton().reinit()

def get_random_bytes(n):
    """Return the specified number of cryptographically-strong random bytes."""
    return _get_singleton().read(n)

# vim:set ts=4 sw=4 sts=4 expandtab:
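`_EntropySource.feed` distributes incoming events round-robin across Fortuna's 32 entropy pools via the `(self._pool_num + 1) & 31` wraparound. A minimal standalone sketch of just that rotation (the `PoolRotator` class is hypothetical, for illustration; it records events instead of feeding a real accumulator):

```python
class PoolRotator:
    """Mimics _EntropySource's round-robin spread of random
    events across Fortuna's 32 entropy pools."""

    def __init__(self):
        self.pool_num = 0
        self.events = []  # (pool, data) pairs, recorded for illustration

    def feed(self, data):
        self.events.append((self.pool_num, data))
        self.pool_num = (self.pool_num + 1) & 31  # wrap after pool 31

rot = PoolRotator()
for _ in range(40):
    rot.feed(b"x")
pools = [p for (p, _) in rot.events]
```

After 40 events the pool index has cycled through 0..31 once and wrapped back to 0, so no single pool starves even under a steady event stream.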
@@ -1,43 +0,0 @@
# -*- coding: utf-8 -*-
#
# Random/__init__.py : PyCrypto random number generation
#
# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net>
#
# ===================================================================
# The contents of this file are dedicated to the public domain. To
# the extent that dedication to the public domain is not available,
# everyone is granted a worldwide, perpetual, royalty-free,
# non-exclusive license to exercise all rights associated with the
# contents of this file for any purpose whatsoever.
# No rights are reserved.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# ===================================================================

__revision__ = "$Id$"
__all__ = ['new']

import OSRNG
import _UserFriendlyRNG

def new(*args, **kwargs):
    """Return a file-like object that outputs cryptographically random bytes."""
    return _UserFriendlyRNG.new(*args, **kwargs)

def atfork():
    """Call this whenever you call os.fork()"""
    _UserFriendlyRNG.reinit()

def get_random_bytes(n):
    """Return the specified number of cryptographically-strong random bytes."""
    return _UserFriendlyRNG.get_random_bytes(n)

# vim:set ts=4 sw=4 sts=4 expandtab:
@@ -1,143 +0,0 @@
# -*- coding: utf-8 -*-
#
# Random/random.py : Strong alternative for the standard 'random' module
#
# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net>
#
# ===================================================================
# The contents of this file are dedicated to the public domain. To
# the extent that dedication to the public domain is not available,
# everyone is granted a worldwide, perpetual, royalty-free,
# non-exclusive license to exercise all rights associated with the
# contents of this file for any purpose whatsoever.
# No rights are reserved.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# ===================================================================

"""A cryptographically strong version of Python's standard "random" module."""

__revision__ = "$Id$"
__all__ = ['StrongRandom', 'getrandbits', 'randrange', 'randint', 'choice', 'shuffle', 'sample']

from Crypto import Random

from Crypto.Util.python_compat import *

class StrongRandom(object):
    def __init__(self, rng=None, randfunc=None):
        if randfunc is None and rng is None:
            self._randfunc = None
        elif randfunc is not None and rng is None:
            self._randfunc = randfunc
        elif randfunc is None and rng is not None:
            self._randfunc = rng.read
        else:
            raise ValueError("Cannot specify both 'rng' and 'randfunc'")

    def getrandbits(self, k):
        """Return a python long integer with k random bits."""
        if self._randfunc is None:
            self._randfunc = Random.new().read
        mask = (1L << k) - 1
        return mask & bytes_to_long(self._randfunc(ceil_div(k, 8)))

    def randrange(self, *args):
        """randrange([start,] stop[, step]):
        Return a randomly-selected element from range(start, stop, step)."""
        if len(args) == 3:
            (start, stop, step) = args
        elif len(args) == 2:
            (start, stop) = args
            step = 1
        elif len(args) == 1:
            (stop,) = args
            start = 0
            step = 1
        else:
            raise TypeError("randrange expected at most 3 arguments, got %d" % (len(args),))
        if (not isinstance(start, (int, long))
                or not isinstance(stop, (int, long))
                or not isinstance(step, (int, long))):
            raise TypeError("randrange requires integer arguments")
        if step == 0:
            raise ValueError("randrange step argument must not be zero")

        num_choices = ceil_div(stop - start, step)
        if num_choices < 0:
            num_choices = 0
        if num_choices < 1:
            raise ValueError("empty range for randrange(%r, %r, %r)" % (start, stop, step))

        # Pick a random number in the range of possible numbers
        r = num_choices
        while r >= num_choices:
            r = self.getrandbits(size(num_choices))

        return start + (step * r)

    def randint(self, a, b):
        """Return a random integer N such that a <= N <= b."""
        if not isinstance(a, (int, long)) or not isinstance(b, (int, long)):
            raise TypeError("randint requires integer arguments")
        N = self.randrange(a, b+1)
        assert a <= N <= b
        return N

    def choice(self, seq):
        """Return a random element from a (non-empty) sequence.

        If the sequence is empty, raises IndexError.
        """
        if len(seq) == 0:
            raise IndexError("empty sequence")
        return seq[self.randrange(len(seq))]

    def shuffle(self, x):
        """Shuffle the sequence in place."""
        # Make a copy of the list of objects we want to shuffle
        items = list(x)

        # Choose a random item (without replacement) until all the items have been
        # chosen.
        for i in xrange(len(x)):
            p = self.randrange(len(items))
            x[i] = items[p]
            del items[p]

    def sample(self, population, k):
        """Return a k-length list of unique elements chosen from the population sequence."""

        num_choices = len(population)
        if k > num_choices:
            raise ValueError("sample larger than population")

        retval = []
        selected = {}  # we emulate a set using a dict here
        for i in xrange(k):
            r = None
            while r is None or r in selected:
                r = self.randrange(num_choices)
            retval.append(population[r])
            selected[r] = 1
        return retval

_r = StrongRandom()
getrandbits = _r.getrandbits
randrange = _r.randrange
randint = _r.randint
choice = _r.choice
shuffle = _r.shuffle
sample = _r.sample

# These are at the bottom to avoid problems with recursive imports
from Crypto.Util.number import ceil_div, bytes_to_long, long_to_bytes, size

# vim:set ts=4 sw=4 sts=4 expandtab:
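The `randrange` loop above avoids modulo bias by rejection sampling: draw `size(num_choices)` random bits and retry until the draw falls below `num_choices`. A self-contained Python 3 sketch of the same idea, with `os.urandom` standing in for the Fortuna-backed `getrandbits` (the helper names here are illustrative, not the module's):

```python
import os

def getrandbits(k):
    """Return a nonnegative integer with at most k random bits."""
    nbytes = (k + 7) // 8
    mask = (1 << k) - 1
    return int.from_bytes(os.urandom(nbytes), "big") & mask

def unbiased_randrange(n):
    """Uniform integer in [0, n) via rejection sampling.
    Draws n.bit_length() bits and rejects out-of-range values,
    rather than reducing mod n (which would bias small values)."""
    k = n.bit_length()
    r = n
    while r >= n:
        r = getrandbits(k)
    return r

draws = [unbiased_randrange(10) for _ in range(1000)]
```

Since `n <= 2**k < 2*n`, each draw is accepted with probability at least 1/2, so the expected number of iterations is at most 2.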
@@ -1,61 +0,0 @@
# -*- coding: ascii -*-
#
# Util/Counter.py : Fast counter for use with CTR-mode ciphers
#
# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net>
#
# ===================================================================
# The contents of this file are dedicated to the public domain. To
# the extent that dedication to the public domain is not available,
# everyone is granted a worldwide, perpetual, royalty-free,
# non-exclusive license to exercise all rights associated with the
# contents of this file for any purpose whatsoever.
# No rights are reserved.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# ===================================================================

from Crypto.Util.python_compat import *

from Crypto.Util import _counter
import struct

# Factory function
def new(nbits, prefix="", suffix="", initial_value=1, overflow=0, little_endian=False, allow_wraparound=False, disable_shortcut=False):
    # TODO: Document this

    # Sanity-check the message size
    (nbytes, remainder) = divmod(nbits, 8)
    if remainder != 0:
        # In the future, we might support arbitrary bit lengths, but for now we don't.
        raise ValueError("nbits must be a multiple of 8; got %d" % (nbits,))
    if nbytes < 1:
        raise ValueError("nbits too small")
    elif nbytes > 0xffff:
        raise ValueError("nbits too large")

    initval = _encode(initial_value, nbytes, little_endian)
    if little_endian:
        return _counter._newLE(str(prefix), str(suffix), initval, allow_wraparound=allow_wraparound, disable_shortcut=disable_shortcut)
    else:
        return _counter._newBE(str(prefix), str(suffix), initval, allow_wraparound=allow_wraparound, disable_shortcut=disable_shortcut)

def _encode(n, nbytes, little_endian=False):
    retval = []
    n = long(n)
    for i in range(nbytes):
        if little_endian:
            retval.append(chr(n & 0xff))
        else:
            retval.insert(0, chr(n & 0xff))
        n >>= 8
    return "".join(retval)

# vim:set ts=4 sw=4 sts=4 expandtab:
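`_encode` above serializes the initial counter value to a fixed-width byte string, one byte at a time, in either endianness. In Python 3 the same result comes from `int.to_bytes`; the following is a hypothetical cross-check of that equivalence (the module itself targets Python 2 strings, so this port uses `bytes` instead of `chr`/`str`):

```python
def encode(n, nbytes, little_endian=False):
    """Fixed-width byte serialization of n, mirroring Counter._encode:
    peel off the low byte each iteration, prepending for big-endian
    and appending for little-endian."""
    out = []
    for _ in range(nbytes):
        if little_endian:
            out.append(n & 0xff)
        else:
            out.insert(0, n & 0xff)
        n >>= 8
    return bytes(out)

big = encode(1, 16)
little = encode(1, 16, little_endian=True)
```

Both orderings pad with zero bytes out to the full `nbytes` width, which is what a block-cipher CTR mode needs for its counter block.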
@@ -1,36 +0,0 @@
# -*- coding: utf-8 -*-
#
# ===================================================================
# The contents of this file are dedicated to the public domain. To
# the extent that dedication to the public domain is not available,
# everyone is granted a worldwide, perpetual, royalty-free,
# non-exclusive license to exercise all rights associated with the
# contents of this file for any purpose whatsoever.
# No rights are reserved.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# ===================================================================

"""Miscellaneous modules

Contains useful modules that don't belong into any of the
other Crypto.* subpackages.

Crypto.Util.number      Number-theoretic functions (primality testing, etc.)
Crypto.Util.randpool    Random number generation
Crypto.Util.RFC1751     Converts between 128-bit keys and human-readable
                        strings of words.

"""

__all__ = ['randpool', 'RFC1751', 'number', 'strxor']

__revision__ = "$Id$"
@@ -1,117 +0,0 @@
# -*- coding: ascii -*-
#
# Util/_number_new.py : utility functions
#
# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net>
#
# ===================================================================
# The contents of this file are dedicated to the public domain. To
# the extent that dedication to the public domain is not available,
# everyone is granted a worldwide, perpetual, royalty-free,
# non-exclusive license to exercise all rights associated with the
# contents of this file for any purpose whatsoever.
# No rights are reserved.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# ===================================================================

## NOTE: Do not import this module directly. Import these functions from Crypto.Util.number.

__revision__ = "$Id$"
__all__ = ['ceil_shift', 'ceil_div', 'floor_div', 'exact_log2', 'exact_div']

from Crypto.Util.python_compat import *

def ceil_shift(n, b):
    """Return ceil(n / 2**b) without performing any floating-point or division operations.

    This is done by right-shifting n by b bits and incrementing the result by 1
    if any '1' bits were shifted out.
    """
    if not isinstance(n, (int, long)) or not isinstance(b, (int, long)):
        raise TypeError("unsupported operand type(s): %r and %r" % (type(n).__name__, type(b).__name__))

    assert n >= 0 and b >= 0    # I haven't tested or even thought about negative values
    mask = (1L << b) - 1
    if n & mask:
        return (n >> b) + 1
    else:
        return n >> b

def ceil_div(a, b):
    """Return ceil(a / b) without performing any floating-point operations."""

    if not isinstance(a, (int, long)) or not isinstance(b, (int, long)):
        raise TypeError("unsupported operand type(s): %r and %r" % (type(a).__name__, type(b).__name__))

    (q, r) = divmod(a, b)
    if r:
        return q + 1
    else:
        return q

def floor_div(a, b):
    if not isinstance(a, (int, long)) or not isinstance(b, (int, long)):
        raise TypeError("unsupported operand type(s): %r and %r" % (type(a).__name__, type(b).__name__))

    (q, r) = divmod(a, b)
    return q

def exact_log2(num):
    """Find and return an integer i >= 0 such that num == 2**i.

    If no such integer exists, this function raises ValueError.
    """

    if not isinstance(num, (int, long)):
        raise TypeError("unsupported operand type: %r" % (type(num).__name__,))

    n = long(num)
    if n <= 0:
        raise ValueError("cannot compute logarithm of non-positive number")

    i = 0
    while n != 0:
        if (n & 1) and n != 1:
            raise ValueError("No solution could be found")
        i += 1
        n >>= 1
    i -= 1

    assert num == (1L << i)
    return i

def exact_div(p, d, allow_divzero=False):
    """Find and return an integer n such that p == n * d

    If no such integer exists, this function raises ValueError.

    Both operands must be integers.

    If the second operand is zero, this function will raise ZeroDivisionError
    unless allow_divzero is true (default: False).
    """

    if not isinstance(p, (int, long)) or not isinstance(d, (int, long)):
        raise TypeError("unsupported operand type(s): %r and %r" % (type(p).__name__, type(d).__name__))

    if d == 0 and allow_divzero:
        n = 0
        if p != n * d:
            raise ValueError("No solution could be found")
    else:
        (n, r) = divmod(p, d)
        if r != 0:
            raise ValueError("No solution could be found")

    assert p == n * d
    return n

# vim:set ts=4 sw=4 sts=4 expandtab:
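The integer-only ceiling helpers above lean on two identities: any '1' bits shifted out of `n >> b` mean the true quotient was not exact, and a nonzero remainder from `divmod` means the quotient must round up. A quick Python 3 sketch of both (stripped of the module's type checks, for brevity):

```python
def ceil_shift(n, b):
    """ceil(n / 2**b) using only shifts and a mask, as in the module above."""
    mask = (1 << b) - 1
    # If any low bits are set, the shift truncated, so round up.
    return (n >> b) + 1 if n & mask else n >> b

def ceil_div(a, b):
    """ceil(a / b) without floating point, via divmod."""
    q, r = divmod(a, b)
    return q + 1 if r else q
```

Avoiding floating point matters here because these helpers are applied to multi-thousand-bit integers, which cannot be represented exactly as floats.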
@@ -1,250 +0,0 @@
|
|||||||
#
|
|
||||||
# number.py : Number-theoretic functions
|
|
||||||
#
|
|
||||||
# Part of the Python Cryptography Toolkit
|
|
||||||
#
|
|
||||||
# Written by Andrew M. Kuchling, Barry A. Warsaw, and others
|
|
||||||
#
|
|
||||||
# ===================================================================
|
|
||||||
# The contents of this file are dedicated to the public domain. To
|
|
||||||
# the extent that dedication to the public domain is not available,
|
|
||||||
# everyone is granted a worldwide, perpetual, royalty-free,
|
|
||||||
# non-exclusive license to exercise all rights associated with the
|
|
||||||
# contents of this file for any purpose whatsoever.
|
|
||||||
# No rights are reserved.
|
|
||||||
#
|
|
||||||
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
|
||||||
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
|
||||||
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
|
||||||
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
|
|
||||||
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
|
|
||||||
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
|
|
||||||
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
|
||||||
# SOFTWARE.
|
|
||||||
# ===================================================================
|
|
||||||
#
|
|
||||||
|
|
||||||
__revision__ = "$Id$"
|
|
||||||
|
|
||||||
bignum = long
|
|
||||||
try:
|
|
||||||
from Crypto.PublicKey import _fastmath
|
|
||||||
except ImportError:
|
|
||||||
_fastmath = None
|
|
||||||
|
|
||||||
# New functions
|
|
||||||
from _number_new import *
|
|
||||||
|
|
||||||
# Commented out and replaced with faster versions below
|
|
||||||
## def long2str(n):
|
|
||||||
## s=''
|
|
||||||
## while n>0:
|
|
||||||
## s=chr(n & 255)+s
|
|
||||||
## n=n>>8
|
|
||||||
## return s
|
|
||||||
|
|
||||||
## import types
|
|
||||||
## def str2long(s):
|
|
||||||
## if type(s)!=types.StringType: return s # Integers will be left alone
|
|
||||||
## return reduce(lambda x,y : x*256+ord(y), s, 0L)
|
|
||||||
|
|
||||||
def size (N):
|
|
||||||
"""size(N:long) : int
|
|
||||||
Returns the size of the number N in bits.
|
|
||||||
"""
|
|
||||||
bits, power = 0,1L
|
|
||||||
while N >= power:
|
|
||||||
bits += 1
|
|
||||||
power = power << 1
|
|
||||||
return bits
|
|
||||||
|
|
||||||
def getRandomNumber(N, randfunc=None):
|
|
||||||
"""getRandomNumber(N:int, randfunc:callable):long
|
|
||||||
Return a random N-bit number.
|
|
||||||
|
|
||||||
If randfunc is omitted, then Random.new().read is used.
|
|
||||||
|
|
||||||
NOTE: Confusingly, this function does NOT return N random bits; It returns
|
|
||||||
a random N-bit number, i.e. a random number between 2**(N-1) and (2**N)-1.
|
|
||||||
|
|
||||||
This function is for internal use only and may be renamed or removed in
|
|
||||||
the future.
|
|
||||||
"""
|
|
||||||
if randfunc is None:
|
|
||||||
_import_Random()
|
|
||||||
randfunc = Random.new().read
|
|
||||||
|
|
||||||
S = randfunc(N/8)
|
|
||||||
odd_bits = N % 8
|
|
||||||
if odd_bits != 0:
|
|
||||||
char = ord(randfunc(1)) >> (8-odd_bits)
|
|
||||||
S = chr(char) + S
|
|
||||||
value = bytes_to_long(S)
|
|
||||||
value |= 2L ** (N-1) # Ensure high bit is set
|
|
||||||
assert size(value) >= N
|
|
||||||
return value
|
|
||||||
|
|
||||||
def GCD(x,y):
|
|
||||||
"""GCD(x:long, y:long): long
|
|
||||||
Return the GCD of x and y.
|
|
||||||
"""
|
|
||||||
x = abs(x) ; y = abs(y)
|
|
||||||
while x > 0:
|
|
||||||
x, y = y % x, x
|
|
||||||
return y
|
|
||||||
|
|
||||||
def inverse(u, v):
|
|
||||||
"""inverse(u:long, u:long):long
|
|
||||||
Return the inverse of u mod v.
|
|
||||||
"""
|
|
||||||
u3, v3 = long(u), long(v)
|
|
||||||
u1, v1 = 1L, 0L
|
|
||||||
while v3 > 0:
|
|
||||||
q=u3 / v3
|
|
||||||
u1, v1 = v1, u1 - v1*q
|
|
||||||
u3, v3 = v3, u3 - v3*q
|
|
||||||
while u1<0:
|
|
||||||
u1 = u1 + v
|
|
||||||
return u1
|
|
||||||
|
|
||||||
# Given a number of bits to generate and a random generation function,
|
|
||||||
# find a prime number of the appropriate size.
|
|
||||||
|
|
||||||
def getPrime(N, randfunc=None):
|
|
||||||
"""getPrime(N:int, randfunc:callable):long
|
|
||||||
Return a random N-bit prime number.
|
|
||||||
|
|
||||||
If randfunc is omitted, then Random.new().read is used.
|
|
||||||
"""
|
|
||||||
if randfunc is None:
|
|
||||||
_import_Random()
|
|
||||||
randfunc = Random.new().read
|
|
||||||
|
|
||||||
number=getRandomNumber(N, randfunc) | 1
|
|
||||||
while (not isPrime(number, randfunc=randfunc)):
|
|
||||||
number=number+2
|
|
||||||
return number
|
|
||||||
|
|
||||||
def isPrime(N, randfunc=None):
    """isPrime(N:long, randfunc:callable):bool
    Return true if N is prime.

    If randfunc is omitted, then Random.new().read is used.
    """
    _import_Random()
    if randfunc is None:
        randfunc = Random.new().read

    randint = StrongRandom(randfunc=randfunc).randint

    if N == 1:
        return 0
    if N in sieve:
        return 1
    for i in sieve:
        if (N % i)==0:
            return 0

    # Use the accelerator if available
    if _fastmath is not None:
        return _fastmath.isPrime(N)

    # Compute the highest bit that's set in N
    N1 = N - 1L
    n = 1L
    while (n<N):
        n=n<<1L
    n = n >> 1L

    # Rabin-Miller test
    for c in sieve[:7]:
        a=long(c) ; d=1L ; t=n
        while (t):  # Iterate over the bits in N1
            x=(d*d) % N
            if x==1L and d!=1L and d!=N1:
                return 0  # Square root of 1 found
            if N1 & t:
                d=(x*a) % N
            else:
                d=x
            t = t >> 1L
        if d!=1L:
            return 0
    return 1
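The bit-iteration loop above performs one round of the Rabin-Miller test per base, with the fixed small bases `sieve[:7]`. The more conventional formulation of the same test decomposes N-1 as 2**r * d with d odd, then squares repeatedly; a Python 3 sketch (function name mine, base choice mirroring the `sieve[:7]` idea above) looks like this:

```python
def rabin_miller(n, bases=(2, 3, 5, 7, 11, 13, 17)):
    """Probable-prime test with fixed small bases."""
    if n < 2:
        return False
    for p in bases:
        if n % p == 0:
            return n == p
    # Write n - 1 as 2**r * d with d odd.
    r, d = 0, n - 1
    while d % 2 == 0:
        r += 1
        d //= 2
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnessed n as composite
    return True

assert rabin_miller(2**61 - 1)       # a Mersenne prime
assert not rabin_miller(3215031751)  # strong pseudoprime to bases 2, 3, 5, 7
```

Fixed-base Rabin-Miller is deterministic only up to a bound that depends on the bases; for cryptographic sizes, randomly chosen bases (or more rounds) are the safer choice.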
# Small primes used for checking primality; these are all the primes
# less than 256.  This should be enough to eliminate most of the odd
# numbers before needing to do a Rabin-Miller test at all.

sieve=[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59,
       61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113, 127,
       131, 137, 139, 149, 151, 157, 163, 167, 173, 179, 181, 191, 193,
       197, 199, 211, 223, 227, 229, 233, 239, 241, 251]

# Improved conversion functions contributed by Barry Warsaw, after
# careful benchmarking

import struct
def long_to_bytes(n, blocksize=0):
    """long_to_bytes(n:long, blocksize:int) : string
    Convert a long integer to a byte string.

    If optional blocksize is given and greater than zero, pad the front of the
    byte string with binary zeros so that the length is a multiple of
    blocksize.
    """
    # after much testing, this algorithm was deemed to be the fastest
    s = ''
    n = long(n)
    pack = struct.pack
    while n > 0:
        s = pack('>I', n & 0xffffffffL) + s
        n = n >> 32
    # strip off leading zeros
    for i in range(len(s)):
        if s[i] != '\000':
            break
    else:
        # only happens when n == 0
        s = '\000'
        i = 0
    s = s[i:]
    # add back some pad bytes.  this could be done more efficiently w.r.t. the
    # de-padding being done above, but sigh...
    if blocksize > 0 and len(s) % blocksize:
        s = (blocksize - len(s) % blocksize) * '\000' + s
    return s
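In Python 3 this whole function collapses to the built-in `int.to_bytes`; a sketch of the equivalent behaviour, including the blocksize front-padding (helper name mine):

```python
def long_to_bytes3(n, blocksize=0):
    # Minimal big-endian encoding; at least one byte even when n == 0.
    s = n.to_bytes(max((n.bit_length() + 7) // 8, 1), 'big')
    # Pad the front with zero bytes up to a multiple of blocksize.
    if blocksize > 0 and len(s) % blocksize:
        s = b'\x00' * (blocksize - len(s) % blocksize) + s
    return s

assert long_to_bytes3(0) == b'\x00'
assert long_to_bytes3(0x0102) == b'\x01\x02'
assert long_to_bytes3(0x0102, 4) == b'\x00\x00\x01\x02'
```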
def bytes_to_long(s):
    """bytes_to_long(string) : long
    Convert a byte string to a long integer.

    This is (essentially) the inverse of long_to_bytes().
    """
    acc = 0L
    unpack = struct.unpack
    length = len(s)
    if length % 4:
        extra = (4 - length % 4)
        s = '\000' * extra + s
        length = length + extra
    for i in range(0, length, 4):
        acc = (acc << 32) + unpack('>I', s[i:i+4])[0]
    return acc
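Likewise, `bytes_to_long` is `int.from_bytes` in Python 3, and the round trip with `int.to_bytes` holds (helper name mine):

```python
def bytes_to_long3(s):
    # Big-endian byte string to integer; leading zero bytes are harmless.
    return int.from_bytes(s, 'big')

assert bytes_to_long3(b'\x01\x02') == 0x0102
assert bytes_to_long3(b'\x00\x00\xff') == 255
n = 123456789
assert bytes_to_long3(n.to_bytes(8, 'big')) == n
```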
# For backwards compatibility...
import warnings
def long2str(n, blocksize=0):
    warnings.warn("long2str() has been replaced by long_to_bytes()")
    return long_to_bytes(n, blocksize)
def str2long(s):
    warnings.warn("str2long() has been replaced by bytes_to_long()")
    return bytes_to_long(s)

def _import_Random():
    # This is called in a function instead of at the module level in order
    # to avoid problems with recursive imports
    global Random, StrongRandom
    from Crypto import Random
    from Crypto.Random.random import StrongRandom
@@ -1,84 +0,0 @@
# -*- coding: utf-8 -*-
#
# Util/python_compat.py : Compatibility code for old versions of Python
#
# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net>
#
# ===================================================================
# The contents of this file are dedicated to the public domain.  To
# the extent that dedication to the public domain is not available,
# everyone is granted a worldwide, perpetual, royalty-free,
# non-exclusive license to exercise all rights associated with the
# contents of this file for any purpose whatsoever.
# No rights are reserved.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# ===================================================================

"""Compatibility code for old versions of Python

Currently, this just defines:
    - True and False
    - object
    - isinstance
"""

__revision__ = "$Id$"
__all__ = []

import sys
import __builtin__

# 'True' and 'False' aren't defined in Python 2.1.  Define them.
try:
    True, False
except NameError:
    (True, False) = (1, 0)
    __all__ += ['True', 'False']

# New-style classes were introduced in Python 2.2.  Defining "object" in Python
# 2.1 lets us use new-style classes in versions of Python that support them,
# while still maintaining backward compatibility with old-style classes
try:
    object
except NameError:
    class object: pass
    __all__ += ['object']

# Starting with Python 2.2, isinstance allows a tuple for the second argument.
# Also, builtins like "tuple", "list", "str", "unicode", "int", and "long"
# became first-class types, rather than functions.  We want to support
# constructs like:
#     isinstance(x, (int, long))
# So we hack it for Python 2.1.
try:
    isinstance(5, (int, long))
except TypeError:
    __all__ += ['isinstance']
    _builtin_type_map = {
        tuple: type(()),
        list: type([]),
        str: type(""),
        unicode: type(u""),
        int: type(0),
        long: type(0L),
        }
    def isinstance(obj, t):
        if not __builtin__.isinstance(t, type(())):
            # t is not a tuple
            return __builtin__.isinstance(obj, _builtin_type_map.get(t, t))
        else:
            # t is a tuple
            for typ in t:
                if __builtin__.isinstance(obj, _builtin_type_map.get(typ, typ)):
                    return True
            return False

# vim:set ts=4 sw=4 sts=4 expandtab:
@@ -1,46 +0,0 @@
# -*- coding: utf-8 -*-
#
# ===================================================================
# The contents of this file are dedicated to the public domain.  To
# the extent that dedication to the public domain is not available,
# everyone is granted a worldwide, perpetual, royalty-free,
# non-exclusive license to exercise all rights associated with the
# contents of this file for any purpose whatsoever.
# No rights are reserved.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# ===================================================================

"""Python Cryptography Toolkit

A collection of cryptographic modules implementing various algorithms
and protocols.

Subpackages:
Crypto.Cipher       Secret-key encryption algorithms (AES, DES, ARC4)
Crypto.Hash         Hashing algorithms (MD5, SHA, HMAC)
Crypto.Protocol     Cryptographic protocols (Chaffing, all-or-nothing
                    transform).  This package does not contain any
                    network protocols.
Crypto.PublicKey    Public-key encryption and signature algorithms
                    (RSA, DSA)
Crypto.Util         Various useful modules and functions (long-to-string
                    conversion, random number generation, number
                    theoretic functions)
"""

__all__ = ['Cipher', 'Hash', 'Protocol', 'PublicKey', 'Util']

__version__ = '2.3'     # See also below and setup.py
__revision__ = "$Id$"

# New software should look at this instead of at __version__ above.
version_info = (2, 1, 0, 'final', 0)    # See also above and setup.py