Initial Commit of Stratum + Getwork Proxy

Ahmed Bodiwala 2013-11-20 11:31:07 +00:00
commit 429baa72a5
33 changed files with 3605 additions and 0 deletions

15
LICENSE Normal file

@@ -0,0 +1,15 @@
Stratum mining - *coin pool using Stratum protocol
Copyright (C) 2012 Marek Palatinus <info@bitcoin.cz>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as
published by the Free Software Foundation, either version 3 of the
License, or any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.

58
README.md Normal file

@@ -0,0 +1,58 @@
# Description
Stratum-mining is a pooled mining protocol. It is a replacement for *getwork*-based pooling servers, allowing clients to generate work themselves. The stratum protocol is described [here](http://mining.bitcoin.cz/stratum-mining) in full detail.
This is an implementation of stratum-mining for scrypt-based coins. It is compatible with *MPOS* as well as *mmcfe*, as it complies with the standards of *pushpool*. The end goal is to build on these standards to come up with a more stable solution.
The goal is to make a reliable stratum mining server for scrypt-based coins. Over time I will develop this to be more feature-rich and very stable. If you would like to see a feature, please file a feature request.
**NOTE:** This fork is still in development. Many features may be broken. Please report any broken features or issues.
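As a quick smoke test of the socket transport (port 3333 in the sample config), a client can speak the standard stratum handshake; a minimal sketch, assuming a server on localhost:

import socket, json

s = socket.create_connection(('localhost', 3333)) # LISTEN_SOCKET_TRANSPORT
s.sendall(json.dumps({'id': 1, 'method': 'mining.subscribe', 'params': []}) + '\n')
print s.recv(4096) # subscription result, then mining.set_difficulty / mining.notify lines
s.close()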
# Features
* Stratum Mining Pool
* Solved Block Confirmation
* Vardiff support
* Solution Block Hash Support
* *NEW* SHA256 and Scrypt Algo Support
* Log Rotation
* Initial low difficulty share confirmation
* Multiple *coind* wallets
* On the fly addition of new *coind* wallets
* MySQL database support
* Adjustable database commit parameters
* Bypass password check for workers
# Requirements
*stratum-mining* is built in Python. I have been testing it with 2.7.3, but it should work with other versions. The requirements for running the software are listed below.
* Python 2.7+
* python-twisted
* stratum
* MySQL Server
* SHA256 or Scrypt CoinDaemon
Other coins have been known to work with this implementation. I have tested with the following coins, but there may be many others that work.
* Orbitcoin
* FireFlyCoin
# Installation
Installation instructions for *stratum-mining* can be found in the INSTALL.md file.
# Contact
I am available in the #MPOS, #crypto-expert, #digitalcoin, #bytecoin and #worldcoin channels on freenode. Although I am willing to provide support through IRC, please file issues on the repo.
# Credits
* Original version by Slush0 (original stratum code)
* More Features added by GeneralFault, Wadee Womersley and Moopless
* Scrypt conversion from work done by viperaus
* PoS conversion done by TheSeven
* Modifications to make it more user friendly and easier to setup for multiple coins done by Ahmed_Bodi
# License
This software is provided AS-IS without any warranties of any kind. Please use at your own risk.

0
conf/__init__.py Normal file

147
conf/config_sample.py Normal file

@@ -0,0 +1,147 @@
'''
This is an example configuration for the Stratum server.
Please rename it to config.py and fill in correct values.
It is already set up with sane values for solo mining.
You NEED to set the parameters in BASIC SETTINGS.
'''
# ******************** BASIC SETTINGS ***************
# These are the MUST BE SET parameters!
CENTRAL_WALLET = 'set_valid_addresss_in_config!' # local coin address where money goes
COINDAEMON_TRUSTED_HOST = 'localhost'
COINDAEMON_TRUSTED_PORT = 8332
COINDAEMON_TRUSTED_USER = 'user'
COINDAEMON_TRUSTED_PASSWORD = 'somepassword'
# ******************** BASIC SETTINGS ***************
# Backup Coin Daemon addresses (consider having at least 1 backup)
# You can have up to 99
#COINDAEMON_TRUSTED_HOST_1 = 'localhost'
#COINDAEMON_TRUSTED_PORT_1 = 8332
#COINDAEMON_TRUSTED_USER_1 = 'user'
#COINDAEMON_TRUSTED_PASSWORD_1 = 'somepassword'
#COINDAEMON_TRUSTED_HOST_2 = 'localhost'
#COINDAEMON_TRUSTED_PORT_2 = 8332
#COINDAEMON_TRUSTED_USER_2 = 'user'
#COINDAEMON_TRUSTED_PASSWORD_2 = 'somepassword'
# ******************** GENERAL SETTINGS ***************
# Enable some verbose debug (logging requests and responses).
DEBUG = False
# Destination for application logs, files rotated once per day.
LOGDIR = 'log/'
# Main application log file.
LOGFILE = None # eg. 'stratum.log'
# Log rotation can be enabled with the following settings.
# If it is not enabled here, you can set up logrotate to rotate the files.
# For built-in log rotation set LOG_ROTATION = True and configure the variables below.
LOG_ROTATION = True
LOG_SIZE = 10485760 # Rotate every 10M
LOG_RETENTION = 10 # Keep 10 Logs
# How many threads to use for synchronous methods (services).
# 30 is enough for a small installation; for real usage
# it should be somewhat higher, say 100-300.
THREAD_POOL_SIZE = 300
# Disable the example service
ENABLE_EXAMPLE_SERVICE = False
# ******************** TRANSPORTS *********************
# Hostname or external IP to expose
HOSTNAME = 'localhost'
# Port used for Socket transport. Use 'None' for disabling the transport.
LISTEN_SOCKET_TRANSPORT = 3333
# Port used for HTTP Poll transport. Use 'None' for disabling the transport
LISTEN_HTTP_TRANSPORT = None
# Port used for HTTPS Poll transport
LISTEN_HTTPS_TRANSPORT = None
# Port used for WebSocket transport, 'None' for disabling WS
LISTEN_WS_TRANSPORT = None
# Port used for secure WebSocket, 'None' for disabling WSS
LISTEN_WSS_TRANSPORT = None
# Salt used when hashing passwords
PASSWORD_SALT = 'some_crazy_string'
# ******************** Database *********************
# MySQL
DB_MYSQL_HOST = 'localhost'
DB_MYSQL_DBNAME = 'pooldb'
DB_MYSQL_USER = 'pooldb'
DB_MYSQL_PASS = '**empty**'
# ******************** Adv. DB Settings *********************
# Don't change these unless you know what you are doing
DB_LOADER_CHECKTIME = 15 # How often we check to see if we should run the loader
DB_LOADER_REC_MIN = 10 # Min Records before the bulk loader fires
DB_LOADER_REC_MAX = 50 # Max Records the bulk loader will commit at a time
DB_LOADER_FORCE_TIME = 300 # How often the cache should be flushed into the DB regardless of size.
DB_STATS_AVG_TIME = 300 # When using the DATABASE_EXTEND option, average speed over X sec
# Note: this is also how often it updates
DB_USERCACHE_TIME = 600 # How long the usercache is good for before we refresh
# ******************** Pool Settings *********************
# User Auth Options
USERS_AUTOADD = False # Automatically add users to db when they connect.
# This basically disables User Auth for the pool.
USERS_CHECK_PASSWORD = False # Check the workers password? (Many pools don't)
# Transaction Settings
COINBASE_EXTRAS = '/stratumPool/' # Extra Descriptive String to incorporate in solved blocks
ALLOW_NONLOCAL_WALLET = False # Allow valid, but NON-Local wallet's
# Coin Daemon communication polling settings (In Seconds)
PREVHASH_REFRESH_INTERVAL = 5 # How often to check for new Blocks
# If using the blocknotify script (recommended) set = to MERKLE_REFRESH_INTERVAL
# (No reason to poll if we're getting pushed notifications)
MERKLE_REFRESH_INTERVAL = 60 # How often to check the memory pool
# This effectively resets the template and incorporates new transactions.
# This should be "slow".
INSTANCE_ID = 31 # Used for extranonce and needs to be 0-31
# ******************** Pool Difficulty Settings *********************
# Again, don't change these unless you know what they are for.
# Pool Target (Base Difficulty)
# In order to match the pool target with a frontend like MPOS, the following formula is used: (stratum diff) ~= 2^((target bits in pushpool) - 16)
# E.g. a pool target of 16 corresponds to MPOS and pushpool target bits of 20
POOL_TARGET = 16 # Pool-wide difficulty target int >= 1
# Variable Difficulty Enable
VARIABLE_DIFF = True # Master variable difficulty enable
# Variable diff tuning variables
#VARDIFF will start at the POOL_TARGET. It can go as low as VDIFF_MIN_TARGET and as high as min(VDIFF_MAX_TARGET, the coin daemon's difficulty)
USE_COINDAEMON_DIFF = False # Set the maximum difficulty to the coin daemon's difficulty.
DIFF_UPDATE_FREQUENCY = 86400 # Update the COINDAEMON difficulty once a day for the VARDIFF maximum
VDIFF_MIN_TARGET = 15 # Minimum Target difficulty
VDIFF_MAX_TARGET = 1000 # Maximum Target difficulty
VDIFF_TARGET_TIME = 30 # Target time per share (i.e. try to get 1 share per this many seconds)
VDIFF_RETARGET_TIME = 120 # Check to see if we should retarget this often
VDIFF_VARIANCE_PERCENT = 20 # Allow average share time to vary this % from target without retargeting
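# Illustrative numbers: with VDIFF_TARGET_TIME = 30 and VDIFF_VARIANCE_PERCENT = 20,
# the accepted average share time is 30s +/- 20% (24s to 36s); an average outside
# that window at a VDIFF_RETARGET_TIME check prompts a retarget within
# [VDIFF_MIN_TARGET, VDIFF_MAX_TARGET].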
#### Advanced Option #####
# For backwards compatibility, we store the scrypt hash in the solutions column of the shares table
# For block confirmation, we have the option of storing the block hash instead
# Please make sure your front end is compatible with the block hash in the solutions table.
# For people using the MPOS frontend, enabling this is recommended: it lets the frontend compare the block hash against the coin daemon, reducing the likelihood of "missing share" errors for blocks
SOLUTION_BLOCK_HASH = False # If True, store the block hash; if False, store the scrypt/sha hash in the shares table

36
launcher.tac Normal file

@@ -0,0 +1,36 @@
# Run me with "twistd -ny launcher.tac -l -"
# Add conf directory to python path.
# Configuration file is standard python module.
import os, sys
sys.path = [os.path.join(os.getcwd(), 'conf'),os.path.join(os.getcwd(), 'externals', 'stratum-mining-proxy'),] + sys.path
from twisted.internet import defer
# Start listening once the mining service is ready
on_startup = defer.Deferred()
import stratum
import lib.settings as settings
# Bootstrap Stratum framework
application = stratum.setup(on_startup)
# Load mining service into stratum framework
import mining
from mining.interfaces import Interfaces
from mining.interfaces import WorkerManagerInterface, TimestamperInterface, \
ShareManagerInterface, ShareLimiterInterface
if settings.VARIABLE_DIFF == True:
from mining.basic_share_limiter import BasicShareLimiter
Interfaces.set_share_limiter(BasicShareLimiter())
else:
from mining.interfaces import ShareLimiterInterface
Interfaces.set_share_limiter(ShareLimiterInterface())
Interfaces.set_share_manager(ShareManagerInterface())
Interfaces.set_worker_manager(WorkerManagerInterface())
Interfaces.set_timestamper(TimestamperInterface())
mining.setup(on_startup)

0
lib/__init__.py Normal file

98
lib/bitcoin_rpc.py Normal file

@@ -0,0 +1,98 @@
'''
Implements simple interface to a coin daemon's RPC.
'''
import simplejson as json
import base64
from twisted.internet import defer
from twisted.web import client
import time
import lib.logger
log = lib.logger.get_logger('bitcoin_rpc')
class BitcoinRPC(object):
def __init__(self, host, port, username, password):
self.bitcoin_url = 'http://%s:%d' % (host, port)
self.credentials = base64.b64encode("%s:%s" % (username, password))
self.headers = {
'Content-Type': 'text/json',
'Authorization': 'Basic %s' % self.credentials,
}
client.HTTPClientFactory.noisy = False
def _call_raw(self, data):
return client.getPage(
url=self.bitcoin_url,
method='POST',
headers=self.headers,
postdata=data,
)
def _call(self, method, params):
return self._call_raw(json.dumps({
'jsonrpc': '2.0',
'method': method,
'params': params,
'id': '1',
}))
@defer.inlineCallbacks
def submitblock(self, block_hex, block_hash_hex):
# Try submitblock first; if it fails, fall back to getblocktemplate submission
try:
resp = (yield self._call('submitblock', [block_hex,]))
except Exception:
try:
resp = (yield self._call('getblocktemplate', [{'mode': 'submit', 'data': block_hex}]))
except Exception as e:
log.exception("Problem Submitting block %s" % str(e))
raise
if json.loads(resp)['result'] is None:
# a null result means the daemon accepted the block; verify it exists
defer.returnValue((yield self.blockexists(block_hash_hex)))
else:
defer.returnValue(False)
@defer.inlineCallbacks
def getinfo(self):
resp = (yield self._call('getinfo', []))
defer.returnValue(json.loads(resp)['result'])
@defer.inlineCallbacks
def getblocktemplate(self):
resp = (yield self._call('getblocktemplate', [{}]))
defer.returnValue(json.loads(resp)['result'])
@defer.inlineCallbacks
def prevhash(self):
resp = (yield self._call('getwork', []))
try:
defer.returnValue(json.loads(resp)['result']['data'][8:72])
except Exception as e:
log.exception("Cannot decode prevhash %s" % str(e))
raise
@defer.inlineCallbacks
def validateaddress(self, address):
resp = (yield self._call('validateaddress', [address,]))
defer.returnValue(json.loads(resp)['result'])
@defer.inlineCallbacks
def getdifficulty(self):
resp = (yield self._call('getdifficulty', []))
defer.returnValue(json.loads(resp)['result'])
@defer.inlineCallbacks
def blockexists(self, block_hash_hex):
resp = (yield self._call('getblock', [block_hash_hex,]))
if "hash" in json.loads(resp)['result'] and json.loads(resp)['result']['hash'] == block_hash_hex:
log.debug("Block Confirmed: %s" % block_hash_hex)
defer.returnValue(True)
else:
log.info("Cannot find block for %s" % block_hash_hex)
defer.returnValue(False)
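A minimal sketch of driving this client from an inlineCallbacks coroutine; host, port and credentials below are placeholders:

from twisted.internet import defer, reactor
from lib.bitcoin_rpc import BitcoinRPC

@defer.inlineCallbacks
def dump_info():
rpc = BitcoinRPC('localhost', 8332, 'user', 'somepassword') # placeholder credentials
info = yield rpc.getinfo() # parsed 'result' of the getinfo RPC
print info['blocks']
reactor.stop()

reactor.callWhenRunning(dump_info)
reactor.run()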

133
lib/bitcoin_rpc_manager.py Normal file

@@ -0,0 +1,133 @@
'''
Implements simple interface to a coin daemon's RPC.
'''
import simplejson as json
from twisted.internet import defer
import settings
import time
import lib.logger
log = lib.logger.get_logger('bitcoin_rpc_manager')
from lib.bitcoin_rpc import BitcoinRPC
class BitcoinRPCManager(object):
def __init__(self):
self.conns = {}
self.conns[0] = BitcoinRPC(settings.COINDAEMON_TRUSTED_HOST,
settings.COINDAEMON_TRUSTED_PORT,
settings.COINDAEMON_TRUSTED_USER,
settings.COINDAEMON_TRUSTED_PASSWORD)
self.curr_conn = 0
for x in range (1, 99):
if hasattr(settings, 'COINDAEMON_TRUSTED_HOST_' + str(x)) and hasattr(settings, 'COINDAEMON_TRUSTED_PORT_' + str(x)) and hasattr(settings, 'COINDAEMON_TRUSTED_USER_' + str(x)) and hasattr(settings, 'COINDAEMON_TRUSTED_PASSWORD_' + str(x)):
self.conns[len(self.conns)] = BitcoinRPC(settings.__dict__['COINDAEMON_TRUSTED_HOST_' + str(x)],
settings.__dict__['COINDAEMON_TRUSTED_PORT_' + str(x)],
settings.__dict__['COINDAEMON_TRUSTED_USER_' + str(x)],
settings.__dict__['COINDAEMON_TRUSTED_PASSWORD_' + str(x)])
def add_connection(self, host, port, user, password):
# TODO: Some string sanity checks
self.conns[len(self.conns)] = BitcoinRPC(host, port, user, password)
def next_connection(self):
time.sleep(1)
if len(self.conns) <= 1:
log.error("Problem with Pool 0 -- NO ALTERNATE POOLS!!!")
time.sleep(4)
return
log.error("Problem with Pool %i Switching to Next!" % (self.curr_conn) )
self.curr_conn = self.curr_conn + 1
if self.curr_conn >= len(self.conns):
self.curr_conn = 0
@defer.inlineCallbacks
def check_height(self):
while True:
try:
resp = (yield self.conns[self.curr_conn]._call('getinfo', []))
break
except:
log.error("Check Height -- Pool %i Down!" % (self.curr_conn) )
self.next_connection()
curr_height = json.loads(resp)['result']['blocks']
log.debug("Check Height -- Current Pool %i : %i" % (self.curr_conn,curr_height) )
for i in self.conns:
if i == self.curr_conn:
continue
try:
resp = (yield self.conns[i]._call('getinfo', []))
except:
log.error("Check Height -- Pool %i Down!" % (i,) )
continue
height = json.loads(resp)['result']['blocks']
log.debug("Check Height -- Pool %i : %i" % (i,height) )
if height > curr_height:
self.curr_conn = i
defer.returnValue(True)
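# Note on the retry wrappers below: _call_raw()/_call() and the RPC helpers
# return Deferreds, so these except clauses only catch errors raised
# synchronously while issuing the call; asynchronous RPC failures surface
# through each Deferred's errback instead.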
def _call_raw(self, data):
while True:
try:
return self.conns[self.curr_conn]._call_raw(data)
except:
self.next_connection()
def _call(self, method, params):
while True:
try:
return self.conns[self.curr_conn]._call(method,params)
except:
self.next_connection()
def submitblock(self, block_hex, block_hash_hex):
while True:
try:
return self.conns[self.curr_conn].submitblock(block_hex, block_hash_hex)
except:
self.next_connection()
def getinfo(self):
while True:
try:
return self.conns[self.curr_conn].getinfo()
except:
self.next_connection()
def getblocktemplate(self):
while True:
try:
return self.conns[self.curr_conn].getblocktemplate()
except:
self.next_connection()
def prevhash(self):
self.check_height()
while True:
try:
return self.conns[self.curr_conn].prevhash()
except:
self.next_connection()
def validateaddress(self, address):
while True:
try:
return self.conns[self.curr_conn].validateaddress(address)
except:
self.next_connection()
def getdifficulty(self):
while True:
try:
return self.conns[self.curr_conn].getdifficulty()
except:
self.next_connection()

140
lib/block_template.py Normal file

@@ -0,0 +1,140 @@
import StringIO
import binascii
import struct
import util
import merkletree
import halfnode
from coinbasetx import CoinbaseTransaction
# Remove the dependency on settings; coinbase extras should be
# provided by the coinbaser
import settings
class BlockTemplate(halfnode.CBlock):
'''Template used for generating new jobs for clients.
Let's iterate extranonce1, extranonce2, ntime and nonce
to find a valid coin block!'''
coinbase_transaction_class = CoinbaseTransaction
def __init__(self, timestamper, coinbaser, job_id):
super(BlockTemplate, self).__init__()
self.job_id = job_id
self.timestamper = timestamper
self.coinbaser = coinbaser
self.prevhash_bin = '' # reversed binary form of prevhash
self.prevhash_hex = ''
self.timedelta = 0
self.curtime = 0
self.target = 0
#self.coinbase_hex = None
self.merkletree = None
self.broadcast_args = []
# List of 4-tuples (extranonce1, extranonce2, ntime, nonce)
# registers shares that have already been submitted and checked
# Invalid shares may be registered here as well!
self.submits = []
def fill_from_rpc(self, data):
'''Convert getblocktemplate result into BlockTemplate instance'''
#txhashes = [None] + [ binascii.unhexlify(t['hash']) for t in data['transactions'] ]
txhashes = [None] + [ util.ser_uint256(int(t['hash'], 16)) for t in data['transactions'] ]
mt = merkletree.MerkleTree(txhashes)
coinbase = self.coinbase_transaction_class(self.timestamper, self.coinbaser, data['coinbasevalue'],
data['coinbaseaux']['flags'], data['height'], settings.COINBASE_EXTRAS)
self.height = data['height']
self.nVersion = data['version']
self.hashPrevBlock = int(data['previousblockhash'], 16)
self.nBits = int(data['bits'], 16)
self.hashMerkleRoot = 0
self.nTime = 0
self.nNonce = 0
self.vtx = [ coinbase, ]
for tx in data['transactions']:
t = halfnode.CTransaction()
t.deserialize(StringIO.StringIO(binascii.unhexlify(tx['data'])))
self.vtx.append(t)
self.curtime = data['curtime']
self.timedelta = self.curtime - int(self.timestamper.time())
self.merkletree = mt
self.target = util.uint256_from_compact(self.nBits)
# Reversed prevhash
self.prevhash_bin = binascii.unhexlify(util.reverse_hash(data['previousblockhash']))
self.prevhash_hex = "%064x" % self.hashPrevBlock
self.broadcast_args = self.build_broadcast_args()
def register_submit(self, extranonce1, extranonce2, ntime, nonce):
'''Client submitted some solution. Let's register it to
prevent double submissions.'''
t = (extranonce1, extranonce2, ntime, nonce)
if t not in self.submits:
self.submits.append(t)
return True
return False
def build_broadcast_args(self):
'''Build parameters of mining.notify call. All clients
may receive the same params, because they include
their unique extranonce1 into the coinbase, so every
coinbase_hash (and then merkle_root) will be unique as well.'''
job_id = self.job_id
prevhash = binascii.hexlify(self.prevhash_bin)
(coinb1, coinb2) = [ binascii.hexlify(x) for x in self.vtx[0]._serialized ]
merkle_branch = [ binascii.hexlify(x) for x in self.merkletree._steps ]
version = binascii.hexlify(struct.pack(">i", self.nVersion))
nbits = binascii.hexlify(struct.pack(">I", self.nBits))
ntime = binascii.hexlify(struct.pack(">I", self.curtime))
clean_jobs = True
return (job_id, prevhash, coinb1, coinb2, merkle_branch, version, nbits, ntime, clean_jobs)
def serialize_coinbase(self, extranonce1, extranonce2):
'''Serialize coinbase with given extranonce1 and extranonce2
in binary form'''
(part1, part2) = self.vtx[0]._serialized
return part1 + extranonce1 + extranonce2 + part2
def check_ntime(self, ntime):
'''Check for ntime restrictions.'''
if ntime < self.curtime:
return False
if ntime > (self.timestamper.time() + 7200):
# Be strict on ntime into the near future
# may be unnecessary
return False
return True
def serialize_header(self, merkle_root_int, ntime_bin, nonce_bin):
'''Serialize header for calculating block hash'''
r = struct.pack(">i", self.nVersion)
r += self.prevhash_bin
r += util.ser_uint256_be(merkle_root_int)
r += ntime_bin
r += struct.pack(">I", self.nBits)
r += nonce_bin
return r
def finalize(self, merkle_root_int, extranonce1_bin, extranonce2_bin, ntime, nonce):
'''Take all parameters required to compile block candidate.
self.is_valid() should return True then...'''
self.hashMerkleRoot = merkle_root_int
self.nTime = ntime
self.nNonce = nonce
self.vtx[0].set_extranonce(extranonce1_bin + extranonce2_bin)
self.sha256 = None # We changed block parameters, let's reset sha256 cache

65
lib/block_updater.py Normal file

@@ -0,0 +1,65 @@
from twisted.internet import reactor, defer
import settings
import util
from mining.interfaces import Interfaces
import lib.logger
log = lib.logger.get_logger('block_updater')
class BlockUpdater(object):
'''
Polls the upstream coin daemon and detects new blocks on the network.
Calls registry.update_block when a new prevhash appears.
This is just a fallback for when something goes
wrong with ./litecoind -blocknotify.
'''
def __init__(self, registry, bitcoin_rpc):
self.bitcoin_rpc = bitcoin_rpc
self.registry = registry
self.clock = None
self.schedule()
def schedule(self):
when = self._get_next_time()
log.debug("Next prevhash update in %.03f sec" % when)
log.debug("Merkle update in next %.03f sec" % \
((self.registry.last_update + settings.MERKLE_REFRESH_INTERVAL)-Interfaces.timestamper.time()))
self.clock = reactor.callLater(when, self.run)
def _get_next_time(self):
when = settings.PREVHASH_REFRESH_INTERVAL - (Interfaces.timestamper.time() - self.registry.last_update) % \
settings.PREVHASH_REFRESH_INTERVAL
return when
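# Example: with PREVHASH_REFRESH_INTERVAL = 5 and a last_update 12 s ago,
# when = 5 - (12 % 5) = 3, which keeps the polls aligned to a 5-second grid.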
@defer.inlineCallbacks
def run(self):
update = False
try:
if self.registry.last_block:
current_prevhash = "%064x" % self.registry.last_block.hashPrevBlock
else:
current_prevhash = None
log.info("Checking for new block.")
prevhash = util.reverse_hash((yield self.bitcoin_rpc.prevhash()))
if prevhash and prevhash != current_prevhash:
log.info("New block! Prevhash: %s" % prevhash)
update = True
elif Interfaces.timestamper.time() - self.registry.last_update >= settings.MERKLE_REFRESH_INTERVAL:
log.info("Merkle update! Prevhash: %s" % prevhash)
update = True
if update:
self.registry.update_block()
except Exception:
log.exception("UpdateWatchdog.run failed")
finally:
self.schedule()

67
lib/coinbaser.py Normal file

@@ -0,0 +1,67 @@
import util
from twisted.internet import defer
import settings
import lib.logger
log = lib.logger.get_logger('coinbaser')
# TODO: Add on_* hooks in the app
class SimpleCoinbaser(object):
'''This very simple coinbaser uses a constant coin address
for all generated blocks.'''
def __init__(self, bitcoin_rpc, address):
# Fire callback when coinbaser is ready
self.on_load = defer.Deferred()
self.address = address
self.is_valid = False # We need to check if pool can use this address
self.bitcoin_rpc = bitcoin_rpc
self._validate()
def _validate(self):
d = self.bitcoin_rpc.validateaddress(self.address)
d.addCallback(self._address_check)
d.addErrback(self._failure)
def _address_check(self, result):
if result['isvalid'] and result['ismine']:
self.is_valid = True
log.info("Coinbase address '%s' is valid" % self.address)
if not self.on_load.called:
self.on_load.callback(True)
elif result['isvalid'] and settings.ALLOW_NONLOCAL_WALLET == True :
self.is_valid = True
log.warning("!!! Coinbase address '%s' is valid BUT it is not local" % self.address)
if not self.on_load.called:
self.on_load.callback(True)
else:
self.is_valid = False
log.error("Coinbase address '%s' is NOT valid!" % self.address)
def _failure(self, failure):
log.error("Cannot validate Bitcoin address '%s'" % self.address)
return failure # propagate the failure down the errback chain (a bare raise has no active exception here)
#def on_new_block(self):
# pass
#def on_new_template(self):
# pass
def get_script_pubkey(self):
if not self.is_valid:
# Try again, maybe the coind was down?
self._validate()
raise Exception("Coinbase address is not validated!")
return util.script_to_address(self.address)
def get_coinbase_data(self):
return ''

49
lib/coinbasetx.py Normal file

@@ -0,0 +1,49 @@
import binascii
import halfnode
import struct
import util
class CoinbaseTransaction(halfnode.CTransaction):
'''Construct special transaction used for coinbase tx.
It also implements quick serialization using pre-cached
scriptSig template.'''
extranonce_type = '>Q'
extranonce_placeholder = struct.pack(extranonce_type, int('f000000ff111111f', 16))
extranonce_size = struct.calcsize(extranonce_type)
def __init__(self, timestamper, coinbaser, value, flags, height, data):
super(CoinbaseTransaction, self).__init__()
#self.extranonce = 0
if len(self.extranonce_placeholder) != self.extranonce_size:
raise Exception("Extranonce placeholder don't match expected length!")
tx_in = halfnode.CTxIn()
tx_in.prevout.hash = 0L
tx_in.prevout.n = 2**32-1
tx_in._scriptSig_template = (
util.ser_number(height) + binascii.unhexlify(flags) + util.ser_number(int(timestamper.time())) + \
chr(self.extranonce_size),
util.ser_string(coinbaser.get_coinbase_data() + data)
)
tx_in.scriptSig = tx_in._scriptSig_template[0] + self.extranonce_placeholder + tx_in._scriptSig_template[1]
tx_out = halfnode.CTxOut()
tx_out.nValue = value
tx_out.scriptPubKey = coinbaser.get_script_pubkey()
self.vin.append(tx_in)
self.vout.append(tx_out)
# Two parts of serialized coinbase, just put part1 + extranonce + part2 to have final serialized tx
self._serialized = super(CoinbaseTransaction, self).serialize().split(self.extranonce_placeholder)
def set_extranonce(self, extranonce):
if len(extranonce) != self.extranonce_size:
raise Exception("Incorrect extranonce size")
(part1, part2) = self.vin[0]._scriptSig_template
self.vin[0].scriptSig = part1 + extranonce + part2

201
lib/config_default.py Executable file

@@ -0,0 +1,201 @@
'''
This is an example configuration for the Stratum server.
Please rename it to config.py and fill in correct values.
'''
# ******************** GENERAL SETTINGS ***************
# Enable some verbose debug (logging requests and responses).
DEBUG = False
# Destination for application logs, files rotated once per day.
LOGDIR = 'log/'
# Main application log file.
LOGFILE = 'stratum.log'
# Possible values: DEBUG, INFO, WARNING, ERROR, CRITICAL
LOGLEVEL = 'DEBUG'
# Log rotation can be enabled with the following settings.
# If it is not enabled here, you can set up logrotate to rotate the files.
# For built-in log rotation set LOG_ROTATION = True and configure the variables below.
LOG_ROTATION = True
LOG_SIZE = 10485760 # Rotate every 10M
LOG_RETENTION = 10 # Keep 10 Logs
# How many threads to use for synchronous methods (services).
# 30 is enough for a small installation; for real usage
# it should be somewhat higher, say 100-300.
THREAD_POOL_SIZE = 300
# RPC call throws TimeoutServiceException once the total time since the request was
# placed (time to delivery to the client + time for processing on the client)
# crosses _TOTAL (in seconds).
# _TOTAL reflects the fact that not all transports deliver RPC requests to the clients
# instantly, so a request can wait some time in a buffer on the server side.
# NOT IMPLEMENTED YET
#RPC_TIMEOUT_TOTAL = 600
# RPC call throws TimeoutServiceException once the client has been processing a request
# for longer than _PROCESS (in seconds)
# NOT IMPLEMENTED YET
#RPC_TIMEOUT_PROCESS = 30
# Do you want to expose the "example" service in the server?
# Useful for learning the server; you probably want to disable
# this in production
ENABLE_EXAMPLE_SERVICE = False
# ******************** TRANSPORTS *********************
# Hostname or external IP to expose
HOSTNAME = 'localhost'
# Port used for Socket transport. Use 'None' for disabling the transport.
LISTEN_SOCKET_TRANSPORT = 3333
# Port used for HTTP Poll transport. Use 'None' for disabling the transport
LISTEN_HTTP_TRANSPORT = None
# Port used for HTTPS Poll transport
LISTEN_HTTPS_TRANSPORT = None
# Port used for WebSocket transport, 'None' for disabling WS
LISTEN_WS_TRANSPORT = None
# Port used for secure WebSocket, 'None' for disabling WSS
LISTEN_WSS_TRANSPORT = None
# ******************** SSL SETTINGS ******************
# Private key and certification file for SSL protected transports
# You can find howto for generating self-signed certificate in README file
SSL_PRIVKEY = 'server.key'
SSL_CACERT = 'server.crt'
# ******************** TCP SETTINGS ******************
# Enables support for socket encapsulation, which is compatible
# with haproxy 1.5+. By enabling this, the first line of received
# data will carry metadata about the proxied stream:
# PROXY <TCP4 or TCP6> <source IP> <dest IP> <source port> <dest port>\n
#
# Full specification: http://haproxy.1wt.eu/download/1.5/doc/proxy-protocol.txt
TCP_PROXY_PROTOCOL = False
# ******************** HTTP SETTINGS *****************
# Keepalive for HTTP transport sessions (at this time for both poll and push)
# High value leads to higher memory usage (all sessions are stored in memory ATM).
# Low value leads to more frequent session reinitializing (like downloading address history).
HTTP_SESSION_TIMEOUT = 3600 # in seconds
# Maximum number of messages (notifications, responses) waiting for delivery to HTTP Poll clients.
# Buffer length is PER CONNECTION. A high value will consume a lot of RAM,
# a short history means that in some edge cases clients won't receive older events.
HTTP_BUFFER_LIMIT = 10000
# User agent used in HTTP requests (for both HTTP transports and for proxy calls from services)
USER_AGENT = 'Stratum/0.1'
# Provide human-friendly user interface on HTTP transports for browsing exposed services.
BROWSER_ENABLE = True
# ******************** *COIND SETTINGS ************
# Hostname and credentials for one trusted Bitcoin node ("Satoshi's client").
# Stratum uses both the P2P port (which is always 8333) and the RPC port
COINDAEMON_TRUSTED_HOST = '127.0.0.1'
COINDAEMON_TRUSTED_PORT = 8332 # RPC port
COINDAEMON_TRUSTED_USER = 'stratum'
COINDAEMON_TRUSTED_PASSWORD = '***somepassword***'
# ******************** OTHER CORE SETTINGS *********************
# Use "echo -n '<yourpassword>' | sha256sum | cut -f1 -d' ' "
# for calculating SHA256 of your preferred password
ADMIN_PASSWORD_SHA256 = None # Admin functionality is disabled
#ADMIN_PASSWORD_SHA256 = '9e6c0c1db1e0dfb3fa5159deb4ecd9715b3c8cd6b06bd4a3ad77e9a8c5694219' # SHA256 of the password
# IP from which admin calls are allowed.
# Set None to allow admin calls from all IPs
ADMIN_RESTRICT_INTERFACE = '127.0.0.1'
# Use "./signature.py > signing_key.pem" to generate unique signing key for your server
SIGNING_KEY = None # Message signing is disabled
#SIGNING_KEY = 'signing_key.pem'
# Origin of signed messages. Provide some unique string,
# ideally URL where users can find some information about your identity
SIGNING_ID = None
#SIGNING_ID = 'stratum.somedomain.com' # Use custom string
#SIGNING_ID = HOSTNAME # Use hostname as the signing ID
# *********************** IRC / PEER CONFIGURATION *************
IRC_NICK = "stratum%s" # Skip IRC registration
#IRC_NICK = "stratum" # Use nickname of your choice
# Which hostname / external IP expose in IRC room
# This should be official HOSTNAME for normal operation.
IRC_HOSTNAME = HOSTNAME
# Don't change this unless you're creating private Stratum cloud.
IRC_SERVER = 'irc.freenode.net'
IRC_ROOM = '#stratum-mining-nodes'
IRC_PORT = 6667
# Hardcoded list of Stratum nodes for clients to switch when this node is not available.
PEERS = [
{
'hostname': 'stratum.bitcoin.cz',
'trusted': True, # This node is trustworthy
'weight': -1, # Higher number means higher priority for selection.
# -1 will work mostly as a backup when other servers won't work.
# (IRC peers have weight=0 automatically).
},
]
'''
DATABASE_DRIVER = 'MySQLdb'
DATABASE_HOST = 'palatinus.cz'
DATABASE_DBNAME = 'marekp_bitcointe'
DATABASE_USER = 'marekp_bitcointe'
DATABASE_PASSWORD = '**empty**'
'''
#VARDIFF
# Variable Difficulty Enable
VARIABLE_DIFF = False # Master variable difficulty enable
# Variable diff tuning variables
#VARDIFF will start at the POOL_TARGET. It can go as low as VDIFF_MIN_TARGET and as high as min(VDIFF_MAX_TARGET, the coin daemon's difficulty)
USE_COINDAEMON_DIFF = False # Set the maximum difficulty to the *coin difficulty.
DIFF_UPDATE_FREQUENCY = 86400 # Update the *coin difficulty once a day for the VARDIFF maximum
VDIFF_MIN_TARGET = 15 # Minimum Target difficulty
VDIFF_MAX_TARGET = 1000 # Maximum Target difficulty
VDIFF_TARGET_TIME = 30 # Target time per share (i.e. try to get 1 share per this many seconds)
VDIFF_RETARGET_TIME = 120 # Check to see if we should retarget this often
VDIFF_VARIANCE_PERCENT = 20 # Allow average share time to vary this % from target without retargeting
#### Advanced Option #####
# For backwards compatibility, we store the scrypt hash in the solutions column of the shares table
# For block confirmation, we have the option of storing the block hash instead
# Please make sure your front end is compatible with the block hash in the solutions table.
SOLUTION_BLOCK_HASH = True # If True, store the block hash; if False, store the scrypt hash in the shares table
# ******************** Adv. DB Settings *********************
# Don't change these unless you know what you are doing
DB_LOADER_CHECKTIME = 15 # How often we check to see if we should run the loader
DB_LOADER_REC_MIN = 1 # Min Records before the bulk loader fires
DB_LOADER_REC_MAX = 50 # Max Records the bulk loader will commit at a time
DB_LOADER_FORCE_TIME = 300 # How often the cache should be flushed into the DB regardless of size.
DB_STATS_AVG_TIME = 300 # When using the DATABASE_EXTEND option, average speed over X sec
# Note: this is also how often it updates
DB_USERCACHE_TIME = 600 # How long the usercache is good for before we refresh

4
lib/exceptions.py Normal file

@@ -0,0 +1,4 @@
from stratum.custom_exceptions import ServiceException
class SubmitException(ServiceException):
pass

25
lib/extranonce_counter.py Normal file

@@ -0,0 +1,25 @@
import struct
class ExtranonceCounter(object):
'''Implementation of a counter producing
unique extranonce across all pool instances.
This is just a dumb "quick&dirty" solution,
but it can be changed at any time without breaking anything.'''
def __init__(self, instance_id):
if instance_id < 0 or instance_id > 31:
raise Exception("Current ExtranonceCounter implementation needs an instance_id in <0, 31>.")
# The 5 most-significant bits represent the instance_id.
# The rest is just a job iterator.
self.counter = instance_id << 27
self.size = struct.calcsize('>L')
def get_size(self):
'''Return expected size of generated extranonce in bytes'''
return self.size
def get_new_bin(self):
self.counter += 1
return struct.pack('>L', self.counter)
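A worked example of the counter's bit layout; the values follow directly from the code above (import path assumed):

from lib.extranonce_counter import ExtranonceCounter
import binascii

c = ExtranonceCounter(31) # instance_id occupies the top 5 bits
print c.get_size() # 4, i.e. struct.calcsize('>L')
print binascii.hexlify(c.get_new_bin()) # 'f8000001' == (31 << 27) | 1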

542
lib/halfnode.py Normal file

@@ -0,0 +1,542 @@
#!/usr/bin/python
# Public Domain
# Original author: ArtForz
# Twisted integration: slush
import struct
import socket
import binascii
import time
import sys
import random
import cStringIO
from Crypto.Hash import SHA256
from twisted.internet.protocol import Protocol
from util import *
import ltc_scrypt
import lib.logger
log = lib.logger.get_logger('halfnode')
MY_VERSION = 31402
MY_SUBVERSION = ".4"
class CAddress(object):
def __init__(self):
self.nTime = 0
self.nServices = 1
self.pchReserved = "\x00" * 10 + "\xff" * 2
self.ip = "0.0.0.0"
self.port = 0
def deserialize(self, f):
#self.nTime = struct.unpack("<I", f.read(4))[0]
self.nServices = struct.unpack("<Q", f.read(8))[0]
self.pchReserved = f.read(12)
self.ip = socket.inet_ntoa(f.read(4))
self.port = struct.unpack(">H", f.read(2))[0]
def serialize(self):
r = ""
#r += struct.pack("<I", self.nTime)
r += struct.pack("<Q", self.nServices)
r += self.pchReserved
r += socket.inet_aton(self.ip)
r += struct.pack(">H", self.port)
return r
def __repr__(self):
return "CAddress(nServices=%i ip=%s port=%i)" % (self.nServices, self.ip, self.port)
class CInv(object):
typemap = {
0: "Error",
1: "TX",
2: "Block"}
def __init__(self):
self.type = 0
self.hash = 0L
def deserialize(self, f):
self.type = struct.unpack("<i", f.read(4))[0]
self.hash = deser_uint256(f)
def serialize(self):
r = ""
r += struct.pack("<i", self.type)
r += ser_uint256(self.hash)
return r
def __repr__(self):
return "CInv(type=%s hash=%064x)" % (self.typemap[self.type], self.hash)
class CBlockLocator(object):
def __init__(self):
self.nVersion = MY_VERSION
self.vHave = []
def deserialize(self, f):
self.nVersion = struct.unpack("<i", f.read(4))[0]
self.vHave = deser_uint256_vector(f)
def serialize(self):
r = ""
r += struct.pack("<i", self.nVersion)
r += ser_uint256_vector(self.vHave)
return r
def __repr__(self):
return "CBlockLocator(nVersion=%i vHave=%s)" % (self.nVersion, repr(self.vHave))
class COutPoint(object):
def __init__(self):
self.hash = 0
self.n = 0
def deserialize(self, f):
self.hash = deser_uint256(f)
self.n = struct.unpack("<I", f.read(4))[0]
def serialize(self):
r = ""
r += ser_uint256(self.hash)
r += struct.pack("<I", self.n)
return r
def __repr__(self):
return "COutPoint(hash=%064x n=%i)" % (self.hash, self.n)
class CTxIn(object):
def __init__(self):
self.prevout = COutPoint()
self.scriptSig = ""
self.nSequence = 0
def deserialize(self, f):
self.prevout = COutPoint()
self.prevout.deserialize(f)
self.scriptSig = deser_string(f)
self.nSequence = struct.unpack("<I", f.read(4))[0]
def serialize(self):
r = ""
r += self.prevout.serialize()
r += ser_string(self.scriptSig)
r += struct.pack("<I", self.nSequence)
return r
def __repr__(self):
return "CTxIn(prevout=%s scriptSig=%s nSequence=%i)" % (repr(self.prevout), binascii.hexlify(self.scriptSig), self.nSequence)
class CTxOut(object):
def __init__(self):
self.nValue = 0
self.scriptPubKey = ""
def deserialize(self, f):
self.nValue = struct.unpack("<q", f.read(8))[0]
self.scriptPubKey = deser_string(f)
def serialize(self):
r = ""
r += struct.pack("<q", self.nValue)
r += ser_string(self.scriptPubKey)
return r
def __repr__(self):
return "CTxOut(nValue=%i.%08i scriptPubKey=%s)" % (self.nValue // 100000000, self.nValue % 100000000, binascii.hexlify(self.scriptPubKey))
class CTransaction(object):
def __init__(self):
self.nVersion = 1
self.vin = []
self.vout = []
self.nLockTime = 0
self.sha256 = None
def deserialize(self, f):
self.nVersion = struct.unpack("<i", f.read(4))[0]
self.vin = deser_vector(f, CTxIn)
self.vout = deser_vector(f, CTxOut)
self.nLockTime = struct.unpack("<I", f.read(4))[0]
self.sha256 = None
def serialize(self):
r = ""
r += struct.pack("<i", self.nVersion)
r += ser_vector(self.vin)
r += ser_vector(self.vout)
r += struct.pack("<I", self.nLockTime)
return r
def calc_sha256(self):
if self.sha256 is None:
self.sha256 = uint256_from_str(SHA256.new(SHA256.new(self.serialize()).digest()).digest())
return self.sha256
def is_valid(self):
self.calc_sha256()
for tout in self.vout:
if tout.nValue < 0 or tout.nValue > 21000000L * 100000000L:
return False
return True
def __repr__(self):
return "CTransaction(nVersion=%i vin=%s vout=%s nLockTime=%i)" % (self.nVersion, repr(self.vin), repr(self.vout), self.nLockTime)
class CBlock(object):
def __init__(self):
self.nVersion = 1
self.hashPrevBlock = 0
self.hashMerkleRoot = 0
self.nTime = 0
self.nBits = 0
self.nNonce = 0
self.vtx = []
self.sha256 = None
self.scrypt = None
def deserialize(self, f):
self.nVersion = struct.unpack("<i", f.read(4))[0]
self.hashPrevBlock = deser_uint256(f)
self.hashMerkleRoot = deser_uint256(f)
self.nTime = struct.unpack("<I", f.read(4))[0]
self.nBits = struct.unpack("<I", f.read(4))[0]
self.nNonce = struct.unpack("<I", f.read(4))[0]
self.vtx = deser_vector(f, CTransaction)
def serialize(self):
r = []
r.append(struct.pack("<i", self.nVersion))
r.append(ser_uint256(self.hashPrevBlock))
r.append(ser_uint256(self.hashMerkleRoot))
r.append(struct.pack("<I", self.nTime))
r.append(struct.pack("<I", self.nBits))
r.append(struct.pack("<I", self.nNonce))
r.append(ser_vector(self.vtx))
return ''.join(r)
def calc_sha256(self):
if self.sha256 is None:
r = []
r.append(struct.pack("<i", self.nVersion))
r.append(ser_uint256(self.hashPrevBlock))
r.append(ser_uint256(self.hashMerkleRoot))
r.append(struct.pack("<I", self.nTime))
r.append(struct.pack("<I", self.nBits))
r.append(struct.pack("<I", self.nNonce))
self.sha256 = uint256_from_str(SHA256.new(SHA256.new(''.join(r)).digest()).digest())
return self.sha256
def calc_scrypt(self):
if self.scrypt is None:
r = []
r.append(struct.pack("<i", self.nVersion))
r.append(ser_uint256(self.hashPrevBlock))
r.append(ser_uint256(self.hashMerkleRoot))
r.append(struct.pack("<I", self.nTime))
r.append(struct.pack("<I", self.nBits))
r.append(struct.pack("<I", self.nNonce))
self.scrypt = uint256_from_str(ltc_scrypt.getPoWHash(''.join(r)))
return self.scrypt
def is_valid(self):
#self.calc_sha256()
self.calc_scrypt()
target = uint256_from_compact(self.nBits)
#if self.sha256 > target:
if self.scrypt > target:
return False
hashes = []
for tx in self.vtx:
tx.sha256 = None
if not tx.is_valid():
return False
tx.calc_sha256()
hashes.append(ser_uint256(tx.sha256))
while len(hashes) > 1:
newhashes = []
for i in xrange(0, len(hashes), 2):
i2 = min(i+1, len(hashes)-1)
newhashes.append(SHA256.new(SHA256.new(hashes[i] + hashes[i2]).digest()).digest())
hashes = newhashes
if uint256_from_str(hashes[0]) != self.hashMerkleRoot:
return False
return True
def __repr__(self):
return "CBlock(nVersion=%i hashPrevBlock=%064x hashMerkleRoot=%064x nTime=%s nBits=%08x nNonce=%08x vtx=%s)" % (self.nVersion, self.hashPrevBlock, self.hashMerkleRoot, time.ctime(self.nTime), self.nBits, self.nNonce, repr(self.vtx))
class msg_version(object):
command = "version"
def __init__(self):
self.nVersion = MY_VERSION
self.nServices = 0
self.nTime = time.time()
self.addrTo = CAddress()
self.addrFrom = CAddress()
self.nNonce = random.getrandbits(64)
self.strSubVer = MY_SUBVERSION
self.nStartingHeight = 0
def deserialize(self, f):
self.nVersion = struct.unpack("<i", f.read(4))[0]
if self.nVersion == 10300:
self.nVersion = 300
self.nServices = struct.unpack("<Q", f.read(8))[0]
self.nTime = struct.unpack("<q", f.read(8))[0]
self.addrTo = CAddress()
self.addrTo.deserialize(f)
self.addrFrom = CAddress()
self.addrFrom.deserialize(f)
self.nNonce = struct.unpack("<Q", f.read(8))[0]
self.strSubVer = deser_string(f)
self.nStartingHeight = struct.unpack("<i", f.read(4))[0]
def serialize(self):
r = []
r.append(struct.pack("<i", self.nVersion))
r.append(struct.pack("<Q", self.nServices))
r.append(struct.pack("<q", self.nTime))
r.append(self.addrTo.serialize())
r.append(self.addrFrom.serialize())
r.append(struct.pack("<Q", self.nNonce))
r.append(ser_string(self.strSubVer))
r.append(struct.pack("<i", self.nStartingHeight))
return ''.join(r)
def __repr__(self):
return "msg_version(nVersion=%i nServices=%i nTime=%s addrTo=%s addrFrom=%s nNonce=0x%016X strSubVer=%s nStartingHeight=%i)" % (self.nVersion, self.nServices, time.ctime(self.nTime), repr(self.addrTo), repr(self.addrFrom), self.nNonce, self.strSubVer, self.nStartingHeight)
class msg_verack(object):
command = "verack"
def __init__(self):
pass
def deserialize(self, f):
pass
def serialize(self):
return ""
def __repr__(self):
return "msg_verack()"
class msg_addr(object):
command = "addr"
def __init__(self):
self.addrs = []
def deserialize(self, f):
self.addrs = deser_vector(f, CAddress)
def serialize(self):
return ser_vector(self.addrs)
def __repr__(self):
return "msg_addr(addrs=%s)" % (repr(self.addrs))
class msg_inv(object):
command = "inv"
def __init__(self):
self.inv = []
def deserialize(self, f):
self.inv = deser_vector(f, CInv)
def serialize(self):
return ser_vector(self.inv)
def __repr__(self):
return "msg_inv(inv=%s)" % (repr(self.inv))
class msg_getdata(object):
command = "getdata"
def __init__(self):
self.inv = []
def deserialize(self, f):
self.inv = deser_vector(f, CInv)
def serialize(self):
return ser_vector(self.inv)
def __repr__(self):
return "msg_getdata(inv=%s)" % (repr(self.inv))
class msg_getblocks(object):
command = "getblocks"
def __init__(self):
self.locator = CBlockLocator()
self.hashstop = 0L
def deserialize(self, f):
self.locator = CBlockLocator()
self.locator.deserialize(f)
self.hashstop = deser_uint256(f)
def serialize(self):
r = []
r.append(self.locator.serialize())
r.append(ser_uint256(self.hashstop))
return ''.join(r)
def __repr__(self):
return "msg_getblocks(locator=%s hashstop=%064x)" % (repr(self.locator), self.hashstop)
class msg_tx(object):
command = "tx"
def __init__(self):
self.tx = CTransaction()
def deserialize(self, f):
self.tx.deserialize(f)
def serialize(self):
return self.tx.serialize()
def __repr__(self):
return "msg_tx(tx=%s)" % (repr(self.tx))
class msg_block(object):
command = "block"
def __init__(self):
self.block = CBlock()
def deserialize(self, f):
self.block.deserialize(f)
def serialize(self):
return self.block.serialize()
def __repr__(self):
return "msg_block(block=%s)" % (repr(self.block))
class msg_getaddr(object):
command = "getaddr"
def __init__(self):
pass
def deserialize(self, f):
pass
def serialize(self):
return ""
def __repr__(self):
return "msg_getaddr()"
class msg_ping(object):
command = "ping"
def __init__(self):
pass
def deserialize(self, f):
pass
def serialize(self):
return ""
def __repr__(self):
return "msg_ping()"
class msg_alert(object):
command = "alert"
def __init__(self):
pass
def deserialize(self, f):
pass
def serialize(self):
return ""
def __repr__(self):
return "msg_alert()"
class BitcoinP2PProtocol(Protocol):
messagemap = {
"version": msg_version,
"verack": msg_verack,
"addr": msg_addr,
"inv": msg_inv,
"getdata": msg_getdata,
"getblocks": msg_getblocks,
"tx": msg_tx,
"block": msg_block,
"getaddr": msg_getaddr,
"ping": msg_ping,
"alert": msg_alert,
}
def connectionMade(self):
peer = self.transport.getPeer()
self.dstaddr = peer.host
self.dstport = peer.port
self.recvbuf = ""
self.last_sent = 0
t = msg_version()
t.nStartingHeight = getattr(self, 'nStartingHeight', 0)
t.addrTo.ip = self.dstaddr
t.addrTo.port = self.dstport
t.addrTo.nTime = time.time()
t.addrFrom.ip = "0.0.0.0"
t.addrFrom.port = 0
t.addrFrom.nTime = time.time()
self.send_message(t)
def dataReceived(self, data):
self.recvbuf += data
self.got_data()
def got_data(self):
while True:
if len(self.recvbuf) < 4:
return
if self.recvbuf[:4] != "\xf9\xbe\xb4\xd9":
raise ValueError("got garbage %s" % repr(self.recvbuf))
if len(self.recvbuf) < 4 + 12 + 4 + 4:
return
command = self.recvbuf[4:4+12].split("\x00", 1)[0]
msglen = struct.unpack("<i", self.recvbuf[4+12:4+12+4])[0]
checksum = self.recvbuf[4+12+4:4+12+4+4]
if len(self.recvbuf) < 4 + 12 + 4 + 4 + msglen:
return
msg = self.recvbuf[4+12+4+4:4+12+4+4+msglen]
th = SHA256.new(msg).digest()
h = SHA256.new(th).digest()
if checksum != h[:4]:
raise ValueError("got bad checksum %s" % repr(self.recvbuf))
self.recvbuf = self.recvbuf[4+12+4+4+msglen:]
if command in self.messagemap:
f = cStringIO.StringIO(msg)
t = self.messagemap[command]()
t.deserialize(f)
self.got_message(t)
else:
print "UNKNOWN COMMAND", command, repr(msg)
def prepare_message(self, message):
command = message.command
data = message.serialize()
tmsg = "\xf9\xbe\xb4\xd9"
tmsg += command
tmsg += "\x00" * (12 - len(command))
tmsg += struct.pack("<I", len(data))
th = SHA256.new(data).digest()
h = SHA256.new(th).digest()
tmsg += h[:4]
tmsg += data
return tmsg
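# Wire layout produced above (and parsed back in got_data):
# magic f9 be b4 d9 (4 bytes) | command (12 bytes, NUL-padded)
# | payload length (4 bytes, little-endian)
# | checksum = first 4 bytes of double-SHA256(payload) | payload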
def send_serialized_message(self, tmsg):
if not self.connected:
return
self.transport.write(tmsg)
self.last_sent = time.time()
def send_message(self, message):
if not self.connected:
return
#print "send %s" % repr(message)
tmsg = self.prepare_message(message)
self.transport.write(tmsg)
self.last_sent = time.time()
def got_message(self, message):
if self.last_sent + 30 * 60 < time.time():
self.send_message(msg_ping())
mname = 'do_' + message.command
#print mname
if not hasattr(self, mname):
return
method = getattr(self, mname)
method(message)
# if message.command == "tx":
# message.tx.calc_sha256()
# sha256 = message.tx.sha256
# pubkey = binascii.hexlify(message.tx.vout[0].scriptPubKey)
# txlock.acquire()
# tx.append([str(sha256), str(time.time()), str(self.dstaddr), pubkey])
# txlock.release()
def do_version(self, message):
#print message
self.send_message(msg_verack())
def do_inv(self, message):
want = msg_getdata()
for i in message.inv:
if i.type == 1:
want.inv.append(i)
if i.type == 2:
want.inv.append(i)
if len(want.inv):
self.send_message(want)

60
lib/logger.py Executable file

@@ -0,0 +1,60 @@
'''Simple wrapper around python's logging package'''
import os
import logging
from logging import handlers
from twisted.python import log as twisted_log
import settings
'''
class Logger(object):
def debug(self, msg):
twisted_log.msg(msg)
def info(self, msg):
twisted_log.msg(msg)
def warning(self, msg):
twisted_log.msg(msg)
def error(self, msg):
twisted_log.msg(msg)
def critical(self, msg):
twisted_log.msg(msg)
'''
def get_logger(name):
logger = logging.getLogger(name)
logger.addHandler(stream_handler)
logger.setLevel(getattr(logging, settings.LOGLEVEL))
if settings.LOGFILE != None:
logger.addHandler(file_handler)
logger.debug("Logging initialized")
return logger
#return Logger()
if settings.DEBUG:
fmt = logging.Formatter("%(asctime)s %(levelname)s %(name)s %(module)s.%(funcName)s # %(message)s")
else:
fmt = logging.Formatter("%(asctime)s %(levelname)s %(name)s # %(message)s")
if settings.LOGFILE != None:
# Create the log folder if it does not exist
try:
os.makedirs(os.path.join(os.getcwd(), settings.LOGDIR))
except OSError:
pass
# Setup log rotation if specified in the config
if settings.LOG_ROTATION:
file_handler = logging.handlers.RotatingFileHandler(os.path.join(settings.LOGDIR, settings.LOGFILE),'a', settings.LOG_SIZE,settings.LOG_RETENTION)
else:
file_handler = logging.handlers.WatchedFileHandler(os.path.join(settings.LOGDIR, settings.LOGFILE))
file_handler.setFormatter(fmt)
stream_handler = logging.StreamHandler()
stream_handler.setFormatter(fmt)
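Typical usage, matching the pattern seen throughout this codebase (the component name is illustrative):

import lib.logger
log = lib.logger.get_logger('my_component') # name is illustrative
log.info("component ready") # goes to the console, plus LOGDIR/LOGFILE when configured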

103
lib/merkletree.py Normal file

@@ -0,0 +1,103 @@
# Eloipool - Python Bitcoin pool server
# Copyright (C) 2011-2012 Luke Dashjr <luke-jr+eloipool@utopios.org>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from hashlib import sha256
from util import doublesha
class MerkleTree:
def __init__(self, data, detailed=False):
self.data = data
self.recalculate(detailed)
self._hash_steps = None
def recalculate(self, detailed=False):
L = self.data
steps = []
if detailed:
detail = []
PreL = []
StartL = 0
else:
detail = None
PreL = [None]
StartL = 2
Ll = len(L)
if detailed or Ll > 1:
while True:
if detailed:
detail += L
if Ll == 1:
break
steps.append(L[1])
if Ll % 2:
L += [L[-1]]
L = PreL + [doublesha(L[i] + L[i + 1]) for i in range(StartL, Ll, 2)]
Ll = len(L)
self._steps = steps
self.detail = detail
def hash_steps(self):
if self._hash_steps == None:
self._hash_steps = doublesha(''.join(self._steps))
return self._hash_steps
def withFirst(self, f):
steps = self._steps
for s in steps:
f = doublesha(f + s)
return f
def merkleRoot(self):
return self.withFirst(self.data[0])
# MerkleTree tests
def _test():
import binascii
import time
mt = MerkleTree([None] + [binascii.unhexlify(a) for a in [
'999d2c8bb6bda0bf784d9ebeb631d711dbbbfe1bc006ea13d6ad0d6a2649a971',
'3f92594d5a3d7b4df29d7dd7c46a0dac39a96e751ba0fc9bab5435ea5e22a19d',
'a5633f03855f541d8e60a6340fc491d49709dc821f3acb571956a856637adcb6',
'28d97c850eaf917a4c76c02474b05b70a197eaefb468d21c22ed110afe8ec9e0',
]])
assert(
b'82293f182d5db07d08acf334a5a907012bbb9990851557ac0ec028116081bd5a' ==
binascii.b2a_hex(mt.withFirst(binascii.unhexlify('d43b669fb42cfa84695b844c0402d410213faa4f3e66cb7248f688ff19d5e5f7')))
)
print '82293f182d5db07d08acf334a5a907012bbb9990851557ac0ec028116081bd5a'
txes = [binascii.unhexlify(a) for a in [
'd43b669fb42cfa84695b844c0402d410213faa4f3e66cb7248f688ff19d5e5f7',
'999d2c8bb6bda0bf784d9ebeb631d711dbbbfe1bc006ea13d6ad0d6a2649a971',
'3f92594d5a3d7b4df29d7dd7c46a0dac39a96e751ba0fc9bab5435ea5e22a19d',
'a5633f03855f541d8e60a6340fc491d49709dc821f3acb571956a856637adcb6',
'28d97c850eaf917a4c76c02474b05b70a197eaefb468d21c22ed110afe8ec9e0',
]]
s = time.time()
mt = MerkleTree(txes)
for x in range(100):
y = int('d43b669fb42cfa84695b844c0402d410213faa4f3e66cb7248f688ff19d5e5f7', 16)
#y += x
coinbasehash = binascii.unhexlify("%x" % y)
x = binascii.b2a_hex(mt.withFirst(coinbasehash))
print x
print time.time() - s
if __name__ == '__main__':
_test()

51
lib/settings.py Executable file

@@ -0,0 +1,51 @@
def setup():
'''
This will import modules config_default and config and move their variables
into current module (variables in config have higher priority than config_default).
Thanks to this, you can import settings anywhere in the application and you'll get
actual application settings.
This config is related to server side. You don't need config.py if you
want to use client part only.
'''
def read_values(cfg):
for varname in cfg.__dict__.keys():
if varname.startswith('__'):
continue
value = getattr(cfg, varname)
yield (varname, value)
import config_default
try:
import conf.config as config
except ImportError:
# Custom config is not present, but we can still use the defaults
config = None
import sys
module = sys.modules[__name__]
for name,value in read_values(config_default):
module.__dict__[name] = value
changes = {}
if config:
for name,value in read_values(config):
if value != module.__dict__.get(name, None):
changes[name] = value
module.__dict__[name] = value
if module.__dict__['DEBUG'] and changes:
print "----------------"
print "Custom settings:"
for k, v in changes.items():
if 'passw' in k.lower():
print k, ": ********"
else:
print k, ":", v
print "----------------"
setup()
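A minimal illustration of the precedence, assuming conf/config.py exists with the single line shown in the comment:

# conf/config.py contains: THREAD_POOL_SIZE = 50
import lib.settings as settings
print settings.THREAD_POOL_SIZE # 50 -- config.py values win over config_default's 300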

281
lib/template_registry.py Normal file

@@ -0,0 +1,281 @@
import weakref
import binascii
import util
import StringIO
import ltc_scrypt
from twisted.internet import defer
from lib.exceptions import SubmitException
import lib.logger
log = lib.logger.get_logger('template_registry')
from mining.interfaces import Interfaces
from extranonce_counter import ExtranonceCounter
import lib.settings as settings
class JobIdGenerator(object):
'''Generate pseudo-unique job_id. It does not need to be absolutely unique,
because pool sends "clean_jobs" flag to clients and they should drop all previous jobs.'''
counter = 0
@classmethod
def get_new_id(cls):
cls.counter += 1
if cls.counter % 0xffff == 0:
cls.counter = 1
return "%x" % cls.counter
class TemplateRegistry(object):
'''Implements the main logic of the pool. Keep track
on valid block templates, provide internal interface for stratum
service and implements block validation and submits.'''
def __init__(self, block_template_class, coinbaser, bitcoin_rpc, instance_id,
on_template_callback, on_block_callback):
self.prevhashes = {}
self.jobs = weakref.WeakValueDictionary()
self.extranonce_counter = ExtranonceCounter(instance_id)
self.extranonce2_size = block_template_class.coinbase_transaction_class.extranonce_size \
- self.extranonce_counter.get_size()
self.coinbaser = coinbaser
self.block_template_class = block_template_class
self.bitcoin_rpc = bitcoin_rpc
self.on_block_callback = on_block_callback
self.on_template_callback = on_template_callback
self.last_block = None
self.update_in_progress = False
self.last_update = None
# Create first block template on startup
self.update_block()
def get_new_extranonce1(self):
'''Generates a unique extranonce1 (e.g. for a newly
subscribed connection).'''
return self.extranonce_counter.get_new_bin()
def get_last_broadcast_args(self):
'''Returns arguments for mining.notify
from last known template.'''
return self.last_block.broadcast_args
def add_template(self, block,block_height):
'''Adds a new template to the registry.
It also cleans up templates which should
no longer be used.'''
prevhash = block.prevhash_hex
if prevhash in self.prevhashes.keys():
new_block = False
else:
new_block = True
self.prevhashes[prevhash] = []
# Blocks sorted by prevhash, so it's easy to drop
# them on blockchain update
self.prevhashes[prevhash].append(block)
# Weak reference for fast lookup using job_id
self.jobs[block.job_id] = block
# Use this template for every new request
self.last_block = block
# Drop templates of obsolete blocks
for ph in self.prevhashes.keys():
if ph != prevhash:
del self.prevhashes[ph]
log.info("New template for %s" % prevhash)
if new_block:
# Tell the system about new block
# It is mostly important for share manager
self.on_block_callback(prevhash, block_height)
# Everything is ready, let's broadcast jobs!
self.on_template_callback(new_block)
#from twisted.internet import reactor
#reactor.callLater(10, self.on_block_callback, new_block)
def update_block(self):
'''Registry calls the getblocktemplate() RPC
and builds a new block template.'''
if self.update_in_progress:
# An update is already in progress
return
self.update_in_progress = True
self.last_update = Interfaces.timestamper.time()
d = self.bitcoin_rpc.getblocktemplate()
d.addCallback(self._update_block)
d.addErrback(self._update_block_failed)
def _update_block_failed(self, failure):
log.error(str(failure))
self.update_in_progress = False
def _update_block(self, data):
start = Interfaces.timestamper.time()
template = self.block_template_class(Interfaces.timestamper, self.coinbaser, JobIdGenerator.get_new_id())
template.fill_from_rpc(data)
self.add_template(template,data['height'])
log.info("Update finished, %.03f sec, %d txes" % \
(Interfaces.timestamper.time() - start, len(template.vtx)))
self.update_in_progress = False
return data
def diff_to_target(self, difficulty):
'''Converts difficulty to target'''
#diff1 = 0x00000000ffff0000000000000000000000000000000000000000000000000000
diff1 = 0x0000ffff00000000000000000000000000000000000000000000000000000000
return diff1 / difficulty
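# Worked example (illustrative, not in the original file): with the scrypt
# diff1 constant above, difficulty 1 keeps the full target and difficulty 16
# shrinks it sixteen-fold, so:
#   diff_to_target(16) * 16 == diff_to_target(1)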
def get_job(self, job_id):
'''For given job_id returns BlockTemplate instance or None'''
try:
j = self.jobs[job_id]
except KeyError:
log.info("Job id '%s' not found" % job_id)
return None
# Now we have to check if job is still valid.
# Unfortunately weak references are not bulletproof and
# old reference can be found until next run of garbage collector.
if j.prevhash_hex not in self.prevhashes:
log.info("Prevhash of job '%s' is unknown" % job_id)
return None
if j not in self.prevhashes[j.prevhash_hex]:
log.info("Job %s is unknown" % job_id)
return None
return j
def submit_share(self, job_id, worker_name, session, extranonce1_bin, extranonce2, ntime, nonce,
difficulty):
'''Check parameters and finalize block template. If it leads
to valid block candidate, asynchronously submits the block
back to the bitcoin network.
- extranonce1_bin is binary. No checks performed, it should be from session data
- job_id, extranonce2, ntime, nonce - in hex form sent by the client
- difficulty - decimal number from session, again no checks performed
- submitblock_callback - reference to method which receive result of submitblock()
'''
# Check that extranonce2 looks correct. extranonce2 is in hex form...
if len(extranonce2) != self.extranonce2_size * 2:
raise SubmitException("Incorrect size of extranonce2. Expected %d chars" % (self.extranonce2_size*2))
# Check for job
job = self.get_job(job_id)
if job is None:
raise SubmitException("Job '%s' not found" % job_id)
# Check if ntime looks correct
if len(ntime) != 8:
raise SubmitException("Incorrect size of ntime. Expected 8 chars")
if not job.check_ntime(int(ntime, 16)):
raise SubmitException("Ntime out of range")
# Check nonce
if len(nonce) != 8:
raise SubmitException("Incorrect size of nonce. Expected 8 chars")
# Check for duplicated submit
if not job.register_submit(extranonce1_bin, extranonce2, ntime, nonce):
log.info("Duplicate from %s, (%s %s %s %s)" % \
(worker_name, binascii.hexlify(extranonce1_bin), extranonce2, ntime, nonce))
raise SubmitException("Duplicate share")
# Now let's do the hard work!
# ---------------------------
# 0. Some sugar
extranonce2_bin = binascii.unhexlify(extranonce2)
ntime_bin = binascii.unhexlify(ntime)
nonce_bin = binascii.unhexlify(nonce)
# 1. Build coinbase
coinbase_bin = job.serialize_coinbase(extranonce1_bin, extranonce2_bin)
coinbase_hash = util.doublesha(coinbase_bin)
# 2. Calculate merkle root
merkle_root_bin = job.merkletree.withFirst(coinbase_hash)
merkle_root_int = util.uint256_from_str(merkle_root_bin)
# 3. Serialize header with given merkle, ntime and nonce
header_bin = job.serialize_header(merkle_root_int, ntime_bin, nonce_bin)
# 4. Reverse header and compare it with target of the user
hash_bin = ltc_scrypt.getPoWHash(''.join([ header_bin[i*4:i*4+4][::-1] for i in range(0, 20) ]))
hash_int = util.uint256_from_str(hash_bin)
scrypt_hash_hex = "%064x" % hash_int
header_hex = binascii.hexlify(header_bin)
header_hex = header_hex+"000000800000000000000000000000000000000000000000000000000000000000000000000000000000000080020000"
target_user = self.diff_to_target(difficulty)
if hash_int > target_user and \
( 'prev_jobid' not in session or session['prev_jobid'] < job_id \
or 'prev_diff' not in session or hash_int > self.diff_to_target(session['prev_diff']) ):
raise SubmitException("Share is above target")
# Mostly for debugging purposes
target_info = self.diff_to_target(100000)
if hash_int <= target_info:
log.info("Yay, share with diff above 100000")
# diff_to_target is its own inverse, so it also converts a hash back into a difficulty
share_diff = int(self.diff_to_target(hash_int))
# 5. Compare hash with target of the network
if hash_int <= job.target:
# Yay! It is block candidate!
log.info("We found a block candidate! %s" % scrypt_hash_hex)
# Reverse the header and get the potential block hash (for scrypt only)
block_hash_bin = util.doublesha(''.join([ header_bin[i*4:i*4+4][::-1] for i in range(0, 20) ]))
block_hash_hex = block_hash_bin[::-1].encode('hex_codec')
# 6. Finalize and serialize block object
job.finalize(merkle_root_int, extranonce1_bin, extranonce2_bin, int(ntime, 16), int(nonce, 16))
if not job.is_valid():
# Should not happen
log.error("Final job validation failed!")
# 7. Submit block to the network
serialized = binascii.hexlify(job.serialize())
on_submit = self.bitcoin_rpc.submitblock(serialized, block_hash_hex)
if on_submit:
self.update_block()
if settings.SOLUTION_BLOCK_HASH:
return (header_hex, block_hash_hex, share_diff, on_submit)
else:
return (header_hex, scrypt_hash_hex, share_diff, on_submit)
if settings.SOLUTION_BLOCK_HASH:
# Reverse the header and get the potential block hash (for scrypt only).
# Only do this if we want to send the block hash to the shares table.
block_hash_bin = util.doublesha(''.join([ header_bin[i*4:i*4+4][::-1] for i in range(0, 20) ]))
block_hash_hex = block_hash_bin[::-1].encode('hex_codec')
return (header_hex, block_hash_hex, share_diff, None)
else:
return (header_hex, scrypt_hash_hex, share_diff, None)
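The per-word reversal used in steps 4 and 5 above converts the 80-byte header between its internal little-endian 32-bit word layout and the byte order the hash functions expect. A self-contained sketch of that transform (the sample header is hypothetical):

import binascii

def reverse_header_words(header_bin):
    # Swap the byte order inside each of the 20 4-byte header words
    return ''.join([header_bin[i*4:i*4+4][::-1] for i in range(0, 20)])

header = '\x00' * 76 + '\x01\x02\x03\x04'  # fake 80-byte header
print binascii.hexlify(reverse_header_words(header)[-4:])  # prints 04030201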

217
lib/util.py Normal file
View File

@ -0,0 +1,217 @@
'''Various helper methods. It probably needs some cleanup.'''
import struct
import StringIO
import binascii
from hashlib import sha256
def deser_string(f):
nit = struct.unpack("<B", f.read(1))[0]
if nit == 253:
nit = struct.unpack("<H", f.read(2))[0]
elif nit == 254:
nit = struct.unpack("<I", f.read(4))[0]
elif nit == 255:
nit = struct.unpack("<Q", f.read(8))[0]
return f.read(nit)
def ser_string(s):
if len(s) < 253:
return chr(len(s)) + s
elif len(s) < 0x10000:
return chr(253) + struct.pack("<H", len(s)) + s
elif len(s) < 0x100000000L:
return chr(254) + struct.pack("<I", len(s)) + s
return chr(255) + struct.pack("<Q", len(s)) + s
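# Round-trip example (illustrative): strings shorter than 253 bytes get a
# single length byte; a 300-byte string takes the 0xfd two-byte form:
#   wire = ser_string('x' * 300)
#   wire[0] == chr(253)                               # CompactSize uint16 marker
#   deser_string(StringIO.StringIO(wire)) == 'x' * 300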
def deser_uint256(f):
r = 0L
for i in xrange(8):
t = struct.unpack("<I", f.read(4))[0]
r += t << (i * 32)
return r
def ser_uint256(u):
rs = ""
for i in xrange(8):
rs += struct.pack("<I", u & 0xFFFFFFFFL)
u >>= 32
return rs
def uint256_from_str(s):
r = 0L
t = struct.unpack("<IIIIIIII", s[:32])
for i in xrange(8):
r += t[i] << (i * 32)
return r
def uint256_from_str_be(s):
r = 0L
t = struct.unpack(">IIIIIIII", s[:32])
for i in xrange(8):
r += t[i] << (i * 32)
return r
def uint256_from_compact(c):
nbytes = (c >> 24) & 0xFF
v = (c & 0xFFFFFFL) << (8 * (nbytes - 3))
return v
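# Worked example (illustrative): Bitcoin's difficulty-1 nBits 0x1d00ffff
# expands to the 3-byte mantissa shifted up by (0x1d - 3) bytes:
#   uint256_from_compact(0x1d00ffff) ==
#   0x00000000ffff0000000000000000000000000000000000000000000000000000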
def deser_vector(f, c):
nit = struct.unpack("<B", f.read(1))[0]
if nit == 253:
nit = struct.unpack("<H", f.read(2))[0]
elif nit == 254:
nit = struct.unpack("<I", f.read(4))[0]
elif nit == 255:
nit = struct.unpack("<Q", f.read(8))[0]
r = []
for i in xrange(nit):
t = c()
t.deserialize(f)
r.append(t)
return r
def ser_vector(l):
r = ""
if len(l) < 253:
r = chr(len(l))
elif len(l) < 0x10000:
r = chr(253) + struct.pack("<H", len(l))
elif len(l) < 0x100000000L:
r = chr(254) + struct.pack("<I", len(l))
else:
r = chr(255) + struct.pack("<Q", len(l))
for i in l:
r += i.serialize()
return r
def deser_uint256_vector(f):
nit = struct.unpack("<B", f.read(1))[0]
if nit == 253:
nit = struct.unpack("<H", f.read(2))[0]
elif nit == 254:
nit = struct.unpack("<I", f.read(4))[0]
elif nit == 255:
nit = struct.unpack("<Q", f.read(8))[0]
r = []
for i in xrange(nit):
t = deser_uint256(f)
r.append(t)
return r
def ser_uint256_vector(l):
r = ""
if len(l) < 253:
r = chr(len(l))
elif len(l) < 0x10000:
r = chr(253) + struct.pack("<H", len(l))
elif len(l) < 0x100000000L:
r = chr(254) + struct.pack("<I", len(l))
else:
r = chr(255) + struct.pack("<Q", len(l))
for i in l:
r += ser_uint256(i)
return r
__b58chars = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'
__b58base = len(__b58chars)
def b58decode(v, length):
""" decode v into a string of len bytes
"""
long_value = 0L
for (i, c) in enumerate(v[::-1]):
long_value += __b58chars.find(c) * (__b58base**i)
result = ''
while long_value >= 256:
div, mod = divmod(long_value, 256)
result = chr(mod) + result
long_value = div
result = chr(long_value) + result
nPad = 0
for c in v:
if c == __b58chars[0]: nPad += 1
else: break
result = chr(0)*nPad + result
if length is not None and len(result) != length:
return None
return result
def b58encode(value):
""" encode integer 'value' as a base58 string; returns string
"""
encoded = ''
while value >= __b58base:
div, mod = divmod(value, __b58base)
encoded = __b58chars[mod] + encoded # add to left
value = div
encoded = __b58chars[value] + encoded # most significant remainder
return encoded
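# Round-trip sketch (illustrative): b58encode takes an integer, while
# b58decode returns a big-endian byte string (or None on a length mismatch):
#   b58decode(b58encode(0x1a2b3c), None) == '\x1a\x2b\x3c'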
def reverse_hash(h):
# This only reverses byte order, nothing more
if len(h) != 64:
raise Exception('hash must have 64 hex chars')
return ''.join([ h[56-i:64-i] for i in range(0, 64, 8) ])
def doublesha(b):
return sha256(sha256(b).digest()).digest()
def bits_to_target(bits):
return struct.unpack('<L', bits[:3] + b'\0')[0] * 2**(8*(int(bits[3], 16) - 3))
def address_to_pubkeyhash(addr):
try:
addr = b58decode(addr, 25)
except:
return None
if addr is None:
return None
ver = addr[0]
cksumA = addr[-4:]
cksumB = doublesha(addr[:-4])[:4]
if cksumA != cksumB:
return None
return (ver, addr[1:-4])
def ser_uint256_be(u):
'''ser_uint256 to big endian'''
rs = ""
for i in xrange(8):
rs += struct.pack(">I", u & 0xFFFFFFFFL)
u >>= 32
return rs
def deser_uint256_be(f):
r = 0L
for i in xrange(8):
t = struct.unpack(">I", f.read(4))[0]
r += t << (i * 32)
return r
def ser_number(n):
# For encoding nHeight into coinbase
s = bytearray(b'\1')
while n > 127:
s[0] += 1
s.append(n % 256)
n //= 256
s.append(n)
return bytes(s)
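# Worked examples (illustrative) of the height push used in the coinbase:
#   ser_number(1)    == '\x01\x01'
#   ser_number(1000) == '\x02\xe8\x03'   # 1000 = 0x03e8, little-endian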
def script_to_address(addr):
d = address_to_pubkeyhash(addr)
if not d:
raise ValueError('invalid address')
(ver, pubkeyhash) = d
return b'\x76\xa9\x14' + pubkeyhash + b'\x88\xac'

195
mining/DBInterface.py Normal file
View File

@ -0,0 +1,195 @@
from twisted.internet import reactor, defer
import time
from datetime import datetime
import Queue
import signal
import lib.settings as settings
import lib.logger
log = lib.logger.get_logger('DBInterface')
class DBInterface():
def __init__(self):
self.dbi = self.connectDB()
def init_main(self):
self.dbi.check_tables()
self.q = Queue.Queue()
self.queueclock = None
self.usercache = {}
self.clearusercache()
self.nextStatsUpdate = 0
self.scheduleImport()
self.next_force_import_time = time.time() + settings.DB_LOADER_FORCE_TIME
signal.signal(signal.SIGINT, self.signal_handler)
def signal_handler(self, signal, frame):
print "SIGINT Detected, shutting down"
self.do_import(self.dbi, True)
reactor.stop()
def set_bitcoinrpc(self, bitcoinrpc):
self.bitcoinrpc = bitcoinrpc
def connectDB(self):
if settings.VARIABLE_DIFF:
log.debug("DB_Mysql_Vardiff INIT")
import DB_Mysql_Vardiff
return DB_Mysql_Vardiff.DB_Mysql_Vardiff()
else:
log.debug('DB_Mysql INIT')
import DB_Mysql
return DB_Mysql.DB_Mysql()
def clearusercache(self):
log.debug("DBInterface.clearusercache called")
self.usercache = {}
self.usercacheclock = reactor.callLater(settings.DB_USERCACHE_TIME , self.clearusercache)
def scheduleImport(self):
# This schedules the import
use_thread = True
if use_thread:
self.queueclock = reactor.callLater(settings.DB_LOADER_CHECKTIME , self.run_import_thread)
else:
self.queueclock = reactor.callLater(settings.DB_LOADER_CHECKTIME , self.run_import)
def run_import_thread(self):
log.debug("run_import_thread current size: %d", self.q.qsize())
if self.q.qsize() >= settings.DB_LOADER_REC_MIN or time.time() >= self.next_force_import_time: # Don't incur thread overhead if we're not going to run
reactor.callInThread(self.import_thread)
self.scheduleImport()
def run_import(self):
log.debug("DBInterface.run_import called")
self.do_import(self.dbi, False)
self.scheduleImport()
def import_thread(self):
# Here we are in the thread.
dbi = self.connectDB()
self.do_import(dbi, False)
dbi.close()
def _update_pool_info(self, data):
self.dbi.update_pool_info({ 'blocks' : data['blocks'], 'balance' : data['balance'],
'connections' : data['connections'], 'difficulty' : data['difficulty'] })
def do_import(self, dbi, force):
log.debug("DBInterface.do_import called. force: %s, queue size: %s", 'yes' if force == True else 'no', self.q.qsize())
# Flush the whole queue on force
forcesize = 0
if force:
forcesize = self.q.qsize()
# Only run if we have data
while not self.q.empty() and (force or self.q.qsize() >= settings.DB_LOADER_REC_MIN or time.time() >= self.next_force_import_time or forcesize > 0):
self.next_force_import_time = time.time() + settings.DB_LOADER_FORCE_TIME
force = False
# Put together the data we want to import
sqldata = []
datacnt = 0
while not self.q.empty() and datacnt < settings.DB_LOADER_REC_MAX:
datacnt += 1
data = self.q.get()
sqldata.append(data)
self.q.task_done()
forcesize -= datacnt
# try to do the import, if we fail, log the error and put the data back in the queue
try:
log.info("Inserting %s Share Records", datacnt)
dbi.import_shares(sqldata)
except Exception as e:
log.error("Insert Share Records Failed: %s", e.args[0])
for k, v in enumerate(sqldata):
self.q.put(v)
break # Allows us to sleep a little
def queue_share(self, data):
self.q.put(data)
def found_block(self, data):
try:
log.info("Updating Found Block Share Record")
self.do_import(self.dbi, True) # We can't update if the record is not there.
self.dbi.found_block(data)
except Exception as e:
log.error("Update Found Block Share Record Failed: %s", e.args[0])
def check_password(self, username, password):
if username == "":
log.info("Rejected worker for blank username")
return False
# Force username and password to be strings
username = str(username)
password = str(password)
wid = username + ":-:" + password
if wid in self.usercache:
return True
elif not settings.USERS_CHECK_PASSWORD and self.user_exists(username):
self.usercache[wid] = 1
return True
elif self.dbi.check_password(username, password):
self.usercache[wid] = 1
return True
elif settings.USERS_AUTOADD:
self.insert_user(username, password)
self.usercache[wid] = 1
return True
log.info("Authentication for %s failed" % username)
return False
def list_users(self):
return self.dbi.list_users()
def get_user(self, id):
return self.dbi.get_user(id)
def user_exists(self, username):
user = self.dbi.get_user(username)
return user is not None
def insert_user(self, username, password):
return self.dbi.insert_user(username, password)
def delete_user(self, username):
self.usercache = {}
return self.dbi.delete_user(username)
def update_user(self, username, password):
self.usercache = {}
return self.dbi.update_user(username, password)
def update_worker_diff(self, username, diff):
return self.dbi.update_worker_diff(username, diff)
def get_pool_stats(self):
return self.dbi.get_pool_stats()
def get_workers_stats(self):
return self.dbi.get_workers_stats()
def clear_worker_diff(self):
return self.dbi.clear_worker_diff()
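For context, here is a hypothetical share record of the kind queued by the share manager and later drained by do_import; the field order matches the data-layout comment in DB_Mysql.import_shares, and every value below is illustrative:

share = ['user.worker1',    # 0: worker_name
         'header_hex',      # 1: block_header (hex string)
         'block_hash_hex',  # 2: block_hash / solution (hex string)
         16,                # 3: difficulty
         1384948267,        # 4: timestamp
         True,              # 5: is_valid
         '10.0.0.5',        # 6: ip
         123456,            # 7: block_height
         'prev_hash',       # 8: prev_hash
         '',                # 9: invalid_reason
         18.2]              # 10: share_diff
# dbi.queue_share(share)   # with dbi being an initialized DBInterface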

347
mining/DB_Mysql.py Normal file
View File

@ -0,0 +1,347 @@
import time
import hashlib
import lib.settings as settings
import lib.logger
log = lib.logger.get_logger('DB_Mysql')
import MySQLdb
class DB_Mysql():
def __init__(self):
log.debug("Connecting to DB")
required_settings = ['PASSWORD_SALT', 'DB_MYSQL_HOST',
'DB_MYSQL_USER', 'DB_MYSQL_PASS',
'DB_MYSQL_DBNAME']
for setting_name in required_settings:
if not hasattr(settings, setting_name):
raise ValueError("%s isn't set, please set in config.py" % setting_name)
self.salt = getattr(settings, 'PASSWORD_SALT')
self.connect()
def connect(self):
self.dbh = MySQLdb.connect(
getattr(settings, 'DB_MYSQL_HOST'),
getattr(settings, 'DB_MYSQL_USER'),
getattr(settings, 'DB_MYSQL_PASS'),
getattr(settings, 'DB_MYSQL_DBNAME')
)
self.dbc = self.dbh.cursor()
self.dbh.autocommit(True)
def execute(self, query, args=None):
try:
self.dbc.execute(query, args)
except MySQLdb.OperationalError:
log.debug("MySQL connection lost during execute, attempting reconnect")
self.connect()
self.dbc = self.dbh.cursor()
self.dbc.execute(query, args)
def executemany(self, query, args=None):
try:
self.dbc.executemany(query, args)
except MySQLdb.OperationalError:
log.debug("MySQL connection lost during executemany, attempting reconnect")
self.connect()
self.dbc = self.dbh.cursor()
self.dbc.executemany(query, args)
def import_shares(self, data):
# Data layout
# 0: worker_name,
# 1: block_header,
# 2: block_hash,
# 3: difficulty,
# 4: timestamp,
# 5: is_valid,
# 6: ip,
# 7: self.block_height,
# 8: self.prev_hash,
# 9: invalid_reason,
# 10: share_diff
log.debug("Importing Shares")
checkin_times = {}
total_shares = 0
best_diff = 0
for k, v in enumerate(data):
# for database compatibility we convert our_result to Y/N format
if v[5]:
v[5] = 'Y'
else:
v[5] = 'N'
self.execute(
"""
INSERT INTO `shares`
(time, rem_host, username, our_result,
upstream_result, reason, solution)
VALUES
(FROM_UNIXTIME(%(time)s), %(host)s,
%(uname)s,
%(lres)s, 'N', %(reason)s, %(solution)s)
""",
{
"time": v[4],
"host": v[6],
"uname": v[0],
"lres": v[5],
"reason": v[9],
"solution": v[2]
}
)
self.dbh.commit()
def found_block(self, data):
# for database compatibility we convert our_result to Y/N format
if data[5]:
data[5] = 'Y'
else:
data[5] = 'N'
# Check for the share in the database before updating it
# Note: We can't use DUPLICATE KEY because solution is not a key
self.execute(
"""
Select `id` from `shares`
WHERE `solution` = %(solution)s
LIMIT 1
""",
{
"solution": data[2]
}
)
shareid = self.dbc.fetchone()
if shareid and shareid[0] > 0:  # fetchone() returns None when no row matched
# Note: difficulty = -1 here
self.execute(
"""
UPDATE `shares`
SET `upstream_result` = %(result)s
WHERE `solution` = %(solution)s
AND `id` = %(id)s
LIMIT 1
""",
{
"result": data[5],
"solution": data[2],
"id": shareid[0]
}
)
self.dbh.commit()
else:
self.execute(
"""
INSERT INTO `shares`
(time, rem_host, username, our_result,
upstream_result, reason, solution)
VALUES
(FROM_UNIXTIME(%(time)s), %(host)s,
%(uname)s,
%(lres)s, %(result)s, %(reason)s, %(solution)s)
""",
{
"time": v[4],
"host": v[6],
"uname": v[0],
"lres": v[5],
"result": v[5],
"reason": v[9],
"solution": v[2]
}
)
self.dbh.commit()
def list_users(self):
self.execute(
"""
SELECT *
FROM `pool_worker`
WHERE `id`> 0
"""
)
while True:
results = self.dbc.fetchmany()
if not results:
break
for result in results:
yield result
def get_user(self, id_or_username):
log.debug("Finding user with id or username of %s", id_or_username)
self.execute(
"""
SELECT *
FROM `pool_worker`
WHERE `id` = %(id)s
OR `username` = %(uname)s
""",
{
"id": id_or_username if id_or_username.isdigit() else -1,
"uname": id_or_username
}
)
user = self.dbc.fetchone()
return user
def delete_user(self, id_or_username):
if id_or_username.isdigit() and id_or_username == '0':
raise Exception('You cannot delete that user')
log.debug("Deleting user with id or username of %s", id_or_username)
self.execute(
"""
UPDATE `shares`
SET `username` = 0
WHERE `username` = %(uname)s
""",
{
"id": id_or_username if id_or_username.isdigit() else -1,
"uname": id_or_username
}
)
self.execute(
"""
DELETE FROM `pool_worker`
WHERE `id` = %(id)s
OR `username` = %(uname)s
""",
{
"id": id_or_username if id_or_username.isdigit() else -1,
"uname": id_or_username
}
)
self.dbh.commit()
def insert_user(self, username, password):
log.debug("Adding new user %s", username)
self.execute(
"""
INSERT INTO `pool_worker`
(`username`, `password`)
VALUES
(%(uname)s, %(pass)s)
""",
{
"uname": username,
"pass": password
}
)
self.dbh.commit()
return str(username)
def update_user(self, id_or_username, password):
log.debug("Updating password for user %s", id_or_username);
self.execute(
"""
UPDATE `pool_worker`
SET `password` = %(pass)s
WHERE `id` = %(id)s
OR `username` = %(uname)s
""",
{
"id": id_or_username if id_or_username.isdigit() else -1,
"uname": id_or_username,
"pass": password
}
)
self.dbh.commit()
def check_password(self, username, password):
log.debug("Checking username/password for %s", username)
self.execute(
"""
SELECT COUNT(*)
FROM `pool_worker`
WHERE `username` = %(uname)s
AND `password` = %(pass)s
""",
{
"uname": username,
"pass": password
}
)
data = self.dbc.fetchone()
if data[0] > 0:
return True
return False
def get_workers_stats(self):
self.execute(
"""
SELECT `username`, `speed`, `last_checkin`, `total_shares`,
`total_rejects`, `total_found`, `alive`
FROM `pool_worker`
WHERE `id` > 0
"""
)
ret = {}
for data in self.dbc.fetchall():
ret[data[0]] = {
"username": data[0],
"speed": int(data[1]),
"last_checkin": time.mktime(data[2].timetuple()),
"total_shares": int(data[3]),
"total_rejects": int(data[4]),
"total_found": int(data[5]),
"alive": True if data[6] is 1 else False,
}
return ret
def close(self):
self.dbh.close()
def check_tables(self):
log.debug("Checking Database")
self.execute(
"""
SELECT COUNT(*)
FROM INFORMATION_SCHEMA.STATISTICS
WHERE `table_schema` = %(schema)s
AND `table_name` = 'shares'
""",
{
"schema": getattr(settings, 'DB_MYSQL_DBNAME')
}
)
data = self.dbc.fetchone()
if data[0] <= 0:
raise Exception("There is no shares table. Have you imported the schema?")

118
mining/DB_Mysql_Vardiff.py Normal file
View File

@ -0,0 +1,118 @@
import time
import hashlib
import lib.settings as settings
import lib.logger
log = lib.logger.get_logger('DB_Mysql')
import MySQLdb
import DB_Mysql
class DB_Mysql_Vardiff(DB_Mysql.DB_Mysql):
def __init__(self):
DB_Mysql.DB_Mysql.__init__(self)
def import_shares(self, data):
# Data layout
# 0: worker_name,
# 1: block_header,
# 2: block_hash,
# 3: difficulty,
# 4: timestamp,
# 5: is_valid,
# 6: ip,
# 7: self.block_height,
# 8: self.prev_hash,
# 9: invalid_reason,
# 10: share_diff
log.debug("Importing Shares")
checkin_times = {}
total_shares = 0
best_diff = 0
for k, v in enumerate(data):
# for database compatibility we convert our_result to Y/N format
if v[5]:
v[5] = 'Y'
else:
v[5] = 'N'
self.execute(
"""
INSERT INTO `shares`
(time, rem_host, username, our_result,
upstream_result, reason, solution, difficulty)
VALUES
(FROM_UNIXTIME(%(time)s), %(host)s,
%(uname)s,
%(lres)s, 'N', %(reason)s, %(solution)s, %(difficulty)s)
""",
{
"time": v[4],
"host": v[6],
"uname": v[0],
"lres": v[5],
"reason": v[9],
"solution": v[2],
"difficulty": v[3]
}
)
self.dbh.commit()
def update_worker_diff(self, username, diff):
log.debug("Setting difficulty for %s to %s", username, diff)
self.execute(
"""
UPDATE `pool_worker`
SET `difficulty` = %(diff)s
WHERE `username` = %(uname)s
""",
{
"uname": username,
"diff": diff
}
)
self.dbh.commit()
def clear_worker_diff(self):
log.debug("Resetting difficulty for all workers")
self.execute(
"""
UPDATE `pool_worker`
SET `difficulty` = 0
"""
)
self.dbh.commit()
def get_workers_stats(self):
self.execute(
"""
SELECT `username`, `speed`, `last_checkin`, `total_shares`,
`total_rejects`, `total_found`, `alive`, `difficulty`
FROM `pool_worker`
WHERE `id` > 0
"""
)
ret = {}
for data in self.dbc.fetchall():
ret[data[0]] = {
"username": data[0],
"speed": int(data[1]),
"last_checkin": time.mktime(data[2].timetuple()),
"total_shares": int(data[3]),
"total_rejects": int(data[4]),
"total_found": int(data[5]),
"alive": True if data[6] is 1 else False,
"difficulty": float(data[7])
}
return ret

96
mining/__init__.py Normal file
View File

@ -0,0 +1,96 @@
from service import MiningService
from subscription import MiningSubscription
from twisted.internet import defer
from twisted.internet.error import ConnectionRefusedError
import time
import simplejson as json
from twisted.internet import reactor
@defer.inlineCallbacks
def setup(on_startup):
'''Setup mining service internal environment.
You should not need to change this. If you
want to use another Worker manager or Share manager,
you should set proper reference to Interfaces class
*before* you call setup() in the launcher script.'''
import lib.settings as settings
# Get logging online as soon as possible
import lib.logger
log = lib.logger.get_logger('mining')
from interfaces import Interfaces
from lib.block_updater import BlockUpdater
from lib.template_registry import TemplateRegistry
from lib.bitcoin_rpc_manager import BitcoinRPCManager
from lib.block_template import BlockTemplate
from lib.coinbaser import SimpleCoinbaser
bitcoin_rpc = BitcoinRPCManager()
# Check litecoind
# Check we can connect (sleep)
# Check the results:
# - getblocktemplate is available (die if not)
# - we are not still downloading the blockchain (Sleep)
log.info("Connecting to litecoind...")
while True:
try:
result = (yield bitcoin_rpc.getblocktemplate())
if isinstance(result, dict):
# litecoind implements version 1 of getblocktemplate
if result['version'] >= 1:
break
else:
log.error("Block Version mismatch: %s" % result['version'])
except ConnectionRefusedError, e:
log.error("Connection refused while trying to connect to litecoin (are your LITECOIN_TRUSTED_* settings correct?)")
reactor.stop()
except Exception, e:
if isinstance(e[2], str):
if isinstance(json.loads(e[2])['error']['message'], str):
error = json.loads(e[2])['error']['message']
if error == "Method not found":
log.error("Litecoind does not support getblocktemplate!!! (time to upgrade.)")
reactor.stop()
elif error == "Litecoind is downloading blocks...":
log.error("Litecoind downloading blockchain... will check back in 30 sec")
time.sleep(29)
else:
log.error("Litecoind Error: %s", error)
time.sleep(1) # If we didn't get a result or the connect failed
log.info('Connected to litecoind - Ready to GO!')
# Start the coinbaser
coinbaser = SimpleCoinbaser(bitcoin_rpc, getattr(settings, 'CENTRAL_WALLET'))
(yield coinbaser.on_load)
registry = TemplateRegistry(BlockTemplate,
coinbaser,
bitcoin_rpc,
getattr(settings, 'INSTANCE_ID'),
MiningSubscription.on_template,
Interfaces.share_manager.on_network_block)
# Template registry is the main interface between Stratum service
# and pool core logic
Interfaces.set_template_registry(registry)
# Set up polling mechanism for detecting new block on the network
# This is just failsafe solution when -blocknotify
# mechanism is not working properly
BlockUpdater(registry, bitcoin_rpc)
log.info("MINING SERVICE IS READY")
on_startup.callback(True)
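For orientation, here is a minimal getblocktemplate result of the kind this startup loop and TemplateRegistry._update_block consume; the keys follow BIP 22 and all values are illustrative:

result = {
    'version': 2,
    'height': 123456,
    'previousblockhash': 'ab' * 32,   # 64 hex chars
    'transactions': [],               # non-coinbase txs to include
    'coinbasevalue': 5000000000,      # block reward in base units
    'bits': '1d00ffff',
    'curtime': 1384948267,
    'target': '00000000ffff0000000000000000000000000000000000000000000000000000',
}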

151
mining/basic_share_limiter.py Normal file
View File

@ -0,0 +1,151 @@
import lib.settings as settings
import lib.logger
log = lib.logger.get_logger('BasicShareLimiter')
import DBInterface
dbi = DBInterface.DBInterface()
dbi.clear_worker_diff()
from twisted.internet import defer
from mining.interfaces import Interfaces
import time
''' This is just a customized ring buffer '''
class SpeedBuffer:
def __init__(self, size_max):
self.max = size_max
self.data = []
self.cur = 0
def append(self, x):
self.data.append(x)
self.cur += 1
if len(self.data) == self.max:
self.cur = 0
self.__class__ = SpeedBufferFull
def avg(self):
return sum(self.data) / self.cur
def pos(self):
return self.cur
def clear(self):
self.data = []
self.cur = 0
def size(self):
return self.cur
class SpeedBufferFull:
def __init__(self, n):
raise "you should use SpeedBuffer"
def append(self, x):
self.data[self.cur] = x
self.cur = (self.cur + 1) % self.max
def avg(self):
return sum(self.data) / self.max
def pos(self):
return self.cur
def clear(self):
self.data = []
self.cur = 0
self.__class__ = SpeedBuffer
def size(self):
return self.max
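# Usage sketch (illustrative): once the buffer fills, append() swaps
# __class__ to SpeedBufferFull, turning the list into a ring buffer in place:
#   b = SpeedBuffer(4)
#   for dt in (10, 12, 8, 10):
#       b.append(dt)          # the fourth append triggers the class swap
#   b.avg()                   # == 10, now averaged over size_max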
class BasicShareLimiter(object):
def __init__(self):
self.worker_stats = {}
self.target = settings.VDIFF_TARGET_TIME
self.retarget = settings.VDIFF_RETARGET_TIME
self.variance = self.target * (float(settings.VDIFF_VARIANCE_PERCENT) / float(100))
self.tmin = self.target - self.variance
self.tmax = self.target + self.variance
self.buffersize = self.retarget / self.target * 4
self.litecoin = {}
self.litecoin_diff = 100000000 # TODO: Set this to VARDIFF_MAX
# TODO: trim the hash of inactive workers
@defer.inlineCallbacks
def update_litecoin_difficulty(self):
# Cache the litecoin difficulty so we do not have to query it on every submit
# Update the difficulty if it is out of date or not set
if 'timestamp' not in self.litecoin or self.litecoin['timestamp'] < int(time.time()) - settings.DIFF_UPDATE_FREQUENCY:
self.litecoin['timestamp'] = time.time()
self.litecoin['difficulty'] = (yield Interfaces.template_registry.bitcoin_rpc.getdifficulty())
log.debug("Updated litecoin difficulty to %s" % (self.litecoin['difficulty']))
self.litecoin_diff = self.litecoin['difficulty']
def submit(self, connection_ref, job_id, current_difficulty, timestamp, worker_name):
ts = int(timestamp)
# Init the stats for this worker if it isn't set.
if worker_name not in self.worker_stats:
self.worker_stats[worker_name] = {'last_rtc': (ts - self.retarget / 2), 'last_ts': ts, 'buffer': SpeedBuffer(self.buffersize) }
dbi.update_worker_diff(worker_name, settings.POOL_TARGET)
return
# Standard share update of data
self.worker_stats[worker_name]['buffer'].append(ts - self.worker_stats[worker_name]['last_ts'])
self.worker_stats[worker_name]['last_ts'] = ts
# Do we retarget? If not, we're done.
if ts - self.worker_stats[worker_name]['last_rtc'] < self.retarget and self.worker_stats[worker_name]['buffer'].size() > 0:
return
# Set up and log our check
self.worker_stats[worker_name]['last_rtc'] = ts
avg = self.worker_stats[worker_name]['buffer'].avg()
log.info("Checking Retarget for %s (%i) avg. %i target %i+-%i" % (worker_name, current_difficulty, avg,
self.target, self.variance))
if avg < 1:
log.info("Reseting avg = 1 since it's SOOO low")
avg = 1
# Figure out our Delta-Diff
ddiff = float((float(current_difficulty) * (float(self.target) / float(avg))) - current_difficulty)
if (avg > self.tmax and current_difficulty > settings.VDIFF_MIN_TARGET):
# For fractional -0.1 ddiff's just drop by 1
if ddiff > -1:
ddiff = -1
# Don't drop below POOL_TARGET
if (ddiff + current_difficulty) < settings.VDIFF_MIN_TARGET:
ddiff = settings.VDIFF_MIN_TARGET - current_difficulty
elif avg < self.tmin:
# For fractional 0.1 ddiff's just up by 1
if ddiff < 1:
ddiff = 1
# Don't go above LITECOIN or VDIFF_MAX_TARGET
self.update_litecoin_difficulty()
if settings.USE_LITECOIN_DIFF:
diff_max = min([settings.VDIFF_MAX_TARGET, self.litecoin_diff])
else:
diff_max = settings.VDIFF_MAX_TARGET
if (ddiff + current_difficulty) > diff_max:
ddiff = diff_max - current_difficulty
else: # If we are here, then we should not be retargeting.
return
# At this point we are retargeting this worker
new_diff = current_difficulty + ddiff
log.info("Retarget for %s %i old: %i new: %i" % (worker_name, ddiff, current_difficulty, new_diff))
self.worker_stats[worker_name]['buffer'].clear()
session = connection_ref().get_session()
session['prev_diff'] = session['difficulty']
session['prev_jobid'] = job_id
session['difficulty'] = new_diff
connection_ref().rpc('mining.set_difficulty', [new_diff, ], is_notification=True)
dbi.update_worker_diff(worker_name, new_diff)
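To make the retarget arithmetic concrete, here is the delta-difficulty formula from submit() applied to a hypothetical worker (all numbers illustrative):

target = 30              # VDIFF_TARGET_TIME: desired seconds per share
avg = 10                 # observed average share interval for this worker
current_difficulty = 16

# Same formula as in BasicShareLimiter.submit
ddiff = float(current_difficulty) * (float(target) / float(avg)) - current_difficulty
print ddiff              # 32.0: the worker is too fast, so raise 16 -> 48
                         # (then clamped against VDIFF_MAX_TARGET / litecoin diff)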

101
mining/interfaces.py Normal file
View File

@ -0,0 +1,101 @@
'''This module contains classes used by pool core to interact with the rest of the pool.
Default implementations do almost nothing; you probably want to override these classes
and customize references to interface instances in your launcher.
(see launcher_demo.tac for an example).
'''
import time
from twisted.internet import reactor, defer
from lib.util import b58encode
import lib.settings as settings
import lib.logger
log = lib.logger.get_logger('interfaces')
import DBInterface
dbi = DBInterface.DBInterface()
dbi.init_main()
class WorkerManagerInterface(object):
def __init__(self):
return
def authorize(self, worker_name, worker_password):
# Important NOTE: This is called on EVERY submitted share. So you'll need caching!!!
return dbi.check_password(worker_name, worker_password)
class ShareLimiterInterface(object):
'''Implement difficulty adjustments here'''
def submit(self, connection_ref, job_id, current_difficulty, timestamp, worker_name):
'''connection - weak reference to Protocol instance
current_difficulty - difficulty of the connection
timestamp - submission time of current share
- raise SubmitException to stop processing this request
- call mining.set_difficulty on connection to adjust the difficulty'''
#return dbi.update_worker_diff(worker_name, settings.POOL_TARGET)
return
class ShareManagerInterface(object):
def __init__(self):
self.block_height = 0
self.prev_hash = 0
def on_network_block(self, prevhash, block_height):
'''Called when there's a new block coming from the network (possibly a new round)'''
self.block_height = block_height
self.prev_hash = b58encode(int(prevhash, 16))
def on_submit_share(self, worker_name, block_header, block_hash, difficulty, timestamp, is_valid, ip, invalid_reason, share_diff):
log.info("%s (%s) %s %s" % (block_hash, share_diff, 'valid' if is_valid else 'INVALID', worker_name))
dbi.queue_share([worker_name, block_header, block_hash, difficulty, timestamp, is_valid, ip, self.block_height, self.prev_hash,
invalid_reason, share_diff ])
def on_submit_block(self, is_accepted, worker_name, block_header, block_hash, timestamp, ip, share_diff):
log.info("Block %s %s" % (block_hash, 'ACCEPTED' if is_accepted else 'REJECTED'))
dbi.found_block([worker_name, block_header, block_hash, -1, timestamp, is_accepted, ip, self.block_height, self.prev_hash, share_diff ])
class TimestamperInterface(object):
'''This is the only source for current time in the application.
Override this for generating unix timestamp in different way.'''
def time(self):
return time.time()
class PredictableTimestamperInterface(TimestamperInterface):
'''Predictable timestamper may be useful for unit testing.'''
start_time = 1345678900 # Some day in year 2012
delta = 0
def time(self):
self.delta += 1
return self.start_time + self.delta
class Interfaces(object):
worker_manager = None
share_manager = None
share_limiter = None
timestamper = None
template_registry = None
@classmethod
def set_worker_manager(cls, manager):
cls.worker_manager = manager
@classmethod
def set_share_manager(cls, manager):
cls.share_manager = manager
@classmethod
def set_share_limiter(cls, limiter):
cls.share_limiter = limiter
@classmethod
def set_timestamper(cls, manager):
cls.timestamper = manager
@classmethod
def set_template_registry(cls, registry):
dbi.set_bitcoinrpc(registry.bitcoin_rpc)
cls.template_registry = registry
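A launcher script would wire these defaults (or custom subclasses) up roughly as follows before calling mining.setup(); this is a sketch following the module docstring above, not a verbatim excerpt of launcher_demo.tac:

from mining.interfaces import Interfaces, WorkerManagerInterface, \
    ShareManagerInterface, ShareLimiterInterface, TimestamperInterface

Interfaces.set_worker_manager(WorkerManagerInterface())
Interfaces.set_share_manager(ShareManagerInterface())
Interfaces.set_share_limiter(ShareLimiterInterface())
Interfaces.set_timestamper(TimestamperInterface())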

138
mining/service.py Normal file
View File

@ -0,0 +1,138 @@
import binascii
from twisted.internet import defer
import lib.settings as settings
from stratum.services import GenericService, admin
from stratum.pubsub import Pubsub
from interfaces import Interfaces
from subscription import MiningSubscription
from lib.exceptions import SubmitException
import lib.logger
log = lib.logger.get_logger('mining')
class MiningService(GenericService):
'''This service provides public API for Stratum mining proxy
or any Stratum-compatible miner software.
Warning - any callable argument of this class will be propagated
over Stratum protocol for public audience!'''
service_type = 'mining'
service_vendor = 'stratum'
is_default = True
@admin
def update_block(self):
'''Connect this RPC call to 'litecoind -blocknotify' for
instant notification about new block on the network.
See blocknotify.sh in /scripts/ for more info.'''
log.info("New block notification received")
Interfaces.template_registry.update_block()
return True
@admin
def add_litecoind(self, *args):
''' Function to add a litecoind instance live '''
if len(args) != 4:
raise SubmitException("Incorrect number of parameters sent")
#(host, port, user, password) = args
Interfaces.template_registry.bitcoin_rpc.add_connection(args[0], args[1], args[2], args[3])
log.info("New litecoind connection added %s:%s" % (args[0], args[1]))
return True
def authorize(self, worker_name, worker_password):
'''Let authorize worker on this connection.'''
session = self.connection_ref().get_session()
session.setdefault('authorized', {})
if Interfaces.worker_manager.authorize(worker_name, worker_password):
session['authorized'][worker_name] = worker_password
return True
else:
if worker_name in session['authorized']:
del session['authorized'][worker_name]
return False
def subscribe(self, *args):
'''Subscribe for receiving mining jobs. This will
return subscription details, extranonce1_hex and extranonce2_size'''
extranonce1 = Interfaces.template_registry.get_new_extranonce1()
extranonce2_size = Interfaces.template_registry.extranonce2_size
extranonce1_hex = binascii.hexlify(extranonce1)
session = self.connection_ref().get_session()
session['extranonce1'] = extranonce1
session['difficulty'] = settings.POOL_TARGET # Default difficulty for new connections comes from POOL_TARGET
return Pubsub.subscribe(self.connection_ref(), MiningSubscription()) + (extranonce1_hex, extranonce2_size)
def submit(self, worker_name, job_id, extranonce2, ntime, nonce):
'''Try to solve block candidate using given parameters.'''
session = self.connection_ref().get_session()
session.setdefault('authorized', {})
# Check if worker is authorized to submit shares
if not Interfaces.worker_manager.authorize(worker_name, session['authorized'].get(worker_name)):
raise SubmitException("Worker is not authorized")
# Check if extranonce1 is in connection session
extranonce1_bin = session.get('extranonce1', None)
if not extranonce1_bin:
raise SubmitException("Connection is not subscribed for mining")
difficulty = session['difficulty']
submit_time = Interfaces.timestamper.time()
ip = self.connection_ref()._get_ip()
Interfaces.share_limiter.submit(self.connection_ref, job_id, difficulty, submit_time, worker_name)
# This checks if submitted share meet all requirements
# and it is valid proof of work.
try:
(block_header, block_hash, share_diff, on_submit) = Interfaces.template_registry.submit_share(job_id,
worker_name, session, extranonce1_bin, extranonce2, ntime, nonce, difficulty)
except SubmitException as e:
# block_header and block_hash are unavailable when the submitted data are corrupted
Interfaces.share_manager.on_submit_share(worker_name, False, False, difficulty,
submit_time, False, ip, e[0], 0)
raise
Interfaces.share_manager.on_submit_share(worker_name, block_header,
block_hash, difficulty, submit_time, True, ip, '', share_diff)
if on_submit is not None:
# Pool performs submitblock() to litecoind. Let's hook
# to result and report it to share manager
on_submit.addCallback(Interfaces.share_manager.on_submit_block,
worker_name, block_header, block_hash, submit_time, ip, share_diff)
return True
# Service documentation for remote discovery
update_block.help_text = "Notify Stratum server about new block on the network."
update_block.params = [('password', 'string', 'Administrator password'), ]
authorize.help_text = "Authorize worker for submitting shares on this connection."
authorize.params = [('worker_name', 'string', 'Name of the worker, usually in the form of user_login.worker_id.'),
('worker_password', 'string', 'Worker password'), ]
subscribe.help_text = "Subscribes current connection for receiving new mining jobs."
subscribe.params = []
submit.help_text = "Submit solved share back to the server. Excessive sending of invalid shares "\
"or shares above indicated target (see Stratum mining docs for set_target()) may lead "\
"to temporary or permanent ban of user,worker or IP address."
submit.params = [('worker_name', 'string', 'Name of the worker, usually in the form of user_login.worker_id.'),
('job_id', 'string', 'ID of job (received by mining.notify) which the current solution is based on.'),
('extranonce2', 'string', 'hex-encoded big-endian extranonce2, length depends on extranonce2_size from mining.notify.'),
('ntime', 'string', 'UNIX timestamp (32bit integer, big-endian, hex-encoded), must be >= ntime provided by mining.notify and <= current time'),
('nonce', 'string', '32bit integer, hex-encoded, big-endian'), ]
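For reference, these methods ride on the same line-delimited JSON-RPC transport the helper scripts below use; a hypothetical subscribe-then-submit exchange from the client's side looks like this (all field values illustrative):

import json
import socket

subscribe = {'id': 1, 'method': 'mining.subscribe', 'params': []}
submit = {'id': 2, 'method': 'mining.submit',
          'params': ['user.worker1',   # worker_name
                     'a1',             # job_id from mining.notify
                     '00000000',       # extranonce2
                     '52e40e49',       # ntime
                     '8d9b1a2c']}      # nonce
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('localhost', 3333))
s.sendall(json.dumps(subscribe) + "\n")
s.sendall(json.dumps(submit) + "\n")
print s.recv(16000)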

55
mining/subscription.py Normal file
View File

@ -0,0 +1,55 @@
from stratum.pubsub import Pubsub, Subscription
from mining.interfaces import Interfaces
import lib.settings as settings
import lib.logger
log = lib.logger.get_logger('subscription')
class MiningSubscription(Subscription):
'''This subscription object implements
logic for broadcasting new jobs to the clients.'''
event = 'mining.notify'
@classmethod
def on_template(cls, is_new_block):
'''This is called when TemplateRegistry registers
a new block which we have to broadcast to clients.'''
start = Interfaces.timestamper.time()
clean_jobs = is_new_block
(job_id, prevhash, coinb1, coinb2, merkle_branch, version, nbits, ntime, _) = \
Interfaces.template_registry.get_last_broadcast_args()
# Push new job to subscribed clients
cls.emit(job_id, prevhash, coinb1, coinb2, merkle_branch, version, nbits, ntime, clean_jobs)
cnt = Pubsub.get_subscription_count(cls.event)
log.info("BROADCASTED to %d connections in %.03f sec" % (cnt, (Interfaces.timestamper.time() - start)))
def _finish_after_subscribe(self, result):
'''Send new job to newly subscribed client'''
try:
(job_id, prevhash, coinb1, coinb2, merkle_branch, version, nbits, ntime, _) = \
Interfaces.template_registry.get_last_broadcast_args()
except Exception:
log.error("Template not ready yet")
return result
# Force set higher difficulty
self.connection_ref().rpc('mining.set_difficulty', [settings.POOL_TARGET, ], is_notification=True)
# self.connection_ref().rpc('client.get_version', [])
# Force client to remove previous jobs if any (eg. from previous connection)
clean_jobs = True
self.emit_single(job_id, prevhash, coinb1, coinb2, merkle_branch, version, nbits, ntime, clean_jobs)
return result
def after_subscribe(self, *args):
'''This sends a new job to the client *after* it receives the subscription details.
The on_finish callback solves the issue of a job being broadcast *during*
the subscription request, which would make the client receive messages in the wrong order.'''
self.connection_ref().on_finish.addCallback(self._finish_after_subscribe)

51
scripts/addlitecoind.sh Executable file
View File

@ -0,0 +1,51 @@
#!/usr/bin/env python
# Send a notification to the Stratum mining instance to add a new litecoind instance to the pool
import socket
import json
import sys
import argparse
import time
start = time.time()
parser = argparse.ArgumentParser(description='Add a litecoind server to the Stratum instance.')
parser.add_argument('--password', dest='password', type=str, help='use admin password from Stratum server config')
parser.add_argument('--host', dest='host', type=str, default='localhost', help='hostname of Stratum mining instance')
parser.add_argument('--port', dest='port', type=int, default=3333, help='port of Stratum mining instance')
parser.add_argument('--lport', dest='lport', type=int, default=8332, help='port of litecoin instance')
parser.add_argument('--lhost', dest='lhost', type=str, default='localhost', help='hostname of litecoin instance')
parser.add_argument('--luser', dest='luser', type=str, default='user', help='user for the litecoin instance')
parser.add_argument('--lpassword', dest='lpassword', type=str, default='somelargepassword', help='password for the user on the litecoin instance')
args = parser.parse_args()
if args.password is None:
parser.print_help()
sys.exit()
message = {'id': 1, 'method': 'mining.add_litecoind', 'params': [args.password, args.lhost, args.lport, args.luser, args.lpassword]}
try:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((args.host, args.port))
s.sendall(json.dumps(message)+"\n")
data = s.recv(16000)
s.close()
except IOError:
print "addlitecoind: Cannot connect to the pool"
sys.exit()
for line in data.split("\n"):
if not line.strip():
# Skip last line which doesn't contain any message
continue
message = json.loads(line)
if message['id'] == 1:
if message['result'] == True:
print "addlitecoind: done in %.03f sec" % (time.time() - start)
else:
print "addlitecoind: Error during request:", message['error'][1]
else:
print "addlitecoind: Unexpected message from the server:", message

50
scripts/blocknotify.sh Executable file
View File

@ -0,0 +1,50 @@
#!/usr/bin/env python
# Send a notification to the Stratum mining instance on localhost that there's a new block
# You can use this script directly as the -blocknotify argument:
# ./litecoind -blocknotify="blocknotify.sh --password admin_password"
# This is also a very basic example of how to use the Stratum protocol in native Python
import socket
import json
import sys
import argparse
import time
start = time.time()
parser = argparse.ArgumentParser(description='Send notification to Stratum instance about new bitcoin block.')
parser.add_argument('--password', dest='password', type=str, help='use admin password from Stratum server config')
parser.add_argument('--host', dest='host', type=str, default='localhost', help='hostname of Stratum mining instance')
parser.add_argument('--port', dest='port', type=int, default=3333, help='port of Stratum mining instance')
args = parser.parse_args()
if args.password is None:
parser.print_help()
sys.exit()
message = {'id': 1, 'method': 'mining.update_block', 'params': [args.password]}
try:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((args.host, args.port))
s.sendall(json.dumps(message)+"\n")
data = s.recv(16000)
s.close()
except IOError:
print "blocknotify: Cannot connect to the pool"
sys.exit()
for line in data.split("\n"):
if not line.strip():
# Skip last line which doesn't contain any message
continue
message = json.loads(line)
if message['id'] == 1:
if message['result'] == True:
print "blocknotify: done in %.03f sec" % (time.time() - start)
else:
print "blocknotify: Error during request:", message['error'][1]
else:
print "blocknotify: Unexpected message from the server:", message

8
scripts/generateAdminHash.sh Executable file
View File

@ -0,0 +1,8 @@
#!/bin/bash
if [ "x$1" == "x" ]; then
echo " Usage: $0 <admin Password>"
exit
fi
echo -n "$1" | sha256sum | cut -f1 -d' '

3
update_submodules Executable file
View File

@ -0,0 +1,3 @@
#!/bin/sh
git submodule foreach git pull origin master