Compare commits: master...copperlark (1 commit)

| Author | SHA1 | Date |
|---|---|---|
|  | 9da912512c |  |
.gitignore (vendored, 1 change)
@@ -2,4 +2,3 @@
conf/config.py
*.log
LOG
*.bak
.gitmodules (vendored, 5 changes)
@@ -6,7 +6,4 @@
	url = https://github.com/Tydus/litecoin_scrypt.git
[submodule "externals/stratum"]
	path = externals/stratum
	url = https://github.com/ahmedbodi/stratum.git
[submodule "externals/quarkcoin-hash"]
	path = externals/quarkcoin-hash
	url = https://github.com/Neisklar/quarkcoin-hash-python
	url = https://github.com/slush0/stratum.git
@@ -1,9 +0,0 @@
language: python
python:
- "2.6"
- "2.7"
- "3.2"
- "3.3"
# command to install dependencies
install:
- "pip install -r requirements.txt"
README.md (62 changes)

@@ -1,66 +1,64 @@
# Description
#Description

Stratum-mining is a pooled mining protocol. It is a replacement for *getwork* based pooling servers by allowing clients to generate work. The stratum protocol is described [here](http://mining.bitcoin.cz/stratum-mining) in full detail.

This is a implementation of stratum-mining for scrypt based coins. It is compatible with *MPOS* as it complies with the standards of *pushpool*. The end goal is to build on these standards to come up with a more stable solution.
This is a implementation of stratum-mining for scrypt based coins. It is compatible with *MPOS* as well as *mmcfe*, as it complies with the standards of *pushpool*. The end goal is to build on these standards to come up with a more stable solution.

The goal is to make a reliable stratum mining server for a wide range of coins unlike other forks where the code is limited to specific algorithm's. Over time I will develop this to be more feature rich and very stable. If you would like to see a feature please file a feature request.
The goal is to make a reliable stratum mining server for scrypt based coins. Over time I will develop this to be more feature rich and very stable. If you would like to see a feature please file a feature request.

**NOTE:** This fork is still in development. Many features may be broken. Please report any broken features or issues.

# Features
#Features

* Stratum Mining Pool
* Solved Block Confirmation
* Job Based Vardiff support
* Vardiff support
* Solution Block Hash Support
* *NEW* SHA256 and Scrypt Algo Support
* Log Rotation
* Initial low difficulty share confirmation
* Multiple *coind* wallets
* On the fly addition of new *coind* wallets
* MySQL/PostGres/SQLite database support
* MySQL database support
* Adjustable database commit parameters
* Bypass password check for workers
* Proof Of Work and Proof of Stake Coin Support
* Transaction Messaging Support

#Donations
* BTC: 18uj5SzQaYVAPX96JZt1VE4K43m5VeYekP
* BTE: 8UJLskr8eDYATvYzmaCBw3vbRmeNweT3rW
* DGC: D6tdmDCUkZEUaUyLx4dhZH992yTEJSL1tU
* LTC: LVDbDHPUF13YZQeJE6AtxDwiF2RyNBsmXh
* WDC: WeVFgZQsKSKXGak7NJPp9SrcUexghzTPGJ

# Requirements
#Requirements

*stratum-mining* is built in python. I have been testing it with 2.7.3, but it should work with other versions. The requirements for running the software are below.

* Python 2.7+
* python-twisted
* stratum
* MySQL Server
* CoinD's
* SHA256 or Scrypt CoinDaemon

Other coins have been known to work with this implementation. I have tested with the following coins, but there may be many others that work.

* Orbitcoin.
* FireFlyCoin.
* ByteCoin
* DigitalCoin
* Worldcoin
* Argentum
* Netcoin
* FlorinCoin
* CHNCoin
* Cubits v3
* OpenSourceCoin
* TekCoin
* Franko
* Quark
* Securecoin

# Installation
#Installation

The installation of this *stratum-mining* can be found in the Repo Wiki.
The installation of this *stratum-mining* can be found in the INSTALL.md file.

# Credits
#Contact

I am available in the #MPOS, #crypto-expert, #digitalcoin, #bytecoin and #worldcoin channels on freenode. Although i am willing to provide support through IRC please file issues on the repo

* Original version by Slush0 and ArtForz (original stratum code)
* More Features added by GeneralFault, Wadee Womersley, Viperaus, TheSeven and Moopless
* Multi Algo, Vardiff, DB and MPOS support done by Ahmed_Bodi, penner42 and Obigal
* Riecoin support implemented by gatra

#Credits

# License
This software is provided AS-IS without any warranties of any kind. Please use at your own risk.

* Original version by Slush0 (original stratum code)
* More Features added by GeneralFault, Wadee Womersley and Moopless
* Scrypt conversion from work done by viperaus
* PoS conversion done by TheSeven
* Modifications to make it more user friendly and easier to setup for multiple coins done by Ahmed_Bodi

#License
This software is provides AS-IS without any warranties of any kind. Please use at your own risk.
@@ -9,40 +9,34 @@ You NEED to set the parameters in BASIC SETTINGS

# ******************** BASIC SETTINGS ***************
# These are the MUST BE SET parameters!

CENTRAL_WALLET = 'set_valid_addresss_in_config!' # Local coin address where money goes
CENTRAL_WALLET = 'set_valid_addresss_in_config!' # local coin address where money goes

COINDAEMON_TRUSTED_HOST = 'localhost'
COINDAEMON_TRUSTED_PORT = 28332
COINDAEMON_TRUSTED_PORT = 8332
COINDAEMON_TRUSTED_USER = 'user'
COINDAEMON_TRUSTED_PASSWORD = 'somepassword'

# Coin algorithm is the option used to determine the algorithm used by stratum
# This currently works with POW and POS coins
# The available options are:
# scrypt, sha256d, scrypt-jane, skeinhash, quark and riecoin
# Coin Algorithm is the option used to determine the algortithm used by stratum
# This currently only works with POW SHA256 and Scrypt Coins
# The available options are scrypt and sha256d.
# If the option does not meet either of these criteria stratum defaults to scrypt
# For Coins which support TX Messages please enter yes in the TX selection
COINDAEMON_ALGO = 'riecoin'
COINDAEMON_TX = 'no'
COINDAEMON_ALGO = 'scrypt'

# ******************** BASIC SETTINGS ***************
# Backup Coin Daemon address's (consider having at least 1 backup)
# You can have up to 99

#COINDAEMON_TRUSTED_HOST_1 = 'localhost'
#COINDAEMON_TRUSTED_PORT_1 = 28332
#COINDAEMON_TRUSTED_PORT_1 = 8332
#COINDAEMON_TRUSTED_USER_1 = 'user'
#COINDAEMON_TRUSTED_PASSWORD_1 = 'somepassword'

#COINDAEMON_TRUSTED_HOST_2 = 'localhost'
#COINDAEMON_TRUSTED_PORT_2 = 28332
#COINDAEMON_TRUSTED_PORT_2 = 8332
#COINDAEMON_TRUSTED_USER_2 = 'user'
#COINDAEMON_TRUSTED_PASSWORD_2 = 'somepassword'

# ******************** GENERAL SETTINGS ***************
# Set process name of twistd, much more comfortable if you run multiple processes on one machine
STRATUM_MINING_PROCESS_NAME= 'twistd-stratum-mining'

# Enable some verbose debug (logging requests and responses).
DEBUG = False
@@ -51,29 +45,29 @@ DEBUG = False

LOGDIR = 'log/'

# Main application log file.
LOGFILE = None # eg. 'stratum.log'
LOGLEVEL = 'DEBUG'
LOGFILE = None # eg. 'stratum.log'

# Logging Rotation can be enabled with the following settings
# It if not enabled here, you can set up logrotate to rotate the files.
# For built in log rotation set LOG_ROTATION = True and configure the variables
# For built in log rotation set LOG_ROTATION = True and configrue the variables
LOG_ROTATION = True
LOG_SIZE = 10485760 # Rotate every 10M
LOG_RETENTION = 10 # Keep 10 Logs
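The built-in rotation described above behaves like the size-based rotation in Python's stdlib; a minimal sketch using the same LOG_SIZE/LOG_RETENTION values (the helper function name is ours, not part of the pool code):

```python
import logging
import logging.handlers
import os

def make_rotating_logger(logdir='log/', logfile='stratum.log',
                         log_size=10485760, log_retention=10):
    # Rotate at LOG_SIZE bytes and keep LOG_RETENTION old files,
    # mirroring the LOG_ROTATION settings above.
    handler = logging.handlers.RotatingFileHandler(
        os.path.join(logdir, logfile),
        maxBytes=log_size, backupCount=log_retention)
    logger = logging.getLogger('stratum')
    logger.addHandler(handler)
    return logger
```

If LOG_ROTATION is left False, an external logrotate job on `log/` achieves the same effect, as the comments note.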
# How many threads use for synchronous methods (services).
# 30 is enough for small installation, for real usage
# it should be slightly more, say 100-300.
THREAD_POOL_SIZE = 300

# ******************** TRANSPORTS *********************
# Hostname or external IP to expose
HOSTNAME = 'localhost'

# Disable the example service
ENABLE_EXAMPLE_SERVICE = False

# ******************** TRANSPORTS *********************

# Hostname or external IP to expose
HOSTNAME = 'localhost'

# Port used for Socket transport. Use 'None' for disabling the transport.
LISTEN_SOCKET_TRANSPORT = 3333
LISTEN_SOCKET_TRANSPORT = 3313
# Port used for HTTP Poll transport. Use 'None' for disabling the transport
LISTEN_HTTP_TRANSPORT = None
# Port used for HTTPS Poll transport
@@ -83,114 +77,84 @@ LISTEN_WS_TRANSPORT = None

# Port used for secure WebSocket, 'None' for disabling WSS
LISTEN_WSS_TRANSPORT = None

# Salt used for Block Notify Password

# Salt used when hashing passwords
PASSWORD_SALT = 'some_crazy_string'

# ******************** Database *********************
DATABASE_DRIVER = 'mysql' # Options: none, sqlite, postgresql or mysql
DATABASE_EXTEND = False # SQLite and PGSQL Only!

# SQLite
DB_SQLITE_FILE = 'pooldb.sqlite'
# Postgresql
DB_PGSQL_HOST = 'localhost'
DB_PGSQL_DBNAME = 'pooldb'
DB_PGSQL_USER = 'pooldb'
DB_PGSQL_PASS = '**empty**'
DB_PGSQL_SCHEMA = 'public'
# MySQL
DB_MYSQL_HOST = 'localhost'
DB_MYSQL_DBNAME = 'pooldb'
DB_MYSQL_USER = 'pooldb'
DB_MYSQL_PASS = '**empty**'
DB_MYSQL_PORT = 3306 # Default port for MySQL

# ******************** Adv. DB Settings *********************
# Don't change these unless you know what you are doing

DB_LOADER_CHECKTIME = 15 # How often we check to see if we should run the loader
DB_LOADER_REC_MIN = 10 # Min Records before the bulk loader fires
DB_LOADER_REC_MAX = 50 # Max Records the bulk loader will commit at a time
DB_LOADER_CHECKTIME = 15 # How often we check to see if we should run the loader
DB_LOADER_REC_MIN = 10 # Min Records before the bulk loader fires
DB_LOADER_REC_MAX = 50 # Max Records the bulk loader will commit at a time

DB_LOADER_FORCE_TIME = 300 # How often the cache should be flushed into the DB regardless of size.
DB_STATS_AVG_TIME = 300 # When using the DATABASE_EXTEND option, average speed over X sec
# Note: this is also how often it updates
DB_USERCACHE_TIME = 600 # How long the usercache is good for before we refresh

DB_STATS_AVG_TIME = 300 # When using the DATABASE_EXTEND option, average speed over X sec
# Note: this is also how often it updates
DB_USERCACHE_TIME = 600 # How long the usercache is good for before we refresh
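The loader parameters above imply batching logic along these lines (a simplified sketch; the class and method names are ours, not the pool's actual loader):

```python
import time

class BulkLoader(object):
    # Shares queue in memory; a flush fires once DB_LOADER_REC_MIN records
    # are waiting or DB_LOADER_FORCE_TIME seconds have passed, and each
    # commit takes at most DB_LOADER_REC_MAX records.
    def __init__(self, rec_min=10, rec_max=50, force_time=300):
        self.rec_min = rec_min
        self.rec_max = rec_max
        self.force_time = force_time
        self.queue = []
        self.last_flush = time.time()

    def should_flush(self):
        if len(self.queue) >= self.rec_min:
            return True
        return time.time() - self.last_flush >= self.force_time

    def take_batch(self):
        # Commit at most rec_max records per pass.
        batch = self.queue[:self.rec_max]
        self.queue = self.queue[self.rec_max:]
        self.last_flush = time.time()
        return batch
```

DB_LOADER_CHECKTIME then simply sets how often `should_flush` is polled.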
# ******************** Pool Settings *********************

# User Auth Options
USERS_AUTOADD = False # Automatically add users to database when they connect.
# This basically disables User Auth for the pool.
USERS_CHECK_PASSWORD = False # Check the workers password? (Many pools don't)
USERS_AUTOADD = False # Automatically add users to db when they connect.
# This basically disables User Auth for the pool.
USERS_CHECK_PASSWORD = False # Check the workers password? (Many pools don't)

# Transaction Settings
COINBASE_EXTRAS = '/stratumPool/' # Extra Descriptive String to incorporate in solved blocks
ALLOW_NONLOCAL_WALLET = False # Allow valid, but NON-Local wallet's
COINBASE_EXTRAS = '/stratumPool/' # Extra Descriptive String to incorporate in solved blocks
ALLOW_NONLOCAL_WALLET = False # Allow valid, but NON-Local wallet's

# Coin Daemon communication polling settings (In Seconds)
PREVHASH_REFRESH_INTERVAL = 5 # How often to check for new Blocks
# If using the blocknotify script (recommended) set = to MERKLE_REFRESH_INTERVAL
# (No reason to poll if we're getting pushed notifications)
MERKLE_REFRESH_INTERVAL = 60 # How often check memorypool
# This effectively resets the template and incorporates new transactions.
# This should be "slow"
PREVHASH_REFRESH_INTERVAL = 5 # How often to check for new Blocks
# If using the blocknotify script (recommended) set = to MERKLE_REFRESH_INTERVAL
# (No reason to poll if we're getting pushed notifications)
MERKLE_REFRESH_INTERVAL = 60 # How often check memorypool
# This effectively resets the template and incorporates new transactions.
# This should be "slow"

INSTANCE_ID = 31 # Used for extranonce and needs to be 0-31
INSTANCE_ID = 31 # Used for extranonce and needs to be 0-31

# ******************** Pool Difficulty Settings *********************
VDIFF_X2_TYPE = True # Powers of 2 e.g. 2,4,8,16,32,64,128,256,512,1024
VDIFF_FLOAT = False # Use float difficulty
# Again, Don't change unless you know what this is for.

# Pool Target (Base Difficulty)
POOL_TARGET = 4 # Pool-wide difficulty target int >= 1
# In order to match the Pool Target with a frontend like MPOS the following formula is used: (stratum diff) ~= 2^((target bits in pushpool) - 16)
# E.G. a Pool Target of 16 would = a MPOS and PushPool Target bit's of 20
POOL_TARGET = 16 # Pool-wide difficulty target int >= 1
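The MPOS mapping described in the comment above can be sanity-checked with a short sketch (the helper name is ours, not part of the pool code):

```python
import math

def mpos_target_bits(stratum_diff):
    # Invert (stratum diff) ~= 2^(bits - 16): bits = log2(diff) + 16.
    return int(math.log(stratum_diff, 2)) + 16
```

With POOL_TARGET = 16 this gives target bits of 20, matching the example in the comment.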
# Variable Difficulty Enable
VARIABLE_DIFF = True # Master variable difficulty enable
VARIABLE_DIFF = True # Master variable difficulty enable

# Variable diff tuning variables
#VARDIFF will start at the POOL_TARGET. It can go as low as the VDIFF_MIN and as high as min(VDIFF_MAX or coindaemons difficulty)
USE_COINDAEMON_DIFF = False # Set the maximum difficulty to the coindaemon difficulty.
DIFF_UPDATE_FREQUENCY = 86400 # Update the coindaemon difficulty once a day for the VARDIFF maximum
VDIFF_MIN_TARGET = 16 # Minimum target difficulty
VDIFF_MAX_TARGET = 1024 # Maximum target difficulty
VDIFF_TARGET_TIME = 15 # Target time per share (i.e. try to get 1 share per this many seconds)
VDIFF_RETARGET_TIME = 120 # Check to see if we should retarget this often
VDIFF_VARIANCE_PERCENT = 30 # Allow average time to very this % from target without retarget

# Allow external setting of worker difficulty, checks pool_worker table datarow[6] position for target difficulty
# if present or else defaults to pool target, over rides all other difficulty settings, no checks are made
# for min or max limits this should be done by your front end software
ALLOW_EXTERNAL_DIFFICULTY = False

#VARDIFF will start at the POOL_TARGET. It can go as low as the VDIFF_MIN and as high as min(VDIFF_MAX or the coin daemon's difficulty)
USE_COINDAEMON_DIFF = False # Set the maximum difficulty to the coin daemon's difficulty.
DIFF_UPDATE_FREQUENCY = 86400 # Update the COINDAEMON difficulty once a day for the VARDIFF maximum
VDIFF_MIN_TARGET = 15 # Minimum Target difficulty
VDIFF_MAX_TARGET = 1000 # Maximum Target difficulty
VDIFF_TARGET_TIME = 30 # Target time per share (i.e. try to get 1 share per this many seconds)
VDIFF_RETARGET_TIME = 120 # Check to see if we should retarget this often
VDIFF_VARIANCE_PERCENT = 20 # Allow average time to very this % from target without retarget
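The tuning variables above drive a retarget loop roughly like the following sketch (the function and its exact scaling rule are our illustration of the described behaviour, not the pool's actual code):

```python
def retarget(current_diff, avg_share_time, target_time=15,
             variance_percent=30, diff_min=16, diff_max=1024):
    # If the average time per share stays within the allowed variance of
    # VDIFF_TARGET_TIME, leave the worker's difficulty alone.
    allowed = target_time * variance_percent / 100.0
    if abs(avg_share_time - target_time) <= allowed:
        return current_diff
    # Otherwise scale difficulty proportionally and clamp it to the
    # VDIFF_MIN_TARGET..VDIFF_MAX_TARGET window.
    new_diff = current_diff * target_time / float(avg_share_time)
    return max(diff_min, min(diff_max, new_diff))
```

A check like this would run every VDIFF_RETARGET_TIME seconds per worker; with USE_COINDAEMON_DIFF the upper clamp follows the daemon's difficulty instead.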
#### Advanced Option #####
# For backwards compatibility, we send the scrypt hash to the solutions column in the shares table
# For block confirmation, we have an option to send the block hash in
# Please make sure your front end is compatible with the block hash in the solutions table.
# For People using the MPOS frontend enabling this is recommended. It allows the frontend to compare the block hash to the coin daemon reducing the likelihood of missing share error's for blocks
SOLUTION_BLOCK_HASH = True # If enabled, enter the block hash. If false enter the scrypt/sha hash into the shares table
# For People using the MPOS frontend enabling this is recommended. It allows the frontend to compare the block hash to the coin daemon reducing the liklihood of missing share error's for blocks
SOLUTION_BLOCK_HASH = True # If enabled, enter the block hash. If false enter the scrypt/sha hash into the shares table

#Pass scrypt hash to submit block check.
#Use if submit block is returning errors and marking submitted blocks invalid upstream, but the submitted blocks are being a accepted by the coin daemon into the block chain.
BLOCK_CHECK_SCRYPT_HASH = False

# ******************** Worker Ban Options *********************
ENABLE_WORKER_BANNING = True # Enable/disable temporary worker banning
WORKER_CACHE_TIME = 600 # How long the worker stats cache is good before we check and refresh
WORKER_BAN_TIME = 300 # How long we temporarily ban worker
INVALID_SHARES_PERCENT = 50 # Allow average invalid shares vary this % before we ban

# ******************** E-Mail Notification Settings *********************
NOTIFY_EMAIL_TO = '' # Where to send Start/Found block notifications
NOTIFY_EMAIL_TO_DEADMINER = '' # Where to send dead miner notifications
NOTIFY_EMAIL_FROM = 'root@localhost' # Sender address
NOTIFY_EMAIL_SERVER = 'localhost' # E-Mail sender
NOTIFY_EMAIL_USERNAME = '' # E-Mail server SMTP logon
NOTIFY_EMAIL_PASSWORD = ''
NOTIFY_EMAIL_USETLS = True

# ******************** Memcache Settings *********************
# Memcahce is a requirement. Enter the settings below
MEMCACHE_HOST = "localhost" # Hostname or IP that runs memcached
MEMCACHE_PORT = 11211 # Port
MEMCACHE_TIMEOUT = 900 # Key timeout
MEMCACHE_PREFIX = "stratum_" # Prefix for keys

# ******************** Getwork Proxy Settings *********************
# This enables a copy of slush's getwork proxy for old clients
# It will also auto-redirect new clients to the stratum interface
# so you can point ALL clients to: http://<yourserver>:<GW_PORT>
GW_ENABLE = True # Enable the Proxy
GW_PORT = 3333 # Getwork Proxy Port
GW_DISABLE_MIDSTATE = False # Disable midstate's (Faster but breaks some clients)
GW_SEND_REAL_TARGET = True # Propigate >1 difficulty to Clients (breaks some clients)
externals/quarkcoin-hash (vendored, 1 change)

@@ -1 +0,0 @@
Subproject commit 8855c585d06f510134a8c66b686705f494bda0d9
@@ -3,10 +3,9 @@
# Add conf directory to python path.
# Configuration file is standard python module.
import os, sys
sys.path = [os.path.join(os.getcwd(), 'conf'),os.path.join(os.getcwd(), '.'),os.path.join(os.getcwd(), 'externals', 'stratum-mining-proxy'),] + sys.path
sys.path = [os.path.join(os.getcwd(), 'conf'),os.path.join(os.getcwd(), 'externals', 'stratum-mining-proxy'),] + sys.path

from twisted.internet import defer
from twisted.application.service import Application, IProcess

# Run listening when mining service is ready
on_startup = defer.Deferred()

@@ -15,7 +14,6 @@ import stratum
import lib.settings as settings
# Bootstrap Stratum framework
application = stratum.setup(on_startup)
IProcess(application).processName = settings.STRATUM_MINING_PROCESS_NAME

# Load mining service into stratum framework
import mining

@@ -36,3 +34,7 @@ Interfaces.set_worker_manager(WorkerManagerInterface())
Interfaces.set_timestamper(TimestamperInterface())

mining.setup(on_startup)

if settings.GW_ENABLE == True :
    from lib.getwork_proxy import GetworkProxy
    GetworkProxy(on_startup)
lib/algocheck.py (16 changes, new file)

@@ -0,0 +1,16 @@
from twisted.internet import reactor, defer
import settings
import bitcoin_rpc
import util
from mining.interfaces import Interfaces

import lib.logger
log = lib.logger.get_logger('algocheck')

@defer.inlineCallbacks
def init(self):
    self.bitcoin_rpc = bitcoin_rpc
    result = (yield bitcoin_rpc.getblocktemplate())
    if result[5] == 'newmint':
        print("Contains Newmint")
    else: print("DOesnt contain newmint")
@@ -14,16 +14,14 @@ log = lib.logger.get_logger('bitcoin_rpc')
class BitcoinRPC(object):

    def __init__(self, host, port, username, password):
        log.debug("Got to Bitcoin RPC")
        self.bitcoin_url = 'http://%s:%d' % (host, port)
        self.credentials = base64.b64encode("%s:%s" % (username, password))
        self.headers = {
            'Content-Type': 'text/json',
            'Authorization': 'Basic %s' % self.credentials,
        }
        client.HTTPClientFactory.noisy = False
        self.has_submitblock = False

        client.HTTPClientFactory.noisy = False
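The header construction in `__init__` above (Python 2 style `base64.b64encode` on a str) can be written as a standalone, bytes-safe sketch (the helper name is ours):

```python
import base64

def make_auth_headers(username, password):
    # Same HTTP Basic-auth scheme as BitcoinRPC.__init__ above;
    # encoding to bytes first also works on Python 3.
    credentials = base64.b64encode(
        ('%s:%s' % (username, password)).encode()).decode()
    return {
        'Content-Type': 'text/json',
        'Authorization': 'Basic %s' % credentials,
    }
```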
    def _call_raw(self, data):
        client.Headers
        return client.getPage(

@@ -40,84 +38,24 @@ class BitcoinRPC(object):
            'params': params,
            'id': '1',
        }))
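The request body assembled for `_call` above amounts to a JSON-RPC payload like this sketch (the helper name is ours; the `method` key is an assumption based on standard JSON-RPC, since the fragment only shows `params` and `id`):

```python
import json

def jsonrpc_payload(method, params):
    # Mirrors the body fragment above: a method, its params,
    # and a fixed request id of '1'.
    return json.dumps({
        'method': method,
        'params': params,
        'id': '1',
    })
```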
    @defer.inlineCallbacks
    def check_submitblock(self):
        try:
            log.info("Checking for submitblock")
            resp = (yield self._call('submitblock', []))
            self.has_submitblock = True
        except Exception as e:
            if (str(e) == "404 Not Found"):
                log.debug("No submitblock detected.")
                self.has_submitblock = False
            elif (str(e) == "500 Internal Server Error"):
                log.debug("submitblock detected.")
                self.has_submitblock = True
            else:
                log.debug("unknown submitblock check result.")
                self.has_submitblock = True
        finally:
            defer.returnValue(self.has_submitblock)

    @defer.inlineCallbacks
    def submitblock(self, block_hex, hash_hex, scrypt_hex):
        # try 5 times? 500 Internal Server Error could mean random error or that TX messages setting is wrong
        attempts = 0
        while True:
            attempts += 1
            if self.has_submitblock == True:
                try:
                    log.debug("Submitting Block with submitblock: attempt #"+str(attempts))
                    log.debug([block_hex,])
                    resp = (yield self._call('submitblock', [block_hex,]))
                    log.debug("SUBMITBLOCK RESULT: %s", resp)
                    break
                except Exception as e:
                    if attempts > 4:
                        log.exception("submitblock failed. Problem Submitting block %s" % str(e))
                        log.exception("Try Enabling TX Messages in config.py!")
                        raise
                    else:
                        continue
            elif self.has_submitblock == False:
                try:
                    log.debug("Submitting Block with getblocktemplate submit: attempt #"+str(attempts))
                    log.debug([block_hex,])
                    resp = (yield self._call('getblocktemplate', [{'mode': 'submit', 'data': block_hex}]))
                    break
                except Exception as e:
                    if attempts > 4:
                        log.exception("getblocktemplate submit failed. Problem Submitting block %s" % str(e))
                        log.exception("Try Enabling TX Messages in config.py!")
                        raise
                    else:
                        continue
            else: # self.has_submitblock = None; unable to detect submitblock, try both
                try:
                    log.debug("Submitting Block with submitblock")
                    log.debug([block_hex,])
                    resp = (yield self._call('submitblock', [block_hex,]))
                    break
                except Exception as e:
                    try:
                        log.exception("submitblock Failed, does the coind have submitblock?")
                        log.exception("Trying GetBlockTemplate")
                        resp = (yield self._call('getblocktemplate', [{'mode': 'submit', 'data': block_hex}]))
                        break
                    except Exception as e:
                        if attempts > 4:
                            log.exception("submitblock failed. Problem Submitting block %s" % str(e))
                            log.exception("Try Enabling TX Messages in config.py!")
                            raise
                        else:
                            continue
    def submitblock(self, block_hex, block_hash_hex):
        # Try submitblock if that fails, go to getblocktemplate
        try:
            print("Submitting Block with Submit Block ")
            resp = (yield self._call('submitblock', [block_hex,]))
        except Exception:
            try:
                print("Submit Block call failed, trying GetBlockTemplate")
                resp = (yield self._call('getblocktemplate', [{'mode': 'submit', 'data': block_hex}]))
            except Exception as e:
                log.exception("Both SubmitBlock and GetBlockTemplate failed. Problem Submitting block %s" % str(e))
                raise

        if json.loads(resp)['result'] == None:
            # make sure the block was created.
            log.info("CHECKING FOR BLOCK AFTER SUBMITBLOCK")
            defer.returnValue((yield self.blockexists(hash_hex, scrypt_hex)))
            # make sure the block was created.
            defer.returnValue((yield self.blockexists(block_hash_hex)))
        else:
            defer.returnValue(False)
@@ -128,29 +66,17 @@ class BitcoinRPC(object):

    @defer.inlineCallbacks
    def getblocktemplate(self):
        try:
            resp = (yield self._call('getblocktemplate', [{}]))
            defer.returnValue(json.loads(resp)['result'])
        # if internal server error try getblocktemplate without empty {} # ppcoin
        except Exception as e:
            if (str(e) == "500 Internal Server Error"):
                resp = (yield self._call('getblocktemplate', []))
                defer.returnValue(json.loads(resp)['result'])
            else:
                raise
        resp = (yield self._call('getblocktemplate', [{}]))
        defer.returnValue(json.loads(resp)['result'])

    @defer.inlineCallbacks
    def prevhash(self):
        resp = (yield self._call('getwork', []))
        try:
            resp = (yield self._call('getblocktemplate', [{}]))
            defer.returnValue(json.loads(resp)['result']['previousblockhash'])
            defer.returnValue(json.loads(resp)['result']['data'][8:72])
        except Exception as e:
            if (str(e) == "500 Internal Server Error"):
                resp = (yield self._call('getblocktemplate', []))
                defer.returnValue(json.loads(resp)['result']['previousblockhash'])
            else:
                log.exception("Cannot decode prevhash %s" % str(e))
                raise
            log.exception("Cannot decode prevhash %s" % str(e))
            raise

    @defer.inlineCallbacks
    def validateaddress(self, address):

@@ -163,59 +89,12 @@ class BitcoinRPC(object):
        defer.returnValue(json.loads(resp)['result'])

    @defer.inlineCallbacks
    def blockexists(self, hash_hex, scrypt_hex):
        valid_hash = None
        blockheight = None
        # try both hash_hex and scrypt_hex to find block
        try:
            resp = (yield self._call('getblock', [hash_hex,]))
            result = json.loads(resp)['result']
            if "hash" in result and result['hash'] == hash_hex:
                log.debug("Block found: %s" % hash_hex)
                valid_hash = hash_hex
                if "height" in result:
                    blockheight = result['height']
                else:
                    defer.returnValue(True)
            else:
                log.info("Cannot find block for %s" % hash_hex)
                defer.returnValue(False)

        except Exception as e:
            try:
                resp = (yield self._call('getblock', [scrypt_hex,]))
                result = json.loads(resp)['result']
                if "hash" in result and result['hash'] == scrypt_hex:
                    valid_hash = scrypt_hex
                    log.debug("Block found: %s" % scrypt_hex)
                    if "height" in result:
                        blockheight = result['height']
                    else:
                        defer.returnValue(True)
                else:
                    log.info("Cannot find block for %s" % scrypt_hex)
                    defer.returnValue(False)

            except Exception as e:
                log.info("Cannot find block for hash_hex %s or scrypt_hex %s" % hash_hex, scrypt_hex)
                defer.returnValue(False)

        # after we've found the block, check the block with that height in the blockchain to see if hashes match
        try:
            log.debug("checking block hash against hash of block height: %s", blockheight)
            resp = (yield self._call('getblockhash', [blockheight,]))
            hash = json.loads(resp)['result']
            log.debug("hash of block of height %s: %s", blockheight, hash)
            if hash == valid_hash:
                log.debug("Block confirmed: hash of block matches hash of blockheight")
                defer.returnValue(True)
            else:
                log.debug("Block invisible: hash of block does not match hash of blockheight")
                defer.returnValue(False)

        except Exception as e:
            # cannot get blockhash from height; block was created, so return true
    def blockexists(self, block_hash_hex):
        resp = (yield self._call('getblock', [block_hash_hex,]))
        if "hash" in json.loads(resp)['result'] and json.loads(resp)['result']['hash'] == block_hash_hex:
            log.debug("Block Confirmed: %s" % block_hash_hex)
            defer.returnValue(True)
        else:
            log.info("Cannot find block for %s" % hash_hex)
            log.info("Cannot find block for %s" % block_hash_hex)
            defer.returnValue(False)
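The simplified confirmation check above can be isolated as a pure function over the `getblock` response (a sketch; the function name is ours):

```python
import json

def block_confirmed(getblock_response, block_hash_hex):
    # The block counts as confirmed only when getblock returns a result
    # whose 'hash' field matches the hash we submitted.
    result = json.loads(getblock_response)['result']
    return isinstance(result, dict) and result.get('hash') == block_hash_hex
```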
@@ -19,123 +19,115 @@ from lib.bitcoin_rpc import BitcoinRPC

class BitcoinRPCManager(object):

    def __init__(self):
        log.debug("Got to Bitcoin RPC Manager")
        self.conns = {}
        self.conns[0] = BitcoinRPC(settings.COINDAEMON_TRUSTED_HOST,
                                   settings.COINDAEMON_TRUSTED_PORT,
                                   settings.COINDAEMON_TRUSTED_USER,
                                   settings.COINDAEMON_TRUSTED_PASSWORD)
        self.curr_conn = 0
        for x in range (1, 99):
            if hasattr(settings, 'COINDAEMON_TRUSTED_HOST_' + str(x)) and hasattr(settings, 'COINDAEMON_TRUSTED_PORT_' + str(x)) and hasattr(settings, 'COINDAEMON_TRUSTED_USER_' + str(x)) and hasattr(settings, 'COINDAEMON_TRUSTED_PASSWORD_' + str(x)):
                self.conns[len(self.conns)] = BitcoinRPC(settings.__dict__['COINDAEMON_TRUSTED_HOST_' + str(x)],
                                                         settings.__dict__['COINDAEMON_TRUSTED_PORT_' + str(x)],
                                                         settings.__dict__['COINDAEMON_TRUSTED_USER_' + str(x)],
                                                         settings.__dict__['COINDAEMON_TRUSTED_PASSWORD_' + str(x)])
        self.conns = {}
        self.conns[0] = BitcoinRPC(settings.COINDAEMON_TRUSTED_HOST,
                                   settings.COINDAEMON_TRUSTED_PORT,
                                   settings.COINDAEMON_TRUSTED_USER,
                                   settings.COINDAEMON_TRUSTED_PASSWORD)
        self.curr_conn = 0
        for x in range (1, 99):
            if hasattr(settings, 'COINDAEMON_TRUSTED_HOST_' + str(x)) and hasattr(settings, 'COINDAEMON_TRUSTED_PORT_' + str(x)) and hasattr(settings, 'COINDAEMON_TRUSTED_USER_' + str(x)) and hasattr(settings, 'COINDAEMON_TRUSTED_PASSWORD_' + str(x)):
                self.conns[len(self.conns)] = BitcoinRPC(settings.__dict__['COINDAEMON_TRUSTED_HOST_' + str(x)],
                                                         settings.__dict__['COINDAEMON_TRUSTED_PORT_' + str(x)],
                                                         settings.__dict__['COINDAEMON_TRUSTED_USER_' + str(x)],
                                                         settings.__dict__['COINDAEMON_TRUSTED_PASSWORD_' + str(x)])

    def add_connection(self, host, port, user, password):
        # TODO: Some string sanity checks
        self.conns[len(self.conns)] = BitcoinRPC(host, port, user, password)

    def next_connection(self):
        time.sleep(1)
        if len(self.conns) <= 1:
            log.error("Problem with Pool 0 -- NO ALTERNATE POOLS!!!")
            time.sleep(4)
        time.sleep(1)
        if len(self.conns) <= 1:
            log.error("Problem with Pool 0 -- NO ALTERNATE POOLS!!!")
            time.sleep(4)
            return
        log.error("Problem with Pool %i Switching to Next!" % (self.curr_conn) )
        self.curr_conn = self.curr_conn + 1
        if self.curr_conn >= len(self.conns):
            self.curr_conn = 0
        return
        log.error("Problem with Pool %i Switching to Next!" % (self.curr_conn) )
        self.curr_conn = self.curr_conn + 1
        if self.curr_conn >= len(self.conns):
            self.curr_conn = 0
@defer.inlineCallbacks
|
||||
def check_height(self):
|
||||
while True:
|
||||
try:
|
||||
resp = (yield self.conns[self.curr_conn]._call('getinfo', []))
|
||||
break
|
||||
except:
|
||||
log.error("Check Height -- Pool %i Down!" % (self.curr_conn) )
|
||||
self.next_connection()
|
||||
curr_height = json.loads(resp)['result']['blocks']
|
||||
log.debug("Check Height -- Current Pool %i : %i" % (self.curr_conn,curr_height) )
|
||||
for i in self.conns:
|
||||
if i == self.curr_conn:
|
||||
continue
|
||||
while True:
|
||||
try:
|
||||
resp = (yield self.conns[self.curr_conn]._call('getinfo', []))
|
||||
break
|
||||
except:
|
||||
log.error("Check Height -- Pool %i Down!" % (self.curr_conn) )
|
||||
self.next_connection()
|
||||
curr_height = json.loads(resp)['result']['blocks']
|
||||
log.debug("Check Height -- Current Pool %i : %i" % (self.curr_conn,curr_height) )
|
||||
for i in self.conns:
|
||||
if i == self.curr_conn:
|
||||
continue
|
||||
|
||||
try:
|
||||
resp = (yield self.conns[i]._call('getinfo', []))
|
||||
except:
|
||||
log.error("Check Height -- Pool %i Down!" % (i,) )
|
||||
continue
|
||||
try:
|
||||
resp = (yield self.conns[i]._call('getinfo', []))
|
||||
except:
|
||||
log.error("Check Height -- Pool %i Down!" % (i,) )
|
||||
continue
|
||||
|
||||
height = json.loads(resp)['result']['blocks']
|
||||
log.debug("Check Height -- Pool %i : %i" % (i,height) )
|
||||
if height > curr_height:
|
||||
self.curr_conn = i
|
||||
|
||||
defer.returnValue(True)
|
||||
height = json.loads(resp)['result']['blocks']
|
||||
log.debug("Check Height -- Pool %i : %i" % (i,height) )
|
||||
if height > curr_height:
|
||||
self.curr_conn = i
|
||||
defer.returnValue(True)
|
||||
|
||||
def _call_raw(self, data):
|
||||
while True:
|
||||
try:
|
||||
return self.conns[self.curr_conn]._call_raw(data)
|
||||
except:
|
||||
self.next_connection()
|
||||
|
||||
while True:
|
||||
try:
|
||||
return self.conns[self.curr_conn]._call_raw(data)
|
||||
except:
|
||||
self.next_connection()
|
||||
|
||||
def _call(self, method, params):
|
||||
while True:
|
||||
try:
|
||||
return self.conns[self.curr_conn]._call(method,params)
|
||||
except:
|
||||
self.next_connection()
|
||||
def check_submitblock(self):
|
||||
while True:
|
||||
try:
|
||||
return self.conns[self.curr_conn].check_submitblock()
|
||||
except:
|
||||
self.next_connection()
|
||||
while True:
|
||||
try:
|
||||
return self.conns[self.curr_conn]._call(method,params)
|
||||
except:
|
||||
self.next_connection()
|
||||
|
||||
def submitblock(self, block_hex, hash_hex, scrypt_hex):
|
||||
while True:
|
||||
try:
|
||||
return self.conns[self.curr_conn].submitblock(block_hex, hash_hex, scrypt_hex)
|
||||
except:
|
||||
self.next_connection()
|
||||
def submitblock(self, block_hex, block_hash_hex):
|
||||
while True:
|
||||
try:
|
||||
return self.conns[self.curr_conn].submitblock(block_hex, block_hash_hex)
|
||||
except:
|
||||
self.next_connection()
|
||||
|
||||
def getinfo(self):
|
||||
while True:
|
||||
try:
|
||||
return self.conns[self.curr_conn].getinfo()
|
||||
except:
|
||||
self.next_connection()
|
||||
while True:
|
||||
try:
|
||||
return self.conns[self.curr_conn].getinfo()
|
||||
except:
|
||||
self.next_connection()
|
||||
|
||||
def getblocktemplate(self):
|
||||
while True:
|
||||
try:
|
||||
return self.conns[self.curr_conn].getblocktemplate()
|
||||
except:
|
||||
self.next_connection()
|
||||
|
||||
while True:
|
||||
try:
|
||||
return self.conns[self.curr_conn].getblocktemplate()
|
||||
except:
|
||||
self.next_connection()
|
||||
|
||||
def prevhash(self):
|
||||
self.check_height()
|
||||
while True:
|
||||
try:
|
||||
return self.conns[self.curr_conn].prevhash()
|
||||
except:
|
||||
self.next_connection()
|
||||
self.check_height()
|
||||
while True:
|
||||
try:
|
||||
return self.conns[self.curr_conn].prevhash()
|
||||
except:
|
||||
self.next_connection()
|
||||
|
||||
def validateaddress(self, address):
|
||||
while True:
|
||||
try:
|
||||
return self.conns[self.curr_conn].validateaddress(address)
|
||||
except:
|
||||
self.next_connection()
|
||||
while True:
|
||||
try:
|
||||
return self.conns[self.curr_conn].validateaddress(address)
|
||||
except:
|
||||
self.next_connection()
|
||||
|
||||
|
||||
def getdifficulty(self):
|
||||
while True:
|
||||
try:
|
||||
return self.conns[self.curr_conn].getdifficulty()
|
||||
except:
|
||||
self.next_connection()
|
||||
while True:
|
||||
try:
|
||||
return self.conns[self.curr_conn].getdifficulty()
|
||||
except:
|
||||
self.next_connection()
|
||||
|
||||
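The failover pattern in `BitcoinRPCManager` above (retry the current backend, advance round-robin on failure) can be sketched in isolation. This is a hypothetical Python 3 stand-in, not the pool's Twisted-based code; `FailoverPool` and the stand-in backends are invented for illustration:

```python
# Minimal sketch of round-robin RPC failover, assuming plain callables
# stand in for the RPC connections.
class FailoverPool(object):
    def __init__(self, conns):
        self.conns = conns      # list of callables standing in for RPC backends
        self.curr_conn = 0

    def next_connection(self):
        # Advance to the next backend, wrapping around at the end.
        self.curr_conn = (self.curr_conn + 1) % len(self.conns)

    def call(self, *args):
        # Keep trying backends until one answers (bounded, unlike the
        # original's infinite `while True` loop).
        for _ in range(len(self.conns)):
            try:
                return self.conns[self.curr_conn](*args)
            except Exception:
                self.next_connection()
        raise RuntimeError("all backends down")

def dead_backend(x):
    raise IOError("down")

def live_backend(x):
    return x * 2

pool = FailoverPool([dead_backend, live_backend])
result = pool.call(21)  # fails over from the dead backend to the live one
```

The original retries forever; a bounded loop is used here only so the sketch terminates when every backend is down.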
@ -5,15 +5,7 @@ import struct
import util
import merkletree
import halfnode
from coinbasetx import CoinbaseTransactionPOW
from coinbasetx import CoinbaseTransactionPOS
from coinbasetx import CoinbaseTransaction
import lib.logger
log = lib.logger.get_logger('block_template')

# Remove dependency to settings, coinbase extras should be
# provided from coinbaser
@ -27,8 +19,6 @@ class BlockTemplate(halfnode.CBlock):
    coinbase_transaction_class = CoinbaseTransaction

    def __init__(self, timestamper, coinbaser, job_id):
        log.debug("Got To Block_template.py")
        super(BlockTemplate, self).__init__()

        self.job_id = job_id
@ -56,20 +46,14 @@ class BlockTemplate(halfnode.CBlock):
        #txhashes = [None] + [ binascii.unhexlify(t['hash']) for t in data['transactions'] ]
        txhashes = [None] + [ util.ser_uint256(int(t['hash'], 16)) for t in data['transactions'] ]
        mt = merkletree.MerkleTree(txhashes)
        if settings.COINDAEMON_Reward == 'POW':
            coinbase = CoinbaseTransactionPOW(self.timestamper, self.coinbaser, data['coinbasevalue'],
                                              data['coinbaseaux']['flags'], data['height'],
                                              settings.COINBASE_EXTRAS)
        else:
            coinbase = CoinbaseTransactionPOS(self.timestamper, self.coinbaser, data['coinbasevalue'],
                                              data['coinbaseaux']['flags'], data['height'],
                                              settings.COINBASE_EXTRAS, data['curtime'])

        self.height = data['height']
        self.nVersion = data['version']
        self.hashPrevBlock = int(data['previousblockhash'], 16)
        self.nBits = int(data['bits'], 16)

        self.hashMerkleRoot = 0
        self.nTime = 0
        self.nNonce = 0
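The hunk above feeds the serialized tx hashes into `merkletree.MerkleTree`. The internals of that module are not shown in this diff, so here is an assumed sketch of the standard Bitcoin-style merkle root construction (double-SHA256 pairs, duplicating the last hash on odd levels); `dsha256` and `merkle_root` are names invented for illustration:

```python
# Assumed sketch of a Bitcoin-style merkle root over 32-byte tx hashes.
import hashlib

def dsha256(data):
    # Bitcoin-style hash: SHA256 applied twice
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(hashes):
    # hashes: list of 32-byte tx hashes; an odd level duplicates its last entry
    while len(hashes) > 1:
        if len(hashes) % 2:
            hashes.append(hashes[-1])
        hashes = [dsha256(hashes[i] + hashes[i + 1])
                  for i in range(0, len(hashes), 2)]
    return hashes[0]

a, b = dsha256(b'tx-a'), dsha256(b'tx-b')
root = merkle_root([a, b])
```

The pool's `txhashes` list starts with `None` as a placeholder for the coinbase tx, whose hash is only known once the extranonce is filled in.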
@ -140,12 +124,8 @@ class BlockTemplate(halfnode.CBlock):
        r = struct.pack(">i", self.nVersion)
        r += self.prevhash_bin
        r += util.ser_uint256_be(merkle_root_int)
        if settings.COINDAEMON_ALGO == 'riecoin':
            r += struct.pack(">I", self.nBits)
            r += ntime_bin
        else:
            r += ntime_bin
            r += struct.pack(">I", self.nBits)
        r += nonce_bin
        return r
@ -17,7 +17,6 @@ class BlockUpdater(object):
    '''

    def __init__(self, registry, bitcoin_rpc):
        log.debug("Got To Block Updater")
        self.bitcoin_rpc = bitcoin_rpc
        self.registry = registry
        self.clock = None

@ -46,7 +45,7 @@ class BlockUpdater(object):
            current_prevhash = None

        log.info("Checking for new block.")
        prevhash = util.reverse_hash((yield self.bitcoin_rpc.prevhash()))
        if prevhash and prevhash != current_prevhash:
            log.info("New block! Prevhash: %s" % prevhash)
            update = True
@ -9,73 +9,59 @@ log = lib.logger.get_logger('coinbaser')
# TODO: Add on_* hooks in the app

class SimpleCoinbaser(object):
    '''This very simple coinbaser uses a constant coin address
    for all generated blocks.'''

    def __init__(self, bitcoin_rpc, address):
        log.debug("Got to coinbaser")
        # Fire callback when coinbaser is ready
        self.on_load = defer.Deferred()

        self.address = address
        self.is_valid = False # We need to check if pool can use this address

        self.bitcoin_rpc = bitcoin_rpc
        self._validate()

    def _validate(self):
        d = self.bitcoin_rpc.validateaddress(self.address)
        d.addCallback(self._address_check)
        d.addErrback(self._failure)

    def _address_check(self, result):
        if result['isvalid'] and result['ismine']:
            self.is_valid = True
            log.info("Coinbase address '%s' is valid" % self.address)
            if 'address' in result:
                log.debug("Address = %s " % result['address'])
                self.address = result['address']
            if 'pubkey' in result:
                log.debug("PubKey = %s " % result['pubkey'])
                self.pubkey = result['pubkey']
            if 'iscompressed' in result:
                log.debug("Is Compressed = %s " % result['iscompressed'])
            if 'account' in result:
                log.debug("Account = %s " % result['account'])

            if not self.on_load.called:
                self.on_load.callback(True)

        elif result['isvalid'] and settings.ALLOW_NONLOCAL_WALLET == True:
            self.is_valid = True
            log.warning("!!! Coinbase address '%s' is valid BUT it is not local" % self.address)
            if 'pubkey' in result:
                log.debug("PubKey = %s " % result['pubkey'])
                self.pubkey = result['pubkey']
            if 'account' in result:
                log.debug("Account = %s " % result['account'])

            if not self.on_load.called:
                self.on_load.callback(True)

        else:
            self.is_valid = False
            log.error("Coinbase address '%s' is NOT valid!" % self.address)

    def _failure(self, failure):
        log.exception("Cannot validate Wallet address '%s'" % self.address)
        raise

    #def on_new_block(self):
    #    pass

    #def on_new_template(self):
    #    pass

    def get_script_pubkey(self):
        if not self.is_valid:
            # Try again, maybe the coind was down?
            self._validate()
            raise Exception("Coinbase address is not validated!")
        return util.script_to_address(self.address)

    def get_coinbase_data(self):
        return ''
@ -2,106 +2,7 @@ import binascii
import halfnode
import struct
import util
import settings
import lib.logger
log = lib.logger.get_logger('coinbasetx')

#if settings.COINDAEMON_Reward == 'POW':
class CoinbaseTransactionPOW(halfnode.CTransaction):
    '''Construct special transaction used for coinbase tx.
    It also implements quick serialization using pre-cached
    scriptSig template.'''

    extranonce_type = '>Q'
    extranonce_placeholder = struct.pack(extranonce_type, int('f000000ff111111f', 16))
    extranonce_size = struct.calcsize(extranonce_type)

    def __init__(self, timestamper, coinbaser, value, flags, height, data):
        super(CoinbaseTransactionPOW, self).__init__()
        log.debug("Got to CoinBaseTX")
        #self.extranonce = 0

        if len(self.extranonce_placeholder) != self.extranonce_size:
            raise Exception("Extranonce placeholder don't match expected length!")

        tx_in = halfnode.CTxIn()
        tx_in.prevout.hash = 0L
        tx_in.prevout.n = 2**32-1
        tx_in._scriptSig_template = (
            util.ser_number(height) + binascii.unhexlify(flags) + util.ser_number(int(timestamper.time())) + \
            chr(self.extranonce_size),
            util.ser_string(coinbaser.get_coinbase_data() + data)
        )

        tx_in.scriptSig = tx_in._scriptSig_template[0] + self.extranonce_placeholder + tx_in._scriptSig_template[1]

        tx_out = halfnode.CTxOut()
        tx_out.nValue = value
        tx_out.scriptPubKey = coinbaser.get_script_pubkey()

        if settings.COINDAEMON_TX == 'yes':
            self.strTxComment = "RanchiMall mining"
        self.vin.append(tx_in)
        self.vout.append(tx_out)

        # Two parts of serialized coinbase, just put part1 + extranonce + part2 to have final serialized tx
        self._serialized = super(CoinbaseTransactionPOW, self).serialize().split(self.extranonce_placeholder)

    def set_extranonce(self, extranonce):
        if len(extranonce) != self.extranonce_size:
            raise Exception("Incorrect extranonce size")

        (part1, part2) = self.vin[0]._scriptSig_template
        self.vin[0].scriptSig = part1 + extranonce + part2

#elif settings.COINDAEMON_Reward == 'POS':
class CoinbaseTransactionPOS(halfnode.CTransaction):
    '''Construct special transaction used for coinbase tx.
    It also implements quick serialization using pre-cached
    scriptSig template.'''

    extranonce_type = '>Q'
    extranonce_placeholder = struct.pack(extranonce_type, int('f000000ff111111f', 16))
    extranonce_size = struct.calcsize(extranonce_type)

    def __init__(self, timestamper, coinbaser, value, flags, height, data, ntime):
        super(CoinbaseTransactionPOS, self).__init__()
        log.debug("Got to CoinBaseTX")
        #self.extranonce = 0

        if len(self.extranonce_placeholder) != self.extranonce_size:
            raise Exception("Extranonce placeholder don't match expected length!")

        tx_in = halfnode.CTxIn()
        tx_in.prevout.hash = 0L
        tx_in.prevout.n = 2**32-1
        tx_in._scriptSig_template = (
            util.ser_number(height) + binascii.unhexlify(flags) + util.ser_number(int(timestamper.time())) + \
            chr(self.extranonce_size),
            util.ser_string(coinbaser.get_coinbase_data() + data)
        )

        tx_in.scriptSig = tx_in._scriptSig_template[0] + self.extranonce_placeholder + tx_in._scriptSig_template[1]

        tx_out = halfnode.CTxOut()
        tx_out.nValue = value
        tx_out.scriptPubKey = coinbaser.get_script_pubkey()

        self.nTime = ntime
        if settings.COINDAEMON_SHA256_TX == 'yes':
            self.strTxComment = "RanchiMall mining"
        self.vin.append(tx_in)
        self.vout.append(tx_out)

        # Two parts of serialized coinbase, just put part1 + extranonce + part2 to have final serialized tx
        self._serialized = super(CoinbaseTransactionPOS, self).serialize().split(self.extranonce_placeholder)

    def set_extranonce(self, extranonce):
        if len(extranonce) != self.extranonce_size:
            raise Exception("Incorrect extranonce size")

        (part1, part2) = self.vin[0]._scriptSig_template
        self.vin[0].scriptSig = part1 + extranonce + part2

#else:
class CoinbaseTransaction(halfnode.CTransaction):
    '''Construct special transaction used for coinbase tx.
    It also implements quick serialization using pre-cached
@ -111,9 +12,9 @@ class CoinbaseTransaction(halfnode.CTransaction):
    extranonce_placeholder = struct.pack(extranonce_type, int('f000000ff111111f', 16))
    extranonce_size = struct.calcsize(extranonce_type)

    def __init__(self, timestamper, coinbaser, value, flags, height, data, ntime):
        super(CoinbaseTransaction, self).__init__()
        log.debug("Got to CoinBaseTX")

        #self.extranonce = 0

        if len(self.extranonce_placeholder) != self.extranonce_size:
@ -133,8 +34,7 @@ class CoinbaseTransaction(halfnode.CTransaction):
        tx_out = halfnode.CTxOut()
        tx_out.nValue = value
        tx_out.scriptPubKey = coinbaser.get_script_pubkey()

        self.nTime = ntime

        self.vin.append(tx_in)
        self.vout.append(tx_out)

@ -146,4 +46,4 @@ class CoinbaseTransaction(halfnode.CTransaction):
        raise Exception("Incorrect extranonce size")

        (part1, part2) = self.vin[0]._scriptSig_template
        self.vin[0].scriptSig = part1 + extranonce + part2
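The coinbase classes above all rely on the same trick: serialize once with a fixed 8-byte placeholder in the scriptSig, split on that placeholder, and later splice in the real extranonce without re-serializing the whole transaction. A self-contained Python 3 sketch of just that mechanism (the surrounding transaction bytes are stand-ins):

```python
# Sketch of the pre-cached scriptSig split/splice used by the coinbase tx classes.
import struct

extranonce_type = '>Q'
extranonce_placeholder = struct.pack(extranonce_type, int('f000000ff111111f', 16))

# stand-in for a serialized coinbase tx containing the placeholder
serialized = b'\x01\x00part1' + extranonce_placeholder + b'part2\xff'
part1, part2 = serialized.split(extranonce_placeholder)

def set_extranonce(extranonce):
    # Same size check as the original set_extranonce()
    if len(extranonce) != struct.calcsize(extranonce_type):
        raise Exception("Incorrect extranonce size")
    return part1 + extranonce + part2

rebuilt = set_extranonce(struct.pack(extranonce_type, 7))
```

Because the placeholder is a fixed, unlikely byte pattern, `split` recovers exactly the two halves, and per-share work reduces to one concatenation.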
@ -44,19 +44,29 @@ THREAD_POOL_SIZE = 300

# Do you want to expose the "example" service in the server?
# Useful for learning the server; you probably want to disable
# this in production.
ENABLE_EXAMPLE_SERVICE = False

# ******************** TRANSPORTS *********************

# Hostname or external IP to expose
HOSTNAME = 'localhost'

# Port used for Socket transport. Use 'None' for disabling the transport.
LISTEN_SOCKET_TRANSPORT = 3333

# Port used for HTTP Poll transport. Use 'None' for disabling the transport.
LISTEN_HTTP_TRANSPORT = None

# Port used for HTTPS Poll transport
LISTEN_HTTPS_TRANSPORT = None

# Port used for WebSocket transport, 'None' for disabling WS
LISTEN_WS_TRANSPORT = None

# Port used for secure WebSocket, 'None' for disabling WSS
LISTEN_WSS_TRANSPORT = None

# ******************** SSL SETTINGS ******************

# Private key and certification file for SSL protected transports
@ -101,17 +111,11 @@ COINDAEMON_TRUSTED_PORT = 8332 # RPC port
COINDAEMON_TRUSTED_USER = 'stratum'
COINDAEMON_TRUSTED_PASSWORD = '***somepassword***'

# Coin Algorithm is the option used to determine the algorithm used by stratum.
# This currently only works with POW SHA256 and Scrypt coins.
# The available options are scrypt and sha256d.
# If the option does not meet either of these criteria stratum defaults to scrypt.
# Until AutoReward Selecting Code has been implemented the below options are used.
# For Reward type there is POW and POS. Please ensure you choose the correct type.
# For SHA256 PoS coins which support TX Messages please enter yes in the TX selection.
COINDAEMON_ALGO = 'riecoin'
COINDAEMON_Reward = 'POW'
COINDAEMON_SHA256_TX = 'yes'

# ******************** OTHER CORE SETTINGS *********************
# Use "echo -n '<yourpassword>' | sha256sum | cut -f1 -d' ' "
@ -140,12 +144,12 @@ IRC_NICK = "stratum%s" # Skip IRC registration

# Which hostname / external IP to expose in the IRC room.
# This should be the official HOSTNAME for normal operation.
IRC_HOSTNAME = HOSTNAME

# Don't change this unless you're creating a private Stratum cloud.
IRC_SERVER = 'irc.freenode.net'
IRC_ROOM = '#stratum-mining-nodes'
IRC_PORT = 6667

# Hardcoded list of Stratum nodes for clients to switch to when this node is not available.
PEERS = [
@ -1,6 +1,4 @@
import struct
import lib.logger
log = lib.logger.get_logger('extronance')

class ExtranonceCounter(object):
    '''Implementation of a counter producing
@ -9,11 +7,9 @@ class ExtranonceCounter(object):
       but it can be changed at any time without breaking anything.'''

    def __init__(self, instance_id):
        log.debug("Got to Extronance Counter")
        if instance_id < 0 or instance_id > 31:
            raise Exception("Current ExtranonceCounter implementation needs an instance_id in <0, 31>.")

        # Last 5 most-significant bits represents instance_id
        # The rest is just an iterator of jobs.
        self.counter = instance_id << 27
@ -26,4 +22,4 @@ class ExtranonceCounter(object):
    def get_new_bin(self):
        self.counter += 1
        return struct.pack('>L', self.counter)
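The counter above packs a 5-bit instance id and a 27-bit job iterator into one 32-bit big-endian value. It can be exercised on its own; this Python 3 sketch mirrors the class as shown in the diff:

```python
# Sketch of the extranonce counter: top 5 bits identify the server
# instance, the low 27 bits count issued jobs.
import struct

class ExtranonceCounter(object):
    def __init__(self, instance_id):
        if instance_id < 0 or instance_id > 31:
            raise Exception("instance_id must fit in 5 bits, i.e. <0, 31>")
        self.counter = instance_id << 27

    def get_new_bin(self):
        self.counter += 1
        return struct.pack('>L', self.counter)

c = ExtranonceCounter(3)
first = c.get_new_bin()
value = struct.unpack('>L', first)[0]
assert value >> 27 == 3               # instance_id recovered from the top bits
assert value & ((1 << 27) - 1) == 1   # first job id
```

The `>L` format guarantees a fixed 4-byte, big-endian encoding, so every extranonce1 prefix has the same length regardless of the counter value.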
84
lib/getwork_proxy.py
Normal file
@ -0,0 +1,84 @@
import stratum.logger
log = stratum.logger.get_logger('Getwork Proxy')

from stratum import settings
from stratum.socket_transport import SocketTransportClientFactory

from twisted.internet import reactor, defer
from twisted.web import server

from mining_libs import getwork_listener
from mining_libs import client_service
from mining_libs import jobs
from mining_libs import worker_registry
from mining_libs import version

class Site(server.Site):
    def log(self, request):
        pass

def on_shutdown(f):
    log.info("Shutting down proxy...")
    f.is_reconnecting = False # Don't let the stratum factory reconnect again

@defer.inlineCallbacks
def on_connect(f, workers, job_registry):
    log.info("Connected to Stratum pool at %s:%d" % f.main_host)

    # Hook to on_connect again
    f.on_connect.addCallback(on_connect, workers, job_registry)

    # Every worker has to re-authorize
    workers.clear_authorizations()

    # Subscribe for receiving jobs
    log.info("Subscribing for mining jobs")
    (_, extranonce1, extranonce2_size) = (yield f.rpc('mining.subscribe', []))
    job_registry.set_extranonce(extranonce1, extranonce2_size)

    defer.returnValue(f)

def on_disconnect(f, workers, job_registry):
    log.info("Disconnected from Stratum pool at %s:%d" % f.main_host)
    f.on_disconnect.addCallback(on_disconnect, workers, job_registry)

    # Reject miners because we don't give a *job :-)
    workers.clear_authorizations()
    return f

@defer.inlineCallbacks
def GetworkProxy_main(cb):
    log.info("Stratum proxy version %s Connecting to Pool..." % version.VERSION)

    # Connect to Stratum pool
    f = SocketTransportClientFactory(settings.HOSTNAME, settings.LISTEN_SOCKET_TRANSPORT,
                debug=False, proxy=None, event_handler=client_service.ClientMiningService)

    job_registry = jobs.JobRegistry(f, cmd='', no_midstate=settings.GW_DISABLE_MIDSTATE, real_target=settings.GW_SEND_REAL_TARGET)
    client_service.ClientMiningService.job_registry = job_registry
    client_service.ClientMiningService.reset_timeout()

    workers = worker_registry.WorkerRegistry(f)
    f.on_connect.addCallback(on_connect, workers, job_registry)
    f.on_disconnect.addCallback(on_disconnect, workers, job_registry)

    # Cleanup properly on shutdown
    reactor.addSystemEventTrigger('before', 'shutdown', on_shutdown, f)

    # Block until proxy connects to the pool
    yield f.on_connect

    # Setup getwork listener
    gw_site = Site(getwork_listener.Root(job_registry, workers,
                stratum_host=settings.HOSTNAME, stratum_port=settings.LISTEN_SOCKET_TRANSPORT,
                custom_lp=False, custom_stratum=False,
                custom_user=False, custom_password=False))
    gw_site.noisy = False
    reactor.listenTCP(settings.GW_PORT, gw_site, interface='0.0.0.0')

    log.info("Getwork Proxy is online, Port: %d" % (settings.GW_PORT))

def GetworkProxy(start_event):
    start_event.addCallback(GetworkProxy_main)
225
lib/halfnode.py
@ -14,35 +14,11 @@ from Crypto.Hash import SHA256

from twisted.internet.protocol import Protocol
from util import *
import settings
import sha3
import lib.logger
log = lib.logger.get_logger('halfnode')
log.debug("Got to Halfnode")

if settings.COINDAEMON_ALGO == 'scrypt':
    log.debug("########################################### Loading LTC Scrypt #########################################################")
    import ltc_scrypt
elif settings.COINDAEMON_ALGO == 'quark':
    log.debug("########################################### Loading Quark Support #########################################################")
    import quark_hash
else:
    log.debug("########################################### Loading SHA256 Support ######################################################")

#if settings.COINDAEMON_Reward == 'POS':
#    log.debug("########################################### Loading POS Support #########################################################")
#    pass
#else:
#    log.debug("########################################### Loading POW Support ######################################################")
#    pass

if settings.COINDAEMON_TX == 'yes':
    log.debug("########################################### Loading SHA256 Transaction Message Support #########################################################")
else:
    log.debug("########################################### NOT Loading SHA256 Transaction Message Support ######################################################")

MY_VERSION = 31402
MY_SUBVERSION = ".4"
@ -156,61 +132,26 @@ class CTxOut(object):

class CTransaction(object):
    def __init__(self):
        if settings.COINDAEMON_Reward == 'POW':
            self.nVersion = 1
            if settings.COINDAEMON_TX == 'yes':
                self.nVersion = 2
            self.vin = []
            self.vout = []
            self.nLockTime = 0
            self.sha256 = None
        elif settings.COINDAEMON_Reward == 'POS':
            self.nVersion = 1
            if settings.COINDAEMON_TX == 'yes':
                self.nVersion = 2
            self.nTime = 0
            self.vin = []
            self.vout = []
            self.nLockTime = 0
            self.sha256 = None
        if settings.COINDAEMON_TX == 'yes':
            self.strTxComment = ""
        self.sha3 = None

    def deserialize(self, f):
        if settings.COINDAEMON_Reward == 'POW':
            self.nVersion = struct.unpack("<i", f.read(4))[0]
            self.vin = deser_vector(f, CTxIn)
            self.vout = deser_vector(f, CTxOut)
            self.nLockTime = struct.unpack("<I", f.read(4))[0]
            self.sha256 = None
        elif settings.COINDAEMON_Reward == 'POS':
            self.nVersion = struct.unpack("<i", f.read(4))[0]
            self.nTime = struct.unpack("<i", f.read(4))[0]
            self.vin = deser_vector(f, CTxIn)
            self.vout = deser_vector(f, CTxOut)
            self.nLockTime = struct.unpack("<I", f.read(4))[0]
            self.sha256 = None
            if settings.COINDAEMON_TX == 'yes':
                self.strTxComment = deser_string(f)

    def serialize(self):
        if settings.COINDAEMON_Reward == 'POW':
            r = ""
            r += struct.pack("<i", self.nVersion)
            r += ser_vector(self.vin)
            r += ser_vector(self.vout)
            r += struct.pack("<I", self.nLockTime)
        elif settings.COINDAEMON_Reward == 'POS':
            r = ""
            r += struct.pack("<i", self.nVersion)
            r += struct.pack("<i", self.nTime)
            r += ser_vector(self.vin)
            r += ser_vector(self.vout)
            r += struct.pack("<I", self.nLockTime)
            if settings.COINDAEMON_TX == 'yes':
                r += ser_string(self.strTxComment)
        return r

    def calc_sha256(self):
        if self.sha256 is None:
            self.sha256 = uint256_from_str(SHA256.new(SHA256.new(self.serialize()).digest()).digest())
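`calc_sha256` above applies SHA256 twice over the serialized transaction to get the txid. A self-contained Python 3 sketch of that computation, with `hashlib` standing in for `Crypto.Hash.SHA256` and an invented stand-in for a serialized transaction:

```python
# Sketch of the double-SHA256 txid computation from calc_sha256.
import hashlib
import struct

def double_sha256(serialized_tx):
    # SHA256 applied twice, as in SHA256.new(SHA256.new(...).digest()).digest()
    return hashlib.sha256(hashlib.sha256(serialized_tx).digest()).digest()

# minimal stand-in for a serialized tx: nVersion, empty vin/vout, nLockTime
fake_tx = struct.pack('<i', 1) + b'\x00' + b'\x00' + struct.pack('<I', 0)
txid = double_sha256(fake_tx)
```

The pool then converts this 32-byte digest to an integer with `uint256_from_str`, which is why the transaction and block classes cache it as a number rather than bytes.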
@ -235,91 +176,46 @@ class CBlock(object):
        self.nNonce = 0
        self.vtx = []
        self.sha256 = None
        if settings.COINDAEMON_ALGO == 'scrypt':
            self.scrypt = None
        elif settings.COINDAEMON_ALGO == 'quark':
            self.quark = None
        elif settings.COINDAEMON_ALGO == 'riecoin':
            self.riecoin = None
        if settings.COINDAEMON_Reward == 'POS':
            self.signature = b""
        self.sha3 = None

    def deserialize(self, f):
        self.nVersion = struct.unpack("<i", f.read(4))[0]
        self.hashPrevBlock = deser_uint256(f)
        self.hashMerkleRoot = deser_uint256(f)
        if settings.COINDAEMON_ALGO == 'riecoin':
            self.nBits = struct.unpack("<I", f.read(4))[0]
            self.nTime = struct.unpack("<II", f.read(8))[0]
            self.nNonce = struct.unpack("<IIIIIIII", f.read(32))[0]
        else:
            self.nTime = struct.unpack("<I", f.read(4))[0]
            self.nBits = struct.unpack("<I", f.read(4))[0]
            self.nNonce = struct.unpack("<I", f.read(4))[0]
        self.vtx = deser_vector(f, CTransaction)
        if settings.COINDAEMON_Reward == 'POS':
            self.signature = deser_string(f)

    def serialize(self):
        r = []
        r.append(struct.pack("<i", self.nVersion))
        r.append(ser_uint256(self.hashPrevBlock))
        r.append(ser_uint256(self.hashMerkleRoot))
        if settings.COINDAEMON_ALGO == 'riecoin':
            r.append(struct.pack("<I", self.nBits))
            r.append(struct.pack("<Q", self.nTime))
            r.append(ser_uint256(self.nNonce))
        else:
            r.append(struct.pack("<I", self.nTime))
            r.append(struct.pack("<I", self.nBits))
            r.append(struct.pack("<I", self.nNonce))
        r.append(ser_vector(self.vtx))
        if settings.COINDAEMON_Reward == 'POS':
            r.append(ser_string(self.signature))
        return ''.join(r)

    def calc_sha3(self):
        if self.sha3 is None:
            r = []
            r.append(struct.pack("<i", self.nVersion))
            r.append(ser_uint256(self.hashPrevBlock))
            r.append(ser_uint256(self.hashMerkleRoot))
            r.append(struct.pack("<I", self.nTime))
            r.append(struct.pack("<I", self.nBits))
            r.append(struct.pack("<I", self.nNonce))
            r.append(ser_vector(self.vtx))
            s = sha3.SHA3256()
            s.update(''.join(r) + str(self.nTime))
            hash_bin_temp = s.hexdigest()
            s = sha3.SHA3256()
            s.update(hash_bin_temp)
            block_hash_bin = s.hexdigest()
            self.sha3 = uint256_from_str(block_hash_bin)
        return self.sha3

    if settings.COINDAEMON_ALGO == 'scrypt':
        def calc_scrypt(self):
            if self.scrypt is None:
                r = []
                r.append(struct.pack("<i", self.nVersion))
                r.append(ser_uint256(self.hashPrevBlock))
                r.append(ser_uint256(self.hashMerkleRoot))
                r.append(struct.pack("<I", self.nTime))
                r.append(struct.pack("<I", self.nBits))
                r.append(struct.pack("<I", self.nNonce))
                self.scrypt = uint256_from_str(ltc_scrypt.getPoWHash(''.join(r)))
            return self.scrypt
    elif settings.COINDAEMON_ALGO == 'quark':
        def calc_quark(self):
            if self.quark is None:
                r = []
                r.append(struct.pack("<i", self.nVersion))
                r.append(ser_uint256(self.hashPrevBlock))
                r.append(ser_uint256(self.hashMerkleRoot))
                r.append(struct.pack("<I", self.nTime))
                r.append(struct.pack("<I", self.nBits))
                r.append(struct.pack("<I", self.nNonce))
                self.quark = uint256_from_str(quark_hash.getPoWHash(''.join(r)))
            return self.quark
    elif settings.COINDAEMON_ALGO == 'riecoin':
        def calc_riecoin(self):
            if self.riecoin is None:
                r = []
                r.append(struct.pack("<i", self.nVersion))
                r.append(ser_uint256(self.hashPrevBlock))
                r.append(ser_uint256(self.hashMerkleRoot))
|
||||
r.append(struct.pack("<I", self.nBits))
|
||||
r.append(struct.pack("<Q", self.nTime))
|
||||
sha256 = uint256_from_str(SHA256.new(SHA256.new(''.join(r)).digest()).digest())
|
||||
self.riecoin = riecoinPoW( sha256, uint256_from_compact(self.nBits), self.nNonce )
|
||||
return self.riecoin
|
||||
else:
|
||||
def calc_sha256(self):
|
||||
def calc_sha256(self):
|
||||
if self.sha256 is None:
|
||||
r = []
|
||||
r.append(struct.pack("<i", self.nVersion))
|
||||
@@ -333,33 +229,12 @@ class CBlock(object):

    def is_valid(self):
        if settings.COINDAEMON_ALGO == 'riecoin':
            self.calc_riecoin()
        elif settings.COINDAEMON_ALGO == 'scrypt':
            self.calc_scrypt()
        elif settings.COINDAEMON_ALGO == 'quark':
            self.calc_quark()
        else:
            self.calc_sha256()

        if settings.COINDAEMON_ALGO == 'riecoin':
            target = settings.POOL_TARGET
        else:
            target = uint256_from_compact(self.nBits)

        if settings.COINDAEMON_ALGO == 'riecoin':
            if self.riecoin < target:
                return False
        if settings.COINDAEMON_ALGO == 'scrypt':
            if self.scrypt > target:
                return False
        elif settings.COINDAEMON_ALGO == 'quark':
            if self.quark > target:
                return False
        else:
            if self.sha256 > target:
                return False

        self.calc_sha3()
        #self.calc_sha256()
        target = uint256_from_compact(self.nBits)
        if self.sha3 > target:
        #if self.sha256 > target:
            return False
        hashes = []
        for tx in self.vtx:
            tx.sha256 = None

@@ -1,44 +0,0 @@

if args.irc_announce:
    from twisted.words.protocols import irc
    class IRCClient(irc.IRCClient):
        nickname = 'xpool%02i' % (random.randrange(100),)
        channel = net.ANNOUNCE_CHANNEL
        def lineReceived(self, line):
            if p2pool.DEBUG:
                print repr(line)
            irc.IRCClient.lineReceived(self, line)
        def signedOn(self):
            self.in_channel = False
            irc.IRCClient.signedOn(self)
            self.factory.resetDelay()
            self.join(self.channel)
            @defer.inlineCallbacks
            def new_share(share):
                if not self.in_channel:
                    return
                if share.pow_hash <= share.header['bits'].target and abs(share.timestamp - time.time()) < 10*60:
                    yield deferral.sleep(random.expovariate(1/60))
                    message = '\x02%s BLOCK FOUND by %s! %s%064x' % (net.NAME.upper(), bitcoin_data.script2_to_address(share.new_script, net.PARENT), net.PARENT.BLOCK_EXPLORER_URL_PREFIX, share.header_hash)
                    if all('%x' % (share.header_hash,) not in old_message for old_message in self.recent_messages):
                        self.say(self.channel, message)
                        self._remember_message(message)
            self.watch_id = node.tracker.verified.added.watch(new_share)
            self.recent_messages = []
        def joined(self, channel):
            self.in_channel = True
        def left(self, channel):
            self.in_channel = False
        def _remember_message(self, message):
            self.recent_messages.append(message)
            while len(self.recent_messages) > 100:
                self.recent_messages.pop(0)
        def privmsg(self, user, channel, message):
            if channel == self.channel:
                self._remember_message(message)
        def connectionLost(self, reason):
            node.tracker.verified.added.unwatch(self.watch_id)
            print 'IRC connection lost:', reason.getErrorMessage()
    class IRCClientFactory(protocol.ReconnectingClientFactory):
        protocol = IRCClient
    reactor.connectTCP("irc.freenode.net", 6667, IRCClientFactory())
@@ -1,43 +0,0 @@
import smtplib
from email.mime.text import MIMEText

from stratum import settings

import stratum.logger
log = stratum.logger.get_logger('Notify_Email')

class NOTIFY_EMAIL():

    def notify_start(self):
        if settings.NOTIFY_EMAIL_TO != '':
            self.send_email(settings.NOTIFY_EMAIL_TO,'Stratum Server Started','Stratum server has started!')

    def notify_found_block(self,worker_name):
        if settings.NOTIFY_EMAIL_TO != '':
            text = '%s on Stratum server found a block!' % worker_name
            self.send_email(settings.NOTIFY_EMAIL_TO,'Stratum Server Found Block',text)

    def notify_dead_coindaemon(self,worker_name):
        if settings.NOTIFY_EMAIL_TO != '':
            text = 'Coin Daemon Has Crashed (%s) Please Report' % worker_name
            self.send_email(settings.NOTIFY_EMAIL_TO,'Coin Daemon Crashed!',text)

    def send_email(self,to,subject,message):
        msg = MIMEText(message)
        msg['Subject'] = subject
        msg['From'] = settings.NOTIFY_EMAIL_FROM
        msg['To'] = to
        try:
            s = smtplib.SMTP(settings.NOTIFY_EMAIL_SERVER)
            if settings.NOTIFY_EMAIL_USERNAME != '':
                if settings.NOTIFY_EMAIL_USETLS:
                    s.ehlo()
                    s.starttls()
                    s.ehlo()
                s.login(settings.NOTIFY_EMAIL_USERNAME, settings.NOTIFY_EMAIL_PASSWORD)
            s.sendmail(settings.NOTIFY_EMAIL_FROM,to,msg.as_string())
            s.quit()
        except smtplib.SMTPAuthenticationError as e:
            log.error('Error sending Email: %s' % e[1])
        except Exception as e:
            log.error('Error sending Email: %s' % e[0])
206 lib/skein.py
@@ -1,206 +0,0 @@
# /usr/bin/env python
# coding=utf-8

# Copyright 2010 Jonathan Bowman
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied. See the License for the specific language governing
# permissions and limitations under the License.

"""Pure Python implementation of the Skein 512-bit hashing algorithm"""

import array
import binascii
import os
import struct

from threefish import (add64, bigint, bytes2words, Threefish512, words,
                       words2bytes, words_format, xrange,
                       zero_bytes, zero_words)

# An empty bytestring that behaves itself whether in Python 2 or 3
empty_bytes = array.array('B').tostring()

class Skein512(object):
    """Skein 512-bit hashing algorithm

    The message to be hashed may be set as `msg` when initialized, or
    passed in later using the ``update`` method.

    Use `key` (a bytestring with arbitrary length) for MAC
    functionality.

    `block_type` will typically be "msg", but may also be one of:
    "key", "nonce", "cfg_final", or "out_final". These will affect the
    tweak value passed to the underlying Threefish block cipher. Again,
    if you don't know which one to choose, "msg" is probably what you
    want.

    Example:

    >>> Skein512("Hello, world!").hexdigest()
    '8449f597f1764274f8bf4a03ead22e0404ea2dc63c8737629e6e282303aebfd5dd96f07e21ae2e7a8b2bdfadd445bd1d71dfdd9745c95b0eb05dc01f289ad765'

    """
    block_size = 64
    block_bits = 512
    block_type = {'key': 0,
                  'nonce': 0x5400000000000000,
                  'msg': 0x7000000000000000,
                  'cfg_final': 0xc400000000000000,
                  'out_final': 0xff00000000000000}

    def __init__(self, msg='', digest_bits=512, key=None,
                 block_type='msg'):
        self.tf = Threefish512()
        if key:
            self.digest_bits = 512
            self._start_new_type('key')
            self.update(key)
            self.tf.key = bytes2words(self.final(False))
        self.digest_bits = digest_bits
        self.digest_size = (digest_bits + 7) >> 3
        self._start_new_type('cfg_final')
        b = words2bytes((0x133414853,digest_bits,0,0,0,0,0,0))
        self._process_block(b,32)
        self._start_new_type(block_type)
        if msg:
            self.update(msg)

    def _start_new_type(self, block_type):
        """Setup new tweak values and internal buffer.

        Primarily for internal use.

        """
        self.buf = empty_bytes
        self.tf.tweak = words([0, self.block_type[block_type]])

    def _process_block(self, block, byte_count_add):
        """Encrypt internal state using Threefish.

        Primarily for internal use.

        """
        block_len = len(block)
        for i in xrange(0,block_len,64):
            w = bytes2words(block[i:i+64])
            self.tf.tweak[0] = add64(self.tf.tweak[0], byte_count_add)
            self.tf.prepare_tweak()
            self.tf.prepare_key()
            self.tf.key = self.tf.encrypt_block(w)
            self.tf._feed_forward(self.tf.key, w)
            # set second tweak value to ~SKEIN_T1_FLAG_FIRST:
            self.tf.tweak[1] &= bigint(0xbfffffffffffffff)

    def update(self, msg):
        """Update internal state with new data to be hashed.

        `msg` is a bytestring, and should be a bytes object in Python 3
        and up, or simply a string in Python 2.5 and 2.6.

        """
        self.buf += msg
        buflen = len(self.buf)
        if buflen > 64:
            end = -(buflen % 64) or (buflen-64)
            data = self.buf[0:end]
            self.buf = self.buf[end:]
            try:
                self._process_block(data, 64)
            except:
                print(len(data))
                print(binascii.b2a_hex(data))

    def final(self, output=True):
        """Return hashed data as bytestring.

        `output` is primarily for internal use. It should only be False
        if you have a clear reason for doing so.

        This function can be called as either ``final`` or ``digest``.

        """
        self.tf.tweak[1] |= bigint(0x8000000000000000) # SKEIN_T1_FLAG_FINAL
        buflen = len(self.buf)
        self.buf += zero_bytes[:64-buflen]

        self._process_block(self.buf, buflen)

        if not output:
            hash_val = words2bytes(self.tf.key)
        else:
            hash_val = empty_bytes
            self.buf = zero_bytes[:]
            key = self.tf.key[:] # temporary copy
            i=0
            while i*64 < self.digest_size:
                self.buf = words_format[1].pack(i) + self.buf[8:]
                self.tf.tweak = [0, self.block_type['out_final']]
                self._process_block(self.buf, 8)
                n = self.digest_size - i*64
                if n >= 64:
                    n = 64
                hash_val += words2bytes(self.tf.key)[0:n]
                self.tf.key = key
                i+=1
        return hash_val

    digest = final

    def hexdigest(self):
        """Return a hexadecimal representation of the hashed data"""
        return binascii.b2a_hex(self.digest())

class Skein512Random(Skein512):
    """A Skein-based pseudo-random bytestring generator.

    If `seed` is unspecified, ``os.urandom`` will be used to provide the
    seed.

    In case you are using this as an iterator, rather than generating
    new data at each iteration, a pool of length `queue_size` is
    generated periodically.

    """
    def __init__(self, seed=None, queue_size=512):
        Skein512.__init__(self, block_type='nonce')
        self.queue = []
        self.queue_size = queue_size
        self.tf.key = zero_words[:]
        if not seed:
            seed = os.urandom(100)
        self.reseed(seed)

    def reseed(self, seed):
        """(Re)seed the generator."""
        self.digest_size = 64
        self.update(words2bytes(self.tf.key) + seed)
        self.tf.key = bytes2words(self.final())

    def getbytes(self, request_bytes):
        """Return random bytestring of length `request_bytes`."""
        self.digest_size = 64 + request_bytes
        self.update(words2bytes(self.tf.key))
        output = self.final()
        self.tf.key = bytes2words(output[0:64])
        return output[64:]

    def __iter__(self):
        return self

    def next(self):
        if not self.queue:
            self.queue = array.array('B', self.getbytes(self.queue_size))
        return self.queue.pop()

if __name__ == '__main__':
    print Skein512('123').hexdigest()
@@ -1,20 +0,0 @@
import hashlib
import struct
import skein

def skeinhash(msg):
    return hashlib.sha256(skein.Skein512(msg[:80]).digest()).digest()

def skeinhashmid(msg):
    s = skein.Skein512(msg[:64] + '\x00') # hack to force Skein512.update()
    return struct.pack('<8Q', *s.tf.key.tolist())

if __name__ == '__main__':
    mesg = "dissociative1234dissociative4567dissociative1234dissociative4567dissociative1234"
    h = skeinhashmid(mesg)
    print h.encode('hex')
    print 'ad0d423b18b47f57724e519c42c9d5623308feac3df37aca964f2aa869f170bdf23e97f644e81511df49c59c5962887d17e277e7e8513345137638334c8e59a4' == h.encode('hex')

    h = skeinhash(mesg)
    print h.encode('hex')
    print '764da2e768811e91c6c0c649b052b7109a9bc786bce136a59c8d5a0547cddc54' == h.encode('hex')
@@ -3,21 +3,13 @@ import binascii
import util
import StringIO
import settings
if settings.COINDAEMON_ALGO == 'scrypt':
    import ltc_scrypt
elif settings.COINDAEMON_ALGO == 'scrypt-jane':
    import yac_scrypt
elif settings.COINDAEMON_ALGO == 'quark':
    import quark_hash
elif settings.COINDAEMON_ALGO == 'skeinhash':
    import skeinhash
else: pass
import sha3
from twisted.internet import defer
from lib.exceptions import SubmitException

import lib.logger
log = lib.logger.get_logger('template_registry')
log.debug("Got to Template Registry")

from mining.interfaces import Interfaces
from extranonce_counter import ExtranonceCounter
import lib.settings as settings
@@ -48,7 +40,7 @@ class TemplateRegistry(object):
        self.extranonce_counter = ExtranonceCounter(instance_id)
        self.extranonce2_size = block_template_class.coinbase_transaction_class.extranonce_size \
                - self.extranonce_counter.get_size()
        log.debug("Got to Template Registry")

        self.coinbaser = coinbaser
        self.block_template_class = block_template_class
        self.bitcoin_rpc = bitcoin_rpc
@@ -65,13 +57,11 @@ class TemplateRegistry(object):
    def get_new_extranonce1(self):
        '''Generates a unique extranonce1 (e.g. for a newly
        subscribed connection).'''
        log.debug("Getting Unique Extranonce")
        return self.extranonce_counter.get_new_bin()

    def get_last_broadcast_args(self):
        '''Returns arguments for mining.notify
        from the last known template.'''
        log.debug("Getting Last Template")
        return self.last_block.broadcast_args

    def add_template(self, block,block_height):
@@ -139,7 +129,7 @@ class TemplateRegistry(object):
        start = Interfaces.timestamper.time()

        template = self.block_template_class(Interfaces.timestamper, self.coinbaser, JobIdGenerator.get_new_id())
        log.info(template.fill_from_rpc(data))
        template.fill_from_rpc(data)
        self.add_template(template,data['height'])

        log.info("Update finished, %.03f sec, %d txes" % \
@@ -150,18 +140,8 @@ class TemplateRegistry(object):

    def diff_to_target(self, difficulty):
        '''Converts difficulty to target'''
        if settings.COINDAEMON_ALGO == 'scrypt':
            diff1 = 0x0000ffff00000000000000000000000000000000000000000000000000000000
        elif settings.COINDAEMON_ALGO == 'scrypt-jane':
            diff1 = 0x0000ffff00000000000000000000000000000000000000000000000000000000
        elif settings.COINDAEMON_ALGO == 'quark':
            diff1 = 0x000000ffff000000000000000000000000000000000000000000000000000000
        elif settings.COINDAEMON_ALGO == 'riecoin':
            return difficulty
        else:
            diff1 = 0x00000000ffff0000000000000000000000000000000000000000000000000000

        return diff1 / difficulty
        diff1 = 0xffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff00000000
        return diff1 / difficulty

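The difficulty-to-target conversion above can be sketched as a standalone helper. This is a minimal sketch, not the pool's code: `DIFF1` here is the standard sha256d difficulty-1 target (one of the several per-algorithm constants the diff shows), and `//` is used so it also runs on Python 3.

```python
# Minimal sketch of the difficulty -> share-target conversion.
# DIFF1 is the 256-bit target corresponding to difficulty 1 (sha256d);
# raising the difficulty shrinks the target proportionally.
DIFF1 = 0x00000000ffff0000000000000000000000000000000000000000000000000000

def diff_to_target(difficulty):
    """Return the 256-bit target a share's hash must not exceed."""
    return DIFF1 // difficulty

# Doubling the difficulty halves the target a miner must hit.
assert diff_to_target(2) == diff_to_target(1) // 2
```

The same division also runs in reverse: feeding a share's hash value back through `diff_to_target` yields that share's effective difficulty, which is exactly how `share_diff` is computed later in this file.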
    def get_job(self, job_id):
        '''For given job_id returns BlockTemplate instance or None'''
@@ -206,23 +186,15 @@ class TemplateRegistry(object):
            raise SubmitException("Job '%s' not found" % job_id)

        # Check if ntime looks correct
        if settings.COINDAEMON_ALGO == 'riecoin':
            if len(ntime) != 16:
                raise SubmitException("Incorrect size of ntime. Expected 16 chars")
        else:
            if len(ntime) != 8:
                raise SubmitException("Incorrect size of ntime. Expected 8 chars")
        if len(ntime) != 8:
            raise SubmitException("Incorrect size of ntime. Expected 8 chars")

        if not job.check_ntime(int(ntime, 16)):
            raise SubmitException("Ntime out of range")

        # Check nonce
        if settings.COINDAEMON_ALGO == 'riecoin':
            if len(nonce) != 64:
                raise SubmitException("Incorrect size of nonce. Expected 64 chars")
        else:
            if len(nonce) != 8:
                raise SubmitException("Incorrect size of nonce. Expected 8 chars")
        if len(nonce) != 8:
            raise SubmitException("Incorrect size of nonce. Expected 8 chars")

        # Check for duplicated submit
        if not job.register_submit(extranonce1_bin, extranonce2, ntime, nonce):
@@ -237,9 +209,6 @@ class TemplateRegistry(object):
        extranonce2_bin = binascii.unhexlify(extranonce2)
        ntime_bin = binascii.unhexlify(ntime)
        nonce_bin = binascii.unhexlify(nonce)
        if settings.COINDAEMON_ALGO == 'riecoin':
            ntime_bin = (''.join([ ntime_bin[(1-i)*4:(1-i)*4+4] for i in range(0, 2) ]))
            nonce_bin = (''.join([ nonce_bin[(7-i)*4:(7-i)*4+4] for i in range(0, 8) ]))

        # 1. Build coinbase
        coinbase_bin = job.serialize_coinbase(extranonce1_bin, extranonce2_bin)
@@ -252,79 +221,58 @@ class TemplateRegistry(object):
        # 3. Serialize header with given merkle, ntime and nonce
        header_bin = job.serialize_header(merkle_root_int, ntime_bin, nonce_bin)

        # 4. Reverse header and compare it with target of the user
        if settings.COINDAEMON_ALGO == 'scrypt':
            hash_bin = ltc_scrypt.getPoWHash(''.join([ header_bin[i*4:i*4+4][::-1] for i in range(0, 20) ]))
        elif settings.COINDAEMON_ALGO == 'scrypt-jane':
            hash_bin = yac_scrypt.getPoWHash(''.join([ header_bin[i*4:i*4+4][::-1] for i in range(0, 20) ]), int(ntime, 16))
        elif settings.COINDAEMON_ALGO == 'quark':
            hash_bin = quark_hash.getPoWHash(''.join([ header_bin[i*4:i*4+4][::-1] for i in range(0, 20) ]))
        elif settings.COINDAEMON_ALGO == 'skeinhash':
            hash_bin = skeinhash.skeinhash(''.join([ header_bin[i*4:i*4+4][::-1] for i in range(0, 20) ]))
        else:
            hash_bin = util.doublesha(''.join([ header_bin[i*4:i*4+4][::-1] for i in range(0, 20) ]))

        # 4. Reverse header and compare it with target of the user
        s = sha3.SHA3256()
        ntime1 = str(int(ntime, 16))
        s.update(''.join([ header_bin[i*4:i*4+4][::-1] for i in range(0, 20) ]) + ntime1)
        hash_bin_temp = s.hexdigest()
        s = sha3.SHA3256()
        s.update(hash_bin_temp)
        hash_bin = s.hexdigest()
        hash_int = util.uint256_from_str(hash_bin)
        scrypt_hash_hex = "%064x" % hash_int

        if settings.COINDAEMON_ALGO == 'riecoin':
            # this is kind of an ugly hack: we use hash_int to store the number of primes
            hash_int = util.riecoinPoW( hash_int, job.target, int(nonce, 16) )

        keccak_hash_hex = "%064x" % hash_int
        header_hex = binascii.hexlify(header_bin)
        if settings.COINDAEMON_ALGO == 'scrypt' or settings.COINDAEMON_ALGO == 'scrypt-jane':
            header_hex = header_hex+"000000800000000000000000000000000000000000000000000000000000000000000000000000000000000080020000"
        elif settings.COINDAEMON_ALGO == 'quark':
            header_hex = header_hex+"000000800000000000000000000000000000000000000000000000000000000000000000000000000000000080020000"
        elif settings.COINDAEMON_ALGO == 'riecoin':
            header_hex = header_hex+"00000080000000000000000080030000"
        else: pass

        target_user = self.diff_to_target(difficulty)
        if settings.COINDAEMON_ALGO == 'riecoin':
            if hash_int < target_user:
                raise SubmitException("Share does not meet target")
        else:
            if hash_int > target_user:
                raise SubmitException("Share is above target")
        # Mostly for debugging purposes
        target_info = self.diff_to_target(100000)
        if hash_int <= target_info:
            log.info("Yay, share with diff above 100000")
        target_user = self.diff_to_target(difficulty)
        if hash_int > target_user and \
            ( 'prev_jobid' not in session or session['prev_jobid'] < job_id \
                or 'prev_diff' not in session or hash_int > self.diff_to_target(session['prev_diff']) ):
            raise SubmitException("Share is above target")

        # Mostly for debugging purposes
        target_info = self.diff_to_target(100000)
        if hash_int <= target_info:
            log.info("Yay, share with diff above 100000")

        # Algebra tells us the diff_to_target is the same as hash_to_diff
        share_diff = int(self.diff_to_target(hash_int))

        # 5. Compare hash with target of the network
        isBlockCandidate = False
        if settings.COINDAEMON_ALGO == 'riecoin':
            if hash_int == 6:
                isBlockCandidate = True
        else:
            if hash_int <= job.target:
                isBlockCandidate = True

        if isBlockCandidate == True:
        # 5. Compare hash with target of the network
        if hash_int <= job.target:
            # Yay! It is block candidate!
            log.info("We found a block candidate! %s" % scrypt_hash_hex)

            # Reverse the header and get the potential block hash (for scrypt only)
            if settings.COINDAEMON_ALGO == 'riecoin':
                block_hash_bin = util.doublesha(''.join([ header_bin[i*4:i*4+4][::-1] for i in range(0, 28) ]))
            else:
                block_hash_bin = util.doublesha(''.join([ header_bin[i*4:i*4+4][::-1] for i in range(0, 20) ]))

            # Reverse the header and get the potential block hash (for scrypt only)
            s = sha3.SHA3256()
            ntime1 = str(int(ntime, 16))
            s.update(''.join([ header_bin[i*4:i*4+4][::-1] for i in range(0, 20) ]) + ntime1)
            hash_bin_temp = s.hexdigest()
            s = sha3.SHA3256()
            s.update(hash_bin_temp)
            block_hash_bin = s.hexdigest()
            block_hash_hex = block_hash_bin[::-1].encode('hex_codec')

            # 6. Finalize and serialize block object
            job.finalize(merkle_root_int, extranonce1_bin, extranonce2_bin, int(ntime, 16), int(nonce, 16))

            if not job.is_valid():
                # Should not happen
                log.exception("FINAL JOB VALIDATION FAILED!(Try enabling/disabling tx messages)")
                log.error("Final job validation failed!")

            # 7. Submit block to the network
            serialized = binascii.hexlify(job.serialize())
            on_submit = self.bitcoin_rpc.submitblock(serialized, block_hash_hex, scrypt_hash_hex)
            on_submit = self.bitcoin_rpc.submitblock(serialized, block_hash_hex)
            if on_submit:
                self.update_block()

@@ -334,11 +282,14 @@ class TemplateRegistry(object):
            return (header_hex, scrypt_hash_hex, share_diff, on_submit)

        if settings.SOLUTION_BLOCK_HASH:
            # Reverse the header and get the potential block hash (for scrypt only) only do this if we want to send in the block hash to the shares table
            if settings.COINDAEMON_ALGO == 'riecoin':
                block_hash_bin = util.doublesha(''.join([ header_bin[i*4:i*4+4][::-1] for i in range(0, 28) ]))
            else:
                block_hash_bin = util.doublesha(''.join([ header_bin[i*4:i*4+4][::-1] for i in range(0, 20) ]))
            # Reverse the header and get the potential block hash (for scrypt only) only do this if we want to send in the block hash to the shares table
            s = sha3.SHA3256()
            ntime1 = str(int(ntime, 16))
            s.update(''.join([ header_bin[i*4:i*4+4][::-1] for i in range(0, 20) ]) + ntime1)
            hash_bin_temp = s.hexdigest()
            s = sha3.SHA3256()
            s.update(hash_bin_temp)
            block_hash_bin = s.hexdigest()
            block_hash_hex = block_hash_bin[::-1].encode('hex_codec')
            return (header_hex, block_hash_hex, share_diff, None)
        else:

147 lib/threefish.py
@@ -1,147 +0,0 @@
# /usr/bin/env python
# coding=utf-8

# Copyright 2010 Jonathan Bowman
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied. See the License for the specific language governing
# permissions and limitations under the License.

"""Pure Python implementation of the Threefish block cipher

The core of the Skein 512-bit hashing algorithm

"""
from util_numpy import add64, bigint, bytelist, bytes2words, imap, izip, sub64, \
     SKEIN_KS_PARITY, words, words2bytes, words_format, xrange, zero_bytes, zero_words, RotL_64, RotR_64, xor

from itertools import cycle

ROT = bytelist((46, 36, 19, 37,
                33, 27, 14, 42,
                17, 49, 36, 39,
                44,  9, 54, 56,
                39, 30, 34, 24,
                13, 50, 10, 17,
                25, 29, 39, 43,
                 8, 35, 56, 22))

PERM = bytelist(((0,1),(2,3),(4,5),(6,7),
                 (2,1),(4,7),(6,5),(0,3),
                 (4,1),(6,3),(0,5),(2,7),
                 (6,1),(0,7),(2,5),(4,3)))

class Threefish512(object):
    """The Threefish 512-bit block cipher.

    The key and tweak may be set when initialized (as
    bytestrings) or after initialization using the ``tweak`` or
    ``key`` properties. When choosing the latter, be sure to call
    the ``prepare_key`` and ``prepare_tweak`` methods.

    """
    def __init__(self, key=None, tweak=None):
        """Set key and tweak.

        The key and the tweak will be lists of 8 64-bit words
        converted from `key` and `tweak` bytestrings, or all
        zeroes if not specified.

        """
        if key:
            self.key = bytes2words(key)
            self.prepare_key()
        else:
            self.key = words(zero_words[:] + [0])
        if tweak:
            self.tweak = bytes2words(tweak, 2)
            self.prepare_tweak()
        else:
            self.tweak = zero_words[:3]

    def prepare_key(self):
        """Compute key."""
        final = reduce(xor, self.key[:8]) ^ SKEIN_KS_PARITY
        try:
            self.key[8] = final
        except IndexError:
            #self.key.append(final)
            self.key = words(list(self.key) + [final])

    def prepare_tweak(self):
        """Compute tweak."""
        final = self.tweak[0] ^ self.tweak[1]
        try:
            self.tweak[2] = final
        except IndexError:
            #self.tweak.append(final)
            self.tweak = words(list(self.tweak) + [final])

    def encrypt_block(self, plaintext):
        """Return 8-word ciphertext, encrypted from plaintext.

        `plaintext` must be a list of 8 64-bit words.

        """
        key = self.key
        tweak = self.tweak
        state = words(list(imap(add64, plaintext, key[:8])))
        state[5] = add64(state[5], tweak[0])
        state[6] = add64(state[6], tweak[1])

        for r,s in izip(xrange(1,19),cycle((0,16))):
            for i in xrange(16):
                m,n = PERM[i]
                state[m] = add64(state[m], state[n])
                state[n] = RotL_64(state[n], ROT[i+s])
                state[n] = state[n] ^ state[m]
            for y in xrange(8):
                state[y] = add64(state[y], key[(r+y) % 9])
            state[5] = add64(state[5], tweak[r % 3])
            state[6] = add64(state[6], tweak[(r+1) % 3])
            state[7] = add64(state[7], r)

        return state

    def _feed_forward(self, state, plaintext):
        """Compute additional step required when hashing.

        Primarily for internal use.

        """
        state[:] = list(imap(xor, state, plaintext))

    def decrypt_block(self, ciphertext):
        """Return 8-word plaintext, decrypted from plaintext.

        `ciphertext` must be a list of 8 64-bit words.

        """
        key = self.key
        tweak = self.tweak
        state = ciphertext[:]

        for r,s in izip(xrange(18,0,-1),cycle((16,0))):
            for y in xrange(8):
                state[y] = sub64(state[y], key[(r+y) % 9])
            state[5] = sub64(state[5], tweak[r % 3])
            state[6] = sub64(state[6], tweak[(r+1) % 3])
            state[7] = sub64(state[7], r)

            for i in xrange(15,-1,-1):
                m,n = PERM[i]
                state[n] = RotR_64(state[m] ^ state[n], ROT[i+s])
                state[m] = sub64(state[m], state[n])

        result = list(imap(sub64, state, key))
        result[5] = sub64(result[5], tweak[0])
        result[6] = sub64(result[6], tweak[1])
        return result
63 lib/util.py
@@ -3,8 +3,6 @@
import struct
import StringIO
import binascii
import settings
import bitcoin_rpc
from hashlib import sha256

def deser_string(f):
@@ -56,10 +54,7 @@ def uint256_from_str_be(s):

def uint256_from_compact(c):
    nbytes = (c >> 24) & 0xFF
    if nbytes <= 3:
        v = (c & 0xFFFFFFL) >> (8 * (3 - nbytes))
    else:
        v = (c & 0xFFFFFFL) << (8 * (nbytes - 3))
    v = (c & 0xFFFFFFL) << (8 * (nbytes - 3))
    return v

def deser_vector(f, c):
|
||||
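`uint256_from_compact` expands a Bitcoin-style compact target ("nBits"): the top byte is the target size in bytes, the low three bytes are the mantissa. A quick worked example in Python 3 (hedged sketch of the same arithmetic as above, without the Python 2 `L` suffix):

```python
def uint256_from_compact(c):
    # exponent byte = total size of the target in bytes;
    # low 3 bytes = mantissa
    nbytes = (c >> 24) & 0xFF
    if nbytes <= 3:
        return (c & 0xFFFFFF) >> (8 * (3 - nbytes))
    return (c & 0xFFFFFF) << (8 * (nbytes - 3))

# Bitcoin genesis-block nBits: mantissa 0x00FFFF, exponent 0x1D (29 bytes),
# so the mantissa is shifted left by 8 * (29 - 3) = 208 bits.
target = uint256_from_compact(0x1D00FFFF)
assert target == 0xFFFF << 208
```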
@@ -176,7 +171,7 @@ def address_to_pubkeyhash(addr):
        addr = b58decode(addr, 25)
    except:
        return None

    if addr is None:
        return None

@@ -214,63 +209,9 @@ def ser_number(n):
    s.append(n)
    return bytes(s)


def isPrime( n ):
    if pow( 2, n-1, n ) == 1:
        return True
    return False

def riecoinPoW( hash_int, diff, nNonce ):
    base = 1 << 8
    for i in range(256):
        base = base << 1
        base = base | (hash_int & 1)
        hash_int = hash_int >> 1
    trailingZeros = diff - 1 - 8 - 256
    if trailingZeros < 16 or trailingZeros > 20000:
        return 0
    base = base << trailingZeros

    base += nNonce

    if (base % 210) != 97:
        return 0

    if not isPrime( base ):
        return 0
    primes = 1

    base += 4
    if isPrime( base ):
        primes += 1

    base += 2
    if isPrime( base ):
        primes += 1

    base += 4
    if isPrime( base ):
        primes += 1

    base += 2
    if isPrime( base ):
        primes += 1

    base += 4
    if isPrime( base ):
        primes += 1

    return primes

#if settings.COINDAEMON_Reward == 'POW':
def script_to_address(addr):
    d = address_to_pubkeyhash(addr)
    if not d:
        raise ValueError('invalid address')
    (ver, pubkeyhash) = d
    return b'\x76\xa9\x14' + pubkeyhash + b'\x88\xac'
#else:
def script_to_pubkey(key):
    if len(key) == 66: key = binascii.unhexlify(key)
    if len(key) != 33: raise Exception('Invalid Address')
    return b'\x21' + key + b'\xac'
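Note that `isPrime` above is a single-base Fermat test, not a primality proof: base-2 Fermat pseudoprimes such as 341 = 11 × 31 pass it. A short Python 3 illustration of the same check (`is_prime_fermat` is an illustrative name for the test used by `riecoinPoW`):

```python
def is_prime_fermat(n):
    # same check as isPrime above: 2**(n-1) == 1 (mod n)
    return pow(2, n - 1, n) == 1

assert is_prime_fermat(13)       # 13 is prime
assert is_prime_fermat(341)      # 341 = 11 * 31, smallest base-2 pseudoprime
assert not is_prime_fermat(15)   # most composites fail the test
```

For share validation against untrusted input, a probabilistic Miller-Rabin test with several bases would be a safer check; the single-base Fermat form is kept here because it is what the code above records.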
@@ -1,100 +0,0 @@
#!/usr/bin/env python
# coding=utf-8

# Copyright 2010 Jonathan Bowman
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied. See the License for the specific language governing
# permissions and limitations under the License.

"""Various helper functions for handling arrays, etc. using numpy"""

import struct
from operator import xor, add as add64, sub as sub64
import numpy as np

words = np.uint64
bytelist = np.uint8
bigint = np.uint64

# working out some differences between Python 2 and 3
try:
    from itertools import imap, izip
except ImportError:
    imap = map
    izip = zip
try:
    xrange = xrange
except:
    xrange = range

SKEIN_KS_PARITY = np.uint64(0x1BD11BDAA9FC1A22)

# zeroed out byte string and list for convenience and performance
zero_bytes = struct.pack('64B', *[0] * 64)
zero_words = np.zeros(8, dtype=np.uint64)

# Build structs for conversion appropriate to this system, favoring
# native formats if possible for slight performance benefit
words_format_tpl = "%dQ"
if struct.pack('2B', 0, 1) == struct.pack('=H', 1): # big endian?
    words_format_tpl = "<" + words_format_tpl # force little endian
else:
    try: # is 64-bit integer native?
        struct.unpack(words_format_tpl % 2, zero_bytes[:16])
    except(struct.error): # Use standard instead of native
        words_format_tpl = "=" + words_format_tpl

# build structs for one-, two- and eight-word sequences
words_format = dict(
    (i, struct.Struct(words_format_tpl % i)) for i in (1, 2, 8))

def bytes2words(data, length=8):
    """Return a list of `length` 64-bit words from `data`.

    `data` must consist of `length` * 8 bytes.
    `length` must be 1, 2, or 8.

    """
    return(np.fromstring(data, dtype=np.uint64))

def words2bytes(data, length=8):
    """Return a `length` * 8 byte string from `data`.

    `data` must be a list of `length` 64-bit words
    `length` must be 1, 2, or 8.

    """
    try:
        return(data.tostring())
    except AttributeError:
        return(np.uint64(data).tostring())

def RotL_64(x, N):
    """Return `x` rotated left by `N`."""
    #return (x << np.uint64(N & 63)) | (x >> np.uint64((64-N) & 63))
    return(np.left_shift(x, (N & 63), dtype=np.uint64) |
           np.right_shift(x, ((64-N) & 63), dtype=np.uint64))

def RotR_64(x, N):
    """Return `x` rotated right by `N`."""
    return(np.right_shift(x, (N & 63), dtype=np.uint64) |
           np.left_shift(x, ((64-N) & 63), dtype=np.uint64))

def add64(a, b):
    """Return a 64-bit integer sum of `a` and `b`."""
    return(np.add(a, b, dtype=np.uint64))

def sub64(a, b):
    """Return a 64-bit integer difference of `a` and `b`."""
    return(np.subtract(a, b, dtype=np.uint64))
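The numpy helpers above rely on `uint64` wrap-around and on `N & 63` to keep rotation counts in range. The same semantics can be sketched without numpy using plain integers modulo 2^64 (illustrative stand-ins, not drop-in replacements for the module above):

```python
M64 = 1 << 64  # all values live modulo 2**64

def add64(a, b):
    # modular 64-bit add, matching np.add(..., dtype=np.uint64)
    return (a + b) % M64

def sub64(a, b):
    # modular 64-bit subtract; negative results wrap around
    return (a - b) % M64

def rotl64(x, n):
    n &= 63  # as in RotL_64: full rotations are no-ops
    return ((x << n) | (x >> ((64 - n) & 63))) % M64

big = (1 << 64) - 1
assert add64(big, 1) == 0          # wraps like uint64 overflow
assert sub64(0, 1) == big          # wraps like uint64 underflow
assert rotl64(1, 64) == 1          # N & 63 handles full rotations
```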
@@ -1,24 +0,0 @@
''' A simple wrapper for pylibmc. It can be overwritten with simple hashing if necessary '''
import lib.settings as settings
import lib.logger
log = lib.logger.get_logger('Cache')

import pylibmc

class Cache():
    def __init__(self):
        # Open a new connection
        self.mc = pylibmc.Client([settings.MEMCACHE_HOST + ":" + str(settings.MEMCACHE_PORT)], binary=True)
        log.info("Caching initialized")

    def set(self, key, value, time=settings.MEMCACHE_TIMEOUT):
        return self.mc.set(settings.MEMCACHE_PREFIX + str(key), value, time)

    def get(self, key):
        return self.mc.get(settings.MEMCACHE_PREFIX + str(key))

    def delete(self, key):
        return self.mc.delete(settings.MEMCACHE_PREFIX + str(key))

    def exists(self, key):
        return str(key) in self.mc.get(settings.MEMCACHE_PREFIX + str(key))
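As the module docstring suggests, the pylibmc wrapper can be swapped for something simpler when memcached is unavailable. A hypothetical in-process stand-in with the same method names (`DictCache` is an assumed name; expiry handling is omitted for brevity):

```python
class DictCache:
    """Illustrative in-memory stand-in for the Cache class above."""

    def __init__(self):
        self.store = {}

    def set(self, key, value, time=0):
        # `time` accepted for interface compatibility, but ignored here
        self.store[str(key)] = value
        return True

    def get(self, key):
        # returns None on a miss, like pylibmc's get
        return self.store.get(str(key))

    def delete(self, key):
        return self.store.pop(str(key), None) is not None

    def exists(self, key):
        return str(key) in self.store

c = DictCache()
c.set("worker1", "pass")
assert c.get("worker1") == "pass"
assert c.exists("worker1") and not c.exists("worker2")
assert c.delete("worker1") and not c.exists("worker1")
```

One design note: the real `exists` above calls `in` on the value returned by `mc.get`, which raises if the key is missing (`get` returns `None`); a membership test against the key space, as sketched here, avoids that.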
@@ -3,8 +3,6 @@ import time
from datetime import datetime
import Queue
import signal
import Cache
from sets import Set

import lib.settings as settings

@@ -21,7 +19,8 @@ class DBInterface():
        self.q = Queue.Queue()
        self.queueclock = None

        self.cache = Cache.Cache()
        self.usercache = {}
        self.clearusercache()

        self.nextStatsUpdate = 0

@@ -32,7 +31,7 @@ class DBInterface():
        signal.signal(signal.SIGINT, self.signal_handler)

    def signal_handler(self, signal, frame):
        log.warning("SIGINT Detected, shutting down")
        print "SIGINT Detected, shutting down"
        self.do_import(self.dbi, True)
        reactor.stop()

@@ -40,39 +39,23 @@ class DBInterface():
        self.bitcoinrpc = bitcoinrpc
    def connectDB(self):
        if settings.DATABASE_DRIVER == "sqlite":
            log.debug('DB_Sqlite INIT')
            import DB_Sqlite
            return DB_Sqlite.DB_Sqlite()
        elif settings.DATABASE_DRIVER == "mysql":
            if settings.VARIABLE_DIFF:
                log.debug("DB_Mysql_Vardiff INIT")
                import DB_Mysql_Vardiff
                return DB_Mysql_Vardiff.DB_Mysql_Vardiff()
            else:
                log.debug('DB_Mysql INIT')
                import DB_Mysql
                return DB_Mysql.DB_Mysql()
        elif settings.DATABASE_DRIVER == "postgresql":
            log.debug('DB_Postgresql INIT')
            import DB_Postgresql
            return DB_Postgresql.DB_Postgresql()
        elif settings.DATABASE_DRIVER == "none":
            log.debug('DB_None INIT')
            import DB_None
            return DB_None.DB_None()
        else:
            log.error('Invalid DATABASE_DRIVER -- using NONE')
            log.debug('DB_None INIT')
            import DB_None
            return DB_None.DB_None()
        if settings.VARIABLE_DIFF:
            log.debug("DB_Mysql_Vardiff INIT")
            import DB_Mysql_Vardiff
            return DB_Mysql_Vardiff.DB_Mysql_Vardiff()
        else:
            log.debug('DB_Mysql INIT')
            import DB_Mysql
            return DB_Mysql.DB_Mysql()

    def clearusercache(self):
        log.debug("DBInterface.clearusercache called")
        self.usercache = {}
        self.usercacheclock = reactor.callLater(settings.DB_USERCACHE_TIME, self.clearusercache)

    def scheduleImport(self):
        # This schedules the import
        if settings.DATABASE_DRIVER == "sqlite":
            use_thread = False
        else:
            use_thread = True
        use_thread = True

        if use_thread:
            self.queueclock = reactor.callLater(settings.DB_LOADER_CHECKTIME, self.run_import_thread)
@@ -155,37 +138,24 @@ class DBInterface():
        if username == "":
            log.info("Rejected worker for blank username")
            return False
        allowed_chars = Set('0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ_-.')
        if Set(username).issubset(allowed_chars) != True:
            log.info("Username contains bad arguments")
            return False
        if username.count('.') > 1:
            log.info("Username contains multiple . ")
            return False

        # Force username and password to be strings
        username = str(username)
        password = str(password)
        if not settings.USERS_CHECK_PASSWORD and self.user_exists(username):
        wid = username + ":-:" + password

        if wid in self.usercache:
            return True
        elif self.cache.get(username) == password:
        elif not settings.USERS_CHECK_PASSWORD and self.user_exists(username):
            self.usercache[wid] = 1
            return True
        elif self.dbi.check_password(username, password):
            self.cache.set(username, password)
            self.usercache[wid] = 1
            return True
        elif settings.USERS_AUTOADD == True:
            if self.dbi.get_uid(username) != False:
                uid = self.dbi.get_uid(username)
                self.dbi.insert_worker(uid, username, password)
                self.cache.set(username, password)
                return True
            else:
                self.dbi.insert_user(username, password)
                if self.dbi.get_uid(username) != False:
                    uid = self.dbi.get_uid(username)
                    self.dbi.insert_worker(uid, username, password)
                    self.cache.set(username, password)
                    return True
            self.insert_user(username, password)
            self.usercache[wid] = 1
            return True

        log.info("Authentication for %s failed" % username)
        return False
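The whitelist check above uses `sets.Set`, which only exists in Python 2. An equivalent sketch of the same worker-name rules with builtin sets (`valid_worker_name` is an illustrative name, not a function in this codebase):

```python
ALLOWED = set('0123456789abcdefghijklmnopqrstuvwxyz'
              'ABCDEFGHIJKLMNOPQRSTUVWXYZ_-.')

def valid_worker_name(username):
    # same rules as the checks above: non-empty, whitelisted
    # characters only, and at most one '.' separator
    if not username:
        return False
    if not set(username) <= ALLOWED:
        return False
    return username.count('.') <= 1

assert valid_worker_name("alice.worker1")
assert not valid_worker_name("")
assert not valid_worker_name("bad name")   # space not allowed
assert not valid_worker_name("a.b.c")      # multiple dots rejected
```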
@@ -194,14 +164,9 @@ class DBInterface():
        return self.dbi.list_users()

    def get_user(self, id):
        if self.cache.get(id) is None:
            self.cache.set(id, self.dbi.get_user(id))
        return self.cache.get(id)

        return self.dbi.get_user(id)

    def user_exists(self, username):
        if self.cache.get(username) is not None:
            return True
        user = self.dbi.get_user(username)
        return user is not None

@@ -209,13 +174,11 @@ class DBInterface():
        return self.dbi.insert_user(username, password)

    def delete_user(self, username):
        self.mc.delete(username)
        self.usercache = {}
        return self.dbi.delete_user(username)

    def update_user(self, username, password):
        self.mc.delete(username)
        self.mc.set(username, password)
        self.usercache = {}
        return self.dbi.update_user(username, password)

    def update_worker_diff(self, username, diff):
@@ -12,7 +12,7 @@ class DB_Mysql():

    required_settings = ['PASSWORD_SALT', 'DB_MYSQL_HOST',
                         'DB_MYSQL_USER', 'DB_MYSQL_PASS',
                         'DB_MYSQL_DBNAME', 'DB_MYSQL_PORT']
                         'DB_MYSQL_DBNAME']

    for setting_name in required_settings:
        if not hasattr(settings, setting_name):
@@ -26,8 +26,7 @@ class DB_Mysql():
            getattr(settings, 'DB_MYSQL_HOST'),
            getattr(settings, 'DB_MYSQL_USER'),
            getattr(settings, 'DB_MYSQL_PASS'),
            getattr(settings, 'DB_MYSQL_DBNAME'),
            getattr(settings, 'DB_MYSQL_PORT')
            getattr(settings, 'DB_MYSQL_DBNAME')
        )
        self.dbc = self.dbh.cursor()
        self.dbh.autocommit(True)
@@ -82,11 +81,11 @@ class DB_Mysql():
            """
                INSERT INTO `shares`
                (time, rem_host, username, our_result,
                upstream_result, reason, solution, difficulty)
                upstream_result, reason, solution)
                VALUES
                (FROM_UNIXTIME(%(time)s), %(host)s,
                %(uname)s,
                %(lres)s, 'N', %(reason)s, %(solution)s, %(difficulty)s)
                %(lres)s, 'N', %(reason)s, %(solution)s)
            """,
            {
                "time": v[4],
@@ -94,8 +93,7 @@ class DB_Mysql():
                "uname": v[0],
                "lres": v[5],
                "reason": v[9],
                "solution": v[2],
                "difficulty": v[3]
                "solution": v[2]
            }
        )

@@ -155,13 +153,13 @@ class DB_Mysql():
                %(lres)s, %(result)s, %(reason)s, %(solution)s)
            """,
            {
                "time": data[4],
                "host": data[6],
                "uname": data[0],
                "lres": data[5],
                "result": data[5],
                "reason": data[9],
                "solution": data[2]
                "time": v[4],
                "host": v[6],
                "uname": v[0],
                "lres": v[5],
                "result": v[5],
                "reason": v[9],
                "solution": v[2]
            }
        )

@@ -204,25 +202,6 @@ class DB_Mysql():

        user = self.dbc.fetchone()
        return user

    def get_uid(self, id_or_username):
        log.debug("Finding user id of %s", id_or_username)
        uname = id_or_username.split(".", 1)[0]
        self.execute("SELECT `id` FROM `accounts` where username = %s", (uname))
        row = self.dbc.fetchone()

        if row is None:
            return False
        else:
            uid = row[0]
            return uid

    def insert_worker(self, account_id, username, password):
        log.debug("Adding new worker %s", username)
        query = "INSERT INTO pool_worker"
        self.execute(query + '(account_id, username, password) VALUES (%s, %s, %s);', (account_id, username, password))
        self.dbh.commit()
        return str(username)


    def delete_user(self, id_or_username):
@@ -343,13 +322,6 @@ class DB_Mysql():

        return ret

    def insert_worker(self, account_id, username, password):
        log.debug("Adding new worker %s", username)
        query = "INSERT INTO pool_worker"
        self.execute(query + '(account_id, username, password) VALUES (%s, %s, %s);', (account_id, username, password))
        self.dbh.commit()
        return str(username)

    def close(self):
        self.dbh.close()

@@ -373,4 +345,3 @@ class DB_Mysql():
        if data[0] <= 0:
            raise Exception("There is no shares table. Have you imported the schema?")
@@ -1,426 +0,0 @@
import time
import hashlib
import lib.settings as settings
import lib.logger
log = lib.logger.get_logger('DB_Mysql')

import MySQLdb
import DB_Mysql

class DB_Mysql_Extended(DB_Mysql.DB_Mysql):
    def __init__(self):
        DB_Mysql.DB_Mysql.__init__(self)

    def updateStats(self, averageOverTime):
        log.debug("Updating Stats")
        # Note: we are using transactions... so we can set the speed = 0 and it doesn't take effect until we commit.
        self.execute(
            """
            UPDATE `pool_worker`
            SET `speed` = 0,
                `alive` = 0
            """
        );

        stime = '%.0f' % (time.time() - averageOverTime);

        self.execute(
            """
            UPDATE `pool_worker` pw
            LEFT JOIN (
                SELECT `worker`, ROUND(ROUND(SUM(`difficulty`)) * 4294967296) / %(average)s AS 'speed'
                FROM `shares`
                WHERE `time` > FROM_UNIXTIME(%(time)s)
                GROUP BY `worker`
            ) AS leJoin
            ON leJoin.`worker` = pw.`id`
            SET pw.`alive` = 1,
                pw.`speed` = leJoin.`speed`
            WHERE pw.`id` = leJoin.`worker`
            """,
            {
                "time": stime,
                "average": int(averageOverTime) * 1000000
            }
        )

        self.execute(
            """
            UPDATE `pool`
            SET `value` = (
                SELECT IFNULL(SUM(`speed`), 0)
                FROM `pool_worker`
                WHERE `alive` = 1
            )
            WHERE `parameter` = 'pool_speed'
            """
        )

        self.dbh.commit()
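The `SUM(difficulty) * 4294967296 / %(average)s` expression in `updateStats` is the standard share-based hashrate estimate: each difficulty-1 share represents about 2^32 hash attempts on average (the division by `averageOverTime * 1000000` in the parameters converts the result to MH/s). A sketch of the same arithmetic (illustrative helper, not part of this class):

```python
def estimate_hashrate(share_difficulties, interval_seconds):
    """Estimated hashes/second from shares submitted over an interval.

    Assumes the conventional ~2**32 hashes per difficulty-1 share.
    """
    return sum(share_difficulties) * 2**32 / interval_seconds

# e.g. 600 difficulty-1 shares over 10 minutes ~ 4.29 GH/s
rate = estimate_hashrate([1] * 600, 600)
assert rate == 2**32
```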
    def archive_check(self):
        # Check for found shares to archive
        self.execute(
            """
            SELECT `time`
            FROM `shares`
            WHERE `upstream_result` = 1
            ORDER BY `time`
            LIMIT 1
            """
        )

        data = self.dbc.fetchone()

        if data is None or (data[0] + getattr(settings, 'ARCHIVE_DELAY')) > time.time():
            return False

        return data[0]

    def archive_found(self, found_time):
        self.execute(
            """
            INSERT INTO `shares_archive_found`
            SELECT s.`id`, s.`time`, s.`rem_host`, pw.`id`, s.`our_result`,
                s.`upstream_result`, s.`reason`, s.`solution`, s.`block_num`,
                s.`prev_block_hash`, s.`useragent`, s.`difficulty`
            FROM `shares` s
            LEFT JOIN `pool_worker` pw
                ON s.`worker` = pw.`id`
            WHERE `upstream_result` = 1
                AND `time` <= FROM_UNIXTIME(%(time)s)
            """,
            {
                "time": found_time
            }
        )

        self.dbh.commit()

    def archive_to_db(self, found_time):
        self.execute(
            """
            INSERT INTO `shares_archive`
            SELECT s.`id`, s.`time`, s.`rem_host`, pw.`id`, s.`our_result`,
                s.`upstream_result`, s.`reason`, s.`solution`, s.`block_num`,
                s.`prev_block_hash`, s.`useragent`, s.`difficulty`
            FROM `shares` s
            LEFT JOIN `pool_worker` pw
                ON s.`worker` = pw.`id`
            WHERE `time` <= FROM_UNIXTIME(%(time)s)
            """,
            {
                "time": found_time
            }
        )

        self.dbh.commit()

    def archive_cleanup(self, found_time):
        self.execute(
            """
            DELETE FROM `shares`
            WHERE `time` <= FROM_UNIXTIME(%(time)s)
            """,
            {
                "time": found_time
            }
        )

        self.dbh.commit()

    def archive_get_shares(self, found_time):
        self.execute(
            """
            SELECT *
            FROM `shares`
            WHERE `time` <= FROM_UNIXTIME(%(time)s)
            """,
            {
                "time": found_time
            }
        )

        return self.dbc
    def import_shares(self, data):
        # Data layout
        # 0: worker_name,
        # 1: block_header,
        # 2: block_hash,
        # 3: difficulty,
        # 4: timestamp,
        # 5: is_valid,
        # 6: ip,
        # 7: self.block_height,
        # 8: self.prev_hash,
        # 9: invalid_reason,
        # 10: share_diff

        log.debug("Importing Shares")
        checkin_times = {}
        total_shares = 0
        best_diff = 0

        for k, v in enumerate(data):
            total_shares += v[3]

            if v[0] in checkin_times:
                if v[4] > checkin_times[v[0]]:
                    checkin_times[v[0]]["time"] = v[4]
            else:
                checkin_times[v[0]] = {
                    "time": v[4],
                    "shares": 0,
                    "rejects": 0
                }

            if v[5] == True:
                checkin_times[v[0]]["shares"] += v[3]
            else:
                checkin_times[v[0]]["rejects"] += v[3]

            if v[10] > best_diff:
                best_diff = v[10]

            self.execute(
                """
                INSERT INTO `shares`
                (time, rem_host, worker, our_result, upstream_result,
                reason, solution, block_num, prev_block_hash,
                useragent, difficulty)
                VALUES
                (FROM_UNIXTIME(%(time)s), %(host)s,
                (SELECT `id` FROM `pool_worker` WHERE `username` = %(uname)s),
                %(lres)s, 0, %(reason)s, %(solution)s,
                %(blocknum)s, %(hash)s, '', %(difficulty)s)
                """,
                {
                    "time": v[4],
                    "host": v[6],
                    "uname": v[0],
                    "lres": v[5],
                    "reason": v[9],
                    "solution": v[2],
                    "blocknum": v[7],
                    "hash": v[8],
                    "difficulty": v[3]
                }
            )

        self.execute(
            """
            SELECT `parameter`, `value`
            FROM `pool`
            WHERE `parameter` = 'round_best_share'
                OR `parameter` = 'round_shares'
                OR `parameter` = 'bitcoin_difficulty'
                OR `parameter` = 'round_progress'
            """
        )

        current_parameters = {}

        for data in self.dbc.fetchall():
            current_parameters[data[0]] = data[1]

        round_best_share = int(current_parameters['round_best_share'])
        difficulty = float(current_parameters['bitcoin_difficulty'])
        round_shares = int(current_parameters['round_shares']) + total_shares

        updates = [
            {
                "param": "round_shares",
                "value": round_shares
            },
            {
                "param": "round_progress",
                "value": 0 if difficulty == 0 else (round_shares / difficulty) * 100
            }
        ]

        if best_diff > round_best_share:
            updates.append({
                "param": "round_best_share",
                "value": best_diff
            })

        self.executemany(
            """
            UPDATE `pool`
            SET `value` = %(value)s
            WHERE `parameter` = %(param)s
            """,
            updates
        )

        for k, v in checkin_times.items():
            self.execute(
                """
                UPDATE `pool_worker`
                SET `last_checkin` = FROM_UNIXTIME(%(time)s),
                    `total_shares` = `total_shares` + %(shares)s,
                    `total_rejects` = `total_rejects` + %(rejects)s
                WHERE `username` = %(uname)s
                """,
                {
                    "time": v["time"],
                    "shares": v["shares"],
                    "rejects": v["rejects"],
                    "uname": k
                }
            )

        self.dbh.commit()
    def found_block(self, data):
        # for database compatibility we are converting our_worker to Y/N format
        #if data[5]:
        #    data[5] = 'Y'
        #else:
        #    data[5] = 'N'
        # Note: difficulty = -1 here
        self.execute(
            """
            UPDATE `shares`
            SET `upstream_result` = %(result)s,
                `solution` = %(solution)s
            WHERE `time` = FROM_UNIXTIME(%(time)s)
                AND `worker` = (SELECT id
                    FROM `pool_worker`
                    WHERE `username` = %(uname)s)
            LIMIT 1
            """,
            {
                "result": data[5],
                "solution": data[2],
                "time": data[4],
                "uname": data[0]
            }
        )

        if data[5] == True:
            self.execute(
                """
                UPDATE `pool_worker`
                SET `total_found` = `total_found` + 1
                WHERE `username` = %(uname)s
                """,
                {
                    "uname": data[0]
                }
            )
            self.execute(
                """
                SELECT `value`
                FROM `pool`
                WHERE `parameter` = 'pool_total_found'
                """
            )
            total_found = int(self.dbc.fetchone()[0]) + 1

            self.executemany(
                """
                UPDATE `pool`
                SET `value` = %(value)s
                WHERE `parameter` = %(param)s
                """,
                [
                    {
                        "param": "round_shares",
                        "value": "0"
                    },
                    {
                        "param": "round_progress",
                        "value": "0"
                    },
                    {
                        "param": "round_best_share",
                        "value": "0"
                    },
                    {
                        "param": "round_start",
                        "value": time.time()
                    },
                    {
                        "param": "pool_total_found",
                        "value": total_found
                    }
                ]
            )

        self.dbh.commit()

    def update_pool_info(self, pi):
        self.executemany(
            """
            UPDATE `pool`
            SET `value` = %(value)s
            WHERE `parameter` = %(param)s
            """,
            [
                {
                    "param": "bitcoin_blocks",
                    "value": pi['blocks']
                },
                {
                    "param": "bitcoin_balance",
                    "value": pi['balance']
                },
                {
                    "param": "bitcoin_connections",
                    "value": pi['connections']
                },
                {
                    "param": "bitcoin_difficulty",
                    "value": pi['difficulty']
                },
                {
                    "param": "bitcoin_infotime",
                    "value": time.time()
                }
            ]
        )

        self.dbh.commit()

    def get_pool_stats(self):
        self.execute(
            """
            SELECT * FROM `pool`
            """
        )

        ret = {}

        for data in self.dbc.fetchall():
            ret[data[0]] = data[1]

        return ret

    def get_workers_stats(self):
        self.execute(
            """
            SELECT `username`, `speed`, `last_checkin`, `total_shares`,
                `total_rejects`, `total_found`, `alive`, `difficulty`
            FROM `pool_worker`
            WHERE `id` > 0
            """
        )

        ret = {}

        for data in self.dbc.fetchall():
            ret[data[0]] = {
                "username": data[0],
                "speed": int(data[1]),
                "last_checkin": time.mktime(data[2].timetuple()),
                "total_shares": int(data[3]),
                "total_rejects": int(data[4]),
                "total_found": int(data[5]),
                "alive": True if data[6] is 1 else False,
                "difficulty": int(data[7])
            }

        return ret
@@ -59,72 +59,6 @@ class DB_Mysql_Vardiff(DB_Mysql.DB_Mysql):
        )

        self.dbh.commit()

    def found_block(self, data):
        # for database compatibility we are converting our_worker to Y/N format
        if data[5]:
            data[5] = 'Y'
        else:
            data[5] = 'N'

        # Check for the share in the database before updating it
        # Note: We can't use DUPLICATE KEY because solution is not a key

        self.execute(
            """
            Select `id` from `shares`
            WHERE `solution` = %(solution)s
            LIMIT 1
            """,
            {
                "solution": data[2]
            }
        )

        shareid = self.dbc.fetchone()

        if shareid[0] > 0:
            # Note: difficulty = -1 here
            self.execute(
                """
                UPDATE `shares`
                SET `upstream_result` = %(result)s
                WHERE `solution` = %(solution)s
                AND `id` = %(id)s
                LIMIT 1
                """,
                {
                    "result": data[5],
                    "solution": data[2],
                    "id": shareid[0]
                }
            )

            self.dbh.commit()
        else:
            self.execute(
                """
                INSERT INTO `shares`
                (time, rem_host, username, our_result,
                upstream_result, reason, solution)
                VALUES
                (FROM_UNIXTIME(%(time)s), %(host)s,
                %(uname)s,
                %(lres)s, %(result)s, %(reason)s, %(solution)s)
                """,
                {
                    "time": v[4],
                    "host": v[6],
                    "uname": v[0],
                    "lres": v[5],
                    "result": v[5],
                    "reason": v[9],
                    "solution": v[2]
                }
            )

            self.dbh.commit()


    def update_worker_diff(self, username, diff):
        log.debug("Setting difficulty for %s to %s", username, diff)
@@ -182,4 +116,3 @@ class DB_Mysql_Vardiff(DB_Mysql.DB_Mysql):

        return ret
@@ -1,56 +0,0 @@
import stratum.logger
log = stratum.logger.get_logger('None')

class DB_None():
    def __init__(self):
        log.debug("Connecting to DB")

    def updateStats(self, averageOverTime):
        log.debug("Updating Stats")

    def import_shares(self, data):
        log.debug("Importing Shares")

    def found_block(self, data):
        log.debug("Found Block")

    def get_user(self, id_or_username):
        log.debug("Get User")

    def list_users(self):
        log.debug("List Users")

    def delete_user(self, username):
        log.debug("Deleting Username")

    def insert_user(self, username, password):
        log.debug("Adding Username/Password")

    def update_user(self, username, password):
        log.debug("Updating Username/Password")

    def check_password(self, username, password):
        log.debug("Checking Username/Password")
        return True

    def update_pool_info(self, pi):
        log.debug("Update Pool Info")

    def clear_worker_diff(self):
        log.debug("Clear Worker Diff")

    def get_pool_stats(self):
        log.debug("Get Pool Stats")
        ret = {}
        return ret

    def get_workers_stats(self):
        log.debug("Get Workers Stats")
        ret = {}
        return ret

    def check_tables(self):
        log.debug("Checking Tables")

    def close(self):
        log.debug("Close Connection")
@ -1,394 +0,0 @@
|
||||
import time
|
||||
import hashlib
|
||||
from stratum import settings
|
||||
import stratum.logger
|
||||
log = stratum.logger.get_logger('DB_Postgresql')
|
||||
|
||||
import psycopg2
|
||||
from psycopg2 import extras
|
||||
|
||||
class DB_Postgresql():
|
||||
def __init__(self):
|
||||
log.debug("Connecting to DB")
|
||||
self.dbh = psycopg2.connect("host='"+settings.DB_PGSQL_HOST+"' dbname='"+settings.DB_PGSQL_DBNAME+"' user='"+settings.DB_PGSQL_USER+\
|
||||
"' password='"+settings.DB_PGSQL_PASS+"'")
|
||||
# TODO -- set the schema
|
||||
self.dbc = self.dbh.cursor()
|
||||
|
||||
if hasattr(settings, 'PASSWORD_SALT'):
|
||||
self.salt = settings.PASSWORD_SALT
|
||||
else:
|
||||
raise ValueError("PASSWORD_SALT isn't set, please set in config.py")
|
||||
|
||||
def updateStats(self,averageOverTime):
|
||||
log.debug("Updating Stats")
|
||||
# Note: we are using transactions... so we can set the speed = 0 and it doesn't take affect until we are commited.
|
||||
self.dbc.execute("update pool_worker set speed = 0, alive = 'f'");
|
||||
stime = '%.2f' % ( time.time() - averageOverTime );
|
||||
self.dbc.execute("select username,SUM(difficulty) from shares where time > to_timestamp(%s) group by username", [stime])
|
||||
total_speed = 0
|
||||
for name,shares in self.dbc.fetchall():
|
||||
speed = int(int(shares) * pow(2,32)) / ( int(averageOverTime) * 1000 * 1000)
|
||||
total_speed += speed
|
||||
self.dbc.execute("update pool_worker set speed = %s, alive = 't' where username = %s", (speed,name))
|
||||
self.dbc.execute("update pool set value = %s where parameter = 'pool_speed'",[total_speed])
|
||||
self.dbh.commit()
|
||||
|
||||
def archive_check(self):
|
||||
# Check for found shares to archive
|
||||
self.dbc.execute("select time from shares where upstream_result = true order by time limit 1")
|
||||
data = self.dbc.fetchone()
|
||||
if data is None or (data[0] + settings.ARCHIVE_DELAY) > time.time() :
|
||||
return False
|
||||
return data[0]
|
||||
|
||||
def archive_found(self,found_time):
|
||||
self.dbc.execute("insert into shares_archive_found select * from shares where upstream_result = true and time <= to_timestamp(%s)", [found_time])
|
||||
self.dbh.commit()
|
||||
|
||||
def archive_to_db(self,found_time):
|
||||
self.dbc.execute("insert into shares_archive select * from shares where time <= to_timestamp(%s)",[found_time])
|
||||
self.dbh.commit()
|
||||
|
||||
def archive_cleanup(self,found_time):
|
||||
self.dbc.execute("delete from shares where time <= to_timestamp(%s)",[found_time])
|
||||
self.dbh.commit()
|
||||
|
||||
def archive_get_shares(self,found_time):
|
||||
self.dbc.execute("select * from shares where time <= to_timestamp(%s)",[found_time])
|
||||
return self.dbc
|
||||
|
||||
    def import_shares(self, data):
        log.debug("Importing Shares")
        #        0            1             2           3           4          5         6   7             8          9               10
        # data: [worker_name, block_header, block_hash, difficulty, timestamp, is_valid, ip, block_height, prev_hash, invalid_reason, best_diff]
        checkin_times = {}
        total_shares = 0
        best_diff = 0
        for k, v in enumerate(data):
            if settings.DATABASE_EXTEND:
                total_shares += v[3]
                if v[0] in checkin_times:
                    if v[4] > checkin_times[v[0]]:
                        checkin_times[v[0]]["time"] = v[4]
                else:
                    checkin_times[v[0]] = {"time": v[4], "shares": 0, "rejects": 0}

                if v[5] == True:
                    checkin_times[v[0]]["shares"] += v[3]
                else:
                    checkin_times[v[0]]["rejects"] += v[3]

                if v[10] > best_diff:
                    best_diff = v[10]

                self.dbc.execute("insert into shares " +
                        "(time,rem_host,username,our_result,upstream_result,reason,solution,block_num,prev_block_hash,useragent,difficulty) " +
                        "VALUES (to_timestamp(%s),%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)",
                        (v[4], v[6], v[0], bool(v[5]), False, v[9], '', v[7], v[8], '', v[3]))
            else:
                self.dbc.execute("insert into shares (time,rem_host,username,our_result,upstream_result,reason,solution) VALUES " +
                        "(to_timestamp(%s),%s,%s,%s,%s,%s,%s)",
                        (v[4], v[6], v[0], bool(v[5]), False, v[9], ''))

        if settings.DATABASE_EXTEND:
            self.dbc.execute("select value from pool where parameter = 'round_shares'")
            round_shares = int(self.dbc.fetchone()[0]) + total_shares
            self.dbc.execute("update pool set value = %s where parameter = 'round_shares'", [round_shares])

            self.dbc.execute("select value from pool where parameter = 'round_best_share'")
            round_best_share = int(self.dbc.fetchone()[0])
            if best_diff > round_best_share:
                self.dbc.execute("update pool set value = %s where parameter = 'round_best_share'", [best_diff])

            self.dbc.execute("select value from pool where parameter = 'bitcoin_difficulty'")
            difficulty = float(self.dbc.fetchone()[0])

            if difficulty == 0:
                progress = 0
            else:
                progress = (round_shares / difficulty) * 100
            self.dbc.execute("update pool set value = %s where parameter = 'round_progress'", [progress])

            for k, v in checkin_times.items():
                self.dbc.execute("update pool_worker set last_checkin = to_timestamp(%s), total_shares = total_shares + %s, total_rejects = total_rejects + %s where username = %s",
                        (v["time"], v["shares"], v["rejects"], k))

        self.dbh.commit()
    def found_block(self, data):
        # Note: difficulty = -1 here
        self.dbc.execute("update shares set upstream_result = %s, solution = %s where id in (select id from shares where time = to_timestamp(%s) and username = %s limit 1)",
                (bool(data[5]), data[2], data[4], data[0]))
        if settings.DATABASE_EXTEND and data[5] == True:
            self.dbc.execute("update pool_worker set total_found = total_found + 1 where username = %s", (data[0],))
            self.dbc.execute("select value from pool where parameter = 'pool_total_found'")
            total_found = int(self.dbc.fetchone()[0]) + 1
            self.dbc.executemany("update pool set value = %s where parameter = %s", [(0, 'round_shares'),
                    (0, 'round_progress'),
                    (0, 'round_best_share'),
                    (time.time(), 'round_start'),
                    (total_found, 'pool_total_found')])
            self.dbh.commit()
    def get_user(self, id_or_username):
        log.debug("Finding user with id or username of %s", id_or_username)
        cursor = self.dbh.cursor(cursor_factory=extras.DictCursor)

        cursor.execute(
            """
            SELECT *
            FROM pool_worker
            WHERE id = %(id)s
               OR username = %(uname)s
            """,
            {
                "id": id_or_username if id_or_username.isdigit() else -1,
                "uname": id_or_username
            }
        )

        user = cursor.fetchone()
        cursor.close()
        return user

    def list_users(self):
        cursor = self.dbh.cursor(cursor_factory=extras.DictCursor)
        cursor.execute(
            """
            SELECT *
            FROM pool_worker
            WHERE id > 0
            """
        )

        while True:
            results = cursor.fetchmany()
            if not results:
                break

            for result in results:
                yield result
    def delete_user(self, id_or_username):
        log.debug("Deleting Username")
        self.dbc.execute(
            """
            delete from pool_worker where id = %(id)s or username = %(uname)s
            """,
            {
                "id": id_or_username if id_or_username.isdigit() else -1,
                "uname": id_or_username
            }
        )
        self.dbh.commit()

    def insert_user(self, username, password):
        log.debug("Adding Username/Password")
        m = hashlib.sha1()
        m.update(password)
        m.update(self.salt)
        self.dbc.execute("insert into pool_worker (username,password) VALUES (%s,%s)",
                (username, m.hexdigest()))
        self.dbh.commit()

        return str(username)
    def update_user(self, id_or_username, password):
        log.debug("Updating Username/Password")
        m = hashlib.sha1()
        m.update(password)
        m.update(self.salt)
        self.dbc.execute(
            """
            update pool_worker set password = %(pass)s where id = %(id)s or username = %(uname)s
            """,
            {
                "id": id_or_username if id_or_username.isdigit() else -1,
                "uname": id_or_username,
                "pass": m.hexdigest()
            }
        )
        self.dbh.commit()
    def update_worker_diff(self, username, diff):
        self.dbc.execute("update pool_worker set difficulty = %s where username = %s", (diff, username))
        self.dbh.commit()

    def clear_worker_diff(self):
        if settings.DATABASE_EXTEND == True:
            self.dbc.execute("update pool_worker set difficulty = 0")
            self.dbh.commit()

    def check_password(self, username, password):
        log.debug("Checking Username/Password")
        m = hashlib.sha1()
        m.update(password)
        m.update(self.salt)
        self.dbc.execute("select COUNT(*) from pool_worker where username = %s and password = %s",
                (username, m.hexdigest()))
        data = self.dbc.fetchone()
        if data[0] > 0:
            return True
        return False
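`insert_user` and `check_password` above both hash sha1(password || salt) and compare hex digests against the `pool_worker` row. A standalone sketch of the same scheme using only `hashlib` (function names and the sample salt are illustrative, not part of this module):

```python
import hashlib

def salted_sha1(password, salt):
    """Hex digest of sha1(password || salt), mirroring insert_user/check_password."""
    m = hashlib.sha1()
    m.update(password.encode() if isinstance(password, str) else password)
    m.update(salt.encode() if isinstance(salt, str) else salt)
    return m.hexdigest()

def verify(password, salt, stored_digest):
    return salted_sha1(password, salt) == stored_digest

stored = salted_sha1("hunter2", "pepper")   # what insert_user would store
ok = verify("hunter2", "pepper", stored)    # True for the matching password
bad = verify("wrong", "pepper", stored)     # False otherwise
```

Note the 40-character digest matches the `character(40)` password column added in `update_version_6`; unsalted or differently salted rows will never verify.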
    def update_pool_info(self, pi):
        self.dbc.executemany("update pool set value = %s where parameter = %s", [(pi['blocks'], "bitcoin_blocks"),
                (pi['balance'], "bitcoin_balance"),
                (pi['connections'], "bitcoin_connections"),
                (pi['difficulty'], "bitcoin_difficulty"),
                (time.time(), "bitcoin_infotime")])
        self.dbh.commit()

    def get_pool_stats(self):
        self.dbc.execute("select * from pool")
        ret = {}
        for data in self.dbc.fetchall():
            ret[data[0]] = data[1]
        return ret
    def get_workers_stats(self):
        self.dbc.execute("select username,speed,last_checkin,total_shares,total_rejects,total_found,alive,difficulty from pool_worker")
        ret = {}
        for data in self.dbc.fetchall():
            ret[data[0]] = {"username": data[0],
                            "speed": data[1],
                            "last_checkin": time.mktime(data[2].timetuple()),
                            "total_shares": data[3],
                            "total_rejects": data[4],
                            "total_found": data[5],
                            "alive": data[6],
                            "difficulty": data[7]}
        return ret

    def close(self):
        self.dbh.close()
    def check_tables(self):
        log.debug("Checking Tables")

        shares_exist = False
        self.dbc.execute("select COUNT(*) from pg_catalog.pg_tables where schemaname = %(schema)s and tablename = 'shares'",
                {"schema": settings.DB_PGSQL_SCHEMA})
        data = self.dbc.fetchone()
        if data[0] <= 0:
            self.update_version_1()

        if settings.DATABASE_EXTEND == True:
            self.update_tables()
    def update_tables(self):
        version = 0
        current_version = 7
        while version < current_version:
            self.dbc.execute("select value from pool where parameter = 'DB Version'")
            data = self.dbc.fetchone()
            version = int(data[0])
            if version < current_version:
                log.info("Updating Database from %i to %i" % (version, version + 1))
                getattr(self, 'update_version_' + str(version))()
    def update_version_1(self):
        if settings.DATABASE_EXTEND == True:
            self.dbc.execute("create table shares" +
                    "(id serial primary key,time timestamp,rem_host TEXT, username TEXT, our_result BOOLEAN, upstream_result BOOLEAN, reason TEXT, solution TEXT, " +
                    "block_num INTEGER, prev_block_hash TEXT, useragent TEXT, difficulty INTEGER)")
            self.dbc.execute("create index shares_username ON shares(username)")
            self.dbc.execute("create table pool_worker" +
                    "(id serial primary key,username TEXT, password TEXT, speed INTEGER, last_checkin timestamp)")
            self.dbc.execute("create index pool_worker_username ON pool_worker(username)")
            self.dbc.execute("alter table pool_worker add total_shares INTEGER default 0")
            self.dbc.execute("alter table pool_worker add total_rejects INTEGER default 0")
            self.dbc.execute("alter table pool_worker add total_found INTEGER default 0")
            self.dbc.execute("create table pool(parameter TEXT, value TEXT)")
            self.dbc.execute("insert into pool (parameter,value) VALUES ('DB Version',2)")
        else:
            self.dbc.execute("create table shares" +
                    "(id serial,time timestamp,rem_host TEXT, username TEXT, our_result BOOLEAN, upstream_result BOOLEAN, reason TEXT, solution TEXT)")
            self.dbc.execute("create index shares_username ON shares(username)")
            self.dbc.execute("create table pool_worker(id serial,username TEXT, password TEXT)")
            self.dbc.execute("create index pool_worker_username ON pool_worker(username)")
        self.dbh.commit()
    def update_version_2(self):
        log.info("running update 2")
        self.dbc.executemany("insert into pool (parameter,value) VALUES (%s,%s)", [('bitcoin_blocks', 0),
                ('bitcoin_balance', 0),
                ('bitcoin_connections', 0),
                ('bitcoin_difficulty', 0),
                ('pool_speed', 0),
                ('pool_total_found', 0),
                ('round_shares', 0),
                ('round_progress', 0),
                ('round_start', time.time())])
        self.dbc.execute("update pool set value = 3 where parameter = 'DB Version'")
        self.dbh.commit()

    def update_version_3(self):
        log.info("running update 3")
        self.dbc.executemany("insert into pool (parameter,value) VALUES (%s,%s)", [
                ('round_best_share', 0),
                ('bitcoin_infotime', 0)])
        self.dbc.execute("alter table pool_worker add alive BOOLEAN")
        self.dbc.execute("update pool set value = 4 where parameter = 'DB Version'")
        self.dbh.commit()
    def update_version_4(self):
        log.info("running update 4")
        self.dbc.execute("alter table pool_worker add difficulty INTEGER default 0")
        self.dbc.execute("create table shares_archive" +
                "(id serial primary key,time timestamp,rem_host TEXT, username TEXT, our_result BOOLEAN, upstream_result BOOLEAN, reason TEXT, solution TEXT, " +
                "block_num INTEGER, prev_block_hash TEXT, useragent TEXT, difficulty INTEGER)")
        self.dbc.execute("create table shares_archive_found" +
                "(id serial primary key,time timestamp,rem_host TEXT, username TEXT, our_result BOOLEAN, upstream_result BOOLEAN, reason TEXT, solution TEXT, " +
                "block_num INTEGER, prev_block_hash TEXT, useragent TEXT, difficulty INTEGER)")
        self.dbc.execute("update pool set value = 5 where parameter = 'DB Version'")
        self.dbh.commit()
    def update_version_5(self):
        log.info("running update 5")
        # Adding primary key to table: pool
        self.dbc.execute("alter table pool add primary key (parameter)")
        self.dbh.commit()
        # Adjusting indices on table: shares
        self.dbc.execute("DROP INDEX shares_username")
        self.dbc.execute("CREATE INDEX shares_time_username ON shares(time,username)")
        self.dbc.execute("CREATE INDEX shares_upstreamresult ON shares(upstream_result)")
        self.dbh.commit()

        self.dbc.execute("update pool set value = 6 where parameter = 'DB Version'")
        self.dbh.commit()
    def update_version_6(self):
        log.info("running update 6")

        try:
            self.dbc.execute("CREATE EXTENSION pgcrypto")
        except psycopg2.ProgrammingError:
            log.info("pgcrypto already added to database")
        except psycopg2.OperationalError:
            raise Exception("Could not add the pgcrypto extension to the database. Is it installed? On Ubuntu it ships in postgresql-contrib.")
        self.dbh.commit()

        # Optimising table layout
        self.dbc.execute("ALTER TABLE pool " +
                "ALTER COLUMN parameter TYPE character varying(128), ALTER COLUMN value TYPE character varying(512);")
        self.dbh.commit()

        self.dbc.execute("UPDATE pool_worker SET password = encode(digest(concat(password, %s), 'sha1'), 'hex') WHERE id > 0", [self.salt])
        self.dbh.commit()

        self.dbc.execute("ALTER TABLE pool_worker " +
                "ALTER COLUMN username TYPE character varying(512), ALTER COLUMN password TYPE character(40), " +
                "ADD CONSTRAINT username UNIQUE (username)")
        self.dbh.commit()

        self.dbc.execute("update pool set value = 7 where parameter = 'DB Version'")
        self.dbh.commit()
@@ -1,300 +0,0 @@
import time
from stratum import settings
import stratum.logger
log = stratum.logger.get_logger('DB_Sqlite')

import sqlite3


class DB_Sqlite():
    def __init__(self):
        log.debug("Connecting to DB")
        self.dbh = sqlite3.connect(settings.DB_SQLITE_FILE)
        self.dbc = self.dbh.cursor()
    def updateStats(self, averageOverTime):
        log.debug("Updating Stats")
        # Note: we are inside a transaction, so setting speed = 0 does not take effect until we commit.
        self.dbc.execute("update pool_worker set speed = 0, alive = 0")
        stime = '%.2f' % (time.time() - averageOverTime)
        self.dbc.execute("select username,SUM(difficulty) from shares where time > :time group by username", {'time': stime})
        total_speed = 0
        sqldata = []
        for name, shares in self.dbc.fetchall():
            speed = int(int(shares) * pow(2, 32)) / (int(averageOverTime) * 1000 * 1000)
            total_speed += speed
            sqldata.append({'speed': speed, 'user': name})
        self.dbc.executemany("update pool_worker set speed = :speed, alive = 1 where username = :user", sqldata)
        self.dbc.execute("update pool set value = :val where parameter = 'pool_speed'", {'val': total_speed})
        self.dbh.commit()
    def archive_check(self):
        # Check for found shares to archive
        self.dbc.execute("select time from shares where upstream_result = 1 order by time limit 1")
        data = self.dbc.fetchone()
        if data is None or (data[0] + settings.ARCHIVE_DELAY) > time.time():
            return False
        return data[0]

    def archive_found(self, found_time):
        self.dbc.execute("insert into shares_archive_found select * from shares where upstream_result = 1 and time <= :time", {'time': found_time})
        self.dbh.commit()

    def archive_to_db(self, found_time):
        self.dbc.execute("insert into shares_archive select * from shares where time <= :time", {'time': found_time})
        self.dbh.commit()

    def archive_cleanup(self, found_time):
        self.dbc.execute("delete from shares where time <= :time", {'time': found_time})
        self.dbc.execute("vacuum")
        self.dbh.commit()

    def archive_get_shares(self, found_time):
        self.dbc.execute("select * from shares where time <= :time", {'time': found_time})
        return self.dbc
    def import_shares(self, data):
        log.debug("Importing Shares")
        #        0            1             2           3           4          5         6   7             8          9               10
        # data: [worker_name, block_header, block_hash, difficulty, timestamp, is_valid, ip, block_height, prev_hash, invalid_reason, share_diff]
        checkin_times = {}
        total_shares = 0
        best_diff = 0
        sqldata = []
        for k, v in enumerate(data):
            if settings.DATABASE_EXTEND:
                total_shares += v[3]
                if v[0] in checkin_times:
                    if v[4] > checkin_times[v[0]]:
                        checkin_times[v[0]]["time"] = v[4]
                else:
                    checkin_times[v[0]] = {"time": v[4], "shares": 0, "rejects": 0}

                if v[5] == True:
                    checkin_times[v[0]]["shares"] += v[3]
                else:
                    checkin_times[v[0]]["rejects"] += v[3]

                if v[10] > best_diff:
                    best_diff = v[10]

                sqldata.append({'time': v[4], 'rem_host': v[6], 'username': v[0], 'our_result': v[5], 'upstream_result': 0, 'reason': v[9], 'solution': '',
                        'block_num': v[7], 'prev_block_hash': v[8], 'ua': '', 'diff': v[3]})
            else:
                sqldata.append({'time': v[4], 'rem_host': v[6], 'username': v[0], 'our_result': v[5], 'upstream_result': 0, 'reason': v[9], 'solution': ''})

        if settings.DATABASE_EXTEND:
            self.dbc.executemany("insert into shares " +
                    "(time,rem_host,username,our_result,upstream_result,reason,solution,block_num,prev_block_hash,useragent,difficulty) " +
                    "VALUES (:time,:rem_host,:username,:our_result,:upstream_result,:reason,:solution,:block_num,:prev_block_hash,:ua,:diff)", sqldata)

            self.dbc.execute("select value from pool where parameter = 'round_shares'")
            round_shares = int(self.dbc.fetchone()[0]) + total_shares
            self.dbc.execute("update pool set value = :val where parameter = 'round_shares'", {'val': round_shares})

            self.dbc.execute("select value from pool where parameter = 'round_best_share'")
            round_best_share = int(self.dbc.fetchone()[0])
            if best_diff > round_best_share:
                self.dbc.execute("update pool set value = :val where parameter = 'round_best_share'", {'val': best_diff})

            self.dbc.execute("select value from pool where parameter = 'bitcoin_difficulty'")
            difficulty = float(self.dbc.fetchone()[0])

            if difficulty == 0:
                progress = 0
            else:
                progress = (round_shares / difficulty) * 100
            self.dbc.execute("update pool set value = :val where parameter = 'round_progress'", {'val': progress})

            sqldata = []
            for k, v in checkin_times.items():
                sqldata.append({'last_checkin': v["time"], 'addshares': v["shares"], 'addrejects': v["rejects"], 'user': k})

            self.dbc.executemany("update pool_worker set last_checkin = :last_checkin, total_shares = total_shares + :addshares, " +
                    "total_rejects = total_rejects + :addrejects where username = :user", sqldata)
        else:
            self.dbc.executemany("insert into shares (time,rem_host,username,our_result,upstream_result,reason,solution) " +
                    "VALUES (:time,:rem_host,:username,:our_result,:upstream_result,:reason,:solution)", sqldata)

        self.dbh.commit()
    def found_block(self, data):
        # Note: difficulty = -1 here
        self.dbc.execute("update shares set upstream_result = :usr, solution = :sol where time = :time and username = :user",
                {'usr': data[5], 'sol': data[2], 'time': data[4], 'user': data[0]})
        if settings.DATABASE_EXTEND and data[5] == True:
            self.dbc.execute("update pool_worker set total_found = total_found + 1 where username = :user", {'user': data[0]})
            self.dbc.execute("select value from pool where parameter = 'pool_total_found'")
            total_found = int(self.dbc.fetchone()[0]) + 1
            self.dbc.executemany("update pool set value = :val where parameter = :parm", [{'val': 0, 'parm': 'round_shares'},
                    {'val': 0, 'parm': 'round_progress'},
                    {'val': 0, 'parm': 'round_best_share'},
                    {'val': time.time(), 'parm': 'round_start'},
                    {'val': total_found, 'parm': 'pool_total_found'}])
            self.dbh.commit()
    def get_user(self, id_or_username):
        raise NotImplementedError('Not implemented for SQLite')

    def list_users(self):
        raise NotImplementedError('Not implemented for SQLite')

    def delete_user(self, id_or_username):
        raise NotImplementedError('Not implemented for SQLite')

    def insert_user(self, username, password):
        log.debug("Adding Username/Password")
        self.dbc.execute("insert into pool_worker (username,password) VALUES (:user,:pass)", {'user': username, 'pass': password})
        self.dbh.commit()

    def update_user(self, username, password):
        raise NotImplementedError('Not implemented for SQLite')
    def check_password(self, username, password):
        log.debug("Checking Username/Password")
        self.dbc.execute("select COUNT(*) from pool_worker where username = :user and password = :pass", {'user': username, 'pass': password})
        data = self.dbc.fetchone()
        if data[0] > 0:
            return True
        return False

    def update_worker_diff(self, username, diff):
        self.dbc.execute("update pool_worker set difficulty = :diff where username = :user", {'diff': diff, 'user': username})
        self.dbh.commit()

    def clear_worker_diff(self):
        if settings.DATABASE_EXTEND == True:
            self.dbc.execute("update pool_worker set difficulty = 0")
            self.dbh.commit()
    def update_pool_info(self, pi):
        self.dbc.executemany("update pool set value = :val where parameter = :parm", [{'val': pi['blocks'], 'parm': "bitcoin_blocks"},
                {'val': pi['balance'], 'parm': "bitcoin_balance"},
                {'val': pi['connections'], 'parm': "bitcoin_connections"},
                {'val': pi['difficulty'], 'parm': "bitcoin_difficulty"},
                {'val': time.time(), 'parm': "bitcoin_infotime"}])
        self.dbh.commit()

    def get_pool_stats(self):
        self.dbc.execute("select * from pool")
        ret = {}
        for data in self.dbc.fetchall():
            ret[data[0]] = data[1]
        return ret
    def get_workers_stats(self):
        self.dbc.execute("select username,speed,last_checkin,total_shares,total_rejects,total_found,alive,difficulty from pool_worker")
        ret = {}
        for data in self.dbc.fetchall():
            ret[data[0]] = {"username": data[0],
                            "speed": data[1],
                            "last_checkin": data[2],
                            "total_shares": data[3],
                            "total_rejects": data[4],
                            "total_found": data[5],
                            "alive": data[6],
                            "difficulty": data[7]}
        return ret

    def close(self):
        self.dbh.close()
    def check_tables(self):
        log.debug("Checking Tables")
        if settings.DATABASE_EXTEND == True:
            self.dbc.execute("create table if not exists shares" +
                    "(time DATETIME,rem_host TEXT, username TEXT, our_result INTEGER, upstream_result INTEGER, reason TEXT, solution TEXT, " +
                    "block_num INTEGER, prev_block_hash TEXT, useragent TEXT, difficulty INTEGER)")
            self.dbc.execute("create table if not exists pool_worker" +
                    "(username TEXT, password TEXT, speed INTEGER, last_checkin DATETIME)")
            self.dbc.execute("create table if not exists pool(parameter TEXT, value TEXT)")

            self.dbc.execute("select COUNT(*) from pool where parameter = 'DB Version'")
            data = self.dbc.fetchone()
            if data[0] <= 0:
                self.dbc.execute("alter table pool_worker add total_shares INTEGER default 0")
                self.dbc.execute("alter table pool_worker add total_rejects INTEGER default 0")
                self.dbc.execute("alter table pool_worker add total_found INTEGER default 0")
                self.dbc.execute("insert into pool (parameter,value) VALUES ('DB Version',2)")
            self.update_tables()
        else:
            self.dbc.execute("create table if not exists shares" +
                    "(time DATETIME,rem_host TEXT, username TEXT, our_result INTEGER, upstream_result INTEGER, reason TEXT, solution TEXT)")
            self.dbc.execute("create table if not exists pool_worker(username TEXT, password TEXT)")
            self.dbc.execute("create index if not exists pool_worker_username ON pool_worker(username)")
    def update_tables(self):
        version = 0
        current_version = 6
        while version < current_version:
            self.dbc.execute("select value from pool where parameter = 'DB Version'")
            data = self.dbc.fetchone()
            version = int(data[0])
            if version < current_version:
                log.info("Updating Database from %i to %i" % (version, version + 1))
                getattr(self, 'update_version_' + str(version))()
    def update_version_2(self):
        log.info("running update 2")
        self.dbc.executemany("insert into pool (parameter,value) VALUES (?,?)", [('bitcoin_blocks', 0),
                ('bitcoin_balance', 0),
                ('bitcoin_connections', 0),
                ('bitcoin_difficulty', 0),
                ('pool_speed', 0),
                ('pool_total_found', 0),
                ('round_shares', 0),
                ('round_progress', 0),
                ('round_start', time.time())])
        self.dbc.execute("create index if not exists shares_username ON shares(username)")
        self.dbc.execute("create index if not exists pool_worker_username ON pool_worker(username)")
        self.dbc.execute("update pool set value = 3 where parameter = 'DB Version'")
        self.dbh.commit()
    def update_version_3(self):
        log.info("running update 3")
        self.dbc.executemany("insert into pool (parameter,value) VALUES (?,?)", [
                ('round_best_share', 0),
                ('bitcoin_infotime', 0)])
        self.dbc.execute("alter table pool_worker add alive INTEGER default 0")
        self.dbc.execute("update pool set value = 4 where parameter = 'DB Version'")
        self.dbh.commit()
    def update_version_4(self):
        log.info("running update 4")
        self.dbc.execute("alter table pool_worker add difficulty INTEGER default 0")
        self.dbc.execute("create table if not exists shares_archive" +
                "(time DATETIME,rem_host TEXT, username TEXT, our_result INTEGER, upstream_result INTEGER, reason TEXT, solution TEXT, " +
                "block_num INTEGER, prev_block_hash TEXT, useragent TEXT, difficulty INTEGER)")
        self.dbc.execute("create table if not exists shares_archive_found" +
                "(time DATETIME,rem_host TEXT, username TEXT, our_result INTEGER, upstream_result INTEGER, reason TEXT, solution TEXT, " +
                "block_num INTEGER, prev_block_hash TEXT, useragent TEXT, difficulty INTEGER)")
        self.dbc.execute("update pool set value = 5 where parameter = 'DB Version'")
        self.dbh.commit()
    def update_version_5(self):
        log.info("running update 5")
        # Adding primary key to table: pool
        self.dbc.execute("alter table pool rename to pool_old")
        self.dbc.execute("create table if not exists pool(parameter TEXT, value TEXT, primary key(parameter))")
        self.dbc.execute("insert into pool select * from pool_old")
        self.dbc.execute("drop table pool_old")
        self.dbh.commit()
        # Adding primary key to table: pool_worker
        self.dbc.execute("alter table pool_worker rename to pool_worker_old")
        self.dbc.execute("CREATE TABLE pool_worker(username TEXT, password TEXT, speed INTEGER, last_checkin DATETIME, total_shares INTEGER default 0, total_rejects INTEGER default 0, total_found INTEGER default 0, alive INTEGER default 0, difficulty INTEGER default 0, primary key(username))")
        self.dbc.execute("insert into pool_worker select * from pool_worker_old")
        self.dbc.execute("drop table pool_worker_old")
        self.dbh.commit()
        # Adjusting indices on table: shares
        self.dbc.execute("DROP INDEX shares_username")
        self.dbc.execute("CREATE INDEX shares_time_username ON shares(time,username)")
        self.dbc.execute("CREATE INDEX shares_upstreamresult ON shares(upstream_result)")
        self.dbh.commit()

        self.dbc.execute("update pool set value = 6 where parameter = 'DB Version'")
        self.dbh.commit()
@@ -5,8 +5,6 @@ from twisted.internet.error import ConnectionRefusedError
import time
import simplejson as json
from twisted.internet import reactor
import threading
from mining.work_log_pruner import WorkLogPruner

@defer.inlineCallbacks
def setup(on_startup):
@@ -39,65 +37,35 @@ def setup(on_startup):
    # - we are not still downloading the blockchain (Sleep)
    log.info("Connecting to litecoind...")
    while True:
        try:
            result = (yield bitcoin_rpc.check_submitblock())
            if result == True:
                log.info("Found submitblock")
            elif result == False:
                log.info("Did not find submitblock")
            else:
                log.info("unknown submitblock result")
        except ConnectionRefusedError, e:
            log.error("Connection refused while trying to connect to the coind (are your COIND_* settings correct?)")
            reactor.stop()
            break

        except Exception, e:
            log.debug(str(e))

        try:
            result = (yield bitcoin_rpc.getblocktemplate())
            if isinstance(result, dict):
                # litecoind implements version 1 of getblocktemplate
                if result['version'] >= 1:
                    result = (yield bitcoin_rpc.getdifficulty())
                    if isinstance(result, dict):
                        if 'proof-of-stake' in result:
                            settings.COINDAEMON_Reward = 'POS'
                            log.info("Coin detected as POS")
                            break
                        else:
                            settings.COINDAEMON_Reward = 'POW'
                            log.info("Coin detected as POW")
                            break
                    break
                else:
                    log.error("Block Version mismatch: %s" % result['version'])

        except ConnectionRefusedError, e:
            log.error("Connection refused while trying to connect to the coind (are your COIND_* settings correct?)")
            log.error("Connection refused while trying to connect to litecoin (are your LITECOIN_TRUSTED_* settings correct?)")
            reactor.stop()
            break

        except Exception, e:
            if isinstance(e[2], str):
                try:
                    if isinstance(json.loads(e[2])['error']['message'], str):
                        error = json.loads(e[2])['error']['message']
                        if error == "Method not found":
                            log.error("CoinD does not support getblocktemplate!!! (time to upgrade.)")
                            log.error("Litecoind does not support getblocktemplate!!! (time to upgrade.)")
                            reactor.stop()
                        elif "downloading blocks" in error:
                            log.error("CoinD downloading blockchain... will check back in 30 sec")
                        elif error == "Litecoind is downloading blocks...":
                            log.error("Litecoind downloading blockchain... will check back in 30 sec")
                            time.sleep(29)
                        else:
                            log.error("Coind Error: %s", error)
                            log.error("Litecoind Error: %s", error)
                except ValueError:
                    log.error("Failed Connect(HTTP 500 or Invalid JSON), Check Username and Password!")
                    reactor.stop()
        time.sleep(1)  # If we didn't get a result or the connect failed

    log.info('Connected to the coind - Beginning to load Address and Module Checks!')
    log.info('Connected to litecoind - Ready to GO!')

    # Start the coinbaser
    coinbaser = SimpleCoinbaser(bitcoin_rpc, getattr(settings, 'CENTRAL_WALLET'))
@@ -118,10 +86,6 @@ def setup(on_startup):
    # This is just a failsafe solution for when the -blocknotify
    # mechanism is not working properly
    BlockUpdater(registry, bitcoin_rpc)

    prune_thr = threading.Thread(target=WorkLogPruner, args=(Interfaces.worker_manager.job_log,))
    prune_thr.daemon = True
    prune_thr.start()

    log.info("MINING SERVICE IS READY")
    on_startup.callback(True)
@@ -87,7 +87,7 @@ class BasicShareLimiter(object):
        ts = int(timestamp)

        # Init the stats for this worker if it isn't set.
        if worker_name not in self.worker_stats or self.worker_stats[worker_name]['last_ts'] < ts - settings.DB_USERCACHE_TIME:
        if worker_name not in self.worker_stats:
            self.worker_stats[worker_name] = {'last_rtc': (ts - self.retarget / 2), 'last_ts': ts, 'buffer': SpeedBuffer(self.buffersize)}
            dbi.update_worker_diff(worker_name, settings.POOL_TARGET)
            return
@ -103,79 +103,49 @@ class BasicShareLimiter(object):
|
||||
# Set up and log our check
|
||||
self.worker_stats[worker_name]['last_rtc'] = ts
|
||||
avg = self.worker_stats[worker_name]['buffer'].avg()
|
||||
log.debug("Checking Retarget for %s (%i) avg. %i target %i+-%i" % (worker_name, current_difficulty, avg,
|
||||
log.info("Checking Retarget for %s (%i) avg. %i target %i+-%i" % (worker_name, current_difficulty, avg,
|
||||
self.target, self.variance))
|
||||
|
||||
if avg < 1:
|
||||
log.warning("Reseting avg = 1 since it's SOOO low")
|
||||
log.info("Reseting avg = 1 since it's SOOO low")
|
||||
avg = 1
|
||||
|
||||
# Figure out our Delta-Diff
|
||||
if settings.VDIFF_FLOAT:
|
||||
ddiff = float((float(current_difficulty) * (float(self.target) / float(avg))) - current_difficulty)
|
||||
else:
|
||||
ddiff = int((float(current_difficulty) * (float(self.target) / float(avg))) - current_difficulty)
|
||||
|
||||
ddiff = float((float(current_difficulty) * (float(self.target) / float(avg))) - current_difficulty)
|
||||
if (avg > self.tmax and current_difficulty > settings.VDIFF_MIN_TARGET):
|
||||
# For fractional -0.1 ddiff's just drop by 1
|
||||
if settings.VDIFF_X2_TYPE:
|
||||
ddiff = 0.5
|
||||
# Don't drop below POOL_TARGET
|
||||
if (ddiff * current_difficulty) < settings.VDIFF_MIN_TARGET:
|
||||
ddiff = settings.VDIFF_MIN_TARGET / current_difficulty
|
||||
else:
|
||||
if ddiff > -1:
|
||||
ddiff = -1
|
||||
# Don't drop below POOL_TARGET
|
||||
if (ddiff + current_difficulty) < settings.POOL_TARGET:
|
||||
ddiff = settings.VDIFF_MIN_TARGET - current_difficulty
|
||||
if ddiff > -1:
|
||||
ddiff = -1
|
||||
# Don't drop below POOL_TARGET
|
||||
if (ddiff + current_difficulty) < settings.VDIFF_MIN_TARGET:
|
||||
ddiff = settings.VDIFF_MIN_TARGET - current_difficulty
|
||||
elif avg < self.tmin:
|
||||
# For fractional 0.1 ddiff's just up by 1
|
||||
if settings.VDIFF_X2_TYPE:
|
||||
ddiff = 2
|
||||
# Don't go above LITECOIN or VDIFF_MAX_TARGET
|
||||
if settings.USE_COINDAEMON_DIFF:
|
||||
self.update_litecoin_difficulty()
|
||||
diff_max = min([settings.VDIFF_MAX_TARGET, self.litecoin_diff])
|
||||
else:
|
||||
diff_max = settings.VDIFF_MAX_TARGET
|
||||
|
||||
if (ddiff * current_difficulty) > diff_max:
|
||||
ddiff = diff_max / current_difficulty
|
||||
if ddiff < 1:
|
||||
ddiff = 1
|
||||
# Don't go above LITECOIN or VDIFF_MAX_TARGET
|
||||
self.update_litecoin_difficulty()
|
||||
if settings.USE_LITECOIN_DIFF:
|
||||
diff_max = min([settings.VDIFF_MAX_TARGET, self.litecoin_diff])
|
||||
else:
|
||||
if ddiff < 1:
|
||||
ddiff = 1
|
||||
# Don't go above LITECOIN or VDIFF_MAX_TARGET
|
||||
if settings.USE_COINDAEMON_DIFF:
|
||||
self.update_litecoin_difficulty()
|
||||
diff_max = min([settings.VDIFF_MAX_TARGET, self.litecoin_diff])
|
||||
else:
|
||||
diff_max = settings.VDIFF_MAX_TARGET
|
||||
diff_max = settings.VDIFF_MAX_TARGET
|
||||
|
||||
if (ddiff + current_difficulty) > diff_max:
|
||||
ddiff = diff_max - current_difficulty
|
||||
if (ddiff + current_difficulty) > diff_max:
|
||||
ddiff = diff_max - current_difficulty
|
||||
|
||||
else: # If we are here, then we should not be retargeting.
|
||||
return
|
||||
|
||||
# At this point we are retargeting this worker
|
||||
if settings.VDIFF_X2_TYPE:
|
||||
new_diff = current_difficulty * ddiff
|
||||
else:
|
||||
new_diff = current_difficulty + ddiff
|
||||
log.debug("Retarget for %s %i old: %i new: %i" % (worker_name, ddiff, current_difficulty, new_diff))
|
||||
new_diff = current_difficulty + ddiff
|
||||
log.info("Retarget for %s %i old: %i new: %i" % (worker_name, ddiff, current_difficulty, new_diff))
|
||||
|
||||
self.worker_stats[worker_name]['buffer'].clear()
|
||||
session = connection_ref().get_session()
|
||||
|
||||
(job_id, prevhash, coinb1, coinb2, merkle_branch, version, nbits, ntime, _) = \
|
||||
Interfaces.template_registry.get_last_broadcast_args()
|
||||
work_id = Interfaces.worker_manager.register_work(worker_name, job_id, new_diff)
|
||||
|
||||
session['prev_diff'] = session['difficulty']
|
||||
session['prev_jobid'] = job_id
|
||||
session['difficulty'] = new_diff
|
||||
connection_ref().rpc('mining.set_difficulty', [new_diff, ], is_notification=True)
|
||||
log.debug("Notified of New Difficulty")
|
||||
connection_ref().rpc('mining.notify', [work_id, prevhash, coinb1, coinb2, merkle_branch, version, nbits, ntime, False, ], is_notification=True)
|
||||
log.debug("Sent new work")
|
||||
dbi.update_worker_diff(worker_name, new_diff)
|
||||
|
||||
|
||||
@@ -17,39 +17,12 @@ dbi.init_main()

class WorkerManagerInterface(object):
def __init__(self):
self.worker_log = {}
self.worker_log.setdefault('authorized', {})
self.job_log = {}
self.job_log.setdefault('None', {})
return

def authorize(self, worker_name, worker_password):
# Important NOTE: This is called on EVERY submitted share. So you'll need caching!!!
return dbi.check_password(worker_name, worker_password)

def get_user_difficulty(self, worker_name):
wd = dbi.get_user(worker_name)
if len(wd) > 6:
if wd[6] != 0:
return (True, wd[6])
#dbi.update_worker_diff(worker_name, wd[6])
return (False, settings.POOL_TARGET)

def register_work(self, worker_name, job_id, difficulty):
now = Interfaces.timestamper.time()
work_id = WorkIdGenerator.get_new_id()
self.job_log.setdefault(worker_name, {})[work_id] = (job_id, difficulty, now)
return work_id

class WorkIdGenerator(object):
counter = 1000

@classmethod
def get_new_id(cls):
cls.counter += 1
if cls.counter % 0xffff == 0:
cls.counter = 1
return "%x" % cls.counter

class ShareLimiterInterface(object):
'''Implement difficulty adjustments here'''

@@ -76,13 +49,12 @@ class ShareManagerInterface(object):
pass

def on_submit_share(self, worker_name, block_header, block_hash, difficulty, timestamp, is_valid, ip, invalid_reason, share_diff):
log.debug("%s (%s) %s %s" % (block_hash, share_diff, 'valid' if is_valid else 'INVALID', worker_name))
log.info("%s (%s) %s %s" % (block_hash, share_diff, 'valid' if is_valid else 'INVALID', worker_name))
dbi.queue_share([worker_name, block_header, block_hash, difficulty, timestamp, is_valid, ip, self.block_height, self.prev_hash,
invalid_reason, share_diff ])

def on_submit_block(self, is_accepted, worker_name, block_header, block_hash, timestamp, ip, share_diff):
log.info("Block %s %s" % (block_hash, 'ACCEPTED' if is_accepted else 'REJECTED'))
dbi.do_import(dbi, True)
dbi.found_block([worker_name, block_header, block_hash, -1, timestamp, is_accepted, ip, self.block_height, self.prev_hash, share_diff ])

class TimestamperInterface(object):

@@ -106,7 +78,7 @@ class Interfaces(object):
share_limiter = None
timestamper = None
template_registry = None

@classmethod
def set_worker_manager(cls, manager):
cls.worker_manager = manager
@@ -7,7 +7,7 @@ from stratum.pubsub import Pubsub
from interfaces import Interfaces
from subscription import MiningSubscription
from lib.exceptions import SubmitException
import json

import lib.logger
log = lib.logger.get_logger('mining')

@@ -21,30 +21,7 @@ class MiningService(GenericService):
service_type = 'mining'
service_vendor = 'stratum'
is_default = True
event = 'mining.notify'

@admin
def get_server_stats(self):
serialized = ''
for subscription in Pubsub.iterate_subscribers(self.event):
try:
if subscription != None:
session = subscription.connection_ref().get_session()
session.setdefault('authorized', {})
if session['authorized'].keys():
worker_name = session['authorized'].keys()[0]
difficulty = session['difficulty']
ip = subscription.connection_ref()._get_ip()
serialized += json.dumps({'worker_name': worker_name, 'ip': ip, 'difficulty': difficulty})
else:
pass
except Exception as e:
log.exception("Error getting subscriptions %s" % str(e))
pass

log.debug("Server stats request: %s" % serialized)
return '%s' % serialized

@admin
def update_block(self):
'''Connect this RPC call to 'litecoind -blocknotify' for

@@ -66,12 +43,6 @@ class MiningService(GenericService):
log.info("New litecoind connection added %s:%s" % (args[0], args[1]))
return True

@admin
def refresh_config(self):
settings.setup()
log.info("Updated Config")
return True

def authorize(self, worker_name, worker_password):
'''Let authorize worker on this connection.'''

@@ -80,22 +51,10 @@ class MiningService(GenericService):

if Interfaces.worker_manager.authorize(worker_name, worker_password):
session['authorized'][worker_name] = worker_password
is_ext_diff = False
if settings.ALLOW_EXTERNAL_DIFFICULTY:
(is_ext_diff, session['difficulty']) = Interfaces.worker_manager.get_user_difficulty(worker_name)
self.connection_ref().rpc('mining.set_difficulty', [session['difficulty'], ], is_notification=True)
else:
session['difficulty'] = settings.POOL_TARGET
# worker_log = (valid, invalid, is_banned, diff, is_ext_diff, timestamp)
Interfaces.worker_manager.worker_log['authorized'][worker_name] = (0, 0, False, session['difficulty'], is_ext_diff, Interfaces.timestamper.time())
return True
else:
ip = self.connection_ref()._get_ip()
log.info("Failed worker authorization: IP %s", str(ip))
if worker_name in session['authorized']:
del session['authorized'][worker_name]
if worker_name in Interfaces.worker_manager.worker_log['authorized']:
del Interfaces.worker_manager.worker_log['authorized'][worker_name]
return False

def subscribe(self, *args):

@@ -109,61 +68,30 @@ class MiningService(GenericService):
session = self.connection_ref().get_session()
session['extranonce1'] = extranonce1
session['difficulty'] = settings.POOL_TARGET # Following protocol specs, default diff is 1

return Pubsub.subscribe(self.connection_ref(), MiningSubscription()) + (extranonce1_hex, extranonce2_size)

def submit(self, worker_name, work_id, extranonce2, ntime, nonce):
def submit(self, worker_name, job_id, extranonce2, ntime, nonce):
'''Try to solve block candidate using given parameters.'''

session = self.connection_ref().get_session()
session.setdefault('authorized', {})

# Check if worker is authorized to submit shares
ip = self.connection_ref()._get_ip()
if not Interfaces.worker_manager.authorize(worker_name, session['authorized'].get(worker_name)):
log.info("Worker is not authorized: IP %s", str(ip))
raise SubmitException("Worker is not authorized")

# Check if extranonce1 is in connection session
extranonce1_bin = session.get('extranonce1', None)

if not extranonce1_bin:
log.info("Connection is not subscribed for mining: IP %s", str(ip))
raise SubmitException("Connection is not subscribed for mining")

# Get current block job_id
difficulty = session['difficulty']
if worker_name in Interfaces.worker_manager.job_log and work_id in Interfaces.worker_manager.job_log[worker_name]:
(job_id, difficulty, job_ts) = Interfaces.worker_manager.job_log[worker_name][work_id]
else:
job_ts = Interfaces.timestamper.time()
Interfaces.worker_manager.job_log.setdefault(worker_name, {})[work_id] = (work_id, difficulty, job_ts)
job_id = work_id
#log.debug("worker_job_log: %s" % repr(Interfaces.worker_manager.job_log))

submit_time = Interfaces.timestamper.time()

(valid, invalid, is_banned, diff, is_ext_diff, last_ts) = Interfaces.worker_manager.worker_log['authorized'][worker_name]
percent = float(float(invalid) / (float(valid) if valid else 1) * 100)

if is_banned and submit_time - last_ts > settings.WORKER_BAN_TIME:
if percent > settings.INVALID_SHARES_PERCENT:
log.debug("Worker invalid percent: %0.2f %s STILL BANNED!" % (percent, worker_name))
else:
is_banned = False
log.debug("Clearing ban for worker: %s UNBANNED" % worker_name)
(valid, invalid, is_banned, last_ts) = (0, 0, is_banned, Interfaces.timestamper.time())

if submit_time - last_ts > settings.WORKER_CACHE_TIME and not is_banned:
if percent > settings.INVALID_SHARES_PERCENT and settings.ENABLE_WORKER_BANNING:
is_banned = True
log.debug("Worker invalid percent: %0.2f %s BANNED!" % (percent, worker_name))
else:
log.debug("Clearing worker stats for: %s" % worker_name)
(valid, invalid, is_banned, last_ts) = (0, 0, is_banned, Interfaces.timestamper.time())

log.debug("%s (%d, %d, %s, %s, %d) %0.2f%% work_id(%s) job_id(%s) diff(%f)" % (worker_name, valid, invalid, is_banned, is_ext_diff, last_ts, percent, work_id, job_id, difficulty))
if not is_ext_diff:
Interfaces.share_limiter.submit(self.connection_ref, job_id, difficulty, submit_time, worker_name)
ip = self.connection_ref()._get_ip()

Interfaces.share_limiter.submit(self.connection_ref, job_id, difficulty, submit_time, worker_name)

# This checks if submitted share meet all requirements
# and it is valid proof of work.

@@ -172,25 +100,14 @@ class MiningService(GenericService):
worker_name, session, extranonce1_bin, extranonce2, ntime, nonce, difficulty)
except SubmitException as e:
# block_header and block_hash are None when submitted data are corrupted
invalid += 1
Interfaces.worker_manager.worker_log['authorized'][worker_name] = (valid, invalid, is_banned, difficulty, is_ext_diff, last_ts)

if is_banned:
raise SubmitException("Worker is temporarily banned")

Interfaces.share_manager.on_submit_share(worker_name, False, False, difficulty,
submit_time, False, ip, e[0], 0)
submit_time, False, ip, e[0], 0)
raise

valid += 1
Interfaces.worker_manager.worker_log['authorized'][worker_name] = (valid, invalid, is_banned, difficulty, is_ext_diff, last_ts)

if is_banned:
raise SubmitException("Worker is temporarily banned")

Interfaces.share_manager.on_submit_share(worker_name, block_header,
block_hash, difficulty, submit_time, True, ip, '', share_diff)

if on_submit != None:
# Pool performs submitblock() to litecoind. Let's hook
# to result and report it to share manager
@@ -21,23 +21,9 @@ class MiningSubscription(Subscription):

(job_id, prevhash, coinb1, coinb2, merkle_branch, version, nbits, ntime, _) = \
Interfaces.template_registry.get_last_broadcast_args()

# Push new job to subscribed clients
for subscription in Pubsub.iterate_subscribers(cls.event):
try:
if subscription != None:
session = subscription.connection_ref().get_session()
session.setdefault('authorized', {})
if session['authorized'].keys():
worker_name = session['authorized'].keys()[0]
difficulty = session['difficulty']
work_id = Interfaces.worker_manager.register_work(worker_name, job_id, difficulty)
subscription.emit_single(work_id, prevhash, coinb1, coinb2, merkle_branch, version, nbits, ntime, clean_jobs)
else:
subscription.emit_single(job_id, prevhash, coinb1, coinb2, merkle_branch, version, nbits, ntime, clean_jobs)
except Exception as e:
log.exception("Error broadcasting work to client %s" % str(e))
pass
cls.emit(job_id, prevhash, coinb1, coinb2, merkle_branch, version, nbits, ntime, clean_jobs)

cnt = Pubsub.get_subscription_count(cls.event)
log.info("BROADCASTED to %d connections in %.03f sec" % (cnt, (Interfaces.timestamper.time() - start)))
@@ -1,23 +0,0 @@
from time import sleep, time
import traceback
import lib.logger
log = lib.logger.get_logger('work_log_pruner')

def _WorkLogPruner_I(wl):
    now = time()
    pruned = 0
    for username in wl:
        userwork = wl[username]
        for wli in tuple(userwork.keys()):
            if now > userwork[wli][2] + 120:
                del userwork[wli]
                pruned += 1
    log.info('Pruned %d jobs' % (pruned,))

def WorkLogPruner(wl):
    while True:
        try:
            sleep(60)
            _WorkLogPruner_I(wl)
        except:
            log.debug(traceback.format_exc())
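The deleted pruner above walks each worker's work log and drops entries older than 120 seconds. A minimal standalone sketch of that loop (the `prune_work_log` name, the `max_age`/`now` parameters, and the sample data are illustrative, not part of the original module):

```python
from time import time

def prune_work_log(work_log, max_age=120, now=None):
    """Drop work-log entries older than max_age seconds.

    Mirrors the deleted _WorkLogPruner_I helper: each per-user dict maps
    work_id -> (job_id, difficulty, timestamp); entries whose timestamp is
    more than max_age seconds in the past are removed.
    """
    if now is None:
        now = time()  # the original always used the current time
    pruned = 0
    for userwork in work_log.values():
        # snapshot the keys so we can delete while iterating
        for work_id in tuple(userwork.keys()):
            if now > userwork[work_id][2] + max_age:
                del userwork[work_id]
                pruned += 1
    return pruned

# Example: one stale entry (age 300 s) and one fresh entry (age 10 s)
wl = {'worker1': {'a1': ('job1', 16, 1000.0), 'a2': ('job2', 16, 1290.0)}}
print(prune_work_log(wl, now=1300.0))  # → 1
```

Passing `now` explicitly keeps the sketch deterministic; the original always read the wall clock and hard-coded the 120-second cutoff.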
@@ -1,34 +0,0 @@
BeautifulSoup==3.2.1
Jinja2==2.7.2
Magic-file-extensions==0.2
MarkupSafe==0.18
MySQL-python==1.2.5
PyYAML==3.10
Twisted==13.2.0
ansible==1.4.5
argparse==1.2.1
autobahn==0.8.4-3
cffi==0.8.2
cryptography==0.2.2
defer==1.0.6
ecdsa==0.10
feedparser==5.1.3
fpconst==0.7.2
httplib2==0.8
numpy==1.7.1
paramiko==1.12.1
pyOpenSSL==0.14
pyasn1==0.1.7
pycparser==2.10
pycrypto==2.6.1
pylibmc==1.2.3
pyserial==2.7
python-dateutil==2.1
python-memcached==1.53
pyxdg==0.25
requests==2.2.0
simplejson==3.3.3
six==1.4.1
wsgiref==0.1.2
zope.interface==4.1.0
stratum==0.2.15
@@ -1,46 +0,0 @@
#!/usr/bin/env python
# Send notification to Stratum mining instance add a new litecoind instance to the pool

import socket
import json
import sys
import argparse
import time

start = time.time()

parser = argparse.ArgumentParser(description='Refresh the config of the Stratum instance.')
parser.add_argument('--password', dest='password', type=str, help='use admin password from Stratum server config')
parser.add_argument('--host', dest='host', type=str, default='localhost', help='hostname of Stratum mining instance')
parser.add_argument('--port', dest='port', type=int, default=3333, help='port of Stratum mining instance')
args = parser.parse_args()

if args.password == None:
    parser.print_help()
    sys.exit()

message = {'id': 1, 'method': 'mining.refresh_config', 'params': []}

try:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((args.host, args.port))
    s.sendall(json.dumps(message)+"\n")
    data = s.recv(16000)
    s.close()
except IOError:
    print "Refresh Config: Cannot connect to the pool"
    sys.exit()

for line in data.split("\n"):
    if not line.strip():
        # Skip last line which doesn't contain any message
        continue

    message = json.loads(line)
    if message['id'] == 1:
        if message['result'] == True:
            print "Refresh Config: done in %.03f sec" % (time.time() - start)
        else:
            print "Refresh Config: Error during request:", message['error'][1]
    else:
        print "Refresh Config: Unexpected message from the server:", message
@@ -1,46 +0,0 @@
#!/usr/bin/env python
# Send notification to Stratum mining instance add a new litecoind instance to the pool

import socket
import json
import sys
import argparse
import time

start = time.time()

parser = argparse.ArgumentParser(description='Refresh the config of the Stratum instance.')
parser.add_argument('--password', dest='password', type=str, help='use admin password from Stratum server config')
parser.add_argument('--host', dest='host', type=str, default='localhost', help='hostname of Stratum mining instance')
parser.add_argument('--port', dest='port', type=int, default=3333, help='port of Stratum mining instance')
args = parser.parse_args()

if args.password == None:
    parser.print_help()
    sys.exit()

message = {'id': 1, 'method': 'mining.refresh_config', 'params': []}

try:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((args.host, args.port))
    s.sendall(json.dumps(message)+"\n")
    data = s.recv(16000)
    s.close()
except IOError:
    print "Refresh Config: Cannot connect to the pool"
    sys.exit()

for line in data.split("\n"):
    if not line.strip():
        # Skip last line which doesn't contain any message
        continue

    message = json.loads(line)
    if message['id'] == 1:
        if message['result'] == True:
            print "Refresh Config: done in %.03f sec" % (time.time() - start)
        else:
            print "Refresh Config: Error during request:", message['error'][1]
    else:
        print "Refresh Config: Unexpected message from the server:", message
@@ -1,260 +0,0 @@
SET SQL_MODE = "NO_AUTO_VALUE_ON_ZERO";
SET time_zone = "+00:00";

/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;

CREATE TABLE IF NOT EXISTS `accounts` (
  `id` int(255) NOT NULL AUTO_INCREMENT,
  `is_admin` tinyint(1) NOT NULL DEFAULT '0',
  `is_anonymous` tinyint(1) NOT NULL DEFAULT '0',
  `no_fees` tinyint(1) NOT NULL DEFAULT '0',
  `username` varchar(40) NOT NULL,
  `pass` varchar(255) NOT NULL,
  `email` varchar(255) DEFAULT NULL COMMENT 'Assocaited email: used for validating users, and re-setting passwords',
  `timezone` varchar(35) NOT NULL DEFAULT '415',
  `notify_email` VARCHAR( 255 ) NULL DEFAULT NULL,
  `loggedIp` varchar(255) DEFAULT NULL,
  `is_locked` tinyint(1) NOT NULL DEFAULT '0',
  `failed_logins` int(5) unsigned DEFAULT '0',
  `failed_pins` int(5) unsigned DEFAULT '0',
  `signup_timestamp` int(10) DEFAULT '0',
  `last_login` int(10) DEFAULT NULL,
  `pin` varchar(255) NOT NULL COMMENT 'four digit pin to allow account changes',
  `api_key` varchar(255) DEFAULT NULL,
  `token` varchar(65) DEFAULT NULL,
  `donate_percent` float DEFAULT '0',
  PRIMARY KEY (`id`),
  UNIQUE KEY `username` (`username`),
  UNIQUE KEY `email` (`email`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE IF NOT EXISTS `blocks` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `height` int(10) unsigned NOT NULL,
  `blockhash` char(65) NOT NULL,
  `confirmations` int(10) NOT NULL,
  `amount` double NOT NULL,
  `difficulty` double NOT NULL,
  `time` int(11) NOT NULL,
  `accounted` tinyint(1) NOT NULL DEFAULT '0',
  `account_id` int(255) unsigned DEFAULT NULL,
  `worker_name` varchar(50) DEFAULT 'unknown',
  `shares` double unsigned DEFAULT NULL,
  `share_id` bigint(30) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `height` (`height`,`blockhash`),
  KEY `time` (`time`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Discovered blocks persisted from Litecoin Service';

CREATE TABLE IF NOT EXISTS `coin_addresses` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `account_id` int(11) NOT NULL,
  `currency` varchar(5) NOT NULL,
  `coin_address` varchar(255) NOT NULL,
  `ap_threshold` float DEFAULT '0',
  PRIMARY KEY (`id`),
  UNIQUE KEY `coin_address` (`coin_address`),
  KEY `account_id` (`account_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE IF NOT EXISTS `invitations` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `account_id` int(11) unsigned NOT NULL,
  `email` varchar(50) NOT NULL,
  `token_id` int(11) NOT NULL,
  `is_activated` tinyint(1) NOT NULL DEFAULT '0',
  `time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE IF NOT EXISTS `monitoring` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `name` varchar(30) NOT NULL,
  `type` varchar(15) NOT NULL,
  `value` varchar(255) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `name` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Monitoring events from cronjobs';

CREATE TABLE IF NOT EXISTS `news` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `account_id` int(10) unsigned NOT NULL,
  `header` varchar(255) NOT NULL,
  `content` text NOT NULL,
  `time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `active` tinyint(1) NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE IF NOT EXISTS `notifications` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `type` varchar(25) NOT NULL,
  `data` varchar(255) NOT NULL,
  `active` tinyint(1) NOT NULL DEFAULT '1',
  `time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `account_id` int(10) unsigned DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `active` (`active`),
  KEY `data` (`data`),
  KEY `account_id` (`account_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE IF NOT EXISTS `notification_settings` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `type` varchar(15) NOT NULL,
  `account_id` int(11) NOT NULL,
  `active` tinyint(1) NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`),
  KEY `account_id` (`account_id`),
  UNIQUE KEY `account_id_type` (`account_id`,`type`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE IF NOT EXISTS `payouts` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `account_id` int(11) NOT NULL,
  `time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `completed` tinyint(1) NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`),
  KEY `account_id` (`account_id`,`completed`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE IF NOT EXISTS `pool_worker` (
  `id` int(255) NOT NULL AUTO_INCREMENT,
  `account_id` int(255) NOT NULL,
  `username` char(50) DEFAULT NULL,
  `password` char(255) DEFAULT NULL,
  `difficulty` float NOT NULL DEFAULT '0',
  `monitor` tinyint(1) NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`),
  UNIQUE KEY `username` (`username`),
  KEY `account_id` (`account_id`),
  KEY `pool_worker_username` (`username`(10))
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE IF NOT EXISTS `settings` (
  `name` varchar(255) NOT NULL,
  `value` text DEFAULT NULL,
  PRIMARY KEY (`name`),
  UNIQUE KEY `setting` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO `settings` (`name`, `value`) VALUES ('DB_VERSION', '1.0.3');

CREATE TABLE IF NOT EXISTS `shares` (
  `id` bigint(30) NOT NULL AUTO_INCREMENT,
  `rem_host` varchar(255) NOT NULL,
  `username` varchar(120) NOT NULL,
  `our_result` enum('Y','N') NOT NULL,
  `upstream_result` enum('Y','N') DEFAULT NULL,
  `reason` varchar(50) DEFAULT NULL,
  `solution` varchar(257) NOT NULL,
  `difficulty` float NOT NULL DEFAULT '0',
  `time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  KEY `time` (`time`),
  KEY `upstream_result` (`upstream_result`),
  KEY `our_result` (`our_result`),
  KEY `username` (`username`),
  KEY `shares_username` (`username`(10))
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE IF NOT EXISTS `shares_archive` (
  `id` bigint(30) unsigned NOT NULL AUTO_INCREMENT,
  `share_id` bigint(30) unsigned NOT NULL,
  `username` varchar(120) NOT NULL,
  `our_result` enum('Y','N') DEFAULT NULL,
  `upstream_result` enum('Y','N') DEFAULT NULL,
  `block_id` int(10) unsigned NOT NULL,
  `difficulty` float NOT NULL DEFAULT '0',
  `time` datetime NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `share_id` (`share_id`),
  KEY `time` (`time`),
  KEY `our_result` (`our_result`),
  KEY `username` (`username`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Archive shares for potential later debugging purposes';

CREATE TABLE IF NOT EXISTS `statistics_shares` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `account_id` int(10) unsigned NOT NULL,
  `block_id` int(10) unsigned NOT NULL,
  `valid` float unsigned NOT NULL DEFAULT '0',
  `invalid` float unsigned NOT NULL DEFAULT '0',
  `pplns_valid` float unsigned NOT NULL DEFAULT '0',
  `pplns_invalid` float unsigned NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`),
  KEY `account_id` (`account_id`),
  KEY `block_id` (`block_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE IF NOT EXISTS `tokens` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `account_id` int(11) NOT NULL,
  `token` varchar(65) NOT NULL,
  `type` tinyint(4) NOT NULL,
  `time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `token` (`token`),
  KEY `account_id` (`account_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE IF NOT EXISTS `token_types` (
  `id` tinyint(4) unsigned NOT NULL AUTO_INCREMENT,
  `name` varchar(25) NOT NULL,
  `expiration` INT NULL DEFAULT '0',
  PRIMARY KEY (`id`),
  UNIQUE KEY `name` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO `token_types` (`id`, `name`, `expiration`) VALUES
(1, 'password_reset', 3600),
(2, 'confirm_email', 0),
(3, 'invitation', 0),
(4, 'account_unlock', 0),
(5, 'account_edit', 3600),
(6, 'change_pw', 3600),
(7, 'withdraw_funds', 3600);

CREATE TABLE IF NOT EXISTS `transactions` (
  `id` int(255) NOT NULL AUTO_INCREMENT,
  `account_id` int(255) unsigned NOT NULL,
  `type` varchar(25) DEFAULT NULL,
  `coin_address` varchar(255) DEFAULT NULL,
  `amount` decimal(50,30) DEFAULT '0',
  `block_id` int(255) DEFAULT NULL,
  `timestamp` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `txid` varchar(256) DEFAULT NULL,
  `archived` tinyint(1) NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`),
  KEY `block_id` (`block_id`),
  KEY `account_id` (`account_id`),
  KEY `type` (`type`),
  KEY `archived` (`archived`),
  KEY `account_id_archived` (`account_id`,`archived`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE `statistics_users` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `account_id` int(11) NOT NULL,
  `hashrate` bigint(20) unsigned NOT NULL,
  `workers` int(11) NOT NULL,
  `sharerate` float NOT NULL,
  `timestamp` int(11) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `account_id_timestamp` (`account_id`,`timestamp`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE IF NOT EXISTS `user_settings` (
  `account_id` int(11) NOT NULL,
  `name` varchar(50) NOT NULL,
  `value` text DEFAULT NULL,
  PRIMARY KEY (`account_id`,`name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 ROW_FORMAT=COMPACT;

/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
/*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
/*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
@@ -1,4 +1,3 @@
#!/bin/sh
git submodule init
git submodule update

git submodule foreach git pull origin master