Blog

  • DevsFood-frontend

    This project was bootstrapped with Create React App.

    Available Scripts

    In the project directory, you can run:

    npm start

    Runs the app in the development mode.
    Open http://localhost:3000 to view it in the browser.

    The page will reload if you make edits.
    You will also see any lint errors in the console.

    npm test

    Launches the test runner in the interactive watch mode.
    See the section about running tests for more information.

    npm run build

    Builds the app for production to the build folder.
    It correctly bundles React in production mode and optimizes the build for the best performance.

    The build is minified and the filenames include the hashes.
    Your app is ready to be deployed!

    See the section about deployment for more information.

    npm run eject

    Note: this is a one-way operation. Once you eject, you can’t go back!

    If you aren’t satisfied with the build tool and configuration choices, you can eject at any time. This command will remove the single build dependency from your project.

    Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except eject will still work, but they will point to the copied scripts so you can tweak them. At this point you’re on your own.

    You don’t have to ever use eject. The curated feature set is suitable for small and middle deployments, and you shouldn’t feel obligated to use this feature. However we understand that this tool wouldn’t be useful if you couldn’t customize it when you are ready for it.

    Learn More

    You can learn more in the Create React App documentation.

    To learn React, check out the React documentation.

    Code Splitting

    This section has moved here: https://facebook.github.io/create-react-app/docs/code-splitting

    Analyzing the Bundle Size

    This section has moved here: https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size

    Making a Progressive Web App

    This section has moved here: https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app

    Advanced Configuration

    This section has moved here: https://facebook.github.io/create-react-app/docs/advanced-configuration

    Deployment

    This section has moved here: https://facebook.github.io/create-react-app/docs/deployment

    npm run build fails to minify

    This section has moved here: https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify


  • react-native-qrcode-composer

    React Native QR Code Composer


    React Native QR Code Composer is an advanced, highly customizable library designed to seamlessly integrate QR codes into your React Native applications. Leveraging the robustness of qrcode and the versatility of react-native-svg, this library offers unparalleled flexibility and ease of use, ensuring your QR code implementations are both beautiful and functional.


    Getting Started

    To install the library, you can use npm or yarn:

    npm install react-native-qrcode-composer
    

    or

    yarn add react-native-qrcode-composer
    

    Peer Dependencies

    React Native QR Code Composer is designed to work seamlessly within the React Native ecosystem. However, it relies on several peer dependencies that need to be installed in your project. As noted above, the library builds on qrcode and react-native-svg, so make sure those packages are installed.

    Usage

    Here’s a basic example of how to use the library:

    import QRCode from 'react-native-qrcode-composer';
    import Logo from 'assets/logo.svg';
    import logo from 'assets/logo.png';
    
    // ...
    
    // Basic QR Code Example
    <QRCode value="https://github.com/afonsograca/react-native-qrcode-composer" />
    
    // Advanced Usage with SVG and PNG logos
    <QRCode value="QR code with SVG logo" logo={Logo} />
    <QRCode value="QR code with PNG logo" logo={logo} />

    Props

    The react-native-qrcode-composer library provides several props that you can use to customize the QR code and its appearance. These props allow you to specify the content of the QR code, its size, and the logo that appears in the center of the QR code, among other things. You can also specify a function that is called when an error occurs.

    The following sections provide more details about these props and how to use them.

    QRCodeProps

    | Property | Type | Optional | Default | Description |
    | --- | --- | --- | --- | --- |
    | value | string | Yes | 'QR code message' | The content to be encoded in the QR code |
    | size | number | Yes | 100 | The size of the QR code in pixels |
    | logo | LogoProp | Yes | undefined | A custom logo to be displayed at the center of the QR code |
    | logoStyle | LogoStyle | Yes | undefined | The style of the logo |
    | style | QRCodeStyle | Yes | undefined | The style of the QR code container |
    | getRef | React.Ref<Svg> | Yes | undefined | A ref to the QR code SVG element for direct access |
    | onError | (error: Error) => void | Yes | undefined | Callback function triggered if an error occurs during rendering |
    | testID | string | Yes | 'react-native-qrcode-composer' | Identification prefix for the internal parts of the component |

    LogoStyle

    | Property | Type | Optional | Default | Description |
    | --- | --- | --- | --- | --- |
    | size | number | Yes | 20% of the QR code size | The size of the logo in pixels |
    | backgroundColor | string | Yes | transparent | The background color of the logo |
    | margin | number | Yes | 0 | The margin around the logo in pixels |
    | borderRadius | number | Yes | 0 | The border radius of the logo’s corners |

    QRCodeStyle

    | Property | Type | Optional | Default | Description |
    | --- | --- | --- | --- | --- |
    | color | string | Yes | black | The color of the QR code pattern |
    | backgroundColor | string | Yes | white | The background color of the entire QR code |
    | quietZone | number | Yes | 0 | The margin around the QR code |
    | cornerRadius | number | Yes | 0 | The corner radius applied to the QR code’s quiet zone |
    | errorCorrectionLevel | ErrorCorrectionLevel | Yes | M | The error correction level, enhancing robustness |
    | linearGradient | [ColorValue, ColorValue] | Yes | undefined | The colors for a linear gradient effect |
    | gradientDirection | [NumberProp, NumberProp, NumberProp, NumberProp] | Yes | ['0%', '0%', '100%', '100%'] | The directions for gradient application |
    | detectionMarkerOptions | DetectionMarkerOptions | Yes | undefined | Options for styling the detection markers |
    | patternOptions | PatternOptions | Yes | undefined | Options for modifying the QR pattern |

    DetectionMarkerOptions

    | Property | Type | Optional | Default | Description |
    | --- | --- | --- | --- | --- |
    | connected | boolean | Yes | true | Indicates if the blocks that make up the marker are connected |
    | cornerRadius | number | Yes | 0 | Corner radius applied to the detection markers. Note: this does not take precedence over outerCornerRadius or innerCornerRadius |
    | outerCornerRadius | number | Yes | 0 | Specific corner radius for the outer part of the markers |
    | innerCornerRadius | number | Yes | 0 | Specific corner radius for the inner part of the markers |

    PatternOptions

    | Property | Type | Optional | Default | Description |
    | --- | --- | --- | --- | --- |
    | connected | boolean | Yes | false | Indicates if the blocks in the QR code pattern are connected |
    | cornerRadius | number | Yes | 0 | Corner radius for each block in the QR code pattern |
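    Pulling the tables above together, a hypothetical fully-styled configuration could look like this. The prop names come from the tables; the concrete values are illustrative choices, not library defaults:

```javascript
// Illustrative values only; the prop names come from the tables above.
const logoStyle = {
  size: 40,                  // overrides the default of 20% of the QR code size
  backgroundColor: 'white',
  margin: 4,
  borderRadius: 8,
};

const qrStyle = {
  color: 'black',
  backgroundColor: 'white',
  quietZone: 8,
  cornerRadius: 12,
  errorCorrectionLevel: 'H', // a higher level tolerates the area covered by the logo
};

// <QRCode value="https://example.com" size={200} logo={logo} logoStyle={logoStyle} style={qrStyle} />
```

    Raising errorCorrectionLevel when overlaying a logo is common QR practice, since the logo obscures part of the pattern.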

    Try it out

    We have provided an example app for you to try out the library. You can find it in the /example directory of the repository. To run the example app, navigate to its directory and run:

    yarn
    yarn start
    

    Contributing

    Interested in contributing? Check out how you can make a difference in our contributing guide.

    Please note that this project adheres to a Contributor Code of Conduct. By participating, you agree to abide by its terms.

    License

    This project is licensed under the MIT License – see the LICENSE file for details.

    Acknowledgments

    This project owes its gratitude to:

    • The developers of qrcode and react-native-svg for creating such robust foundations.
    • react-native-qrcode-svg for initial inspiration.
    • All the contributors who have helped extend and maintain this library.
    • The community testers who provided valuable feedback.
  • node-api-huobi

    node-api-huobi

    WARNING: This package is still early beta! Expect breaking changes until this sees a major release.

    An unofficial implementation of Huobi’s APIs, developed for personal use.

    For support on using the APIs or for development issues, please refer to the official API documentation. For questions regarding this package, please consult the code first.

    PUBLIC API

      const huobi=require('node-api-huobi');
    
      const publicAPI=new huobi.publicApi();

    Reference Data

    API DESCRIPTION
    getSystemStatus Not implemented
    getMarketStatus https://huobiapi.github.io/docs/spot/v1/en/#get-market-status
    getSymbols https://huobiapi.github.io/docs/spot/v1/en/#get-all-supported-trading-symbol-v2
    getCurrencies https://huobiapi.github.io/docs/spot/v1/en/#get-all-supported-currencies-v2
    getCurrencySettings https://huobiapi.github.io/docs/spot/v1/en/#get-currencys-settings
    getSymbolSettings https://huobiapi.github.io/docs/spot/v1/en/#get-symbols-setting
    getMarketSettings https://huobiapi.github.io/docs/spot/v1/en/#get-market-symbols-setting
    getChainsInfo https://huobiapi.github.io/docs/spot/v1/en/#get-chains-information
    getChainCurrencies https://huobiapi.github.io/docs/spot/v1/en/#apiv2-currency-amp-chains
    getTimestamp https://huobiapi.github.io/docs/spot/v1/en/#get-current-timestamp

    Market Data

    API DESCRIPTION
    getKlines https://huobiapi.github.io/docs/spot/v1/en/#get-klines-candles
    getTicker https://huobiapi.github.io/docs/spot/v1/en/#get-latest-aggregated-ticker
    getAllTickers https://huobiapi.github.io/docs/spot/v1/en/#get-latest-tickers-for-all-pairs
    getMarketDepth https://huobiapi.github.io/docs/spot/v1/en/#get-market-depth
    getLastTrade https://huobiapi.github.io/docs/spot/v1/en/#get-the-last-trade
    getRecentTrades https://huobiapi.github.io/docs/spot/v1/en/#get-the-most-recent-trades
    getMarketSummary https://huobiapi.github.io/docs/spot/v1/en/#get-the-last-24h-market-summary
    getNetAssetValue https://huobiapi.github.io/docs/spot/v1/en/#get-real-time-nav

    PRIVATE API

      const huobi=require('node-api-huobi');
    
      const auth = {
        apikey: 'MY_API_KEY',
        secret: 'MY_API_SECRET'
      };
    
      const privateAPI=new huobi.privateApi(auth);

    Account

    API DESCRIPTION
    getAccounts https://huobiapi.github.io/docs/spot/v1/en/#get-all-accounts-of-the-current-user
    getBalance https://huobiapi.github.io/docs/spot/v1/en/#get-account-balance-of-a-specific-account
    getPlatformValue https://huobiapi.github.io/docs/spot/v1/en/#get-the-total-valuation-of-platform-assets
    getAssetValuation https://huobiapi.github.io/docs/spot/v1/en/#get-asset-valuation
    transferAsset https://huobiapi.github.io/docs/spot/v1/en/#asset-transfer
    transferSubAccountAsset https://huobiapi.github.io/docs/spot/v1/en/#asset-transfer
    getAccountHistory https://huobiapi.github.io/docs/spot/v1/en/#get-account-history
    getAccountLedger https://huobiapi.github.io/docs/spot/v1/en/#get-account-ledger
    transferSpotFuture https://huobiapi.github.io/docs/spot/v1/en/#transfer-fund-between-spot-account-and-future-contract-account
    getPointBalance https://huobiapi.github.io/docs/spot/v1/en/#get-point-balance
    transferPoints https://huobiapi.github.io/docs/spot/v1/en/#point-transfer

    Wallet

    API DESCRIPTION
    getDepositAddress https://huobiapi.github.io/docs/spot/v1/en/#query-deposit-address
    getWithdrawQuota https://huobiapi.github.io/docs/spot/v1/en/#query-withdraw-quota
    getWithdrawAddress https://huobiapi.github.io/docs/spot/v1/en/#query-withdraw-address
    createWithdrawRequest https://huobiapi.github.io/docs/spot/v1/en/#create-a-withdraw-request
    getWithdrawal https://huobiapi.github.io/docs/spot/v1/en/#query-withdrawal-order-by-client-order-id
    cancelWithdrawal https://huobiapi.github.io/docs/spot/v1/en/#cancel-a-withdraw-request
    getWithdrawalsDeposits https://huobiapi.github.io/docs/spot/v1/en/#search-for-existed-withdraws-and-deposits

    Sub-User

    API DESCRIPTION
    setDeductionMode https://huobiapi.github.io/docs/spot/v1/en/#set-a-deduction-for-parent-and-sub-user
    getAPIKeys https://huobiapi.github.io/docs/spot/v1/en/#api-key-query
    getUID https://huobiapi.github.io/docs/spot/v1/en/#get-uid
    createSubUser https://huobiapi.github.io/docs/spot/v1/en/#sub-user-creation
    getSubUsersList https://huobiapi.github.io/docs/spot/v1/en/#get-sub-user-39-s-list
    updateSubUser https://huobiapi.github.io/docs/spot/v1/en/#lock-unlock-sub-user
    getSubUsersStatus https://huobiapi.github.io/docs/spot/v1/en/#get-sub-user-39-s-status
    setTradeableMarkets https://huobiapi.github.io/docs/spot/v1/en/#set-tradable-market-for-sub-users
    setAssetTransferPermission https://huobiapi.github.io/docs/spot/v1/en/#set-asset-transfer-permission-for-sub-users
    getSubUsersAccountList https://huobiapi.github.io/docs/spot/v1/en/#get-sub-user-39-s-account-list
    createSubUserAPIKey https://huobiapi.github.io/docs/spot/v1/en/#sub-user-api-key-creation
    updateSubUserAPIKey https://huobiapi.github.io/docs/spot/v1/en/#sub-user-api-key-modification
    deleteSubUserAPIKey https://huobiapi.github.io/docs/spot/v1/en/#sub-user-api-key-deletion
    transferSubUserAsset https://huobiapi.github.io/docs/spot/v1/en/#transfer-asset-between-parent-and-sub-account
    getSubUserDepositAddress https://huobiapi.github.io/docs/spot/v1/en/#query-deposit-address-of-sub-user
    getSubUserDeposits https://huobiapi.github.io/docs/spot/v1/en/#query-deposit-history-of-sub-user
    getAggregatedBalance https://huobiapi.github.io/docs/spot/v1/en/#get-the-aggregated-balance-of-all-sub-users
    getSubUserBalance https://huobiapi.github.io/docs/spot/v1/en/#get-account-balance-of-a-sub-user

    Trading

    API DESCRIPTION
    placeOrder https://huobiapi.github.io/docs/spot/v1/en/#place-a-new-order
    placeOrders https://huobiapi.github.io/docs/spot/v1/en/#place-a-batch-of-orders
    cancelOrder https://huobiapi.github.io/docs/spot/v1/en/#submit-cancel-for-an-order https://huobiapi.github.io/docs/spot/v1/en/#submit-cancel-for-an-order-based-on-client-order-id
    getOrders https://huobiapi.github.io/docs/spot/v1/en/#get-all-open-orders
    cancelOrders https://huobiapi.github.io/docs/spot/v1/en/#submit-cancel-for-multiple-orders-by-criteria https://huobiapi.github.io/docs/spot/v1/en/#submit-cancel-for-multiple-orders-by-ids
    cancelAllOrders https://huobiapi.github.io/docs/spot/v1/en/#dead-man-s-switch
    getOrderDetails https://huobiapi.github.io/docs/spot/v1/en/#get-the-order-detail-of-an-order https://huobiapi.github.io/docs/spot/v1/en/?json#get-the-order-detail-of-an-
    getMatchResult https://huobiapi.github.io/docs/spot/v1/en/#get-the-match-result-of-an-order
    searchPastOrders https://huobiapi.github.io/docs/spot/v1/en/#search-past-orders
    searchHistoricalOrders https://huobiapi.github.io/docs/spot/v1/en/#search-historical-orders-within-48-hours
    searchMatchResults https://huobiapi.github.io/docs/spot/v1/en/#search-match-results
    getFeeRate https://huobiapi.github.io/docs/spot/v1/en/#get-current-fee-rate-applied-to-the-user

    Conditional Order

    API DESCRIPTION
    placeConditionalOrder https://huobiapi.github.io/docs/spot/v1/en/#place-a-conditional-order
    cancelConditionalOrder https://huobiapi.github.io/docs/spot/v1/en/#cancel-conditional-orders-before-triggering
    getConditionalOrders https://huobiapi.github.io/docs/spot/v1/en/#query-open-conditional-orders-before-triggering
    searchConditionalOrderHistory https://huobiapi.github.io/docs/spot/v1/en/#query-conditional-order-history
    searchConditionalOrder https://huobiapi.github.io/docs/spot/v1/en/#query-a-specific-conditional-order

    Margin

    API DESCRIPTION
    repayMarginLoan https://huobiapi.github.io/docs/spot/v1/en/#repay-margin-loan-cross-isolated
    transferToMargin https://huobiapi.github.io/docs/spot/v1/en/#transfer-asset-from-spot-trading-account-to-isolated-margin-account-isolated https://huobiapi.github.io/docs/spot/v1/en/#transfer-asset-from-spot-trading-account-to-cross-margin-account-cross
    transferFromMargin https://huobiapi.github.io/docs/spot/v1/en/#transfer-asset-from-isolated-margin-account-to-spot-trading-account-isolated https://huobiapi.github.io/docs/spot/v1/en/#transfer-asset-from-cross-margin-account-to-spot-trading-account-cross
    getIsolatedLoanInfo https://huobiapi.github.io/docs/spot/v1/en/#get-loan-interest-rate-and-quota-isolated
    getCrossLoanInfo https://huobiapi.github.io/docs/spot/v1/en/#get-loan-interest-rate-and-quota-cross
    requestMarginLoan https://huobiapi.github.io/docs/spot/v1/en/#request-a-margin-loan-isolated https://huobiapi.github.io/docs/spot/v1/en/#request-a-margin-loan-cross
    repayIsolatedMarginLoan https://huobiapi.github.io/docs/spot/v1/en/#repay-margin-loan-isolated
    repayCrossMarginLoan https://huobiapi.github.io/docs/spot/v1/en/#repay-margin-loan-cross
    searchMarginOrders https://huobiapi.github.io/docs/spot/v1/en/#search-past-margin-orders-isolated https://huobiapi.github.io/docs/spot/v1/en/#search-past-margin-orders-cross
    getMarginBalance https://huobiapi.github.io/docs/spot/v1/en/#get-the-balance-of-the-margin-loan-account-isolated https://huobiapi.github.io/docs/spot/v1/en/#get-the-balance-of-the-margin-loan-account-cross
    getRepaymentReference https://huobiapi.github.io/docs/spot/v1/en/#repayment-record-reference

    Stable Coin Exchange

    API DESCRIPTION
    getExchangeRate https://huobiapi.github.io/docs/spot/v1/en/#stable-coin-exchange
    exchangeCoin https://huobiapi.github.io/docs/spot/v1/en/#exchange-stable-coin

    Exchange Traded Products (ETP)

    API DESCRIPTION
    getETPData https://huobiapi.github.io/docs/spot/v1/en/#get-reference-data-of-etp
    placeETPOrder https://huobiapi.github.io/docs/spot/v1/en/#etp-creation
    redeemETP https://huobiapi.github.io/docs/spot/v1/en/#etp-redemption
    getETPHistory https://huobiapi.github.io/docs/spot/v1/en/#get-etp-creation-amp-redemption-history
    getETPTransaction https://huobiapi.github.io/docs/spot/v1/en/#get-specific-etp-creation-or-redemption-record
    getRebalanceHistory https://huobiapi.github.io/docs/spot/v1/en/#get-position-rebalance-history
    cancelETPOrder https://huobiapi.github.io/docs/spot/v1/en/#submit-cancel-for-an-etp-order
    cancelETPOrders https://huobiapi.github.io/docs/spot/v1/en/#batch-cancellation-for-etp-orders
    getETPHoldingLimit https://huobiapi.github.io/docs/spot/v1/en/#get-holding-limit-of-leveraged-etp

    WEBSOCKET API

      const huobi=require('node-api-huobi');
    
      const auth = {
        apikey: 'MY_API_KEY',
        secret: 'MY_API_SECRET'
      };
    
      const marketAPI=new huobi.sockets.marketApi();
      const mbpAPI=new huobi.sockets.MBPApi();
      const tradingAPI=new huobi.sockets.tradingApi(auth);
    
      tradingAPI.setHandler('orders', (symbol,method,data,option) => { updateOrder(symbol,method,data); });
    
      tradingAPI.socket._ws.on('authenticated', async () => { // For market API's: initialized
        const res=await tradingAPI.subscribeOrderUpdates();
      });
    
      tradingAPI.socket._ws.on('closed', async () => {
        // do something, like clean-up and reconnect
      });
    
      function updateOrder(symbol,method,data) {
        // do something
      };

    MARKET DATA

      const marketAPI=new huobi.sockets.marketApi();

    | API | HANDLER | DESCRIPTION |
    | --- | --- | --- |
    | subscribeCandles / unsubscribeCandles / getCandle | market.kline | https://huobiapi.github.io/docs/spot/v1/en/#market-candlestick |
    | subscribeTickers / unsubscribeTickers / getTicker | market.ticker | https://huobiapi.github.io/docs/spot/v1/en/#market-ticker |
    | subscribeMarketDepth / unsubscribeMarketDepth / getMarketDepth | market.depth | https://huobiapi.github.io/docs/spot/v1/en/#market-depth |
    | subscribeBests / unsubscribeBests / getBest | market.bbo | https://huobiapi.github.io/docs/spot/v1/en/#best-bid-offer |
    | subscribeTrades / unsubscribeTrades / getTrades | market.trade | https://huobiapi.github.io/docs/spot/v1/en/#trade-detail |
    | subscribeStats / unsubscribeStats / getStats | market.detail | https://huobiapi.github.io/docs/spot/v1/en/#market-details |
    | subscribeETP / unsubscribeETP / getETP | market.etp | https://huobiapi.github.io/docs/spot/v1/en/#subscribe-etp-real-time-nav |

    MARKET BY PRICE (MBP) DATA

      const mbpAPI=new huobi.sockets.MBPApi();

    | API | HANDLER | DESCRIPTION |
    | --- | --- | --- |
    | subscribeMBPIncremetal / unsubscribeMBPIncremetal / getMBPIncremetal | market.mbp | https://huobiapi.github.io/docs/spot/v1/en/#market-by-price-incremental-update |
    | subscribeMBPRefresh / unsubscribeMBPRefresh / getMBPRefresh | market.mbp.refresh | https://huobiapi.github.io/docs/spot/v1/en/#market-by-price-refresh-update |

    ACCOUNT AND ORDER

      const tradingAPI=new huobi.sockets.tradingApi();

    | API | HANDLER | DESCRIPTION |
    | --- | --- | --- |
    | subscribeOrderUpdates / unsubscribeOrderUpdates | orders | https://huobiapi.github.io/docs/spot/v1/en/#subscribe-order-updates |
    | subscribeTradeClearing / unsubscribeTradeClearing | trade.clearing | https://huobiapi.github.io/docs/spot/v1/en/#subscribe-trade-details-amp-order-cancellation-post-clearing |
    | subscribeAccountChange / unsubscribeAccountChange | accounts.update | https://huobiapi.github.io/docs/spot/v1/en/#subscribe-account-change |



  • nabbitmq


    NabbitMQ

    Node.js library for interacting with RabbitMQ based on RxJS streams


    Installation

    npm install --save nabbitmq

    API Docs

    Detailed API docs can be found here. Generated with TypeDoc.

    Project status

    The project is being actively developed and improved. Any suggestions, help and criticism are warmly welcomed.

    Description

    NabbitMQ is a library that makes it easy for Node.js developers to interact with RabbitMQ. It’s built on top of the well-known amqplib package and leverages RxJS streams.

    Message queues are naturally streams of events, so using RxJS with them lets developers solve complex problems in an elegant, efficient fashion.

    In many use cases we don’t need to set up non-standard exchanges or non-trivial queue bindings. In fact, most of the time what we actually need is just a simple queue that works out of the box. And NabbitMQ is here to help you with that! All you need to do is provide a custom name for the queue and you’re ready to go; everything else is handled for you!

    However, NabbitMQ also allows you to use amqplib’s promise-based API directly, so that you can build a more complex solution for your specific needs and still make use of RxJS streams.

    Principles and reasons

    Obviously, one of the main reasons for this library to exist is to lower the threshold of entry to the RabbitMQ world, while still allowing you to make use of any piece of the API that RabbitMQ provides.

    The other reason is seamless error handling and helping developers easily build fault-tolerant solutions. For example, NabbitMQ automatically sets up a dead letter queue that listens to your main queue, unless you explicitly opt out of it.

    NabbitMQ has its own set of error classes, making it easy for developers to debug and build solutions that survive even the most obscure corner cases.

    In the end, the main principle and goal is to have a solid and reliable solution out of the box, while working with RabbitMQ.

    Examples

    You can find examples under the examples folder. It also contains the required nodemon configs and a Docker container of RabbitMQ with the management plugin, for your convenience.

    Quick start

    This snippet demonstrates how you can easily spin up a solid RabbitMQ setup and quickly start to consume a stream of events from it. Under the hood, NabbitMQ creates all necessary bindings, exchanges, dead letter queues and provides you with reconnect logic.

    import { RabbitMqConnectionFactory, ConsumerFactory, PublisherFactory } from 'nabbitmq';
    
    async function main() {
      const connectionFactory = new RabbitMqConnectionFactory();
      connectionFactory.setUri('amqp://localhost:5672');
      const connection = await connectionFactory.newConnection();
      const consumerFactory = new ConsumerFactory(connection);
      consumerFactory.setConfigs({queue: {name: 'super_queue'}});
      const consumer = await consumerFactory.newConsumer();
    
      consumer.startConsuming().subscribe({next: console.log, error: console.error});
    
      const anotherConnection = await connectionFactory.newConnection();
      const publisherFactory = new PublisherFactory(anotherConnection);
      publisherFactory.setConfigs({exchange: {name: consumer.getActiveConfigs().exchange.name}});
      const publisher = await publisherFactory.newPublisher();
      setInterval(() => publisher.publishMessage(Buffer.from('hello hello!'), `${consumer.getActiveConfigs().queue.name}_rk`), 1000);
    }
    
    main();

    Overview

    NabbitMQ provides you with two main abstractions: Publisher and Consumer. Each is represented by a class that implements the RabbitMqPeer interface, and they are meant to be instantiated through the PublisherFactory and ConsumerFactory classes.

    There is also a third abstraction called RabbitMqConnection. This class holds the active connection data for the RabbitMQ server in use, and it is injected into publishers and consumers via their factories.

    Configs to set up RabbitMQ’s internal structure of exchanges, queues and bindings are provided to the factories in the form of plain JavaScript/TypeScript objects. There are interfaces for these objects, called ConsumerConfigs and PublisherConfigs. Most of their fields are optional; the consumers and publishers themselves fill them in with standard values. For example, if you provide a queue name like my_queue but no exchange name, the exchange will be called exchange_my_queue; you can rely on that. Likewise, if a dead letter queue has to be set up (which is optional) but no name is provided for it, the consumer defaults to dlq_my_queue, and the dead letter exchange becomes exchange_dlq_my_queue.
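    The default naming convention described above can be captured in a tiny helper. This is illustrative only, not part of the NabbitMQ API; it covers the exchange name and the routing key, which the quick-start example derives as `${queueName}_rk`:

```javascript
// Illustrative helper capturing the default naming convention described above.
// Not part of the NabbitMQ API; names are derived as this README documents them.
function defaultNames(queueName) {
  return {
    exchange: `exchange_${queueName}`, // default exchange name for a given queue
    routingKey: `${queueName}_rk`,     // default routing key, as used in the quick-start example
  };
}

console.log(defaultNames('my_queue'));
// -> { exchange: 'exchange_my_queue', routingKey: 'my_queue_rk' }
```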

    However, you might need a rare, less generic RabbitMQ structure with more than one queue and more than one exchange. For such cases there is an option not to provide these configs, but to provide a so-called custom setup function instead. This function receives a connection object from the underlying amqplib package. Inside it you can do whatever you need, but it must return a promise that resolves to an object containing an amqplib channel instance and, when setting up a consumer, an optional prefetch count (not mandatory; a default prefetch value is applied if not provided).

    Basics

    Consumer configs

    The only required field to set up a consumer is the name of the queue you want to use. Every other field is optional and will be filled in by the consumer itself. Here is what the consumer configs object looks like when only the queue name my_queue is provided:

    const configs = { 
      queue: { 
        name: 'my_queue',
        bindingPattern: 'my_queue_rk', // routing key name: `${your queue name}_rk`
        options: {
          durable: true, // queue persistence is enabled by default
        },
      },
      exchange: { 
        name: 'exchange_my_queue', // exchange name: `exchange_${your queue name}`
        type: 'direct', // direct binding type by default with a name
        options: {
          durable: true, // exchange persistence is enabled by default
        },
      },
      autoAck: false, // RabbitMQ acknowledge on send is disabled by default, meaning that by default you have to commit your messages.
      prefetch: 100, // consumer prefetch
      reconnectAttempts: -1, // infinite amount of reconnect attempts
      reconnectTimeoutMillis: 1000, // 1 second window between failing reconnect attempts
      deadLetterQueue: { // dead letter queue is built and bound by default
        queue: {
          name: 'dlq_my_queue', // dead letter queue name: `dlq_${your queue name}`
          options: {
            durable: true, // dead letter queue is also persistent by default
          },
        },
        exchange: {
          name: 'exchange_dlq_my_queue', // dead letter queue exchange name: `exchange_${dead letter queue name}`
          type: 'fanout',  // fanout type by default
          options: {
            durable: true, // dead letter exchange persistent by default
          },
        },
      },
    };

    As for the publisher, the only required field is the name of the exchange to publish to; everything else is filled with default values by the publisher itself. Here is an example of publisher configs for the exchange name my_exchange:

    const configs = { 
      exchange: {
        name: 'my_exchange',
        options: {
          durable: true,  // exchange persistence is enabled by default
        },
        type: 'direct', // direct binding type by default
      },
      publisherConfirms: true, // publisher confirmations are enabled by default
      reconnectAttempts: -1, // infinite amount of reconnect attempts
      reconnectTimeoutMillis: 1000, // 1 second window between failing reconnect attempts
    };

    Usage

    Setting up topic exchange type

    import { ConsumerFactory, PublisherFactory, RabbitMqConnectionFactory } from 'nabbitmq';
    
    async function main() {
      const connectionFactory = new RabbitMqConnectionFactory();
      connectionFactory.setUri('amqp://localhost:5672');
      const connection = await connectionFactory.newConnection();
      const consumerFactory = new ConsumerFactory(connection);
      consumerFactory.setConfigs({
        queue: {
          name: 'queue',
          bindingPattern: 'route.#',
        },
        exchange: {
          name: 'exchange',
          type: 'topic',
        },
        prefetch: 50,
      });
      const consumer = await consumerFactory.newConsumer();
    
      consumer.startConsuming().subscribe({
        next: (msg) => {
          console.log(msg);
          consumer.commitMessage(msg);
        },
        error: console.error,
      });
    
      const anotherConnection = await connectionFactory.newConnection();
      const publisherFactory = new PublisherFactory(anotherConnection);
      publisherFactory.setConfigs({
        exchange: {
          name: 'exchange',
          type: 'topic',
        },
        publisherConfirms: false,
      });
      const publisher = await publisherFactory.newPublisher();
      publisher.actionsStream().subscribe({next: console.log, error: console.error});
      setInterval(() => publisher.publishMessage(Buffer.from('hello hello!'), `route.${Math.ceil(Math.random() * 10)}`), 1000);
    }
    
    main();

    With custom setup function

    Let’s see how to achieve the same result as in the example above, this time supplying a custom setup function instead of config objects. Note that publishers and consumers have different type aliases and requirements for these functions.

    import { RabbitMqConnectionFactory, ConsumerFactory, PublisherFactory, RabbitMqChannelCancelledError, RabbitMqChannelClosedError, RabbitMqConnectionClosedError, RabbitMqPublisherConfirmationError } from 'nabbitmq';
    
    async function main() {
      const connectionFactory = new RabbitMqConnectionFactory();
      connectionFactory.setUri('amqp://localhost:5672');
      const rabbitMqConnection = await connectionFactory.newConnection();
      const consumerFactory = new ConsumerFactory(rabbitMqConnection);
      consumerFactory.setCustomSetupFunction(async (connection) => {
        const channel = await connection.createChannel();
        await channel.assertExchange('exchange', 'topic', {});
        const queueMetadata = await channel.assertQueue('queue', {
          durable: true,
        });
    
        await channel.bindQueue(queueMetadata.queue, 'exchange', 'route.#');
        await channel.prefetch(10);
    
        return {channel, queue: 'queue', prefetch: 10, autoAck: false};
      });
    
      const consumer = await consumerFactory.newConsumer();
    
      consumer.startConsuming().subscribe({
        next: (msg) => {
          console.log('Received message', msg);
          consumer.commitMessage(msg);
        },
        error: (error) => {
          if (error instanceof RabbitMqConnectionClosedError)
            return void console.error('Connection was closed');
    
          if (error instanceof RabbitMqChannelClosedError)
            return void console.error('Channel was closed by the server');
    
          if (error instanceof RabbitMqChannelCancelledError)
            return void console.error('Channel cancellation occurred');
          
          // ... and so on
        },
      });
    
      const anotherConnection = await connectionFactory.newConnection();
      const publisherFactory = new PublisherFactory(anotherConnection);
      publisherFactory.setCustomSetupFunction(async (connection) => {
        const channel = await connection.createConfirmChannel();
        await channel.assertExchange('exchange', 'topic', {});
        const queueMetadata = await channel.assertQueue('queue', {
          durable: true,
        });
    
        await channel.bindQueue(queueMetadata.queue, 'exchange', 'route.#');
        return {channel, exchange: 'exchange'};
      });
      const publisher = await publisherFactory.newPublisher();
    
      publisher.actionsStream().subscribe({
        next: console.log,
        error: (error) => {
          if (error instanceof RabbitMqPublisherConfirmationError)
            return void console.error('Sent message failed to be confirmed');
          
          // ... and so on
        },
      });
    
      setInterval(() => publisher.publishMessage(Buffer.from('hello hello!'), `route.${Math.ceil(Math.random() * 10)}`), 1000);
    }
    
    main();

    Reconnect

    Let’s assume that we have a working consumer instance, built either with object-based configs or with a custom setup function. We can build a service into which this consumer is injected. Active reconnection logic can then be implemented in the following way:

    import { Message } from 'amqplib';
    import { ReplaySubject } from 'rxjs/internal/ReplaySubject';
    import { Consumer, RabbitMqError } from 'nabbitmq';
    
    export class ConsumerService {
      private stream: ReplaySubject<Message>;
      constructor(
        private readonly consumer: Consumer,
      ) {
        this.init();
      }
    
      public init() {
        this.stream = this.consumer.startConsuming();
        this.stream.subscribe({
          next: (message) => {
            console.log('Received a message', message);
            this.consumer.commitMessage(message);
          },
          error: (error) => {
            if (error instanceof RabbitMqError) {
              this.consumer.reconnect().toPromise() // reconnect method returns an observable, which will complete once connection is reestablished
                .then(() => this.init()) // something like mutual recursion
                .catch((err) => console.error('Failed to reconnect:', err));
            }
          },
        });
      }
    }

    The same logic can be reproduced for publisher instances.

    License

    NabbitMQ is MIT Licensed.

  • kubefwd

    English|中文

    Kubernetes port forwarding for local development.

    NOTE: Accepting pull requests for bug fixes, tests, and documentation only.

    kubefwd - kubernetes bulk port forwarding


    kubefwd (Kube Forward)

    Read Kubernetes Port Forwarding for Local Development for background and a detailed guide to kubefwd. Follow Craig Johnston on Twitter for project updates.

    kubefwd is a command line utility built to port forward multiple services within one or more namespaces on one or more Kubernetes clusters. kubefwd uses the same port exposed by the service and forwards it from a loopback IP address on your local workstation. kubefwd temporarily adds domain entries to your /etc/hosts file with the service names it forwards.

    Key Differentiator: Unlike kubectl port-forward, kubefwd assigns each service its own unique IP address (127.x.x.x), allowing multiple services to use the same port simultaneously without conflicts. This enables you to run multiple databases on port 3306 or multiple web services on port 80, just as they would in the cluster.

    When working on our local workstations, my team and I often build applications that access services through their service names and ports within a Kubernetes namespace. kubefwd allows us to develop locally with services available as they would be in the cluster.

    kubefwd - Kubernetes Port Forward Diagram

    Quick Start

    # macOS
    brew install txn2/tap/kubefwd
    
    # Linux (download from releases page)
    # https://github.com/txn2/kubefwd/releases
    
    # Windows
    scoop install kubefwd
    
    # Run kubefwd (requires sudo for /etc/hosts and network interface management)
    sudo -E kubefwd svc -n <your-namespace>

    Press Ctrl-C to stop forwarding and restore your hosts file.

    Supported Platforms

    • macOS (tested on Intel and Apple Silicon)
    • Linux (tested on various distributions and Docker containers)
    • Windows (via Scoop package manager or Docker)

    Installation

    macOS Install / Update

    kubefwd assumes you have kubectl installed and configured with access to a Kubernetes cluster. kubefwd uses the kubectl current context. The kubectl application itself is not used; however, its configuration is needed to access a Kubernetes cluster.

    Ensure you have a context by running:

    kubectl config current-context

    If you are running macOS and use Homebrew, you can install kubefwd directly from the txn2 tap:

    brew install txn2/tap/kubefwd

    To upgrade:

    brew upgrade kubefwd

    Linux Install / Update

    Download pre-built binaries from the releases page:

    • .deb packages for Debian/Ubuntu
    • .rpm packages for RHEL/CentOS/Fedora
    • .tar.gz archives for any Linux distribution

    Example for Debian/Ubuntu:

    # Download the latest .deb file from releases page
    sudo dpkg -i kubefwd_*.deb

    Windows Install / Update

    Using Scoop:

    scoop install kubefwd

    To upgrade:

    scoop update kubefwd

    Docker

    Forward all services from the namespace the-project to a Docker container named the-project:

    docker run -it --rm --privileged --name the-project \
        -v "$(echo $HOME)/.kube/":/root/.kube/ \
        txn2/kubefwd services -n the-project

    Execute a curl call to an Elasticsearch service in your Kubernetes cluster:

    docker exec the-project curl -s elasticsearch:9200

    Key Features

    • Bulk Port Forwarding: Forward all services in a namespace with a single command
    • Unique IP per Service: Each service gets its own 127.x.x.x IP address, eliminating port conflicts
    • Automatic /etc/hosts Management: Service hostnames automatically added and removed
    • Headless Service Support: Forwards all pods for headless services
    • Dynamic Service Discovery: Automatically starts/stops forwarding as services are created/deleted
    • Pod Lifecycle Monitoring: Detects pod changes and maintains forwarding
    • Label & Field Selectors: Filter which services to forward
    • Multiple Namespace Support: Forward services from multiple namespaces simultaneously
    • Port Mapping: Remap service ports to different local ports
    • IP Reservation: Configure specific IP addresses for services

    Contribute

    Fork kubefwd and build a custom version.

    Pull Request Policy: We are accepting pull requests for:

    • Bug fixes
    • Tests and test improvements
    • Stability and compatibility enhancements
    • Documentation improvements

    Note: We are not accepting new feature requests at this time.

    Usage

    Important: kubefwd requires sudo (root access) to modify your /etc/hosts file and create network interfaces. Use sudo -E to preserve your environment variables, especially KUBECONFIG.

    Basic Usage

    Forward all services in a namespace:

    sudo -E kubefwd svc -n the-project

    Kubefwd finds the first Pod associated with each Kubernetes service in the namespace and port forwards it based on the Service spec to a local IP address and port. Service hostnames are added to your /etc/hosts file pointing to the local IP.

    How it works:

    • Normal Services: Forwards the first available pod using the service name
    • Headless Services: Forwards all pods (first pod accessible via service name, others via pod-name.service-name)
    • Service Monitoring: Automatically starts/stops forwarding when services are created/deleted
    • Pod Monitoring: Automatically restarts forwarding when pods are deleted or rescheduled
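
    For illustration, the temporary /etc/hosts entries kubefwd adds might look like the following for a namespace named the-project containing two hypothetical services, api and db (the service names and the exact loopback IPs here are made up; kubefwd assigns the 127.x.x.x addresses dynamically):

```
127.1.27.1  api  api.the-project  api.the-project.svc.cluster.local
127.1.27.2  db   db.the-project   db.the-project.svc.cluster.local
```

    Because each service receives its own 127.x.x.x address, two services can both listen on the same port locally without conflict.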

    Advanced Usage

    Filter services with label selectors:

    sudo -E kubefwd svc -n the-project -l system=wx

    Forward a single service using field selector:

    sudo -E kubefwd svc -n the-project -f metadata.name=my-service

    Forward multiple services using the in clause:

    sudo -E kubefwd svc -n the-project -l "app in (app1, app2)"

    Forward services from multiple namespaces:

    sudo -E kubefwd svc -n default -n the-project -n another-namespace

    Forward all services from all namespaces:

    sudo -E kubefwd svc --all-namespaces

    Use custom domain suffix:

    sudo -E kubefwd svc -n the-project -d internal.example.com

    Port mapping (map service port to different local port):

    sudo -E kubefwd svc -n the-project -m 80:8080 -m 443:1443

    Use IP reservation configuration:

    sudo -E kubefwd svc -n the-project -z path/to/conf.yml

    Reserve specific IP for a service:

    sudo -E kubefwd svc -n the-project -r my-service.the-project:127.3.3.1

    Enable verbose logging for debugging:

    sudo -E kubefwd svc -n the-project -v

    Help

    $ kubefwd svc --help
    
    INFO[00:00:48]  _          _           __             _     
    INFO[00:00:48] | | ___   _| |__   ___ / _|_      ____| |    
    INFO[00:00:48] | |/ / | | | '_ \ / _ \ |_\ \ /\ / / _  |    
    INFO[00:00:48] |   <| |_| | |_) |  __/  _|\ V  V / (_| |    
    INFO[00:00:48] |_|\_\\__,_|_.__/ \___|_|   \_/\_/ \__,_|    
    INFO[00:00:48]                                              
    INFO[00:00:48] Version 0.0.0                                
    INFO[00:00:48] https://github.com/txn2/kubefwd              
    INFO[00:00:48]                                              
    Forward multiple Kubernetes services from one or more namespaces. Filter services with selector.
    
    Usage:
      kubefwd services [flags]
    
    Aliases:
      services, svcs, svc
    
    Examples:
      kubefwd svc -n the-project
      kubefwd svc -n the-project -l app=wx,component=api
      kubefwd svc -n default -l "app in (ws, api)"
      kubefwd svc -n default -n the-project
      kubefwd svc -n default -d internal.example.com
      kubefwd svc -n the-project -x prod-cluster
      kubefwd svc -n the-project -m 80:8080 -m 443:1443
      kubefwd svc -n the-project -z path/to/conf.yml
      kubefwd svc -n the-project -r svc.ns:127.3.3.1
      kubefwd svc --all-namespaces
    
    Flags:
      -A, --all-namespaces          Enable --all-namespaces option like kubectl.
      -x, --context strings         specify a context to override the current context
      -d, --domain string           Append a pseudo domain name to generated host names.
      -f, --field-selector string   Field selector to filter on; supports '=', '==', and '!=' (e.g. -f metadata.name=service-name).
      -z, --fwd-conf string         Define an IP reservation configuration
      -h, --help                    help for services
      -c, --kubeconfig string       absolute path to a kubectl config file
      -m, --mapping strings         Specify a port mapping. Specify multiple mapping by duplicating this argument.
      -n, --namespace strings       Specify a namespace. Specify multiple namespaces by duplicating this argument.
      -r, --reserve strings         Specify an IP reservation. Specify multiple reservations by duplicating this argument.
      -l, --selector string         Selector (label query) to filter on; supports '=', '==', and '!=' (e.g. -l key1=value1,key2=value2).
      -v, --verbose                 Verbose output.

    Troubleshooting

    Permission Errors

    Always use sudo -E to run kubefwd. The -E flag preserves your environment variables, especially KUBECONFIG:

    sudo -E kubefwd svc -n the-project

    Connection Refused Errors

    If you see errors such as "connection refused" (often mentioning localhost:8080), ensure:

    • kubectl is properly configured
    • You can connect to your cluster: kubectl get nodes
    • Your KUBECONFIG is preserved with the -E flag

    Stale /etc/hosts Entries

    If kubefwd exits unexpectedly, your /etc/hosts file might contain stale entries. kubefwd backs up your original hosts file to ~/hosts.original. You can restore it:

    sudo cp ~/hosts.original /etc/hosts

    Services Not Appearing

    Check that:

    • Services have pod selectors (services without selectors are not supported)
    • Pods are in Running or Pending state
    • You have RBAC permissions to list/get/watch pods and services
    • Use verbose mode (-v) to see detailed logs

    Port Conflicts

    If you encounter port conflicts, use IP reservations to assign specific IPs to services:

    sudo -E kubefwd svc -n the-project -r service1:127.2.2.1 -r service2:127.2.2.2

    Or create a configuration file (see example.fwdconf.yml).

    Known Limitations

    • UDP Protocol: Not supported due to Kubernetes API limitations
    • Services Without Selectors: Services backed by manually created Endpoints are not supported
    • Manual Pod Restart Required: If pods restart due to deployments or crashes, you may need to restart kubefwd

    License

    Apache License 2.0

    Sponsor

    Open source utility by Craig Johnston, imti blog and sponsored by Deasil Works, Inc.

    Please check out my book Advanced Platform Development with Kubernetes: Enabling Data Management, the Internet of Things, Blockchain, and Machine Learning.

    Book Cover - Advanced Platform Development with Kubernetes: Enabling Data Management, the Internet of Things, Blockchain, and Machine Learning

    Source code from the book Advanced Platform Development with Kubernetes: Enabling Data Management, the Internet of Things, Blockchain, and Machine Learning by Craig Johnston (@cjimti) ISBN 978-1-4842-5610-7 Apress; 1st ed. edition (September, 2020)

    Read my blog post Advanced Platform Development with Kubernetes for more info and background on the book.

    Follow me on Twitter: @cjimti (Craig Johnston)

  • keith-number-finder

    Keith Number Finder

    A quick tool to find Keith numbers. http://mathworld.wolfram.com/KeithNumber.html

    def checker(num):
    

    This part defines a new function named “checker”, which takes as input the number it will check to see whether it is a Keith number. The next four parts below all belong to this function.

    stringnum = str(num)
    
    arraynum = list(map(int, stringnum))
    
    sumnum = sum(arraynum)
    

    This part declares some variables that will be easier for us to use later on. The stringnum variable converts the number you entered from an integer into a string; this is required for the next declaration, arraynum, to work. The arraynum variable splits that string into its individual digits and converts each digit back into an integer, producing a list of digits. Finally, the sumnum variable adds up all of the digits in the list so the total can be used later on.
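
    As a quick sketch of what these three lines produce for an input of 197 (shown here in Python 3; the original program is Python 2):

```python
num = 197
stringnum = str(num)                  # the digits as a string: "197"
arraynum = list(map(int, stringnum))  # each digit as an integer: [1, 9, 7]
sumnum = sum(arraynum)                # 1 + 9 + 7 = 17

print(stringnum, arraynum, sumnum)
```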

    while sumnum < num:
    

    This part opens a “while loop”, which repeats the code inside only as long as its condition holds. In this case, the loop runs as long as the variable sumnum is smaller than the variable num. This is useful because once the sum of the digits grows past the number we originally entered, the computer knows that this number is not a Keith number and moves on to the next.

    arraynum = arraynum[1:] + [sumnum]
    
    sumnum = sum(arraynum)
    

    This is a very important part. It drops the first value in the array and appends the current sum to the end, so the array keeps the same number of elements; nothing is added, only replaced. Because the window length always matches the number of digits in the input, this works for numbers with any number of digits: the count of numbers being added up is always in relation to the number entered.

    For example: if the number entered is 197, the array at first is [1, 9, 7]. After the loop runs once, the array is [9, 7, 17]. This works just like the way we were taught in class, in which we would write it out as:

    197 → 1 + 9 + 7 = 17
    
    9 + 7 + 17 = 33, and so on
    
    
    
    if (sumnum == num):
    
        print str(num) + " is a keith number"
    

    This is the part that actually validates whether the number is a Keith number, by checking if the sum of all the numbers in the array equals the starting number.

    count = int(raw_input("\nAt what number would you like to stop at?: "))
    

    This asks you at which number you would like the program to stop running.

    value = 9
    

    This sets the first number it checks to 9 (technically 10, because in the loop 1 is added to this variable before it is checked), since Keith numbers cannot be single-digit.

    while (value <= count):
    
        value += 1
        checker(value)
    

    This is the main loop of the program, which keeps increasing the value it checks by one until it reaches the value you entered, at which point the program moves to its last step and ends.
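
    Putting the fragments above together, a minimal Python 3 translation of the whole program might look like this (raw_input becomes input and print becomes a function; checker additionally returns a boolean here for convenience, which the original does not do):

```python
def checker(num):
    # Split the number into its digits and take their sum.
    arraynum = list(map(int, str(num)))
    sumnum = sum(arraynum)

    # Slide the window: drop the first element, append the running sum,
    # until the sum reaches or passes the original number.
    while sumnum < num:
        arraynum = arraynum[1:] + [sumnum]
        sumnum = sum(arraynum)

    if sumnum == num:
        print(str(num) + " is a keith number")
        return True
    return False


def find_keith_numbers(count):
    # Start checking at 10, since Keith numbers cannot be single-digit.
    value = 9
    found = []
    while value <= count:
        value += 1
        if checker(value):
            found.append(value)
    return found


print(find_keith_numbers(100))  # the Keith numbers up to 100
```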

    Program’s Output:

    At what number would you like to stop at?: 1000000000000000000000000000000000000000000

    14 is a keith number

    19 is a keith number

    28 is a keith number

    47 is a keith number

    61 is a keith number

    75 is a keith number

    197 is a keith number

    742 is a keith number

    1104 is a keith number

    1537 is a keith number

    2208 is a keith number

    2580 is a keith number

    3684 is a keith number

    4788 is a keith number

    7385 is a keith number

    7647 is a keith number

    7909 is a keith number

    31331 is a keith number

    34285 is a keith number

    34348 is a keith number

    55604 is a keith number

    62662 is a keith number

    86935 is a keith number

    93993 is a keith number

    120284 is a keith number

    129106 is a keith number

    147640 is a keith number

    156146 is a keith number

    174680 is a keith number

    183186 is a keith number

    298320 is a keith number

    355419 is a keith number

    694280 is a keith number

    925993 is a keith number

    1084051 is a keith number

    7913837 is a keith number

    11436171 is a keith number

    33445755 is a keith number

    44121607 is a keith number

    129572008 is a keith number

    251133297 is a keith number

    Number of keith numbers: 42

    Duration: 5 hours


  • PLEXiDRIVE

    PLEXiDRIVE

    Scripts to facilitate the use of cloud storage providers (i.e. Google Drive) as storage for Plex media using rclone

    Purpose

    The purpose of this project is to use Cloud Drives as a means of storage for Plex. These scripts can support any cloud drive services that are supported by rclone. The main use case of this project specifically targets using Google Drive unlimited accounts. Traditionally, using a Drive account with Plex runs into issues with exceeding Google’s API call quota. This occurs during Plex scans of large media collections. To combat this, this project automates the uploading of media to a Drive account and automatically scans the individual directories where new media was placed. This means that only a small subset of the media library will be scanned as opposed to scanning the entire collection (requires automatic Plex scans to be switched off). The scripts also have the ability to upload media to multiple Google accounts for redundancy in a RAID 1-like manner. This can be useful if the Drive accounts have the potential to be banned or revoked (i.e. purchased on eBay, etc.).

    Disclaimer

    These scripts are for use at your own risk, meaning I am not responsible for any issues or faults that may arise. I have tested these scripts on my own systems and verified their functionality; however, due diligence is required by the end user. I am in no way affiliated with Google, Plex Inc., or rclone. I am not responsible if a ban is placed on the user’s Drive account due to abuse or excessive API calls.

    Dependencies

    1. rclone
    2. Plex Media Server
    3. plexdrive (optional)

    Installation

    1. Clone Git repo in home directory

      > ~$ git clone https://github.com/masonr/PLEXiDRIVE
    2. Edit permissions to allow plex user full access

      > ~$ sudo chmod -R 777 PLEXiDRIVE
    3. Install rclone and configure each Google Drive account

    4. Move rclone into a directory found in the PATH environment variable and edit permissions

      > ~$ sudo mv rclone /usr/local/bin/
      > ~$ sudo chown root:root /usr/local/bin/rclone
      > ~$ sudo chmod 755 /usr/local/bin/rclone
    5. Mount Google Drive(s) using rclone mount with options

      > ~$ sudo mkdir /mnt/gdrive-main
      > ~$ sudo rclone mount --allow-non-empty --allow-other gdrive-main:/ /mnt/gdrive-main &

      Edit path as needed and use rclone remote names configured in Step 3

      Alternatively, plexdrive should also be able to achieve mounting the remote drive without needing to change anything.

      Encrypted rclone mounts can also be used, but be sure to point your Plex libraries to the decrypted mounts and use the encrypted rclone mount names in the plexidrive config file.

    6. Determine the Plex media section numbers for the Movies and TV Show libraries

      • Libraries must first be set up on the Plex server (map the Movies library to the rclone mounted path; same for TV Shows)

      > ~/PLEXiDRIVE$ sudo su -c 'export LD_LIBRARY_PATH=/usr/lib/plexmediaserver; /usr/lib/plexmediaserver/Plex\ Media\ Scanner --list' plex
      	1: Movies
      	2: TV Shows

      See command and example output above

      • Copy the corresponding library section numbers to the plexidrive.conf (plex_movies_section_num & plex_tvshow_section_num)

    Important Notes

    • Movies must be placed in the root of the Drive account in a folder called “Movies”
    • TV Shows must be placed in the root of the Drive account in a folder called “TV Shows”
    • TV Shows must be organized in the form: “(root)/Show Name/Season Number/files” (use an automation tool, such as SickRage or Sonarr, for ease)
    • The script will not delete empty TV Show folders after successful uploading
    • Movies can be placed in individual folders or in the local Movies root directory
    • In order to avoid a ban on the Google Drive account with large Plex libraries, the automatic media scans within Plex server settings must be switched off
    • It’s very important to use the exact notation as described for the config file parameters or the scripts may not work at all
    • The plex-scan script must be run as the root user (sudo ./plex-scan.sh), as the script must have plex as its effective user

    Usage

    Uploading media

    Simply run the script below after configuring the Plex server and setting up the plexidrive.conf file

    > ~/PLEXiDRIVE$ ./plexidrive.sh

    Scanning Plex library for new files

    > ~/PLEXiDRIVE$ sudo su -c './plex-scan.sh' plex

    Cron jobs

    In order to automate the uploading of media and Plex scans, cron jobs can be used. Add a cron job to the root crontab for the Plex scan, and to the local user’s account for the media uploads.

    Example cron job to run PLEXiDRIVE every 4 hours:

    0 */4 * * * /bin/bash /path/to/PLEXiDRIVE/plexidrive.sh && su -c '/bin/bash /path/to/PLEXiDRIVE/plex-scan.sh' plex

    Configuration (plexidrive.conf)

    GDrive Settings

    • num_of_gdrives: the number of Google Drive accounts to upload media files to
    • drive_names: the name(s) of the Google Drive accounts

    Options

    • delete_after_upload: denotes if the local media files should be deleted after successful upload
    • file_types: the file types to scan for when detecting files to upload
    • rclone_config: (optional) full path to rclone config file

    Plex Library Directories

    • plex_tvshow_path: the path of the rclone mounted drive and folder where TV Shows will be found
    • plex_movies_path: the path of the rclone mounted drive and folder where Movies will be found
    • plex_movies_section_num: the library section number corresponding to Movies, found in installation step 6
    • plex_tvshow_section_num: the library section number corresponding to TV Shows, found in installation step 6

    Local Media Directories

    • local_tvshow_path: the path where local TV Show media will be found
    • local_movies_path: the path where local Movie media will be found

    Enable/Disable Components

    • enable_show_uploads: enable or disable uploading of TV media
    • enable_movie_uploads: enable or disable uploading of Movie media

    Example Config w/ One Google Drive

    ## GDrive Settings ##
    num_of_gdrives=1
    drive_names=('gdrive-main')
    
    ## Options ##
    delete_after_upload=true # true/false
    file_types="mkv|avi|mp4|m4v|mpg|wmv|flv"
    rclone_config=""
    
    ## Plex Library Directories ##
    plex_tvshow_path="/mnt/main/TV Shows" # no ending /
    plex_movies_path="/mnt/main/Movies" # no ending /
    plex_movies_section_num=1
    plex_tvshow_section_num=2
    
    ## Local Media Directories ##
    local_tvshow_path="/home/masonr/tv-shows/" # end with /
    local_movies_path="/home/masonr/movies/" # end with /
    
    ## Enable/Disable Components ##
    enable_show_uploads=true # true/false
    enable_movie_uploads=true # true/false

    Example Config w/ Two Google Drives

    ## GDrive Settings ##
    num_of_gdrives=2
    drive_names=('gdrive-main' 'gdrive-backup')
    
    ## Options ##
    delete_after_upload=true # true/false
    file_types="mkv|avi|mp4|m4v|mpg|wmv|flv|mpeg"
    rclone_config="/home/masonr/.config/rclone/rclone.conf"
    
    ## Plex Library Directories ##
    plex_tvshow_path="/mnt/main/TV Shows" # no ending /
    plex_movies_path="/mnt/backup/Movies" # no ending /
    plex_movies_section_num=1
    plex_tvshow_section_num=2
    
    ## Local Media Directories ##
    local_tvshow_path="/home/masonr/tv-shows/" # end with /
    local_movies_path="/home/masonr/movies/" # end with /
    
    ## Enable/Disable Components ##
    enable_show_uploads=true # true/false
    enable_movie_uploads=true # true/false


  • EigenFaces

    Eigen Faces

    The following is a demonstration of Principal Component Analysis and dimensionality reduction. It has been developed in Python 2.7; however, it can be run on machines which use Python 3 by using a Python virtual environment.

    This project is based on the following paper:- Face recognition using eigenfaces by Matthew A. Turk and Alex P. Pentland

    Dataset courtesy – http://vis-www.cs.umass.edu/lfw/

    Development

    This project is best developed using pipenv. If you do not have pipenv, simply run the following command (using pip3 or pip based on your version of Python):

    pip install pipenv
    

    Then clone the following repository

    git clone https://github.com/sahitpj/EigenFaces
    

    Then change into the project’s working directory and run the following commands:

    pipenv install --dev
    

    This should have installed all the necessary dependencies for the project. If the pipenv shell doesn’t start running after this, simply run the following command

    pipenv shell
    

    Now in order to run the main program run the following command

    pipenv run python main.py
    

    Make sure to use python and not python3, because this pipenv environment uses Python 2.7. Any changes made should be documented, and make sure to lock dependencies if they have been changed during the process:

    pipenv lock
    

    The detailed report about this can be viewed here or found at https://sahitpj.github.io/EigenFaces

    If you like this repository and find it useful, please consider ★ starring it 🙂

    project repo link – https://github.com/sahitpj/EigenFaces

    Principal Component Analysis

    Face Recognition using Eigen Faces – Matthew A. Turk and Alex P. Pentland

    Abstract

    In this project I would like to demonstrate the use of Principal Component Analysis, a method of dimensionality reduction, to help us create a model for Facial Recognition. The idea is to project faces onto a feature space which best encodes them; mathematically, this feature space corresponds to the eigenvectors of the covariance matrix of the face images.

    We then use the following projections along with Machine Learning techniques to build a Facial Recognizer

    We will be using Python to help us develop this model

    Introduction

    Faces are 2D images which can be represented as a 3D matrix, and can be reduced to a 2D space by converting them to greyscale images. Since human faces have a huge amount of variation in extremely small details, it can be tough to identify the minute differences needed to distinguish two people’s faces. Thus, in order to be sure that a machine learning model can achieve the best accuracy, the whole face must be used as the feature set.

    Thus in order to develop a Facial Recognition model which is fast, reasonably simple and is quite accurate, a method of pattern Recognition is necessary.

    Thus the main idea is to transform these images, into features images, which we shall call as Eigen Faces upon which we apply our learning techniques.

    Eigen Faces

    In order to find the necessary eigenfaces, we need to capture the variation of the features across the face images and use this to encode our faces.

    Thus, mathematically, we wish to find the principal components of the distribution. However, rather than taking all of the possible eigenfaces, we choose only the best ones. Why? It is computationally cheaper.

    Thus our images, can be represented as a linear combination of our selected eigen faces.

    Developing the Model

    Initialization

    For the following we first need a dataset. We use sklearn for this, specifically the lfw_people dataset. Firstly we import the sklearn loader:

    from sklearn.datasets import fetch_lfw_people
    
    # parameters chosen to match the sample counts and image size shown below
    lfw_people = fetch_lfw_people(min_faces_per_person=70, resize=0.4)
    

    This dataset contains images of people; we extract the pixel data and labels:

    no_of_samples, height, width = lfw_people.images.shape
    data = lfw_people.data
    labels = lfw_people.target
    

    We then import matplotlib's pyplot module to plot our images:

    import matplotlib.pyplot as plt
    
    plt.imshow(lfw_people.images[30, :, :], cmap='gray')  # 30 is the image number
    plt.show()
    

    Image 1

    plt.imshow(lfw_people.images[2, :, :], cmap='gray')
    plt.show()
    

    Image 2

    We can now inspect our labels, which come out as numbers, each number referring to a specific person.

    jayakrishnasahit@Jayakrishna-Sahit in ~/Documents/Github/Eigenfaces on master [!?]$ python main.py
    these are the label [5 6 3 ..., 5 3 5]
    target labels ['Ariel Sharon' 'Colin Powell' 'Donald Rumsfeld' 'George W Bush'
     'Gerhard Schroeder' 'Hugo Chavez' 'Tony Blair']
    

    We now find the number of samples and the image dimensions

    jayakrishnasahit@Jayakrishna-Sahit in ~/Documents/Github/Eigenfaces on master [!?]$ python main.py
    number of images 1288
    image height and width 50 37
    

    Applying Principal Component Analysis

    Now that we have our data matrix, we apply the Principal Component Analysis method to obtain our Eigen Face vectors. To do so, we first need to find the eigenvectors of the covariance matrix.

    1. First we standardize our matrix with respect to each feature. For this we use scikit-learn's StandardScaler, which subtracts the mean of each feature and divides by its standard deviation. (The plain normalize function only rescales to unit norm; centring the data is what makes the next step a true covariance matrix.)
    from sklearn.preprocessing import StandardScaler
    
    sk_norm = StandardScaler().fit_transform(data)
    
    2. Now that we have our data standardized, we can apply PCA. First we compute the covariance matrix, which is given by
    Cov = (1/m) X'X
    

    where m is the number of samples, X is the feature matrix and X’ is the transpose of the feature matrix. We now perform this with the help of the numpy module.

    import numpy as np 
    
    cov_matrix = sk_norm.T.dot(sk_norm) / sk_norm.shape[0]
    

    The covariance matrix has dimensions n×n, where n is the number of features of the original feature matrix.
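    As a quick sanity check on the shapes (synthetic data, hypothetical sizes):

```python
import numpy as np

# m samples, n features: the covariance matrix is n x n regardless of m
m, n = 120, 15
X = np.random.default_rng(1).normal(size=(m, n))
Xc = X - X.mean(axis=0)          # centre each feature
cov_matrix = Xc.T @ Xc / m

print(cov_matrix.shape)                       # (n, n) = (15, 15)
print(np.allclose(cov_matrix, cov_matrix.T))  # covariance matrices are symmetric
```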

    3. Now we simply have to find the eigenvectors of this matrix. This can be done using the following
    values, vectors = np.linalg.eigh(cov_matrix)  # eigh suits symmetric matrices and returns real output
    # eigh sorts eigenvalues in ascending order, so reverse to put the largest-variance vectors first
    values, vectors = values[::-1], vectors[:, ::-1]
    

    The eigenvectors form the Eigen Face space and, when visualised, look something like this.

    Eigen Face 1

    Eigen Face 2

    Now that we have our eigenvector space, we choose the top red_dim eigenvectors, which will form our projection space.

    red_dim = 150  # a tunable choice: how many Eigen Faces to keep
    eigen_faces = vectors[:, :red_dim]
    
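    How should red_dim be picked? A common heuristic (not from the original post) is to keep enough eigenvectors to explain, say, 95% of the total variance, since each eigenvalue measures the variance along its eigenvector. A sketch on synthetic data with a decaying variance profile:

```python
import numpy as np

# Synthetic data whose feature variances decay from 25 down to 0.01
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 40)) * np.linspace(5, 0.1, 40)
Xc = X - X.mean(axis=0)

values, vectors = np.linalg.eigh(Xc.T @ Xc / Xc.shape[0])
values = values[::-1]                     # largest eigenvalue first

# Cumulative fraction of variance explained by the top components
explained = np.cumsum(values) / values.sum()
red_dim = int(np.searchsorted(explained, 0.95)) + 1
print(red_dim, "components explain 95% of the variance")
```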

    Now, to get our new features projected onto this eigen space, we do the following

    pca_vectors = sk_norm.dot(eigen_faces) 
    

    We now have our PCA space ready to be used for Face Recognition
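    For comparison, scikit-learn's PCA class wraps all of the above steps (centring, covariance, eigendecomposition, projection). A sketch on synthetic data, checking that the manual projection matches sklearn's up to per-component sign flips (eigenvectors are only defined up to sign):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
Xc = X - X.mean(axis=0)

red_dim = 5
sk_proj = PCA(n_components=red_dim).fit_transform(X)

# Manual route: eigendecomposition of the covariance matrix, then project
_, vecs = np.linalg.eigh(Xc.T @ Xc / Xc.shape[0])
manual_proj = Xc @ vecs[:, ::-1][:, :red_dim]

print(np.allclose(np.abs(sk_proj), np.abs(manual_proj), atol=1e-6))
```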

    Applying Facial Recognition

    Once we have our feature set, we have a classification problem on our hands. In this model I will be developing a K-Nearest Neighbours classifier. (Disclaimer: this may not be the best model for this dataset; the idea is to understand how to implement it.)

    Using our sklearn library, we split our data into train and test sets and then fit the classifier on the training data.

    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    
    X_train, X_test, y_train, y_test = train_test_split(pca_vectors, labels, random_state=42)
    
    knn = KNeighborsClassifier(n_neighbors=10)
    knn.fit(X_train, y_train)
    

    And we then use the trained model on the test data

    print('accuracy', knn.score(X_test, y_test))
    
    jayakrishnasahit@Jayakrishna-Sahit in ~/Documents/Github/Eigenfaces on master [!?]$ python main.py
    accuracy 0.636645962733
    
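    A single accuracy number hides per-person performance; classification_report gives precision and recall for each class. A sketch using synthetic clusters standing in for the projected face features (the real pipeline would pass pca_vectors and labels instead):

```python
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report

# Seven well-separated clusters stand in for the seven people in the dataset
X, y = make_blobs(n_samples=400, centers=7, n_features=20, random_state=42)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
knn = KNeighborsClassifier(n_neighbors=10).fit(X_train, y_train)

print(classification_report(y_test, knn.predict(X_test)))
```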
  • godot-state-machine

    GoDot Finite State Machine

    This is a living template for a Godot Finite State Machine, primarily influenced by Bitlytic's Finite State Machine, but with my own touches and tweaks for my games.

    This overall template can be used to pick and choose template states to quickly throw together player actions, enemy AI, etc..

    Usage

    • Place a State Machine node as a child of the entity (player or AI)
    • Add any States needed as children of the State Machine node
    • The State Machine will dynamically add these states to its dictionary

    State Machine.gd

    The “Brain” which manages the current state and transitions to new states.

    State Machine Parameters

    • initial_state (exported): The state the entity should spawn in
    • current_state: The currently running state
    • states: A dictionary of all possible states for this entity

    State Machine Functions

    • _ready():
      • Adds all child nodes to the states dictionary and sets the entity to the initial state, if provided.
    • _process()
      • Calls current_state.Update(delta)
    • _physics_process()
      • Calls current_state.Physics_Update(delta)
    • on_child_transitioned(state, new_state_name)
      • Calls Exit() on the current state
      • Calls Enter() on the new state
      • Sets the new state as current_state
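    Put together, State Machine.gd might look roughly like this (a sketch following Bitlytic's pattern, assuming Godot 4.2+ where node references can be exported; names follow the lists above, but details are illustrative, not the exact repo code):

```gdscript
extends Node
class_name StateMachine

@export var initial_state: State

var current_state: State
var states: Dictionary = {}

func _ready() -> void:
	# Register every child State and listen for its Transitioned signal
	for child in get_children():
		if child is State:
			states[child.name.to_lower()] = child
			child.Transitioned.connect(on_child_transitioned)
	if initial_state:
		initial_state.Enter()
		current_state = initial_state

func _process(delta: float) -> void:
	if current_state:
		current_state.Update(delta)

func _physics_process(delta: float) -> void:
	if current_state:
		current_state.Physics_Update(delta)

func on_child_transitioned(state: State, new_state_name: String) -> void:
	if state != current_state:
		return
	var new_state: State = states.get(new_state_name.to_lower())
	if new_state:
		current_state.Exit()
		new_state.Enter()
		current_state = new_state
```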

    State.gd

    The State Template

    State Parameters

    • Transitioned signal

    State Functions

    • Enter()
      • What to do on entering this state
    • Exit()
      • What to do on exiting this state
    • Update()
      • What to do during _process()
    • Physics_Update()
      • What to do during _physics_process()
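    As a sketch, the template itself is mostly empty virtual methods plus the signal (illustrative, assuming Godot 4):

```gdscript
extends Node
class_name State

# Emitted by a concrete state to request a transition,
# e.g. Transitioned.emit(self, "Follow")
signal Transitioned(state: State, new_state_name: String)

func Enter() -> void:
	pass

func Exit() -> void:
	pass

func Update(_delta: float) -> void:
	pass

func Physics_Update(_delta: float) -> void:
	pass
```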

    Idle.gd

    An example idle state

    Idle Parameters

    • entity
      • A reference to this entity (effectively synonymous with self)
    • move_speed
      • Speed to move at while idling

    Idle Functions

    • randomize_wander()
      • pick an amount of time between 1 and 3 seconds, and a random direction
    • Enter()
      • call randomize_wander()
    • Update()
      • Keep wandering, calling randomize_wander() as needed.
    • Physics_Update
      • Handle the actual wandering

    Follow.gd

    An example follow state

    Follow Parameters

    • entity
      • A reference to this entity (effectively synonymous with self)
    • move_speed
      • Speed to move at while chasing
    • target
      • What should this entity chase?

    Follow Functions

    • _ready()
      • Set the target entity to the first node in the “Target” group.
      • Note: It’s very important you add the entity(ies) you’d like to target to a group called “Target” or this will crash. Could probably handle this more gracefully.
    • Physics_Update()
      • Handle the actual chasing.
