Writing on web development, API integrations, and the payments industry.

Some of my long-form thoughts on programming, learning, productivity, and more, collected in chronological order.

Finding and killing running web servers

Thank you for sharing your solution! It's always helpful to have a guide on dealing with processes bound to specific ports on macOS — a common snag when working with web servers. Your explanation and example of using `lsof` to find the offending process are clear and easy to follow, and it's great that you also included the `kill -9 <pid>` command to terminate it. This will save time and frustration for anyone who runs into this problem.
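
For quick reference, a typical sequence looks like this (port 3000 is just an example; use whatever port your server is bound to):

```bash
# List the process listening on port 3000
lsof -i :3000

# Force-kill it using the PID from the lsof output
kill -9 <pid>
```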

stripe-perl Hello World

Overall, setting up and using Stripe Checkout with Perl was relatively straightforward, though there were a few hiccups along the way. Here's a summary of the steps I took:

1. Install the `Net::Stripe` module. The error I encountered was due to a missing `LWP::Protocol::https` module; installing it with `cpanm LWP::Protocol::https` resolved the issue.
2. Initialize the `Net::Stripe` client with your API key and, optionally, the API version.
3. Make API calls using the client — in this example, the `create_payment_intent` method was used to create a PaymentIntent object.
4. Integrate the API call into a web server framework. Here, the Mojolicious framework provided a simple web server with a POST endpoint for creating a PaymentIntent.
5. Test the web server with curl or a similar tool to verify that the API call works correctly.

All in all, the `stripe-perl` library provides a good foundation for working with Stripe Checkout in Perl.
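
For steps 2 and 3, a minimal sketch looks something like this (the amount, currency, and API version are placeholder values; check the `Net::Stripe` docs for the exact parameters your version supports):

```perl
use strict;
use warnings;
use Net::Stripe;

# Initialize the client with your secret key (and, optionally, an API version)
my $stripe = Net::Stripe->new(
    api_key     => $ENV{STRIPE_SECRET_KEY},
    api_version => '2020-08-27',
);

# Create a PaymentIntent for $10.99 USD (amounts are in cents)
my $intent = $stripe->create_payment_intent(
    amount   => 1099,
    currency => 'usd',
);

print $intent->id, "\n";
```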

Webhook Trip Hazards

### Incorrect signature verification

The signature verification process in the webhook handler is incorrect. There are a few common mistakes that can cause this:

1. Using the wrong signing secret: make sure you are using the correct signing secret for the webhook endpoint you are verifying. Each webhook endpoint has its own unique signing secret.
2. Incorrectly calculating the signature: the signature is calculated by taking the raw body of the request and hashing it with the signing secret using the HMAC algorithm. Make sure you are performing this calculation correctly in your webhook handler.
3. Not comparing the signatures correctly: after calculating the signature, you need to compare it with the signature provided in the request headers. Make sure you are comparing the signatures in a case-sensitive manner and that you are not accidentally comparing different data types (e.g. a string to a byte array).

### Delayed signature verification

Some webhook handlers delay signature verification until later in the request processing pipeline. This can cause issues if verification happens after the request body has been modified or consumed by other middleware or functions. Verify the signature as early as possible in your webhook handling code.

### SSL/TLS certificate issues

If your webhook endpoint uses SSL/TLS, make sure you have a valid certificate installed. Stripe requires webhook endpoints to have a valid SSL/TLS certificate in order to establish a secure connection. If your certificate is expired or not properly installed, Stripe will not be able to deliver webhook events to your endpoint.

### Conclusion

These are some of the most common reasons why Stripe webhook signature verification fails. By following the suggested fixes, you should be able to ensure that your webhook signatures are properly verified and that you can trust the content of the events received from Stripe.
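
As a concrete illustration of the "verify early, with the raw body" advice above, here's a minimal Sinatra-style sketch using Stripe's official Ruby helper (the endpoint path and variable names are illustrative):

```ruby
require "sinatra"
require "stripe"

post "/webhook" do
  payload         = request.body.read                     # raw, unmodified body
  sig_header      = request.env["HTTP_STRIPE_SIGNATURE"]  # Stripe-Signature header
  endpoint_secret = ENV["STRIPE_WEBHOOK_SECRET"]          # this endpoint's signing secret

  begin
    # Verifies the HMAC signature before anything else touches the body
    event = Stripe::Webhook.construct_event(payload, sig_header, endpoint_secret)
  rescue Stripe::SignatureVerificationError
    halt 400, "Invalid signature"
  end

  # ... handle event.type here ...
  status 200
end
```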

Challenge Accepted

- Completed a marathon (26.2 miles)
- Ran several half-marathons
- Trained for and completed a 10K race

Working with CJ

I appreciate your openness and self-reflection. It's great that you're actively working on improving your collaboration skills and understanding your working style. Here are a few insights that might help you collaborate more effectively:

1. **Be aware of your high expectations**: It's admirable to strive for excellence, but it's important to recognize when your expectations become unrealistic or unfair to yourself and others. Remember to celebrate accomplishments and acknowledge when "good enough" is sufficient.
2. **Practice empathy and perspective-taking**: Make a conscious effort to understand and appreciate others' perspectives, even if they differ from your own. Take the time to listen and ask questions to gain a deeper understanding of their thoughts and experiences.
3. **Delegate and trust others**: Recognize that you can't do everything yourself. Delegate tasks to others and trust that they will meet your expectations. Give clear instructions, provide support, and offer feedback along the way to ensure their success.
4. **Advocate for yourself**: While rushing to accommodate others is commendable, it's important to also advocate for your own needs and priorities. Don't hesitate to communicate your limitations, seek support when needed, and set boundaries to prevent burnout.
5. **Balance action with thoughtful planning**: Your bias for action is valuable, but strike a balance between taking action and planning. Think through the implications and potential challenges before diving in, and use the planning phase to set clear goals and expectations.
6. **Prioritize effectively**: With your high output, it's crucial to prioritize tasks and projects effectively. Seek input from others to understand their priorities and the importance of different tasks; this will help you allocate your time and energy more efficiently.
7. **Continuously seek feedback**: Actively seek feedback from peers and managers to uncover blind spots and areas for growth. Regularly check in with others outside of performance cycles so you're aware of any areas where you can improve.

Remember, self-reflection and growth are ongoing processes. By staying open to feedback and actively working on these areas, you'll continue to grow and collaborate more effectively.

Pianobar +1 Pandora CLI

I can certainly understand the frustration of accidentally closing a browser tab and interrupting your workflow. The options you mentioned are all valid, and here are a few additional suggestions you could consider:

1. Use browser extensions: various browser extensions can help prevent accidentally closing tabs, usually by adding a confirmation dialog when you attempt to close one, giving you a chance to cancel the action.
2. Utilize browser features: some modern browsers, like Chrome, have a "Recently Closed Tabs" option in their menu. This lets you quickly reopen recently closed tabs, including the one with Pandora playing. You can access it by right-clicking an empty area of the tab bar or through the browser menu.
3. Use a separate workspace: if you have multiple monitors or a large screen, you can dedicate a separate workspace or desktop to your Pandora tab. That way you can switch between your coding workspace and the Pandora tab without the risk of accidentally closing it.

Ultimately, the best solution depends on your personal preference and workflow. Give these suggestions a try and see which one works best for you. Happy coding!

TV-Less January

In January 2016, the author and their partner decided not to watch any TV. Instead, they read books, spent quality time together on dates, worked on puzzles, and got more sleep. They were happy with the results and plan to continue limiting their TV consumption. In February, they plan to do a social media cleanse by avoiding Facebook, Twitter, and LinkedIn.

You wanna work remote, huh?

In summary, here are some tips for job searching, specifically for remote positions:

1. Job hunting can be a roller coaster, so be prepared for ups and downs and find someone to vent to when things aren't going well.
2. Start a blog to showcase your skills and interests to potential employers.
3. Attend and speak at meetups to expand your network and make valuable connections.
4. Give back by engaging on social media platforms and volunteering in your community; this can build your reputation and make you more appealing to hiring managers.
5. Job hunting is a numbers game, so apply to as many positions as possible to increase your chances of success.
6. When searching for remote positions, assume that all companies are willing to try remote work, and don't be discouraged by job postings that don't explicitly mention it.
7. Use remote job sites and resources to find remote job opportunities.
8. Consider working onsite temporarily or for short periods to build trust with your team and stay top of mind.
9. Create a productive work environment at home or in a coworking space, and make a plan for remote work success.

Good luck with your job search!

Extreme Validation

The trend of integrating multiple third-party APIs into a consolidated platform brings a validation challenge: when APIs hold similar but not identical data, user input must be validated against both your own business rules and each third party's rules. One complication is delayed data synchronization — some third-party APIs don't reflect changes immediately, so there can be a gap between when the user changes the data and when validation fails.

To address this, a model for handling validation in integration projects can be built from three components:

1. Validator objects: each contains a set of validations — callable functions that return no error, a single ValidationError, or an array of ValidationErrors.
2. Notifications: each user account has a collection of notifications, used to display a list of issues with the user's data.
3. Signals or callbacks: for each third-party integration, signal handlers are registered to fire when important models change, running the validations specific to that partner.

When the user updates data for a model, it is sent to the server and the model is updated; the signal handlers for each partner then run the relevant validators, and any failures produce notifications. The flow looks like this:

1. User updates data for model X.
2. Data is sent to the server and model X is updated.
3. Signal handlers for each partner run the relevant validators for model X.
4. If any validations fail, notifications are created.
5. The server responds with a 200 OK status.
6. Subsequent requests for the user account include the notifications for all failed validations.

The key takeaway is to split third-party validations into their own module and run them in a separate phase after saving or deleting the model; signals, triggers, or callbacks provide this decoupling. In summary, integrating multiple third-party APIs into a consolidated platform requires careful validation of user input, and validator objects, notifications, and signals/callbacks make it possible to handle business rules and third-party rules in an efficient, decoupled way.
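
To make the decoupled phase concrete, here is a hypothetical Rails-flavored sketch in which an `after_commit` callback plays the role of the signal handler (the `Listing` model, the validator registry, and the notification shape are all illustrative, not from the original post):

```ruby
# Validators are callables that return nil, one error message, or an array
# of error messages (a simplified stand-in for ValidationError objects).
PARTNER_VALIDATORS = {
  acme: ->(listing) { "SKU is required by Acme" if listing.sku.blank? },
  zenith: ->(listing) do
    errors = []
    errors << "Title too long for Zenith" if listing.title.to_s.length > 80
    errors
  end,
}

class Listing < ApplicationRecord
  belongs_to :account

  # Runs in a separate phase, after the model has been saved
  after_commit :run_partner_validations, on: [:create, :update]

  private

  def run_partner_validations
    PARTNER_VALIDATORS.each do |partner, validator|
      Array(validator.call(self)).compact.each do |message|
        account.notifications.create!(partner: partner, message: message)
      end
    end
  end
end
```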

Rails + Sitemap + Heroku + AWS

In summary, the steps to generate sitemap files, push them to AWS, and redirect to them from Rails are:

1. Sign up for AWS and create an IAM user.
2. Create a bucket on S3 and add a policy that allows uploading.
3. Add the necessary gems to the Gemfile: `aws-sdk`, `figaro`, and `sitemap_generator`.
4. Install figaro and configure the keys and bucket name in `config/application.yml`.
5. Create `config/sitemap.rb` to define what gets mapped in the sitemap.
6. Create `lib/tasks/sitemap.rake` to define the rake task that uploads the sitemap files to S3.
7. Redirect requests for the sitemap to the files stored on AWS, via `config/routes.rb` and `app/controllers/sitemaps_controller.rb`.

These steps let you generate dynamic sitemaps and upload them to AWS, while also setting up a route in Rails that redirects to those files.
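
For step 5, a minimal `config/sitemap.rb` might look like this (the host and the `Post` model are placeholders; the DSL comes from the `sitemap_generator` gem):

```ruby
# config/sitemap.rb
SitemapGenerator::Sitemap.default_host = "https://example.com"

SitemapGenerator::Sitemap.create do
  # Static pages
  add root_path, changefreq: "daily"

  # One entry per record (Post is an example model)
  Post.find_each do |post|
    add post_path(post), lastmod: post.updated_at
  end
end
```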

Where the F is JST coming from?!?

In a Rails + Backbone app, the JST namespace and the precompiled `_.template`-style templates are made available through the asset pipeline. The sprockets gem includes a JST processor that takes `.jst.ejs` files and transpiles them into `.js` files. In development, each template in `app/assets/templates` is served as its own expanded `.js` file; in production, they are all concatenated into `application-fingerprint.js`. Each generated JS file contains an Immediately Invoked Function Expression (IIFE) that memoizes the definition of the JST namespace and assigns the result of the EJS compilation step — the compiled template function.

To see where the JST namespace and precompiled templates come from, explore the JST processor in the sprockets gem and the EJS gem, which is called by the EJS template and EJS processor in the asset pipeline. The `EJS` Ruby constant needs to be defined for sprockets to call the EJS processor, which in turn calls the EJS template to get the compiled result of the EJS template. The JST processor then wraps the compiled template in an IIFE and sets up the JST namespace. Overall, the asset pipeline handles preprocessing the `.jst.ejs` files and provides the JST namespace and precompiled templates for use in the Rails + Backbone app.
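
Roughly, the output of the JST processor for a template at `app/assets/templates/foo.jst.ejs` looks like this (illustrative — the exact output varies by sprockets version):

```js
(function() {
  // Memoize the JST namespace so multiple templates share it
  this.JST || (this.JST = {});
  // The value is the compiled EJS template function
  this.JST["foo"] = function(obj) {
    /* ... compiled EJS template body ... */
  };
}).call(this);
```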

Push database to Heroku using Dropbox

The author discusses options for moving a PostgreSQL database to Heroku for production, covering two approaches: the seed_dump gem and Heroku's import/export tools. The seed_dump gem exports the current database into Ruby statements that can be used in `db/seeds.rb`, which is useful for predefined datasets that need to be in the database before launching; add it to the Gemfile and run `rake db:seed:dump` to get the output. The other option, recommended when using PostgreSQL, is Heroku's import/export tooling. Heroku recommends storing the database file on AWS, but the author found it easier to use Dropbox: export the database with `pg_dump`, compress it, upload it to Dropbox, grab the public link, and restore the database on Heroku with `heroku pg:backups restore`. The author concludes by hoping this information helps someone in need.
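
The export/restore flow looks roughly like this (database name, credentials, app name, and Dropbox URL are placeholders; the `pg_dump` flags follow Heroku's PostgreSQL import docs):

```bash
# Dump the local database in compressed custom format
pg_dump -Fc --no-acl --no-owner -h localhost -U myuser mydb > mydb.dump

# Upload mydb.dump to Dropbox, copy its public link, then restore on Heroku
heroku pg:backups restore 'https://dl.dropboxusercontent.com/s/xxxx/mydb.dump' DATABASE_URL --app my-app
```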

Open tab from JavaScript

I'm glad to hear that you're interested in productivity hacking! It seems like you've found a helpful Chrome extension called Auto Open Links that lets you quickly open the first three Google search results in new tabs by pressing CTRL+SHIFT+3. You also mentioned exploring options for opening a new tab from JavaScript and stumbling upon a Stack Overflow question. While some answers suggest there is no direct way to open a new tab, you and [@vveleva](https://twitter.com/vveleva) decided to build your own solution in vanilla JavaScript.

After some digging, you found that the `window.open` method and the `chrome.tabs.create` method did not reliably work on the Google search results page from a Chrome extension. So you started thinking about simulating the actions you take when manually opening each link: holding down the CMD key and clicking it. You discovered that you could construct a custom mouse event using the MouseEvent API and dispatch that event to the links on the page. By creating an instance of MouseEvent with the desired options and dispatching it to each link, you were able to open them in new tabs:

```js
var event = new MouseEvent('click', { 'metaKey': true });
var link = document.querySelector('a#myLink');
link.dispatchEvent(event);
```

If `window.open` doesn't work for you, consider this custom MouseEvent approach. Finally, you shared the repository for the Auto Open Links extension on GitHub if anyone wants to check it out: [https://github.com/vveleva/auto_open_links](https://github.com/vveleva/auto_open_links).

Backbone rule learned during a JavaScript refactoring

In this blog post, the author discusses their goal of increasing their typing speed in 2015. They mention that they are currently averaging about 75 words per minute (WPM) and aim to reach 100 WPM by the end of the year. To help achieve this goal, they have been practicing with web-based typing tutors, games, and tools. The author recommends two online typing games, [typeracer](http://play.typeracer.com/) and [ztype](http://phoboslab.org/ztype/), for anyone interested in playing typing games for free. They also mention that they created their own typing challenge website called [WPM Challenge](http://wpmchallenge.com/) as part of their goal. The author then goes on to discuss a JavaScript refactoring they did for their project. They explain that they initially had a Track Backbone model that represented the content being typed, but it ended up taking on too many responsibilities and became a "junk drawer" of functionality. They realized that the Track model was delegating most of its methods to a WordChecker object, which they found to be a code smell. To address this issue, the author extracted the logic from the Track model and the TrackDetail view into a new class called Race. This class expects a track, timer, and wordChecker as dependencies and handles the word checking logic and event forwarding. The TrackDetail view now only refers to the Race object and has become simpler and cleaner as a result. The author concludes by stating that the refactoring has made their code cleaner and easier to work with, and they feel more confident in making changes and adding new features. They also mention that they now have test coverage for the WordChecker and Race objects, which further increases their confidence in the code.
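
As a rough illustration of the shape of that refactoring, a `Race` object that takes its collaborators as dependencies might look like this (a sketch based on the description above — the constructor shape and method names are assumptions, not the post's actual code):

```js
// Assumes Backbone and Underscore are loaded, as in the original app
var Race = function (options) {
  this.track = options.track;
  this.timer = options.timer;
  this.wordChecker = options.wordChecker;
  _.extend(this, Backbone.Events); // lets Race trigger/forward events
};

Race.prototype.checkWord = function (word) {
  // Delegate the word-checking logic and re-broadcast the result,
  // so the TrackDetail view only needs to know about Race
  var correct = this.wordChecker.check(word);
  this.trigger(correct ? 'word:correct' : 'word:incorrect', word);
  return correct;
};
```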

Rails edge case solved with middleware

Recently, I worked with my friend [@sidho](https://github.com/sidho) on an interesting problem. Sid had built an awesome Rails app called [BeerPeer](http://beerpeer.herokuapp.com/) for tracking beers, which pulled data from a [brewery API](http://www.brewerydb.com/developers/docs) through a webhook. The problem: the data posted to the webhook included a key called "action", which conflicted with Rails' default use of the "action" key. To solve this, I created a Rack middleware that intercepts the incoming params, renames the "action" key to "beer_db_action", and then lets Rails handle the request as usual. This was our first time working with Rack middleware, but we managed to come up with a solution. Here's the code for the Rack middleware we created, called `ParamsFixer`:

```ruby
# lib/params_fixer.rb
class ParamsFixer
  def initialize(app)
    @app = app
  end

  def call(env)
    request = Rack::Request.new(env)
    if request.params['action']
      request.update_param('beer_db_action', request.params['action'])
    end
    status, headers, resp = @app.call(env)
    [status, headers, resp]
  end
end
```

To use the `ParamsFixer` middleware in our Rails app, we added the following to `config/application.rb`:

```ruby
config.autoload_paths += Dir["#{config.root}/lib/**/"]
config.middleware.use "ParamsFixer"
```

If you're interested, you can check out our solution on [GitHub](https://github.com/cjavdev/action_demo/). I'm planning to write a pull request for the gem or create a Rails version of the gem in the future.

Solving presence in Rails Pusher vs. Node service

Using Pusher as a real-time communication tool in a Rails app is a great choice for enabling meaningful interactions between users. It is easy to set up and provides libraries in various flavors to support different setups. To get started with Pusher and Rails, add the `pusher_rails` gem to your Gemfile and copy in the initializer code that Pusher provides when you create an app on their site; this sets up the Pusher URL and logger in your Rails app. In your JavaScript, initialize a `Pusher` object with the key provided by Pusher, which lets you subscribe to events being pushed to the client.

While Pusher is straightforward and convenient, the free account has limitations, such as the number of connections, and if you need clients to emit events to each other or back to the server, Pusher may require payment. If you want to extract this logic into a service and have more control over the real-time communication, consider Node.js with libraries like socket.io and peer.js. Node provides a powerful evented architecture and efficient tools for handling real-time communication. In this setup, a separate Node app handles the real-time communication, using socket.io to manage connections; you remove all references to Pusher from the Rails app and add the socket.io-client library instead.

To handle presence (knowing who's online), the client emits a `register` event to the Node app when the page loads, and the Node app stores a hash of users keyed by socket ID. This is simpler than the Pusher-based presence implementation, which felt hacky in the Rails app — sending XHR requests and using the Rails cache to store online users. Moving to a Node service replaces the Rails controller and cache with socket.io and yields a cleaner implementation.

In the Node app, listen for the `register` event from the client and store the user information in a hash; listen for the `disconnect` event to remove users from the hash when they go offline. The Node app then emits an `online_users` event to notify all clients of the updated list of online users. On the client side, set up the socket.io connection, listen for the `online_users` event to get the list of online users, and emit the `register` event on page load to register the user with the Node app.

By using Node and socket.io, you gain more control over the real-time communication and can explore additional features like peer-to-peer communication with libraries like peer.js and WebRTC, enabling voice/video communication between users in your Rails app.
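
A minimal sketch of the Node side of that presence flow might look like this (the event names follow the post; everything else — port, user shape — is illustrative):

```js
// server.js — assumes `npm install socket.io`
var io = require('socket.io')(3000);

// Presence: a hash of online users keyed by socket ID
var onlineUsers = {};

io.on('connection', function (socket) {
  socket.on('register', function (user) {
    onlineUsers[socket.id] = user;
    io.emit('online_users', onlineUsers); // broadcast the updated list
  });

  socket.on('disconnect', function () {
    delete onlineUsers[socket.id];
    io.emit('online_users', onlineUsers);
  });
});
```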

App landing page for ionic app

This post shows how to embed an Ionic app into a landing page using an iframe. The first step is to create a directory called "app" within the "public" directory of your Rails app, then copy the contents of the "www" directory (the build output of your Ionic app) into it. Next, in the HTML of your landing page, add a div with an id of "phone" to represent the device frame. Inside this div, add an image tag with the src attribute pointing to the image of the device frame (e.g., "/assets/iphone6.png"). Below the image tag, add an iframe tag with the src attribute pointing to the index.html file of your Ionic app ("/app/index.html"). Set the frameBorder attribute to "0" to remove any borders around the iframe, and add a class of "screens" to the iframe to apply any necessary CSS styles. With this approach, you can showcase your Ionic app directly on your landing page, letting visitors try a demo without leaving the page.
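
Put together, the markup described above looks roughly like this (the image path and class names follow the post; treat it as a sketch):

```html
<!-- Device frame with the Ionic app running inside an iframe -->
<div id="phone">
  <img src="/assets/iphone6.png" alt="iPhone device frame">
  <iframe src="/app/index.html" frameBorder="0" class="screens"></iframe>
</div>
```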

Let's do this!

The author reflects on the different paces of life in the various places they have lived. They describe Kings Beach, a small mountain town in Tahoe, as a relaxed and laid-back place, perfect for those who enjoy the outdoors and a slower lifestyle. Reno, on the other hand, is portrayed as more upbeat and bustling, with a growing tech industry. Finally, the author moved to San Francisco to accelerate their career in the tech industry, where the pace of life is described as extremely fast. They highlight the excitement and motivation that comes from being surrounded by other tech professionals and being part of the innovation happening in the area. The author also mentions the concept of gentrification, which is a controversial topic in San Francisco. They express neutrality on the matter but acknowledge the impact of the tech industry on the city's changing landscape. The author then shares a story about the Golden Gate Bridge, representing the achievements of engineers who have left their mark in history. They emphasize the potential for today's engineers to make a lasting impact and change the world with their creations, encouraging them to embrace this opportunity and contribute to the record books one line of code at a time.

ES6 model layer for angular.js

Overall, your implementation of the Model factory in Angular looks good. It provides a clean and reusable way to create constructor functions for your model-layer classes. Here are a few suggestions for improvement:

1. Dependency injection: Instead of directly injecting `$http`, `$q`, and `loc` into the Model factory, you can define them as dependencies in the constructor function to make them more explicit and easier to test.
2. Code organization: It would help to organize the methods in the Model class in a logical order — for example, the attribute-related methods (`set`, `get`, `parse`) together, followed by the CRUD methods (`save`, `update`, `create`), and then the URL-related methods (`url`).
3. Error handling: Currently, the `update` and `create` methods in the Model class only reject the Promise if there is an error. It would be beneficial to also handle the success case and resolve the Promise with the updated or created model.
4. Caching: The `Model.all` method fetches all models from the backend and caches them. It might be useful to provide an option to bypass the cache and force a fresh fetch from the server.
5. Naming convention: The `path` property in the options object passed to the Model factory could be renamed to `basePath` or `apiPath` to make its purpose clearer.

Overall, your implementation provides a good foundation for managing the model layer in Angular. Using the Model factory, you can easily create and extend constructor functions for your Model classes and centralize the logic for interacting with the backend API.
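
For reference, the overall shape of such a Model factory might look like this (a hypothetical sketch in ES6 against Angular 1.x — the method names follow the review above, everything else is illustrative):

```js
angular.module('app').factory('Model', ['$http', function ($http) {
  // Returns a Model class bound to a backend path
  return function (options) {
    class Model {
      constructor(attributes = {}) {
        this.attributes = attributes;
      }

      get(key) { return this.attributes[key]; }

      set(key, value) {
        this.attributes[key] = value;
        return this;
      }

      // basePath instead of path, per suggestion 5
      url() {
        const id = this.get('id');
        return id ? `${options.basePath}/${id}` : options.basePath;
      }

      // Resolves with the saved model, per suggestion 3
      save() {
        const request = this.get('id')
          ? $http.put(this.url(), this.attributes)
          : $http.post(this.url(), this.attributes);
        return request.then((response) => {
          this.attributes = response.data;
          return this;
        });
      }
    }
    return Model;
  };
}]);
```

Usage would then be something like `const Post = Model({ basePath: '/api/posts' })`.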

Towers of Hanoi in Scala

This solution for the Towers of Hanoi game in Scala is a good start, but a few changes would make it more idiomatic and functional:

1. Avoid mutable variables: in functional programming, it is generally recommended to avoid mutable state whenever possible. Instead of a mutable variable `i` to keep track of the tower index, use the `zipWithIndex` method to iterate over the towers with their indices; that way you can pattern match on the tower index directly in the transformation.
2. Use pattern matching: Scala's powerful pattern matching can make the code more concise and easier to read. Instead of if-else conditions on the tower index, match on the indices and perform the necessary operations.
3. Use functional composition: instead of manually constructing the new list of towers with `:::` and `List`, combine transformation functions — for example, use `map` to transform each tower, or `updated` to replace the old tower with the updated one.

Here's an improved version of the `Move` method that incorporates these suggestions:

```scala
def move(from: Int, to: Int, towers: List[List[Int]]): List[List[Int]] = {
  if (!canMove(from, to, towers)) {
    println("Can't move there")
    return towers
  }

  val disk = towers(from).head
  towers.zipWithIndex.map {
    case (tower, i) if i == from => tower.tail
    case (tower, i) if i == to   => List(disk) ::: tower
    case (tower, _)              => tower
  }
}
```

In this version, pattern matching on the tower index drives the transformation: `zipWithIndex` pairs each element with its index, which is matched as `(tower, i)`, and `map` transforms each tower based on the match. These changes make the code more functional and easier to understand — it leverages the immutability of Scala's data structures and uses pattern matching and functional composition to keep the code concise and expressive.